Title of practice: A Recurrent Neural Network Approach to Image Captioning in Braille for Blind-Deaf People
Author/developer: Sameia Zaman; M. Abid Abrar; M. Muntasir Hassan; A.N.M. Nafiul Islam
Language: English
Description of good practice:
The BUET EEE team presented a deep learning model that automatically generates descriptive image captions, with the goal of helping visually impaired people better understand their surroundings. In addition, the hardware model described is robust and inexpensive. The prototype shows great potential for translating images in real time: it reduces the required computation so that it can run quickly and on cheaper hardware. Generating sentences directly from videos could also be explored in the future. The prototype hardware built for this project can be made smaller so that it is portable, user friendly, and marketable to the masses.
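The paper's captioning approach follows the common encoder-decoder pattern: a CNN produces an image feature vector that conditions a recurrent decoder, which emits the caption one word at a time. As a minimal sketch of that idea only (the vocabulary, layer sizes, and random weights below are toy placeholders, not the authors' trained model):

```python
import numpy as np

# Toy sketch of CNN-feature -> RNN caption decoding (hypothetical weights,
# not the published model). An image feature vector initialises the RNN
# hidden state; each step greedily picks the next word from a tiny vocab.
rng = np.random.default_rng(0)
vocab = ["<start>", "a", "person", "crossing", "street", "<end>"]
V, H, F = len(vocab), 8, 16           # vocab, hidden, image-feature sizes

Wxh = rng.normal(size=(V, H)) * 0.1   # token embedding -> hidden
Whh = rng.normal(size=(H, H)) * 0.1   # recurrent weights
Wfh = rng.normal(size=(F, H)) * 0.1   # image feature -> initial hidden
Who = rng.normal(size=(H, V)) * 0.1   # hidden -> vocab logits

def caption(image_feature, max_len=10):
    """Greedy RNN decoding conditioned on a CNN image feature."""
    h = np.tanh(image_feature @ Wfh)  # initialise hidden from the image
    token = vocab.index("<start>")
    words = []
    for _ in range(max_len):
        x = np.eye(V)[token]              # one-hot token embedding
        h = np.tanh(x @ Wxh + h @ Whh)    # recurrent state update
        token = int(np.argmax(h @ Who))   # greedy word choice
        if vocab[token] == "<end>":
            break
        words.append(vocab[token])
    return " ".join(words)

print(caption(rng.normal(size=F)))
```

In the prototype described, the decoded text would then be forwarded to a Braille output device; a real system would train these weights end to end on an image-caption dataset rather than sampling them randomly.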
Country where the practice is developed: Bangladesh
URL to the material: https://ieeexplore.ieee.org/document/9065144
Relevant file:
Type of practice: Research, Publication
Group(s) targeted by the material: Students, Administrative staff, Teaching staff, Policy makers
The level of Creative Commons license: No licensing information available
Can the practice be reused?: No
What is the payment model for this material?:
What is the cost of using this material?:
What barriers does it help to overcome?: Technological
Is there anything else you would like to add about this submitted good practice material?:
Tags:
Accessibility
