This experimental feature would provide "synthetic vision" through audio by converting an image or video into an audio equivalent: for example, brightness represented by loudness and vertical position by pitch. Although research is ongoing in this area, current tools require too much training to be widely usable by people who are blind.
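To make the brightness-to-loudness and height-to-pitch mapping concrete, here is a minimal sketch of one common sonification scheme (a column scan, in the spirit of Meijer's 1992 system listed below): the image is scanned left to right, each row is assigned a sine tone whose frequency rises toward the top of the image, and each pixel's brightness sets that tone's amplitude. All parameter values (frequency range, scan duration, sample rate) are illustrative assumptions, not taken from any particular product.

```python
import math

def sonify(image, sample_rate=8000, scan_seconds=0.5,
           f_low=200.0, f_high=2000.0):
    """Convert a 2-D grayscale image (list of rows, values in [0, 1],
    row 0 = top) into mono audio samples in [-1, 1].

    Columns are played left to right; row 0 gets the highest pitch.
    """
    rows = len(image)
    cols = len(image[0])
    # Exponential frequency spacing so pitch steps sound roughly
    # equal to the ear; top row (r = 0) maps to f_high.
    freqs = [f_high * (f_low / f_high) ** (r / max(rows - 1, 1))
             for r in range(rows)]
    samples_per_col = int(sample_rate * scan_seconds / cols)
    audio = []
    for c in range(cols):
        for n in range(samples_per_col):
            t = n / sample_rate
            # One sine per row, weighted by that pixel's brightness.
            audio.append(sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                             for r in range(rows)))
    peak = max(abs(x) for x in audio) or 1.0
    return [x / peak for x in audio]

# Example: a bright diagonal line sweeps downward in pitch over time.
img = [[1.0 if r == c else 0.0 for c in range(8)] for r in range(8)]
audio = sonify(img)
```

A real tool would write `audio` out via the standard-library `wave` module or stream it to a sound device; the hard part, as the research below notes, is not the synthesis but teaching listeners to interpret it.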
Discussion by Disabilities
- For users who can learn to interpret the audio equivalents of images, this feature can increase awareness of their surroundings. However, the learning curve is currently too steep for such tools to be widely adopted.
These products are not necessarily endorsed by RtF, but represent the range of available options.
Free, not necessarily open source
These products are free to use, but may restrict viewing and modifying the source code.
Related Research and Papers
- Sonification of Form and Movement for Visual-Impaired Users – Jorge Simao, Pedro Campos (2007)
- A Framework For Designing Image Sonification Methods – Stanford – Woon Seung Yeo and Jonathan Berger (2005)
- An Approach for Image Sonification – Suresh Matta, Dinesh K Kumar, Xinghuo Yu, Mark Burry (2004)
- An Experimental System for Auditory Image Representations – Peter B. L. Meijer (1992)
Contributions & Discussion
Any corrections, suggestions, or additions to this page? Please let us know by emailing Contribution@RaisingTheFloor.net with [MasterList] in the subject line.
You can also join the discussion on the Access Feature Master List Google Group.