According to a fact sheet released by the World Health Organization in 2014, 285 million people in the world are visually impaired: 39 million are blind, and the remaining 246 million have low vision. Two-thirds of this huge group are expected to own smartphones within five years, so Aipoly is helping ensure that those smartphones will be put to excellent use.
Various accessibility features already make the basics of using a smartphone better suited to those with sight impairments. Applications like Be My Eyes leverage fast internet speeds, video chat and volunteers to provide remote, personal assistance to the blind. Aipoly takes things a technical step further by using artificial intelligence to help the visually impaired get a read on their environment (or specific objects around them).
The app (currently on iOS, arriving soon on Android) leverages artificial neural networks – machine learning models fashioned after biological neural networks – to recognize various components in an image and then describe them. The resulting technology falls within a field known as computer or machine vision, and Aipoly hopes to develop it to a level of sophistication that essentially replicates the very basic operations of the human eye and its role in cognitive functions.
Luckily for users, using the app is a lot less complicated than explaining how it works – they point their camera at the object or environment they want described, and that description is automatically read aloud through iOS’s VoiceOver feature.
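For developers curious how a generated description ends up being spoken, iOS provides a standard accessibility announcement API. The Swift sketch below is a minimal illustration of that mechanism; the sample phrase is a placeholder, not Aipoly’s actual output.

```swift
import UIKit

// Speaks a generated description aloud through VoiceOver.
// The sample phrase below is a placeholder; Aipoly's real descriptions
// come from its recognition pipeline.
func announce(_ description: String) {
    // Posting an .announcement notification asks VoiceOver (when enabled)
    // to read the string to the user.
    UIAccessibility.post(notification: .announcement, argument: description)
}

// Example usage: announce a recognized object.
announce("green shirt")
```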
What happens under the hood is that the app sends the picture to the cloud, where neural networks developed by Aipoly partner Teradeep break the image down into different sections and run it against a reverse image search. The app then combines the nouns and adjectives it recognizes in the image to produce audio descriptions like “green shirt” or “smooth countertop.” Since it can recognize colours, the app is also a handy tool for the colourblind.
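To make that flow concrete, here is a rough Swift sketch of the upload-recognize-describe loop. The endpoint URL and the JSON response shape are assumptions for illustration only; Aipoly and Teradeep’s actual service is not publicly documented.

```swift
import Foundation

// Hypothetical response format: lists of recognized adjectives and nouns.
struct RecognitionResult: Decodable {
    let adjectives: [String]   // e.g. ["green"]
    let nouns: [String]        // e.g. ["shirt"]
}

// Uploads an image and hands back a short spoken-style phrase.
func describe(imageData: Data, completion: @escaping (String?) -> Void) {
    var request = URLRequest(url: URL(string: "https://example.com/recognize")!) // placeholder URL
    request.httpMethod = "POST"
    request.httpBody = imageData

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let result = try? JSONDecoder().decode(RecognitionResult.self, from: data) else {
            completion(nil)
            return
        }
        // Combine recognized adjectives and nouns, e.g. "green shirt",
        // ready to be handed to VoiceOver.
        let phrase = (result.adjectives + result.nouns).joined(separator: " ")
        completion(phrase)
    }.resume()
}
```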
Teradeep has trained its AI on more than 10 million images, so it’s not easy to bamboozle the app. But then again, the natural world is filled with all manner of weird paraphernalia, so users themselves (or, more likely, an acquaintance) can train the app by taking pictures and adding their own audio descriptions. For example, users may want to teach the app to recognize their cane, so that if they aren’t able to find it, the app can spot it for them.
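The user-training idea can be pictured as pairing a photo the user takes with the label they record, so the app can announce that label when it sees the object again. The sketch below assumes a purely local, hypothetical storage scheme; it is not Aipoly’s implementation.

```swift
import Foundation

// A hypothetical pairing of a user's photo with their own description.
struct CustomObject: Codable {
    let label: String          // e.g. "my white cane"
    let imageFileName: String  // the photo the user captured
}

// A minimal in-memory store for user-labelled objects (assumption only).
final class CustomObjectStore {
    private(set) var objects: [CustomObject] = []

    // Record a new user-labelled object.
    func add(label: String, imageFileName: String) {
        objects.append(CustomObject(label: label, imageFileName: imageFileName))
    }
}

// Example: teach the app to recognize the user's cane.
let store = CustomObjectStore()
store.add(label: "my white cane", imageFileName: "cane.jpg")
```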
Aipoly showed up at CES 2016 this past January to showcase some brand-new features along with updates to existing ones.
As part of the transition to version 1.1, the app gained an intelligent torch that switches on automatically in low-light conditions, a speed regulator to control the pace at which descriptions are read, more accurate handling of non-English accents, and an easier-to-navigate instructions screen.
An important feature the team behind Aipoly plans to introduce soon is the ability to understand and explain complex scenes, like “a dog playing with a ball”. Reducing the time the app takes to identify and describe objects (its latency) is another thing the team is constantly working on.
Given the nature of neural networks and deep learning, the app improves on its own, constantly learning and making connections between the various pieces of information it is fed. Soon, apps like Aipoly will be able to take over where Braille leaves off as a way for the visually challenged to learn about their surroundings and perhaps even consume media.