Talov, accessible tech for everyone.
Across the world, 760 million people with disabilities (470M deaf & 290M blind people) face major challenges: inadequate access to education and decent work, which undermines their INDEPENDENCE.
Less than 5% go to school, some technical aids can cost up to USD 100K, and, according to the UN, the combined global annual cost of unaddressed deafness & blindness is USD 1,400 billion.
Our solution is two apps that combine the power of AI with the everyday use of the smartphone to give people with hearing and visual impairments more independence in their daily activities. Both apps work in real time, without the Internet, and in up to 48 languages:
SpeakLiz for deaf people: camera-based sign language detection, identification of surrounding sounds, and transcription of human voices.
Vision for blind people: audio description of the surrounding world, map navigation assistance, identification of objects, distances, and money bills in many currencies, reading of multilingual texts, color identification, and more.
Since our technology is already used in 86 countries, our approach is definitely global. We currently have more than 24K downloads, of which 1.7K are paying users. Although the pandemic affected many people mainly in economic terms, we have kept growing, slowly but steadily, in our B2C structure. However, we believe in the power of organizations to create major culture change around accessibility, which is why we are now reaching more users through a B2B structure. That way we can reach more people globally, often in territories where we already operate but need to strengthen our presence, such as Europe, North America, and Asia-Pacific, given their better conditions of respect and inclusion for people with disabilities.
Both of our apps share the same technical core: pattern recognition through artificial intelligence. First, we consider the urgent needs of the community of people with disabilities and build our own datasets around those needs. Once collected, the data is processed and labeled before being fed into neural network structures.
We primarily use languages like C and Python for the training process, and Swift, Kotlin, or Java for mobile deployment. Testing is done from several perspectives: first come performance and technical viability, and then the main part, testing by real users to determine when an artificial intelligence model works and when it doesn't. Based on their feedback, if there is a problem we return to the beginning (dataset quality) and start the process again.
When the models are working well, we ship them through app updates. Models frequently used, or planned for use, in our apps are:
SpeakLiz (for deaf people): sound pattern recognition, multilingual speech recognition, emotion estimation from speech, and camera-based recognition of sign language movements.
Vision (for blind people): general object detection, distance estimation, money bill recognition (specific, highly detailed object recognition), GPS pattern recognition for map navigation, color identification, and place recognition (context prediction).
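The dataset-to-deployment loop described above (collect and label data, train a model, evaluate it, and go back to the dataset stage if results are poor) can be sketched in miniature. The sketch below is purely illustrative: it uses synthetic two-dimensional "sound features" and a tiny perceptron as a stand-in for the real neural networks and datasets, none of which are shown here.

```python
# Minimal sketch of the collect -> label -> train -> evaluate loop.
# Synthetic data and a perceptron stand in for real audio datasets
# and neural networks (an illustrative assumption, not Talov's code).
import random

random.seed(0)

# 1. "Collect and label" a toy dataset: two separable feature classes,
#    e.g. low-energy vs. high-energy sound patterns.
def make_sample(label):
    base = 0.0 if label == 0 else 2.0
    return [base + random.gauss(0, 0.3), base + random.gauss(0, 0.3)], label

data = [make_sample(i % 2) for i in range(200)]
train, test = data[:150], data[150:]

# 2. Train a tiny perceptron classifier on the labeled samples.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):
    for x, y in train:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

# 3. Evaluate on held-out data; in the real process, poor accuracy
#    sends the team back to the dataset-quality stage.
correct = sum(
    (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y for x, y in test
)
accuracy = correct / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

In practice the evaluation step is followed by the most important gate of all: testing by real deaf and blind users, which no held-out metric can replace.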
The first step is to visit federations or associations of people with disabilities. We usually run workshops where they can teach us about their unsolved challenges in daily life.
In the case of deaf people, the urgent needs revolve around better communication with hearing people and understanding what is happening around them in terms of sounds and conversations. In the case of blind people, the challenges relate to identifying objects, reading text, and learning more about their surroundings.
We usually end up with tons of information, so we start by classifying the most urgent or highest-priority needs. With that structure in place, we begin prototyping possible solutions. At this stage, the help of people with hearing or visual impairments is crucial for the hard, real-world testing of those prototypes. The participation of deaf & blind people in this process is a key factor in learning and understanding their needs and concerns, and many suggestions, corrections, and improvements are generated here. That's why two of the five members of our small core team are people with disabilities. Diana is a blind person who manages our general communications and social media thanks to her background in Social Communication, and she also manages the R&D and testing of products for people with visual impairment. Erick is a deaf person who manages our visual and design work thanks to his background in photography and design, and he also manages the R&D and testing of products for people with hearing impairment.
Once the prototypes are validated by the deaf & blind communities, we reach a better understanding of their needs and can deliver better updates and improvements to our technology platforms.
- Support teachers and educational institutions with teaching and learning methodologies, tools, and resources that help develop future skills for students
When it comes to education access, especially during this pandemic, the accessibility of the tools available to teachers and educators is a critical issue. Because of it, many people with disabilities experience even lower rates of learning and access to quality education. Our technology can build bridges to close those gaps, serving not only educational purposes and better access to information but also daily life in general. We strongly believe in the deep, positive impact Talov can have on worldwide education with the right support from organizations like MIT.
- Pilot: An organization deploying a tested product, service, or business model in at least one community.
We selected the Pilot stage because we already have over 24K free users and approximately 2K paying users (our business model combines freemium + subscription), currently distributed across 86 countries thanks to our compatibility with up to 48 languages.