Solution Overview & Team Lead Details

Solution name.

Sign Talk

Provide a one-line summary of your solution.

Sign Talk is a web app that translates American Sign Language (ASL) into English text in real time, bridging the digital communication divide between the ASL community and non-ASL users.

What specific problem are you trying to solve?

Sign Talk’s primary goal is to close the digital communication divide between ASL and non-ASL users. ASL users often struggle to get their points across to non-ASL speakers when using web communication platforms such as Google Meet, Zoom, Skype, and others. These platforms were not designed with ASL users in mind and therefore lack features, such as translators, that would allow for digital ASL communication. This made it difficult for the ASL community to engage in productive collaboration and meaningful conversations during the COVID-19 pandemic. Approximately 11 million people in the United States consider themselves deaf or have serious difficulty hearing. Every day, an average of 300 million people use Zoom, Google Meet, and other such applications, yet fewer than 4% of Americans know ASL. Because of this, people who use ASL as their primary form of communication must have an interpreter join their video calls, which can severely limit their ability to attend virtual events and meetings and to communicate smoothly and directly. Sign Talk was created to address this issue and serves as an ASL-to-English translator.

Elevator pitch

What is your solution?

Our solution is an ASL-translating web application called Sign Talk. The purpose of Sign Talk is to bridge the digital communication divide between the American Sign Language (ASL) community and non-ASL users. To do this, our app detects sign language poses and displays them as text so that ASL users can communicate seamlessly with non-ASL users in digital settings. The app runs in the browser and can be launched and used alongside Zoom meetings. Currently, the app can detect fifty ASL poses, allowing the user to hold basic conversations without the help of an interpreter. When the user opens the app and enables their webcam, the app captures images of the poses the user is performing and compares them against the poses in our dataset. It then displays the English translation of the detected pose along with the model's confidence in the prediction (from 0.00 to 1.00).
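
For illustration, here is a minimal sketch of how an app like ours can start the webcam feed that the detector reads from, using the standard browser getUserMedia API (the "webcam" element ID and the error message are illustrative, not taken from our codebase):

  // Start the user's webcam and stream it into a <video> element
  // so individual frames can later be captured for classification.
  const video = document.getElementById('webcam');
  navigator.mediaDevices.getUserMedia({ video: true })
    .then((stream) => {
      video.srcObject = stream;
      video.play();
    })
    .catch(() => alert('Sign Talk needs webcam access to detect signs.'));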

Who does your solution serve? In what ways will the solution impact their lives?

Our solution, Sign Talk, helps the ASL community communicate with the non-ASL community by acting as an ASL-to-English translator on web applications like Zoom and Google Meet. In doing so, it fosters more productive collaboration and meaningful conversations so that the ASL community can feel more included in our increasingly digital world.

How are you and your team well-positioned to deliver this solution?

Our team has conducted interviews with people who use ASL to better understand the challenges this community is facing. Additionally, we won the 2022 Congressional App Challenge with our app design. By working directly with members of the ASL community, and with the support of our local congresswoman, Zoe Lofgren, we are well-positioned to take our app to the next level and deliver a successful solution. Once we launch our app in the coming months, we will look into establishing strategic partnerships with Zoom, Google, and other major video conferencing platforms in order to further expand the scope of our project.

What steps have you taken to understand the needs of the population you want to serve?

To better understand the needs of the ASL community, our team has worked closely with its members: conducting interviews, watching documentaries, and reading first-hand accounts of the challenges this community faces.

Which aspects of the Challenge does your solution most closely address?

Improving healthcare access and health outcomes; and reducing and ultimately eliminating health disparities (Health)

What is your solution’s stage of development?

Prototype: A venture or organization building and testing its product, service, or business model

In what city, town, or region is your solution team located?

San Jose, CA, USA

Who is the Team Lead for your solution?

Aparna Bhaskar

More About Your Solution

What makes your solution innovative?

Each day, an average of 300 million people use virtual conferencing applications for everything from social calls to work meetings. As technology progresses, this number will only increase. Despite the many benefits digital communication has over traditional forms of communication, digitizing communication introduces new disparities, and the needs of the deaf and hard-of-hearing community have been largely overlooked in the development of video conferencing platforms.

After witnessing first-hand the struggles one of our deaf classmates experienced with online learning and video calls, we wanted to see if there was anything we could do to help. We interviewed our classmate to learn more about her experiences as someone hard of hearing and were surprised by how much we take for granted when communicating with others. For many individuals who rely on ASL to communicate, an interpreter is necessary for speaking with other students who do not know ASL, which prevents private conversations from occurring and subsequently harms deaf students' social connections. We also learned about the cost associated with ASL interpreters and how some schools and events forgo hiring them because of their high prices.

Although platforms such as Zoom offer some features, such as live transcripts and the ability to spotlight sign language interpreters, no option exists to allow ASL users to communicate directly in ASL without the involvement of an interpreter. And while ASL translation technology does exist, none of it has been developed to the point of allowing ASL users to communicate seamlessly in digital spaces; major video conferencing platforms such as Zoom and Google Meet offer little beyond subtitles to make communication easier for deaf individuals. Despite the clear need for improvement, a quick search of the app store reveals no application that translates ASL directly into English. Sign Talk changes this by allowing ASL users to communicate directly with people who don't understand ASL, without the need for an interpreter or expensive fees. In doing so, we enable ASL users to engage in the digital world and ensure their voices are heard.

What are your impact goals for the next year, and how will you achieve them?

Over the next year, we hope to expand our dataset to include all 40,000+ words in ASL and to support other sign languages such as Chinese Sign Language (CSL), Spanish Sign Language (LSE), and French Sign Language (LSF). Additionally, all of our detections are currently output in English, but we would like to also output detected signs as words in other languages. We would also like to add the option of voicing the output text aloud, to provide a feeling closer to a typical one-on-one conversation. Yet another planned feature is compatibility with more conferencing platforms such as Skype, WhatsApp, FaceTime, and others. Ultimately, our goal with Sign Talk is to revolutionize the way deaf and hard-of-hearing people around the world communicate and are understood.
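
As a rough sketch of how the planned voice output could work in the browser, the standard Web Speech API already provides text-to-speech (the function name speakTranslation is ours for illustration, not existing code):

  // Voice a translated sign aloud using the browser's built-in
  // speech synthesis (supported in all major modern browsers).
  function speakTranslation(text, lang = 'en-US') {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.lang = lang; // e.g. 'es-ES' once other output languages exist
    window.speechSynthesis.speak(utterance);
  }

  speakTranslation('Hello'); // voices a detected "Hello" sign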

Describe the core technology that powers your solution.

The core technologies that power our solution are TensorFlow and React. Our app was created using JavaScript, CSS, and Python, and most of our coding was done in Visual Studio Code and Google Colab. First, we trained a machine learning detection model in Python, using TensorFlow, to recognize key sign language poses. Once the model was trained, we converted it to a TensorFlow.js version and hosted it in IBM’s Cloud Object Storage. We use React to run the front end of our application; without it, the app would be far less accessible to users. We designed the interface in Figma and Canva.

In the running app, images are captured from the webcam and strung together at a rapid rate, creating the illusion of continuous movement that mirrors the user's poses. Each captured image is run through our neural network to determine which of the 50+ ASL signs in our dataset it most closely matches. The classified pose name, such as “Hello” or “Bye”, is then displayed on the screen along with the confidence of the prediction. In the future, we hope to expand the scope of our app by including all words in ASL as well as other sign languages, and by allowing text translations into languages other than English.
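
As a rough illustration of this browser-side pipeline, here is a minimal TensorFlow.js sketch (the model URL, label list, 224x224 input size, and function names are illustrative assumptions, not our production code):

  import * as tf from '@tensorflow/tfjs';

  // Placeholder URL for the converted model hosted in Cloud Object Storage.
  const MODEL_URL = 'https://example-bucket.cloud-object-storage.appdomain.cloud/model.json';
  const SIGN_LABELS = ['Hello', 'Bye' /* ...the remaining signs... */];

  async function runDetectionLoop(videoElement, onResult) {
    const model = await tf.loadLayersModel(MODEL_URL);

    async function classifyFrame() {
      const prediction = tf.tidy(() => {
        // Capture the current webcam frame and normalize it to the
        // input shape the model was trained on (assumed 224x224 RGB).
        const frame = tf.browser.fromPixels(videoElement)
          .resizeBilinear([224, 224])
          .toFloat()
          .div(255)
          .expandDims(0);
        return model.predict(frame);
      });

      const scores = await prediction.data();
      prediction.dispose();

      // Report the most likely sign and the model's confidence (0.00-1.00).
      const best = scores.indexOf(Math.max(...scores));
      onResult(SIGN_LABELS[best], scores[best]);

      requestAnimationFrame(classifyFrame); // string frames together rapidly
    }

    classifyFrame();
  }

Under these assumptions, onResult would receive a pair like ('Hello', 0.97), which the front end renders as the translated text and confidence score.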

Please select the technologies currently used in your solution:

  • Artificial Intelligence / Machine Learning
  • Crowd Sourced Service / Social Networks
  • Imaging and Sensor Technology
  • Software and Mobile Applications

In which countries do you currently operate?

  • United States

How many people does your solution currently serve, and how many do you plan to serve in the next year? If you haven’t yet launched your solution, tell us how many people you plan to serve in the next year.

Once we launch in late February, we plan to serve the American Sign Language community, specifically members who use services such as Zoom and Google Meet. By the end of the year, we hope to serve several sign language communities across regions and sign languages, and to offer integrations with platforms such as WhatsApp, FaceTime, Skype, and others.

What barriers currently exist for you to accomplish your goals in the next year?

Marketing our app may be challenging and could prevent us from accomplishing our goals. We plan to address this challenge by using social media such as TikTok and Instagram to promote our app. Additionally, we will partner with ASL advocacy organizations and influencers to raise awareness about our solution. One of the more immediate challenges facing our app is debugging and fine-tuning app features before we launch in February. We are addressing this by building plenty of buffer time between our expected finish date and the launch to account for unexpected errors that may arise.

Your Team

How many people work on your solution team?

Two part-time team members (co-founded by Sofia Penttila and Aparna Bhaskar, both of whom are high school students)

How long have you been working on your solution?

4 months

What organizations do you currently partner with, if any? How are you working with them?

We are currently developing a partnership with Stanford’s Computer Science and Machine Learning department to help us meet our future technology development goals. We also aim to partner with companies in the video conferencing space, such as Zoom and Google.

Business Model

What is your business model?

We would like to implement a non-profit or sustainable business model that puts our users and our impact above financial and other considerations. As we gear up for our launch, we are looking to become an official non-profit organization or B Corporation. The purpose of our business is to serve the ASL community, so our top priority is ensuring that our app functions in a way that serves the community's needs. Any profits we earn through advertising on the app or other means will go directly back into building the app or be donated to organizations that advocate for disability rights. Currently, our business is run by two people; however, as we scale, we are looking to bring more people on board to help us develop the app to the next level.

What is your path to financial sustainability?

So far, we haven’t spent money to create our web application. However, since we plan to make it compatible with platforms like FaceTime, Skype, Google Meet, and others, we intend to cover costs through funding from a fiscal sponsor once we become a non-profit organization, and by offering a freemium model in which users who want added features pay a minimal fee. Income will go toward building new features and supporting other non-profit organizations that serve people with disabilities. If we decide not to become a non-profit, we will begin applying to start-up accelerators and talking with potential VC investors.

 