Description as a Tweet:

Odin is a project to assist people who are deaf or hearing impaired by using a combination of sensors and sound-placement algorithms to project information onto AR glasses. It detects and classifies each sound, notifying the user where the sound is coming from and what it is.

Inspiration:

We saw a detrimental lack of infrastructure for the Deaf and hearing impaired, especially in underdeveloped communities, and felt we could leverage tech to accommodate their needs. We also had a bunch of nerds on our team who were ready for a challenge and eager to do something that had never been done.

What it does:

Project Odin takes streams of information from sensors that the user wears and compiles them into a visual language, translating the sounds of the world through AR glasses.

How we built it:

This project was built with web technologies, socket programming, and ARCore. The sensor itself was connected to a Raspberry Pi, which served as the "brains" of the sensor. The Pi sent data over a TCP connection to a running Express server, which processed the data and relayed it to the mobile app. At each of these stages, the data had to be transformed as needed. For example, the noisy microphone data from the Pi had to be high-pass filtered and processed to extract the X, Y, and Z vector components of the sound's direction.
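A minimal sketch of the Pi-side step, assuming a SciPy Butterworth high-pass filter and newline-delimited JSON over TCP (the host, port, cutoff, and payload shape here are illustrative assumptions, not our exact values):

```python
import json
import socket

import numpy as np
from scipy.signal import butter, filtfilt

FS = 16000          # sample rate in Hz (assumed)
CUTOFF = 200.0      # high-pass cutoff in Hz (assumed)
SERVER = ("192.168.1.10", 5050)  # hypothetical address of the Express/TCP bridge

# Design a 4th-order Butterworth high-pass to strip low-frequency rumble.
b, a = butter(4, CUTOFF / (FS / 2), btype="highpass")

def process_and_send(mic_frame, direction):
    """Filter one frame of raw mic samples and ship a JSON packet to the server.

    mic_frame: 1-D numpy array of samples from one microphone.
    direction: (x, y, z) unit vector estimated from the mic array.
    """
    filtered = filtfilt(b, a, mic_frame)
    payload = {
        "rms": float(np.sqrt(np.mean(filtered ** 2))),  # rough loudness estimate
        "direction": {"x": direction[0], "y": direction[1], "z": direction[2]},
    }
    with socket.create_connection(SERVER) as sock:
        # Newline-delimited JSON keeps message framing trivial on the Node side.
        sock.sendall((json.dumps(payload) + "\n").encode())
```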
In addition, we took audio clips gathered from the microphones and used IBM's audio classification API to classify the types of noise the sensor was hearing.
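As a rough illustration, the call looks something like the REST interface of IBM's MAX Audio Classifier; the endpoint URL and response shape below are assumptions based on that model, not necessarily the exact service configuration we deployed:

```python
import requests

# Hypothetical deployment URL; MAX models expose a /model/predict endpoint.
MAX_URL = "http://localhost:5000/model/predict"

def classify_clip(path):
    """Send one recorded WAV clip to the classifier and return its predictions."""
    with open(path, "rb") as f:
        resp = requests.post(MAX_URL, files={"audio": (path, f, "audio/wav")})
    resp.raise_for_status()
    # The response carries ranked labels with confidence scores (assumed shape).
    return resp.json()["predictions"]

print(classify_clip("clip.wav"))
```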

Technologies we used:

  • JavaScript
  • Node.js
  • Express
  • Java
  • Python
  • Raspberry Pi
  • AI/Machine Learning
  • Other Hardware

Challenges we ran into:

The first challenge we faced was devising an algorithm to calculate the location vector of the sound source. In addition, porting data from the source across the different computing platforms was a very time-intensive process.
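For the curious, here is a stripped-down sketch of the core idea: estimating direction of arrival from the time delay between one microphone pair via cross-correlation. The spacing, sample rate, and far-field approximation are illustrative assumptions; the full version combines multiple pairs from the 8-mic array into an X, Y, Z vector.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature
MIC_SPACING = 0.05      # metres between the mic pair (assumed)
FS = 16000              # sample rate in Hz (assumed)

def pairwise_delay(sig_a, sig_b):
    """Arrival-time difference between two mics via cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    # Lag (in samples) at which the signals best align; positive means
    # the sound reached mic B before mic A.
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / FS  # seconds

def bearing(sig_a, sig_b):
    """Angle of arrival relative to the mic-pair axis (far-field approximation)."""
    tau = pairwise_delay(sig_a, sig_b)
    # tau = d * cos(theta) / c; clamp to the physically possible range.
    cos_theta = np.clip(tau * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```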
Probably the biggest challenge we faced was that none of the team members had experience with AR development, especially on Android. Even so, we took on the herculean task of updating an AR model based on data the Android device fetched from an API we created. Getting over this challenge was a great victory for our team.

Accomplishments we're proud of:

Our team is very proud of how we leveraged various technologies to create a suite of products that can serve the hard-of-hearing community in the future. While technology is always improving to help people with disabilities, sometimes augmenting one of the other senses proves more useful than simply amplifying the affected one.

What we've learned:

Our team had varying experience levels with each of the technologies we used. For some of us, Android development was a brand-new experience; for others, socket programming and sending data over IP was new. Each of us expanded our knowledge base across these technologies, because the problem we set out to solve required an in-depth understanding of them.

What's next:

We definitely want to continue expanding on some of our location algorithms. Developing mathematical algorithms and integrating them into a GUI experience within 36 hours proved difficult, and we had to sacrifice the accuracy of some positional data in order to deliver a more complete product.
Another area that could be improved is the user experience. While the prototyping board proved quite capable, this project would be most helpful to the hearing-impaired community in a more wearable format, for example a heads-up display or sunglasses with a built-in microphone array.
Our project also involved a lot of separate components (as is common in the prototyping phase); we hope to merge all the processes into one local product for ease of use.

Built with:

We used the MATRIX Creator, an amazing HAT for the Raspberry Pi. This HAT allowed us to gather data from over 36 sensors, including 8 microphones, a barometer, a thermometer, and much, much more. We also used a Raspberry Pi 4 and a Cisco router for data transfer between the Pi, various machines, and the Android phone. For the software, we used Electron to build a seamless interface for displaying the microphone-based location data.
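To give a flavor of how the mic array is read, here is a minimal capture sketch assuming the MATRIX kernel modules expose the 8-microphone array as an ALSA device visible to the `sounddevice` library; the device name is a guess, so query your devices to find the real one:

```python
import sounddevice as sd

FS = 16000    # capture rate in Hz (assumed)
CHANNELS = 8  # one channel per microphone on the MATRIX Creator

# Hypothetical device name; run sd.query_devices() to list what is available.
frame = sd.rec(int(0.5 * FS), samplerate=FS, channels=CHANNELS,
               device="MATRIXIO SOUND", dtype="int16")
sd.wait()
print(frame.shape)  # (samples, 8): one column per microphone
```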

Prizes we're going for:

  • Best AR/VR Hack
  • Best Hardware Hack
  • Best Mobile App
  • Best Web App
  • Best Documentation
  • Best STEM Hack
  • Best AI/ML Hack

Prizes Won:

3rd Place
Best AR/VR Hack

Team Members:

Hannan Rhodes
Bill Ray
Keerthan Ekbote
Robert Tacescu

Table Number:

Table 62

View on GitHub