Sight-Impaired Mobility Assistance

Our goal is to create a hardware accessory to give the visually impaired a greater level of autonomy. From finding a glass of water to navigating a busy street, we hope to provide ample audio feedback to give the user a complete "image" of what's around them.

Problem Description
Our problem has two parts. The first is creating a software system that represents the real world and gives the user audio feedback. The second is applying this system in the real world: receiving input from a depth camera and allowing someone who is visually impaired to navigate an environment that would previously have been impossible for them.


 * 1) Use or design a 3D environment that represents the real world.
 * 2) Test systems of audio feedback to the user and determine which is most understandable.
 * 3) Package our software so that it is easily distributed.
 * 4) Port the software to work with our depth camera hardware and perform testing in the real world.

Design
This simulation requires a laptop and studio-quality headphones. The user will not be able to see the environment, and may in fact be blindfolded. However, the headphones they are wearing will provide acoustic feedback about their surroundings in the simulation. The user can move in four directions, using the arrow keys to move and the mouse to change direction, and will try to navigate out of a maze. Later in the project, we will replace the mouse with a head-tracking system. This will allow the user to change direction based on where their head is facing, much like virtual reality technology.

[Images: our Blender test environment, including the scene with the table]
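As a minimal sketch of this movement model (with hypothetical names, not code from our simulator), each tick projects the pressed arrow key onto the direction the user is currently facing:

    import math

    def step(x, y, yaw_deg, key, speed=0.1):
        """Advance one tick; `key` is 'up', 'down', 'left', or 'right'."""
        yaw = math.radians(yaw_deg)            # facing angle from mouse or head tracker
        fx, fy = math.cos(yaw), math.sin(yaw)  # forward unit vector
        rx, ry = fy, -fx                       # strafe (right-hand) unit vector
        moves = {"up": (fx, fy), "down": (-fx, -fy),
                 "left": (-rx, -ry), "right": (rx, ry)}
        dx, dy = moves.get(key, (0.0, 0.0))
        return x + dx * speed, y + dy * speed

Swapping the mouse for head tracking would then only change where `yaw_deg` comes from; the rest of the update stays the same.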

Concept Development
We are currently determining which tools to use for the front and back ends of the project. For the front-end 3D simulator, we have a simple navigable 3D world implemented in Blender 3D, and we are also considering the popular 3D game engine Unity. Our main criteria for the 3D engine are ease of use, the ability to take in depth data from the camera, and ease of distribution to end users.

For the back end of the project, we are evaluating ways to process the received depth data and relay it to the wearer as sound. A central goal of the project is to relay as much depth information as possible, so that users can locate important features of their surroundings and distinguish as much detail as possible, without being overwhelmed.
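One candidate mapping under consideration (a sketch only; the function and parameter names here are hypothetical) collapses each depth frame into a few horizontal sectors and turns the nearest obstacle in each sector into a stereo pan and a loudness:

    import numpy as np

    def depth_to_cues(depth, sectors=8, max_range=5.0):
        """Reduce a (rows, cols) depth frame in meters to (pan, gain) pairs."""
        cues = []
        for i, band in enumerate(np.array_split(depth, sectors, axis=1)):
            nearest = float(np.nanmin(band))            # closest obstacle in this sector
            pan = 2.0 * i / (sectors - 1) - 1.0         # -1 = far left, +1 = far right
            gain = max(0.0, 1.0 - nearest / max_range)  # nearer obstacles sound louder
            cues.append((pan, gain))
        return cues

How many sectors to use, and whether gain, pitch, or timbre should encode distance, is exactly what the audio-feedback testing in our goals is meant to decide.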

We are currently experimenting with OpenAL as our sound-generation environment.
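A first positional-audio experiment might look like the following, assuming the pyopenal Python bindings and a placeholder mono file beacon.wav (OpenAL only spatializes mono sources). The listener defaults to the origin facing down the -Z axis, so positioning the source places the cue relative to the wearer's head:

    import math
    import time
    from openal import oalOpen, oalQuit

    source = oalOpen("beacon.wav")   # placeholder mono cue sound
    angle = math.radians(30)         # obstacle 30 degrees to the right
    distance = 2.0                   # two meters away
    # OpenAL convention: +x is to the listener's right, -z is straight ahead.
    source.set_position((distance * math.sin(angle), 0.0,
                         -distance * math.cos(angle)))
    source.play()
    time.sleep(2.0)                  # let the cue play before tearing down
    oalQuit()

If the Python bindings prove limiting, the same positioning calls exist in the underlying C OpenAL API, so little of this experiment would need to change.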