Sight Impaired Mobility Assistance

Our goal is to create a hardware accessory that gives the visually impaired a greater level of autonomy. From finding a glass of water to navigating a busy street, we hope to provide rich audio feedback that paints a complete "image" of the user's surroundings.

Problem Description
Our problem has two parts. Part one involves creating a virtual environment implemented in software; this environment represents the real world and gives the user audio feedback. Part two involves applying what we learn to a real-world environment: receiving input from a depth camera and allowing someone who is visually impaired to navigate spaces that would previously have been impossible for them to traverse. Specifically, we aim to:

 * 1) Use or design a 3D environment that represents the real world.
 * 2) Test systems of audio feedback and determine which is most understandable to the user.
 * 3) Package our software so that it is easily distributed.
 * 4) Port the software to work with our depth camera hardware and perform testing in the real world.

Concept Development
We are currently determining what tools to use for the front and back ends of the project. For the front-end 3D simulator, we have a simple navigable 3D world implemented in the popular 3D game engine Unity. Our main criteria for the 3D engine are ease of use, the ability to procure depth data from the camera, and ease of distribution to end users.

For the back end of the project, we are evaluating ways to process the received depth data and relay it to the wearer as sound. One of our main goals is to determine how to relay the maximum amount of depth data while letting users locate important features of their surroundings and distinguish as much detail as possible without being overwhelmed.

We are currently experimenting with OpenAL as our sound generation library.
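
As a starting point, the sketch below shows the kind of minimal OpenAL experiment we mean: it synthesizes one second of a 440 Hz sine tone and plays it. The setup calls are standard OpenAL; the tone frequency, gain, and duration are arbitrary choices for illustration.

```cpp
// Minimal OpenAL experiment: synthesize one second of a 440 Hz sine
// tone and play it. The tone parameters are arbitrary illustration.
#include <AL/al.h>
#include <AL/alc.h>
#include <chrono>
#include <cmath>
#include <cstdint>
#include <thread>
#include <vector>

int main() {
    ALCdevice* device = alcOpenDevice(nullptr);        // default output device
    ALCcontext* context = alcCreateContext(device, nullptr);
    alcMakeContextCurrent(context);

    // One second of 16-bit mono samples at 44.1 kHz.
    const int rate = 44100;
    const double two_pi = 6.283185307179586;
    std::vector<int16_t> samples(rate);
    for (int i = 0; i < rate; ++i)
        samples[i] = static_cast<int16_t>(32000.0 * std::sin(two_pi * 440.0 * i / rate));

    ALuint buffer = 0, source = 0;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_MONO16, samples.data(),
                 static_cast<ALsizei>(samples.size() * sizeof(int16_t)), rate);
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, static_cast<ALint>(buffer));
    alSourcef(source, AL_PITCH, 1.0f);  // the knob a height cue would drive
    alSourcef(source, AL_GAIN, 0.8f);   // the knob an angular-size cue would drive
    alSourcePlay(source);
    std::this_thread::sleep_for(std::chrono::seconds(1));

    alDeleteSources(1, &source);
    alDeleteBuffers(1, &buffer);
    alcMakeContextCurrent(nullptr);
    alcDestroyContext(context);
    alcCloseDevice(device);
}
```

The AL_PITCH and AL_GAIN parameters on the source are what our feedback schemes would eventually drive from depth data.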

Audio Feedback
One of our biggest challenges is deciding how to represent the virtual environment through sound, and we have gone through multiple iterations. An early scheme was simple: pitch indicated height and volume indicated depth. However, we learned that the human ear has a hard time gauging the absolute level of a lone pitch or volume; it is much better to present pitch and volume relative to a constant reference, so that the user can easily hear the differences. We are also hopeful that we can represent both the angular and actual sizes of faraway objects. Our current design uses pitch for height, sound pulses for depth, volume for angular size, and timbre for actual size; a sketch of this mapping follows. For testing, we plan to build more than one way of representing the environment.
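
To make the mapping concrete, here is a minimal sketch of one such scheme. Every name, constant, and scale factor below is an illustrative assumption rather than our settled design; the point is the shape of the translation from spatial readings to sound parameters relative to a reference tone.

```cpp
// Illustrative sketch: map an object's spatial properties to sound
// parameters, relative to a constant reference tone. All names and
// constants here are assumptions for demonstration, not a final design.
#include <algorithm>
#include <cmath>

struct ObjectReading {
    float height_m;         // height relative to the user's ears, in meters
    float distance_m;       // straight-line distance to the object
    float angular_size_rad; // how large the object appears from the user
    float physical_size_m;  // estimated real-world size
};

struct SoundParams {
    float pitch;          // multiplier on the reference tone (OpenAL AL_PITCH)
    float pulse_interval; // seconds between pulses; closer objects pulse faster
    float gain;           // loudness relative to the reference (OpenAL AL_GAIN)
    int   waveform;       // timbre choice, e.g. 0 = sine, 1 = square, 2 = saw
};

SoundParams MapToSound(const ObjectReading& obj) {
    SoundParams s;
    // Pitch encodes height: one octave per meter, centered on the reference.
    s.pitch = std::pow(2.0f, obj.height_m);
    // Pulse rate encodes depth: near objects pulse quickly, far ones slowly.
    s.pulse_interval = std::clamp(obj.distance_m * 0.1f, 0.05f, 2.0f);
    // Gain encodes angular size, normalized to a ~90-degree field of view.
    s.gain = std::clamp(obj.angular_size_rad / 1.57f, 0.05f, 1.0f);
    // Timbre encodes actual size: small, medium, and large buckets.
    s.waveform = obj.physical_size_m < 0.5f ? 0 : obj.physical_size_m < 2.0f ? 1 : 2;
    return s;
}
```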

Environment
For our simulated environment, we plan to have several different scenes for testing. Currently, we have a map that randomly generates rooms, hallways, and walls of varying heights, offering a unique experience every time. We also plan to develop a simple scene where the user stays stationary while we move objects of differing shapes and sizes around them, then ask the user to gauge each object's distance and direction. Lastly, we have programmed some complex objects into the environment: for example, the user can pick up a green bottle from a table and set it down elsewhere, and objects such as cars can locate and move toward the player. We plan to see whether users can avoid obstacles or find and move objects as a way of testing our system.
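
The actual generator is a Unity script, but the underlying idea is engine-agnostic. The sketch below, with invented grid sizes and room counts, shows the flavor of it: scatter rectangular rooms on a grid and give each room's walls a random height.

```cpp
// Engine-agnostic sketch of the random-layout idea: scatter rectangular
// rooms on a grid and give each room's walls a random height. The grid
// size, room count, and ranges are illustrative assumptions.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int W = 40, H = 30;                 // grid size, in cells
    std::vector<std::vector<float>> height(H, std::vector<float>(W, 0.0f));
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> rx(1, W - 9), ry(1, H - 7);
    std::uniform_int_distribution<int> rw(4, 8), rh(3, 6);
    std::uniform_real_distribution<float> wall(0.5f, 3.0f);

    for (int room = 0; room < 6; ++room) {    // place six random rooms
        int x = rx(rng), y = ry(rng), w = rw(rng), h = rh(rng);
        float z = wall(rng);                  // this room's wall height
        for (int i = x; i < x + w; ++i)       // top and bottom walls
            height[y][i] = height[y + h - 1][i] = z;
        for (int j = y; j < y + h; ++j)       // left and right walls
            height[j][x] = height[j][x + w - 1] = z;
    }
    // Print the map: '#' for wall cells, '.' for open floor.
    for (int j = 0; j < H; ++j) {
        for (int i = 0; i < W; ++i)
            std::putchar(height[j][i] > 0.0f ? '#' : '.');
        std::putchar('\n');
    }
}
```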

Design
In order to work out the best method for translating spatial data into auditory feedback, we are developing a simulator rather than an actual headset. This program allows the user to navigate a virtual environment with both visual and audio feedback. The simulator is split into two major sections: a front end, which manages the virtual world and creates packets of spatial data, and a back end, which consumes those packets and generates audio feedback. Splitting the simulator in this manner allows the back end to be reused, with minimal alteration, with real data from the physical world in a future iteration of the project.
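
We have not settled on a packet format yet, but a minimal sketch of the idea, assuming a fixed-size depth frame with a small header, might look like this:

```cpp
// Sketch of a possible spatial-data packet exchanged between the front
// and back ends. The exact format is still undecided; the fields and
// sizes below are assumptions for illustration.
#include <cstdint>

constexpr int kDepthWidth = 320;   // assumed downsampled resolution
constexpr int kDepthHeight = 240;

struct DepthPacket {
    uint32_t frame_id;        // incremented each frame; lets the back end
                              // detect dropped or repeated frames
    float    fov_horizontal;  // camera field of view, in radians
    float    fov_vertical;
    // Row-major depth map in meters; 0 means "no reading".
    float    depth[kDepthHeight][kDepthWidth];
};

static_assert(sizeof(DepthPacket) == sizeof(uint32_t) + 2 * sizeof(float)
              + kDepthWidth * kDepthHeight * sizeof(float),
              "packet must be tightly packed for shared memory");
```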

Front End
The front end of the simulator is responsible for the virtual environment and for collecting sensor data. We are using the Unity engine, as it provides a simple and robust tool for managing a 3D virtual world. Custom rendering shaders extract depth data from the visible scene, which is then placed in shared memory for the back end to consume.

Back End
The back end of the simulator is responsible for generating aural feedback from the depth data captured by the front end. It is built in C++, using the OpenAL library for sound generation and the OpenCV library for processing the depth data.
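
A concrete sketch of that input path follows. The shared-memory name, frame size, and 16x12 reduction are all assumptions for illustration (and the snippet is POSIX-only); the idea is to map the frame the front end published, wrap it in a cv::Mat without copying, and downsample it into a coarse grid whose cells can each drive one sound source.

```cpp
// Sketch of the back end's input path: map the shared-memory depth frame,
// wrap it in a cv::Mat, and downsample to a coarse grid of distances.
// The shared-memory name and sizes are assumptions (POSIX-only sketch).
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    const int width = 320, height = 240;          // assumed frame size
    const size_t bytes = width * height * sizeof(float);

    // Open the region the front end created (the name is hypothetical).
    int fd = shm_open("/sima_depth", O_RDONLY, 0);
    if (fd < 0) { std::perror("shm_open"); return 1; }
    void* mem = mmap(nullptr, bytes, PROT_READ, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { std::perror("mmap"); return 1; }

    // Wrap the raw floats without copying, then average-downsample to a
    // 16x12 grid; each cell becomes the distance for one sound source.
    cv::Mat depth(height, width, CV_32FC1, mem);
    cv::Mat coarse;
    cv::resize(depth, coarse, cv::Size(16, 12), 0, 0, cv::INTER_AREA);

    for (int r = 0; r < coarse.rows; ++r)
        for (int c = 0; c < coarse.cols; ++c)
            std::printf("cell (%d,%d): %.2f m\n", r, c, coarse.at<float>(r, c));

    munmap(mem, bytes);
    close(fd);
}
```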

Document Archive
All meeting minutes, client discussion notes, and meeting agendas can be found on our team Google Drive.