Sight Impaired Mobility Assistance
Sight Impaired Mobility Assistance | Simulation Environment |
---|---|
Team Name | SONAR |
Sponsor | Daniel Schneider |
Faculty Advisor | |
Mentor | |
Duration | Fall 2016 - Spring 2017 |
Our goal is to create software that represents a virtual environment to the user through audio feedback. The result should be that the user has a complete “image” of their surroundings based on what they hear, allowing them to navigate rooms, pick up objects, and complete goals entirely through listening.
Problem Description
The current methods available to assist the visually impaired in navigating their environment are inadequate. We seek to address this issue by translating visual data into audio data, allowing users to comprehend the world around them through sound. If we can prove that this works in a simulation, then future groups can demonstrate that the process works in the real world. The steps to achieve this simulation are the following:
- Design a 3D environment that represents the real world.
- Test several systems of audio feedback and determine which is most understandable to users.
- Package our software so that it is easily distributed.
Project Details
Hardware
Headphones | Depth Camera | Head Tracking System |
---|---|---|
Audio-Technica ATH-M50x | Intel RealSense | Wii Remote |
Software
Game Engine | 3D Sound Engine | Depth Data Interpreter |
---|---|---|
Unity 3D Engine | OpenAL Sound Library | OpenCV Library |
Design
The design for our simulation includes a front end and a back end. The front end manages the virtual world, takes user input, and creates packets of spatial data. The back end consumes those packets and generates audio feedback, which is relayed to the user. Splitting the simulator in this manner allows the back end to be reused, with minimal alteration, with real data from the physical world in a future iteration of the project.
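As a rough illustration of this hand-off, a spatial data packet might look something like the sketch below. The field names, grid resolution, and types are placeholders for illustration, not our final shared memory format.

```cpp
// A placeholder layout for one spatial data packet; field names, grid
// resolution, and types are illustrative, not the final shared memory format.
#include <cstdint>

constexpr int kGridWidth = 16;   // coarse horizontal resolution of the view
constexpr int kGridHeight = 12;  // coarse vertical resolution of the view

struct SpatialPacket {
    uint32_t frame_id;                     // increments once per rendered frame
    float head_yaw;                        // head orientation from tracking (radians)
    float head_pitch;
    float depth[kGridHeight][kGridWidth];  // distance to nearest surface per cell (meters)
};
```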
Front End
The front end of the simulator is responsible for the virtual environment and for collecting sensor data. We are using the Unity engine, as it provides a simple and robust tool for managing a 3D virtual world. Unity's built-in depth mapping is used to get depth data from the visible scene, which is then placed in shared memory for use by the back end. The front end also includes a head tracking system, which allows the user to look around the environment by moving their head. Our current head tracking system uses a Wii Remote mounted on a hat that the user wears. We chose this approach because it is cheap, and many users who download our software may already own a Wii Remote.
Back End
The back end of the simulator is responsible for generating aural feedback from the depth data captured in the front end. It is built in C++, using the OpenAL library for sound generation and the OpenCV library for processing the depth data.
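The sketch below shows the general shape of this processing stage, assuming a 640x480 depth frame and a 16x12 output grid (both placeholder values, not our final parameters):

```cpp
// A sketch of the depth-processing stage: condense a full-resolution depth
// frame into a coarse grid, where each cell can drive one audio cue. The
// frame size, grid size, and depth values are illustrative assumptions.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Stand-in for a 640x480 depth frame (meters) read from shared memory.
    cv::Mat depth(480, 640, CV_32F, cv::Scalar(4.0f));
    depth(cv::Rect(300, 200, 60, 60)) = 1.2f;  // a nearby object near the center

    // Average-downsample to a 16x12 grid with INTER_AREA so each coarse
    // cell reflects the whole region it covers.
    cv::Mat grid;
    cv::resize(depth, grid, cv::Size(16, 12), 0, 0, cv::INTER_AREA);

    // The nearest cell is the most urgent obstacle to report to the user.
    double minDepth, maxDepth;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(grid, &minDepth, &maxDepth, &minLoc, &maxLoc);
    std::cout << "Nearest obstacle: " << minDepth << " m at grid cell ("
              << minLoc.x << ", " << minLoc.y << ")\n";
    return 0;
}
```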
Virtual Environment
For our simulated environment, we plan on having several different scenes for testing. Currently, we have a map that randomly generates rooms, hallways, and walls of varying heights, offering a unique experience every time. We also plan on developing a simple room where the user stays stationary while we move objects of differing shapes and sizes around them, then ask the user to gauge each object's distance and direction. Lastly, we have programmed some complex objects in the environment. For example, the user can pick up a green bottle from a table and set it down elsewhere, and objects such as cars can locate and move toward the player. We plan on testing our system by seeing whether a user can avoid obstacles or find and move objects.
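The real map generation runs inside Unity, but the core idea can be sketched as a small standalone program that carves random rooms out of a solid grid (grid size, room count, and room dimensions here are arbitrary):

```cpp
// A toy, self-contained version of the idea behind the random map: carve
// rectangular rooms out of a solid grid of walls. The real generation runs
// in Unity; all sizes and counts here are arbitrary.
#include <iostream>
#include <random>
#include <string>
#include <vector>

int main() {
    const int W = 40, H = 20;
    std::vector<std::string> map(H, std::string(W, '#'));  // '#' = wall

    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> rw(4, 8), rh(3, 5);
    std::uniform_int_distribution<int> rx(1, W - 10), ry(1, H - 7);

    // Carve a handful of (possibly overlapping) rooms out of the walls.
    for (int i = 0; i < 5; ++i) {
        int w = rw(rng), h = rh(rng), x = rx(rng), y = ry(rng);
        for (int r = y; r < y + h; ++r)
            for (int c = x; c < x + w; ++c)
                map[r][c] = '.';                           // '.' = open floor
    }
    for (const std::string& row : map)
        std::cout << row << '\n';
    return 0;
}
```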
Audio Feedback Model
One of our biggest challenges is deciding how to represent the virtual environment through sound. We plan on developing several different models of audio feedback and, during testing, determining which one works best for users. Our main model is shown to the right: it represents height with pitch, depth with sound pulses, the angular size of a viewed object with volume, and the actual size of that object with timbre.
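A sketch of how such a mapping might look in code follows; all ranges and constants are placeholders rather than the values used in our simulator, and the timbre mapping is omitted for brevity.

```cpp
// A sketch of the mapping from one observed object to audio parameters:
// height -> pitch, depth -> pulse rate, angular size -> volume. All ranges
// and constants are placeholder assumptions; the timbre mapping for actual
// object size is omitted.
#include <algorithm>

struct AudioCue {
    float frequencyHz;   // pitch encodes the object's height
    float pulsesPerSec;  // pulse rate encodes depth (closer = faster)
    float gain;          // volume encodes angular size in the view
};

AudioCue mapObjectToCue(float heightMeters, float depthMeters, float angularSizeRadians) {
    AudioCue cue;
    // Higher objects get a higher pitch, spanning roughly 200-1200 Hz.
    cue.frequencyHz = 200.0f + 500.0f * std::clamp(heightMeters, 0.0f, 2.0f);
    // Nearer objects pulse faster: ~1 pulse/s at 10 m, up to 10 pulses/s close in.
    cue.pulsesPerSec = 10.0f / std::max(depthMeters, 1.0f);
    // Objects filling more of the view are louder, capped at full gain.
    cue.gain = std::min(angularSizeRadians, 1.0f);
    return cue;
}
```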
Alternative Designs
We considered several different designs before settling on our current one. They are listed below.
Front End
We considered several different game engines for the front end. The Unreal Engine was a candidate, as it is a powerful game engine with a built-in 3D sound engine. However, we found that its sound engine was not capable enough for our needs, and the engine itself was so demanding that not all of our team members could run it. We also considered Blender because it was easy to use, but we wanted something with more tools included. So we settled on the Unity engine.
A major issue with our front end was how to grab environment depth data in the Unity engine. First, we tried ray casting, but it was very slow. Next, we tried a complex method of reading moving textures on the map. Finally, we settled on Unity's built-in depth mapping, which was much faster and easier than the previous two methods.
Back End
The back end needed a powerful 3D sound engine in order for us to give accurate audio feedback. First, we tried the 3D sound engine included in Unity, but it lacked certain tools we needed. We are now using OpenAL.
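The minimal example below illustrates what OpenAL gives us: a mono tone placed at a 3D position is spatialized for stereo headphones automatically. Setup is simplified (no error checking), and the tone and position are arbitrary examples, not values from our simulator.

```cpp
// A minimal, standalone OpenAL example: one mono tone placed at a 3D
// position relative to the listener. Error checking is omitted and the
// tone parameters are arbitrary; this is a sketch, not the actual back end.
#include <AL/al.h>
#include <AL/alc.h>
#include <chrono>
#include <cmath>
#include <cstdint>
#include <thread>
#include <vector>

int main() {
    ALCdevice* device = alcOpenDevice(nullptr);  // default output device
    ALCcontext* context = alcCreateContext(device, nullptr);
    alcMakeContextCurrent(context);

    // Synthesize one second of a 440 Hz sine wave as 16-bit mono PCM.
    const int rate = 44100;
    const double pi = 3.14159265358979;
    std::vector<int16_t> pcm(rate);
    for (int i = 0; i < rate; ++i)
        pcm[i] = static_cast<int16_t>(32000 * std::sin(2 * pi * 440.0 * i / rate));

    ALuint buffer = 0, source = 0;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_MONO16, pcm.data(),
                 static_cast<ALsizei>(pcm.size() * sizeof(int16_t)), rate);
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, static_cast<ALint>(buffer));

    // Place the source ahead and to the right of the listener; OpenAL
    // spatializes the mono tone for stereo headphones automatically.
    alSource3f(source, AL_POSITION, 1.0f, 0.0f, -2.0f);
    alSourcePlay(source);
    std::this_thread::sleep_for(std::chrono::seconds(1));

    alDeleteSources(1, &source);
    alDeleteBuffers(1, &buffer);
    alcMakeContextCurrent(nullptr);
    alcDestroyContext(context);
    alcCloseDevice(device);
    return 0;
}
```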
Audio Feedback Models
Our first step in creating audio feedback models was research. We found a study similar to ours at the University of St Andrews, in which height was represented with pitch and depth with volume. Their model is shown to the right (original St Andrews study).
We then consulted a professor who specializes in audio perception. We learned from him that the St Andrews model is not optimal, because the human ear has a hard time judging pitch or volume in isolation: to interpret a pitch, one must compare it to another pitch, and the same is true of volume. So we designed a sound pulsing model, in which a pulse of constant volume and pitch is followed by a differing pulse that represents the environment. This way, the user can relate the two and hear their environment more clearly.
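A sketch of this pulsing model in code, with all constants chosen for illustration rather than taken from our simulator:

```cpp
// A sketch of the pulsing model: a constant reference pulse, a short gap,
// then an "environment" pulse whose pitch and volume encode the scene, so
// the listener judges both relative to the reference. All constants are
// illustrative.
#include <cmath>
#include <cstdint>
#include <vector>

// Append `ms` milliseconds of a sine tone at `freq` Hz and amplitude `gain` (0-1).
static void appendTone(std::vector<int16_t>& pcm, double freq, double gain,
                       int ms, int rate = 44100) {
    const double pi = 3.14159265358979;
    for (int i = 0; i < rate * ms / 1000; ++i)
        pcm.push_back(static_cast<int16_t>(
            gain * 32000 * std::sin(2 * pi * freq * i / rate)));
}

// Encode one observation as a reference pulse followed by an environment pulse.
std::vector<int16_t> encodePulsePair(double envFreq, double envGain) {
    std::vector<int16_t> pcm;
    appendTone(pcm, 440.0, 0.5, 100);        // reference: fixed 440 Hz, fixed volume
    appendTone(pcm, 0.0, 0.0, 50);           // brief silence between the pulses
    appendTone(pcm, envFreq, envGain, 100);  // environment pulse to compare against
    return pcm;
}
```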
Team SONAR
Bio | Discipline |
---|---|
Matt Daniel: Matt is from Nampa, Idaho. He is a member of FarmHouse Fraternity and the Student Alumni Relations Board on campus. | Computer Science |
Mason Fabel: Mason is a senior in computer science at the University of Idaho. Growing up in Portland, Oregon, he was introduced to computers through LEGO Mindstorms and later FIRST Robotics. Outside of technology, Mason enjoys playing guitar, reading about history, taking long walks, and a variety of board and card games. | Computer Science |
Eric Marsh: Eric is from Boise, Idaho. He started programming his freshman year at the University of Idaho and has since made several video games and projects. He likes to make stir-fries, read music reviews, and play Team Fortress 2. | Computer Science |
Colin Pate: Colin is an electrical engineering major from Kirkland, WA. He enjoys working on engineering projects and going to Costco in his free time. | Electrical Engineering |
John Snevily: John is a senior in computer science at the University of Idaho. He enjoys rugby, Unity projects, and anything made with Python/Pygame. | Computer Science |
Document Archive
All meeting minutes, client discussion notes, and meeting agendas can be found on our team Google Drive.