Sightless Navigation and Perception (S.N.A.P)

From Mindworks
Sightless Navigation and Perception
Main Menu created in Unity
  • Team Name: SONAR
  • Sponsor: Daniel Schneider
  • Faculty Advisor/Mentor: Bruce Bolden
  • Duration: Fall 2017 – Spring 2018

    Team SONAR’s goal is to invent a device for the visually impaired that creates a highly detailed acoustic picture of their surroundings, granting them the ability to navigate their environment efficiently.

    Problem Description

    SNAP leverages modern robotic vision systems to produce augmented echolocation for sightless perception of the surrounding environment. The system aims to provide the visually impaired with a means of perceiving their environment in real time, at a resolution never before accomplished.

    The success of SNAP relies heavily on our innate ability to locate sound sources in 3D space. This ability, called “sound localization”, is achieved through binaural hearing. Much like binocular vision, which grants us depth perception, binaural hearing lets us compare an incoming sound as it is heard by each ear to triangulate its origin.
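
    The dominant localization cue at low frequencies is the interaural time difference: the extra time a sound takes to reach the far ear. As a rough illustration (assuming an ear-to-ear distance of about 0.21 m and a speed of sound of 343 m/s):

        \Delta t = \frac{d \sin\theta}{c} = \frac{0.21\,\text{m} \times \sin 90^\circ}{343\,\text{m/s}} \approx 0.6\,\text{ms}

    A source directly to one side therefore arrives at the far ear roughly 0.6 ms late, a delay the auditory system resolves easily.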

    Hardware

    Headphones: Audio Technica ATH-M50x
    • Low cost
    • Studio quality
    • Frequency response: 15–28,000 Hz

    Depth Camera: Intel RealSense R200
    • Small, low-power, lightweight camera

    Software

    Game Engine: Unity 3D Engine
    • Free
    • Easy to learn
    • Includes depth-grabbing tools

    3D Sound Engine: OpenAL Sound Library
    • Free
    • Contains the tools we require
    • Uses C++, which we are familiar with

    Depth Data Interpreter: OpenCV Library
    • Free
    • Works well with OpenAL
    • Also uses C++

    Project Learning

    OpenAL

    Since our project deals heavily with manipulating audio, we needed a powerful audio library, which is why we are using OpenAL. OpenAL is a 3D audio library that lets us communicate to the user, through sound, how far away objects are.
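
    As a minimal sketch (our own illustration under assumed defaults, not the project's actual code), placing a spatialized tone with OpenAL looks roughly like this:

        #include <AL/al.h>
        #include <AL/alc.h>
        #include <cmath>
        #include <vector>

        int main() {
            // Open the default audio device and create a context.
            ALCdevice* device = alcOpenDevice(nullptr);
            ALCcontext* context = alcCreateContext(device, nullptr);
            alcMakeContextCurrent(context);

            // Fill a buffer with one second of a 440 Hz sine tone (16-bit mono).
            const int rate = 44100;
            const double kPi = 3.141592653589793;
            std::vector<ALshort> samples(rate);
            for (int i = 0; i < rate; ++i)
                samples[i] = static_cast<ALshort>(32760 * std::sin(2.0 * kPi * 440.0 * i / rate));

            ALuint buffer;
            alGenBuffers(1, &buffer);
            alBufferData(buffer, AL_FORMAT_MONO16, samples.data(),
                         static_cast<ALsizei>(samples.size() * sizeof(ALshort)), rate);

            // Listener at the origin; a looping source 1 m to the right, 2 m ahead.
            alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
            ALuint source;
            alGenSources(1, &source);
            alSourcei(source, AL_BUFFER, buffer);
            alSourcei(source, AL_LOOPING, AL_TRUE);
            alSource3f(source, AL_POSITION, 1.0f, 0.0f, -2.0f);  // -z is forward in OpenAL
            alSourcePlay(source);
            // ... keep the application alive while audio plays, then clean up ...
        }

    Updating a source's AL_POSITION every frame is what turns changing depth data into a moving acoustic picture.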

    OpenCV

    OpenCV is a real-time computer vision library. It converts the depth data we receive from Unity or from our Intel RealSense hardware into "Mat" matrix objects, which our code then maps onto the multiple audio sources driven by OpenAL.
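
    For instance, a raw depth frame can be wrapped in a cv::Mat and reduced to a coarse grid of distances, one value per eventual audio source. This is a hedged sketch; depthGrid and its parameters are our own illustrative names:

        #include <opencv2/core.hpp>
        #include <cstdint>

        // Wrap a raw 16-bit depth frame (e.g. from the R200 or a Unity render)
        // in a cv::Mat and reduce it to a coarse grid of mean depths.
        cv::Mat depthGrid(const uint16_t* frame, int width, int height,
                          int gridCols, int gridRows) {
            // No copy: the Mat header points at the caller's frame buffer.
            cv::Mat depth(height, width, CV_16UC1, const_cast<uint16_t*>(frame));

            cv::Mat grid(gridRows, gridCols, CV_64FC1);
            int cellW = width / gridCols, cellH = height / gridRows;
            for (int r = 0; r < gridRows; ++r)
                for (int c = 0; c < gridCols; ++c) {
                    cv::Mat cell = depth(cv::Rect(c * cellW, r * cellH, cellW, cellH));
                    grid.at<double>(r, c) = cv::mean(cell)[0];  // average depth per cell
                }
            return grid;
        }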

    Unity

    Unity lets us test our Visual Audio Engine without having to rely on our Intel RealSense hardware at all times. A first-person camera gathers frames and passes them to the Visual Audio Engine as depth images, so the engine works seamlessly between our Unity tests and the physical hardware.

    Map Creation

    One important component of our Unity test bed is our unique testing maps. Our goal is to complete the project with two different testing maps, each of which presents a separate challenge to the user. Creating these maps involved a learning curve: building 3D environments in Unity that incorporate dynamic models and a basic physics engine.

    Main Menu

    Along with our testing maps, we wanted the main menu and user interface to be built in Unity. This required a large amount of research into letting users switch between different scenes, load different visual audio configuration files, and store logging information from completed tests.

    Documentation/Learning Current Codebase

    The previous group's "backend" file has little documentation, and while looking through it we were often confused about how certain functions work. On top of learning the previous group's own functions, we are also learning how certain OpenAL and OpenCV functions work. We decided it would be best to take some time to go through the "backend" file and document what we think each function does. So far, we have a write-up of what some of the functions in the "backend" file do. We also met with Collin, a student who worked on the project last year; he walked us through how the "backend" file operates and answered some of our questions.

    Agile Tools

    Our client, Dan Schneider, plans on having future capstone groups continue to work on the project, and he eventually wants to create a company based on this software. For that reason we want the project to be set up the right way: future groups should be able to quickly and easily get up to speed and understand the intricacies of the system well enough to make meaningful contributions. To make this possible we will be using a few different tools that are available on GitHub.

    Agile Tools

    Project Components

    Project Components

    Visual Audio Engine

    • Allow OpenAL to be configurable.
    • Maximize source resolution.
    • Optimize shared memory block communication (see the sketch after this list).
    • Keep everything compatible with hardware.
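
    The shared memory block is how the Unity test bed hands frames to the VAE. The sketch below is a POSIX-flavored illustration under assumed names ("/snap_depth", SharedFrame, and the 640x480 layout are all hypothetical); a Windows build, which the DLL-based install suggests, would use named file mappings instead, but the polling idea is the same:

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstdint>
        #include <cstdio>

        // Hypothetical layout: the writer publishes a frame counter followed by
        // a 640x480 16-bit depth frame; the reader polls the counter.
        struct SharedFrame {
            volatile uint32_t frameId;     // incremented by the writer after each frame
            uint16_t depth[480 * 640];     // row-major depth samples
        };

        int main() {
            int fd = shm_open("/snap_depth", O_RDONLY, 0);   // assumed name
            if (fd < 0) { perror("shm_open"); return 1; }

            auto* shared = static_cast<SharedFrame*>(
                mmap(nullptr, sizeof(SharedFrame), PROT_READ, MAP_SHARED, fd, 0));

            uint32_t lastSeen = 0;
            for (;;) {
                if (shared->frameId != lastSeen) {   // a new frame has been published
                    lastSeen = shared->frameId;
                    // ... hand shared->depth to the depth-to-audio pipeline ...
                }
                usleep(1000);                        // poll at ~1 kHz
            }
        }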

    Configuration

    Create an intuitive configuration menu with multiple configurable visual audio components. Create a config file format that can be saved, loaded, and shared.

    Logging Component

    Keep information from each test session, such as the number of collisions, the time taken to complete a test, and the map used for testing. Create a logging menu with a filter system to quickly access past logging data.

    Menu System

    Handle switching between Unity scenes and submenus: file-selection dialogs for loading config files, a logging menu for viewing logging data, and a map settings menu.

    Character Controller/Headset Simulator

    A robust first-person controller that can simulate human navigation. Keep the headset view in sync with the hardware.

    Installation and Distribution


    Everything needed to run the simulation must be packaged together in one download. It must include an easy installation/build script that minimizes the need for user input and installs all required DLL files (essentially a single “Install” button); ideally we would have a GUI installation process. We must also find an easy way to host and distribute the installation file.

    Design Goals

    Visual Audio Engine Design

    Current State of the Code

    The previous group created what they called the "backend" in C++. It uses OpenCV to analyze a depth-map frame and OpenAL to convert that frame into 3D audio sound sources placed around the user; a sketch of this idea follows the images below.

    Depth Image
    Top View
    Side View
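
    To make the conversion concrete, here is a hedged sketch of our reading of the approach (not the previous group's actual code): each cell of a coarse depth grid, such as the one produced by the OpenCV example above, drives one OpenAL source, with direction taken from the cell's position in the image and distance from its mean depth (assumed here to be in metres):

        #include <AL/al.h>
        #include <opencv2/core.hpp>
        #include <cmath>
        #include <vector>

        // Hypothetical mapping: one pre-generated OpenAL source per grid cell.
        void updateSources(const cv::Mat& grid, const std::vector<ALuint>& sources) {
            const float kPi = 3.14159265f;
            const float hFov = 60.0f * kPi / 180.0f;   // assumed horizontal field of view
            const float vFov = 45.0f * kPi / 180.0f;   // assumed vertical field of view
            for (int r = 0; r < grid.rows; ++r) {
                for (int c = 0; c < grid.cols; ++c) {
                    float depth = static_cast<float>(grid.at<double>(r, c));

                    // Direction of this cell relative to straight ahead (radians).
                    float yaw   = ((c + 0.5f) / grid.cols - 0.5f) * hFov;
                    float pitch = (0.5f - (r + 0.5f) / grid.rows) * vFov;

                    // Place the source 'depth' metres along that direction.
                    float x = depth * std::sin(yaw);
                    float y = depth * std::sin(pitch);
                    float z = -depth * std::cos(yaw);   // -z is forward in OpenAL
                    alSource3f(sources[r * grid.cols + c], AL_POSITION, x, y, z);
                }
            }
        }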


    Areas of Improvement

    New Name

    We decided to rename the "backend" to the Visual Audio Engine (VAE) to avoid confusion with the common web development terms "frontend" and "backend".

    More Documentation

    Very little of the code is documented, and much of it is nontrivial and hard to figure out. This creates a very steep learning curve for new developers who attempt to improve upon it.

    Modular Class Structure

    One of our core requirements for this project is the ability to easily modify every aspect of the visual audio algorithms we use, so that we can figure out the best possible way to translate visual information into audio information. For this to be possible, the VAE's functionality must be modular. To accomplish this we plan on breaking the algorithms and functionality out of the "main" function and into a more modular class hierarchy that gives us the level of customization our client requires.

    BackEndUML.jpg

    Some key aspects: the InputModule allows either a camera or a shared memory space to supply frame data; the OpenALModule abstracts away all of the complicated OpenAL boilerplate, allowing multiple sound algorithms to be generated quickly; the ConfigModule uses a C++ JSON library to read configs in JSON format; and the SoundAlgorithm is the basic building block for an algorithm, which we inherit from to create all the different sound algorithms we need. A rough sketch of this hierarchy follows.
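
    The sketch below shows the planned shape of these classes, not final code; the method names are illustrative:

        #include <opencv2/core.hpp>
        #include <memory>
        #include <utility>

        class InputModule {                   // supplies frames from a camera or shared memory
        public:
            virtual ~InputModule() = default;
            virtual cv::Mat nextFrame() = 0;  // one depth frame per call
        };

        class SoundAlgorithm {                // base building block for all sound algorithms
        public:
            virtual ~SoundAlgorithm() = default;
            virtual void processFrame(const cv::Mat& depth) = 0;  // update OpenAL sources
        };

        class GridAlgorithm : public SoundAlgorithm {   // e.g. the grid mapping shown earlier
        public:
            void processFrame(const cv::Mat& depth) override { /* ... */ }
        };

        // The engine wires the modules together without depending on concrete
        // types, which is what lets configurations swap algorithms freely.
        class VisualAudioEngine {
        public:
            VisualAudioEngine(std::unique_ptr<InputModule> in,
                              std::unique_ptr<SoundAlgorithm> alg)
                : input_(std::move(in)), algorithm_(std::move(alg)) {}
            void tick() { algorithm_->processFrame(input_->nextFrame()); }
        private:
            std::unique_ptr<InputModule> input_;
            std::unique_ptr<SoundAlgorithm> algorithm_;
        };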

    Unity

    Maps

    We will be creating two distinct testing maps, each of which provides a different kind of obstacle to replicate a different type of real-world environment. Our first map is designated the "Random Hallway Map", because the main focus for the user is navigating through a hallway of random objects. Our second map will incorporate moving objects, letting users see how well a specific audio configuration handles dynamic obstacles, such as people walking.

    Configuration Menu


    Configurations

    The configuration menu allows users to edit the test bed's visual audio configurations. Users can save their configurations to their computer for future use. The configuration files are saved in JSON format, which allows for easy readability and distribution; a sketch of such a file appears below.

    Configuration Menu
    More Configurations
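
    As an illustration only (the schema is ours to define, and these field names are hypothetical), a config file might look like the embedded string below. The ConfigModule described earlier reads such files with a C++ JSON library, assumed here to be nlohmann/json:

        #include <nlohmann/json.hpp>
        #include <iostream>

        int main() {
            // Hypothetical config: grid resolution and per-source audio settings.
            const char* text = R"({
                "algorithm": "grid",
                "gridRows": 4,
                "gridCols": 8,
                "maxDistanceMeters": 5.0,
                "toneHz": 440
            })";

            nlohmann::json cfg = nlohmann::json::parse(text);
            std::cout << "rows: " << cfg["gridRows"].get<int>() << "\n";
            std::cout << "max distance: " << cfg["maxDistanceMeters"].get<double>() << " m\n";
        }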


    Logs

    The logs menu allows users to view their past test results. A filter on the right side lets users narrow the logs by map type.

    Logs Menu

    Map Settings

    Map settings allow users to adjust map parameters, such as how many random obstacles will spawn and the sizes of those obstacles.

    Map Settings Menu

    Project Timeline

    Timeline Semester 1
    Timeline Semester 2

    Team SONAR

    Dustin Fox (Computer Science)
    Dustin hails from the beautiful city of Emmett, Idaho. He is currently a member of the Phi Gamma Delta Fraternity on campus. In his free time, Dustin enjoys skiing and playing guitar. He has a passion for technology and artificial intelligence.

    Dylan Carlson (Computer Science)
    Dylan was born in the temperate city of Plano, Texas. He later moved to attend high school, and then college at the University of Idaho. Dylan is currently a member of the Phi Delta Theta Fraternity on campus.

    Andrew Rose (Computer Science)
    Andrew grew up in Nampa, Idaho. He came to the University of Idaho in 2014 and is majoring in Computer Science. In his free time he enjoys listening to music and laughing with friends.

    Document Archive

    Presentations

    Expo

    GitHub Repository