Raspberry Pi Image Processing

Raspberry Pi
Sponsors: Biological & Chemical Engineering Department
Team Name: PiAi
Duration: Fall 2020 - Spring 2021
Faculty Adviser: Bruce Bolden
Mentor: Dev Shrestha
Client: Dev Shrestha
Team Members:
  • Tori Gehring
  • Jon Gift
  • Isabel Hinkle
  • Oshan Karki

One of the biggest expenses in agriculture is the application of pesticides in crop management. Weeds are a constant threat to farmers’ livelihoods, and if farmers aren’t vigilant for even a single season, weeds can and will take over. A single crop duster can spray up to 40,000 acres of land during the summer, and the cost of the fuel, pesticides, and time required to operate these machines can be astronomical.

Team Pi AI has spent the past year tackling this problem, and our goal is simple: replace these bulky, manually operated planes with a more efficient alternative. Our solution aims to address the flaws of modern pesticide management in a fast, affordable, and consistent manner. To do this, we replaced the cumbersome crop-dusting plane with a nimble, efficient drone, and the operator with a computer. Combining these two concepts, we can fly our drone over fields, identify and mark locations with weeds, and return home without using a single drop of non-renewable energy or a single second of an operator’s time. For larger fields, additional drones can be incorporated, and the ability to locate weeds with pin-point accuracy reduces the amount of pesticide sprayed on healthy crops, saving farmers money and reducing the amount of chemicals in both our foods and our fields.

Our solution works by combining modern machine-learning techniques, off-the-shelf drone controls, and a compact single-board computer that handles the identification of the weeds itself. To top it off, our solution costs less than $1200 to implement. For the same price as a crop duster, you could purchase hundreds of our drones, and when you add in the amount of money saved on fuel, your savings skyrocket.

Preliminary testing of our system shows that each unit has a flight time of about 27 minutes before it needs to be recharged, and the detection system achieves a precision of 89% and a recall of 41%, with an average frame rate of 12-13 frames per second.

Problem Definition

The advent of high processing power and better algorithms has allowed artificial intelligence techniques such as deep neural networks to do amazing things, such as recognizing faces and understanding human language in real time. However, many applications, like small unmanned aerial systems (UAS), have no access to high computing power for computationally heavy tasks such as image processing.

Because of payload limitations, the only option for these UAS is to carry a lightweight computer such as a Raspberry Pi (RPi). One application in agriculture is distinguishing crops from weeds. However, the Raspberry Pi does not have a graphics processing unit (GPU) powerful enough for this computationally heavy task. A small accelerator such as Intel’s Neural Compute Stick (NCS) can be used to run a pre-trained neural network for a niche application such as agriculture.

Our team’s task is to develop an integrated platform with the Raspberry Pi and Intel’s Neural Compute Stick to perform a small but well-known task. Specifically, the team will:

  1. Develop a benchmark to compare processing speed on the RPi-NCS combination against a standard processor, such as an Intel i7 with a standard graphics card (a rough timing harness is sketched after this list).
  2. Implement a pre-trained artificial neural network to solve an object recognition problem by integrating the Raspberry Pi and Intel’s NCS2.
  3. Use a standard image processing library such as OpenCV to stitch and process images in real time.
  4. Compare and document the processing time of the RPi-NCS and other GPUs doing the same task.
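
The benchmark in item 1 boils down to timing the same detector on each platform. The following is a minimal sketch of that idea, not our actual benchmark code; run_inference() is a hypothetical stand-in for whichever detector (RPi + NCS2, desktop CPU, or desktop GPU) is being measured.

 # Rough timing harness: measure average frames per second over a batch of frames.
 # run_inference() is a placeholder for the detector under test.
 import time

 def benchmark(run_inference, frames, warmup=5):
     for frame in frames[:warmup]:              # untimed warm-up runs
         run_inference(frame)
     start = time.perf_counter()
     for frame in frames[warmup:]:
         run_inference(frame)
     elapsed = time.perf_counter() - start
     return (len(frames) - warmup) / elapsed    # average FPS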

Background

Most modern farmers use small, kerosene-fueled airplanes to spray pesticides onto their fields. Our goal is instead to use a small computing system mounted on a drone to detect weeds and trigger a spraying mechanism, saving farmers time and money. If successful, modern farmers won’t need small airplanes that are dangerous, burn fossil fuels, take up space, and pollute the atmosphere. Our weed-spraying drone will be easier to store, more cost-effective, more accurate, and require only brief human interaction. This could improve both the quantity and quality of foods. Object detection allows us to target the exact locations that need spraying, reducing the amount of toxic chemicals going into the soil. Our project uses a Raspberry Pi to explore the benefits of using small computers for computationally heavy tasks under tight weight limits.

2020 PiAi Final Poster 2.png

Deliverables

  • Develop a benchmark for speed comparisons between the Raspberry Pi with the NCS2, and a standard computer with a modern processor and graphics card.
  • Implement a pre-trained artificial neural network with the Raspberry Pi.
  • Use a standard image processing library to process images in real-time.
  • Compare processing times between the computer and the Raspberry Pi.

Design Considerations

Our considerations can be summarized as follows:

  • Develop a sufficiently light-weight and low-power system.
  • Train a neural network to accurately identify different types of weeds, specifically dandelions, clovers, and grass.
  • Develop a program to interface with the Raspberry Pi remotely.
  • Ensure our model runs quickly enough for real-time computations.

Using the ArduCAM

Our initial design was based around the ArduCAM library, which is designed to work with a variety of systems. However, the library is not a native component of the Raspbian operating system; to get the ArduCAM working with the Raspberry Pi, separate drivers needed to be installed, and the installation process required uninstalling the original camera drivers that ship with Raspbian. As a result, we could not easily interchange these components, and the problem was compounded when the NCS2 documentation turned out not to be compatible with the ArduCAM. Another major issue with the ArduCAM library is that the drivers do not work with modern versions of Raspbian; to get our development environment working, we had to roll back our installation of Raspbian to an older version. This set our progress back significantly, and ultimately we found it was easiest to simply transfer working Raspbian installations between team members rather than try to get each install working individually.

This incompatibility with ArduCAM also led to problems with sample Python scripts. Some libraries were incompatible with the commands we used to get the ArduCAM working, and this, in part, influenced our decision to consider C++ as a possible development environment over Python. Additionally, since the ArduCAM had already been purchased, we did not have enough native Raspberry Pi cameras available to swap back to the original hardware for every single team member, and so only some of us could work with the cameras at any given time.

As a solution, we suggest sticking with hardware developed natively for the Raspberry Pi and maintaining identical working environments between team members. Checking version compatibility before starting development is the best way to reduce the amount of time spent debugging projects like this. It also ensures that if one team member runs into a bug, everyone else will encounter it too, and collaborative debugging is often less frustrating than individual debugging.

Using Python for the Model Driver Script

We initially favored using Python for this project. Not only is Python one of the most widely used programming languages today, but it is also very versatile. However, we were not successful in running any of the Open Model Zoo demos that used Python. There were several reasons, but ultimately we concluded that to succeed with Python, we would have had to write a driver script from scratch.

By the time we reached this conclusion, we had decided as a team that it was too late in the semester to write a custom script from scratch. There was not enough documentation on how to break down the Python scripts and how each component worked together with the NCS2 to run the model.

TensorFlow

TensorFlow was chosen as the platform to train our custom object detection model for several reasons. First, TensorFlow is an open-source platform with ample documentation and a variety of pretrained models to choose from, such as SSD MobileNet, SSD ResNet, Faster R-CNN, and Mask R-CNN. Each of these models strikes its own balance between speed and mean average precision, so TensorFlow would let us quickly pivot and retrain our custom model on another architecture if one wasn’t working. We investigated YOLO as an option for training our model but decided against it because it only supports the YOLO architecture and seemed to have higher setup overhead.

Another reason TensorFlow was chosen is its compatibility with NVIDIA GPUs. We needed a platform that was compatible with NVIDIA hardware and had documentation on how to use a GPU for training, because an NVIDIA GPU was our only GPU resource at the time. Finally, one of our team members had experience with TensorFlow, and it comes with a lot of extra features out of the box, such as TensorBoard for viewing training statistics, Python scripts to calculate and view model metrics, and Jupyter Notebook scripts to easily visualize and communicate results; YOLO does not offer comparable tooling. In the end we concluded that TensorFlow would be the lightest lift and the most flexible choice for this project.

High Quality Raspberry Pi Camera

For the camera component, we ultimately chose the Raspberry Pi High Quality Camera. The decision was based on two big factors: it works like the normal Pi camera, and, unlike the ArduCAM, it does not require extra drivers to be installed.

The High Quality Camera supports many of the same functions as the normal Pi camera. Most importantly, you can control it with the standard commands such as “raspistill” and “raspivid”. When using Python and the picamera library, the High Quality Camera is also detected by the built-in functions.
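
As a quick illustration of that point, a minimal picamera capture sketch along the following lines works identically with either camera; the resolution and output path below are just examples, not values from our project.

 # Minimal capture sketch using the picamera library; the High Quality Camera
 # behaves the same as the standard camera module here.
 from time import sleep
 from picamera import PiCamera

 camera = PiCamera()
 camera.resolution = (1280, 720)       # example resolution
 camera.start_preview()
 sleep(2)                              # let the sensor settle its exposure
 camera.capture('/home/pi/test.jpg')   # example output path
 camera.stop_preview()
 camera.close()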

Unlike the ArduCAM, the High Quality Camera does not require a specific kernel version to work. As stated in the previous section, the ArduCAM only operated on a small window of Raspberry Pi kernel versions. With the High Quality Camera, that need for an old kernel version disappears, and we can keep the system fully up to date.

Another benefit of the High Quality Camera is the option to use interchangeable lenses. It comes equipped with a 12.3 MP Sony IMX477 sensor and an adjustable back focus that supports C- and CS-mount lenses. Depending on the height requirement for the drone flight, there is a wide variety of lens choices to consider.

Model Driver Script/Language Choice

The model driver script in our final design is written in C++ and comes from OpenVINO’s Open Model Zoo demos. We faced several issues when deciding which language and demo script to use as the model driver; ultimately, the team settled on the C++ multi-channel face detection demo. We were successful in building this demo and running it with the optimized TensorFlow model and the Raspberry Pi High Quality Camera.

Project Learning

  • Utilize pre-trained models from the TensorFlow GitHub, which allowed us to get a prototype up and running more quickly.
  • Incorporating the NCS2 with the Raspberry Pi is well-documented and supported by the Raspberry Pi operating system.
Image gallery: Training Progress, Training Histograms, Training Distributions, Model Recognition, Demo Stills.

Quickstart Guide

Foreword

This guide will walk you through the entirety of the steps we took to make this project work. Over the course of this guide, you will learn to set up TensorFlow, install Ubuntu, install the Raspbian operating system on a Raspberry Pi, train a custom machine learning model, optimize the model for the NCS2, and finally run the model on your Raspberry Pi with the help of the NCS2.

One of the biggest issues with this project was the heavy dependence on third-party libraries. Between TensorFlow, OpenVINO, and the Raspbian operating system, we ran into numerous dependency issues caused by libraries being updated, halting development until we could find a version of the offending package that worked with the project. TensorFlow itself is the worst example of this: the newest version of the library is actually less well supported than the older versions. Our suggestion is to install the exact library versions we list, which should ensure the project works for you even though those versions are older. If an exact version cannot be found, the closer to the target version the better.

Initial Setup

Requirements

  • This section will cover hardware and software requirements and where to get them.
  • Raspberry Pi 4
    • Micro SD card for the operating system, the larger the better.
    • External battery or power source.
  • Intel Neural Compute Stick 2
  • Raspberry Pi Camera Module
  • Computer with Windows installed, for TensorFlow usage.
    • NVIDIA graphics card suggested for faster training; TensorFlow GPU is NOT compatible with AMD cards.
  • Computer with Linux installed for OpenVINO model optimizer usage.

Installing TensorFlow on Windows

  • This section will describe setting up our version of TensorFlow on Windows.
  • The first component necessary for this process to work is Anaconda. Anaconda will allow you to create separate Python development environments, and more importantly, install the specific versions of libraries that we need for our training purposes. It is not recommended to proceed without utilizing Anaconda.
  • Once Anaconda is installed, open the start menu and search for "Anaconda Prompt". Run this program as administrator.
  • In this prompt, run the command "conda create --name tfgpu". This will create a new Anaconda environment named tfgpu. The name doesn't matter, but for future training you'll need to remember it in order to enable your development environment.
  • Type "conda activate tfgpu" to open your development environment. The far left side of the prompt should now list (tfgpu) instead of (base).
  • Next, we'll install TensorFlow.
    • If training with your CPU, install a CPU-only TensorFlow 1.15 build instead (for example, "conda install tensorflow=1.15").
    • Otherwise, run the command "conda install -c conda-forge tensorflow-gpu=1.15".
    • Note: It's absolutely vital that you install version 1.15, or you risk issues with optimizing the model later on.
  • TensorFlow will install a variety of libraries; if any fail to install, you'll need to find the exact version that TensorFlow 1.15 requires.
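
To confirm the environment works, a quick sanity check like the following (run inside the activated tfgpu environment) should report version 1.15 and, if the GPU was picked up, True. This is only a verification sketch, not part of our training code.

 # Sanity check for the TensorFlow 1.15 install (run inside the tfgpu environment).
 import tensorflow as tf

 print(tf.__version__)                 # should print 1.15.x
 print(tf.test.is_gpu_available())     # True if TensorFlow can see the NVIDIA GPU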

Setting up the Linux Installation

Setting up the Raspberry Pi Installation

  • This section will teach you how to download and install Raspbian and how to enable the camera module on the Raspberry Pi.

Training a Custom Model

  • Gathering Training Data
    • Find 100+ images of each object class
    • Use labelImg to annotate each image with bounding boxes around the objects
    • Split the images and labels into a training set (80%) and a testing set (20%); a minimal split script is sketched after this list
  • Training a Model
    • Use a pretrained model from TensorFlow model zoo
    • Edit the .config file to point to the training and testing data as well as the .pbtxt file with class names
    • Run the training script (train.py) with the path to the config file
    • Train until the loss reaches an accepted (low) value
  • Freezing a Trained Model
    • Run export_inference_graph.py with the path to the training dir, config file, and the desired model.ckpt file
    • Frozen model will be in the form of a .pb file
    • Use the frozen model in object detection tutorial scripts to test the model performance on various images
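
The 80/20 split mentioned above can be scripted. Below is a minimal sketch of one way to do it, assuming each image has a labelImg .xml annotation sharing its base name; the folder paths are placeholders, not our actual layout.

 # Hypothetical 80/20 train/test split for images plus labelImg .xml annotations.
 import os
 import random
 import shutil

 SRC = 'dataset/all'        # placeholder: folder holding .jpg images and .xml labels
 TRAIN = 'dataset/train'
 TEST = 'dataset/test'

 images = [f for f in os.listdir(SRC) if f.lower().endswith('.jpg')]
 random.seed(42)            # fixed seed so the split is reproducible
 random.shuffle(images)
 cutoff = int(0.8 * len(images))

 for folder in (TRAIN, TEST):
     os.makedirs(folder, exist_ok=True)

 for i, name in enumerate(images):
     dest = TRAIN if i < cutoff else TEST
     base = os.path.splitext(name)[0]
     shutil.copy(os.path.join(SRC, name), dest)              # the image
     shutil.copy(os.path.join(SRC, base + '.xml'), dest)     # its annotation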

Setting up the Model Optimizer

Raspberry Pi and NCS2 Integration

  • Firstly, on the Raspberry Pi, create a directory where you'd like to store the sample programs and models.
  • Next, open a terminal window and change into that directory with "cd <directory>"
  • In this directory, run "git clone https://github.com/openvinotoolkit/open_model_zoo.git"
  • From here, we will build the models manually as the build script sometimes fails on the Pi. Type "cd open_model_zoo/demos"
  • Next, run "mkdir builds" and "cd builds"
  • From here, run "cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" .."
  • Finally, simply call "make" to build the demos. This process takes some time and will build every single C++ demo.
  • The demo we are most interested in is the multi-channel face detection demo, as it works best on the Pi for our model. To access this demo, type "cd armv7/Release"
  • You may want to transfer your .xml and .bin files to this folder to make running the script easier.
  • To run the demo, insert the NCS2 stick into your Pi, make sure you have a compatible camera attached to the Pi, and run "./multi_channel_face_detection_demo -m your.xml -i 0 -d MYRIAD"
    • NOTE: You may be inclined to run this script with "-i /dev/video0" as per OpenVINO's documentation, but this is incorrect and cannot read the camera correctly.
  • After running, a separate window should open showing the execution of your model. To exit, simply hit the escape key.
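
If you would rather sanity-check the optimized model from Python instead of the C++ demo, the OpenVINO build of OpenCV can drive the NCS2 through its DNN module. The sketch below is only an illustration under that assumption; the file names and the 300x300 input size are placeholders, and camera index 0 mirrors the "-i 0" note above.

 # Hypothetical Python check of the optimized model on the NCS2 via OpenCV's DNN
 # module (requires the OpenVINO-enabled OpenCV). File names are placeholders.
 import cv2

 net = cv2.dnn.readNet('frozen_inference_graph.xml', 'frozen_inference_graph.bin')
 net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
 net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)   # run inference on the NCS2

 cap = cv2.VideoCapture(0)    # camera index 0; the Pi camera may need its V4L2 driver loaded
 while True:
     ok, frame = cap.read()
     if not ok:
         break
     # 300x300 suits SSD MobileNet; match whatever input size your model expects.
     blob = cv2.dnn.blobFromImage(frame, size=(300, 300))
     net.setInput(blob)
     detections = net.forward()
     # SSD-style output rows: [image_id, class_id, confidence, x_min, y_min, x_max, y_max]
     h, w = frame.shape[:2]
     for det in detections[0, 0]:
         if det[2] > 0.5:
             x1, y1, x2, y2 = (det[3:7] * [w, h, w, h]).astype(int)
             cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
     cv2.imshow('detections', frame)
     if cv2.waitKey(1) == 27:                          # Esc to quit
         break
 cap.release()
 cv2.destroyAllWindows()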

Final Design

System Diagram
Pictured left is the complete system diagram for our final design. Our system functions as follows: first, the Pi is attached to the host drone. Next, the operator turns on the battery attached to the Pi. After waiting for the Pi to boot, the operator connects to it via remote access to run the detection script. While the script is running, its output can be viewed by the operator. As for the drone itself, its accelerometer and other onboard systems maintain stable flight while the operator controls where the drone goes, including its height. When operation is concluded, the drone can be landed and the Pi powered off safely.


202 PiAi Final Design.png

Our final design (pictured right) performs real-time object detection at an acceptable speed, is fairly durable, weighs less than two pounds excluding the drone, and costs about $1,123.27 per unit.

Validation

DFMEA
Requirement | Test | Test Subject | Target Date | Result | Recommendation
RPi communicates with NCS2 stick and neural network | Sync RPi and NCS2 with camera to evaluate functionality | Physical prototype | 11/18/20 | Success | Maintain consistent drivers
Detect any kind of object using RPi, NCS2, and camera | Use any object and machine learning model with RPi integration | Fully functional prototype | 12/01/20 | Success | Dependent on model quality and number of objects
Evaluate multiple training and testing images to ensure 90% accuracy in detection | Train various models to determine if further improvements are necessary | Physical prototype | 12/01/20 | Success | Replace test images with actual drone footage in the future
Correctly identify various weeds | Integrate fully trained model with 90% accuracy and RPi platform to ensure benchmark is met | Fully functional prototype | 01/30/21 | Partial success | Depends on the weed and the quality of the image; the optimized model is also less accurate in general
RPi assembly is correct weight for drone | Create a platform to attach the assembly to the drone and verify correct operation of the drone with it attached | Physical prototype | 02/22/21 | Success | RPi was attached to the drone with tape; no issues with weight or mobility
Durable in certain weather conditions | Physically test outdoor conditions with the assembly attached to the drone to confirm functionality | Physical prototype | 02/22/21 | N/A | Was not tested in inclement weather
90% accuracy/benchmark | Run object detection model on various trained images and determine further model development | Fully functional prototype | 03/01/21 | Success | Model was tested on a local machine for benchmarks as well as the RPi
Evaluate speed of image recognition by RPi and RPi + NCS2 during flight | Compare speeds of RPi and RPi with NCS2; compare power consumption for how long the RPi can operate on a battery | Fully functional prototype | 03/15/21 | Success | RPi alone: ~3 fps; RPi with NCS2: ~13 fps; RPi can operate for about 90 minutes
Check for overheating | Run object detection model for 30 minutes | Fully functional prototype | 04/05/21 | Partial success | Overheating occurs with NCS2 if the model runs too long
Bounding box is within +/- 4 inches of target | Run object detection model on physical test subject | Fully functional prototype | 04/15/21 | Partial success | When a box is drawn this holds, but the script itself has issues

Team Members

2020 PiAi VictoriaGehring.jpg

Victoria Gehring

Computer Science Student

Hometown: Meridian, ID

Hobbies/Interests: My professional interests include object detection, machine learning, and cyber security. In my free time I enjoy volleyball, camping, and PC building.

Plan for the Future: My plan is to further my understanding of machine learning and object detection to optimize and automate processes.

Email: gehr1898@vandals.uidaho.edu

2020 PiAi JonGift.jpg

Jon Gift

Computer Science Student

Hometown: Bend, OR

Hobbies/Interests: I enjoy working with Python and C#, and I have a significant amount of experience with both Raspberry Pis and the Unity engine. I also like to dance, rock climb, and play guitar.

Plan for the Future: My goal is to work at Intel in Portland and eventually teach computer science.

Email: gift7380@vandals.uidaho.edu

2020 PiAi IsabelHinkle.jpg

Isabel Hinkle

Computer Science Student

Hometown: Coeur d’Alene, Idaho

Hobbies/Interests: My professional interests include all things cybersecurity. I am most interested in the topic of Digital Forensics and filesystem analysis. In my free time I like to create art, listen to music, and hang out with my friends.

Plan for the Future: My goal is to work for a Federal Executive agency pursuing a career in a cybersecurity-related field.

Email: hink0402@vandals.uidaho.edu

2020 PiAi OshanKarki.jpg

Oshan Karki

Computer Science Student

Hometown: Kathmandu, Nepal

Hobbies/Interests: I like everything related to AI and machine learning. In my leisure time, I also like to play soccer and go fishing.

Plan for the Future: My goal is to work as a Machine Learning Engineer.

Email: kark6037@vandals.uidaho.edu

Additional Documentation

  • Project Schedule: Gantt Chart
  • Meeting Minutes: Meeting Minutes
  • Presentations: Snapshot
  • Meeting Agendas: Agendas
  • Github: Pi-Ai Github