Build a TensorFlow Lite Object Detection Model on the Raspberry Pi
Introduction
Ecologists and scientists use computers and cameras to capture data about flora and fauna activity. It is important to understand the animal and plant demographics in order to preserve nature. The captured data, with the help of GIS, is used to plan human and industrial activities around the forest to protect and preserve biodiversity.
We can achieve a similar task with our own implementation of object detection on a Raspberry Pi.
The items you may need:
- Raspberry Pi (I am using 3B+, but you can use 4 or 5)
- Camera module
- Battery (optional)
- Case (optional)
- SD card
Setting up Raspberry Pi
Setting up TensorFlow Lite is much easier than setting up the full TensorFlow package.
First, you need to install an operating system onto your SD card. I recommend Raspberry Pi OS (the Lite version works if you want a headless setup); the official Raspberry Pi website has instructions for writing the image to the card.
Step 1: Update your Raspberry Pi OS
sudo apt-get update
sudo apt-get dist-upgrade
Also enable the camera interface, which you can do through sudo raspi-config.
Reboot your Raspberry Pi after making these changes.
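To confirm the camera is working before going further, you can try capturing a still image (a quick sanity check; raspistill is the legacy camera tool, while newer Raspberry Pi OS releases use libcamera-still instead):
raspistill -o test.jpg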
Step 2: Download the git repository
git clone https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git
Step 3: Rename the TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi directory to tflite1, since the full name is too long to work with.
mv TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi tflite1
cd tflite1
We’ll work in this /home/pi/tflite1 directory for the rest of the guide. Next up is to create a virtual environment called “tflite1-env”.
We are using a virtual environment for this guide because it keeps TensorFlow Lite and its package dependencies separate from any library versions already installed on your Pi, which prevents version conflicts.
Install virtualenv by issuing:
sudo pip3 install virtualenv
Then, create the “tflite1-env” virtual environment by issuing:
python3 -m venv tflite1-env
This will create a folder called tflite1-env inside the tflite1 directory. The tflite1-env folder will hold all the package libraries for this environment. Next, activate the environment by issuing:
source tflite1-env/bin/activate
You'll need to issue the source tflite1-env/bin/activate command from inside the /home/pi/tflite1 directory to reactivate the environment every time you open a new terminal window. You can tell the environment is active when (tflite1-env) appears before the path in your command prompt.
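As another quick sanity check (assuming the default /home/pi/tflite1 location), you can confirm that the environment's Python is the one being used:
which python3
The output should point to /home/pi/tflite1/tflite1-env/bin/python3 while the environment is active.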
At this point, you can issue ls to see what your tflite1 directory contains; you should see the tflite1-env folder alongside the repository's detection scripts.
Step 4: Install TensorFlow Lite dependencies and OpenCV
Next, we’ll install TensorFlow, OpenCV, and all the dependencies needed for both packages. OpenCV is not needed to run TensorFlow Lite, but the object detection scripts in this repository use it to grab images and draw detection results on them.
# Install the TensorFlow Lite runtime
pip install tflite-runtime
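To verify the runtime installed correctly, you can try importing its interpreter class (this one-liner only confirms the package is importable):
python3 -c "from tflite_runtime.interpreter import Interpreter; print('tflite-runtime OK')"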
#Install OpenCV
pip install opencv-python==3.4.11.41
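You can check the OpenCV install the same way:
python3 -c "import cv2; print(cv2.__version__)"
This should print the version you just installed (3.4.11 here).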
Step 5: Set up TensorFlow Lite detection model
Google provides a sample quantized SSD-MobileNet object detection model that was trained on the MSCOCO dataset and converted to run on TensorFlow Lite. It can detect and identify 80 common object classes, such as people, cars, and cups.
Download the sample model (which can be found on the Object Detection page of the official TensorFlow website) by issuing:
wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
Unzip it to a folder called “Sample_TFLite_model” by issuing (this command automatically creates the folder):
unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip -d Sample_TFLite_model
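If you list the folder with ls Sample_TFLite_model, you should see a detect.tflite model file and a labelmap.txt label map (these file names assume the standard contents of this sample zip).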
The sample model is ready to go.
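If you are curious what the detection scripts are doing under the hood, here is a minimal single-image sketch using the tflite-runtime Interpreter. It assumes the sample model folder from above (detect.tflite and labelmap.txt are the usual contents of that zip) and a test image named test.jpg, which is just a placeholder name; the output tensor order (boxes, classes, scores) follows the standard TFLite SSD detection models. Save it as something like minimal_detect.py inside /home/pi/tflite1 and run it with python3 minimal_detect.py.

# minimal_detect.py - single-image detection sketch with tflite-runtime
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

MODEL_PATH = "Sample_TFLite_model/detect.tflite"
LABEL_PATH = "Sample_TFLite_model/labelmap.txt"

# Load the label map; the sample labelmap starts with a "???" placeholder entry
with open(LABEL_PATH) as f:
    labels = [line.strip() for line in f]
if labels[0] == "???":
    del labels[0]

# Load the model and read its expected input size
interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = int(input_details[0]["shape"][1])
width = int(input_details[0]["shape"][2])

# Read and resize a test image; the quantized model takes uint8 RGB input
image = cv2.imread("test.jpg")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
resized = cv2.resize(rgb, (width, height))
input_data = np.expand_dims(resized, axis=0)

# Run inference
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# The SSD sample model outputs bounding boxes, class indices, and confidence scores
boxes = interpreter.get_tensor(output_details[0]["index"])[0]
classes = interpreter.get_tensor(output_details[1]["index"])[0]
scores = interpreter.get_tensor(output_details[2]["index"])[0]

for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:
        # box is [ymin, xmin, ymax, xmax] in normalized (0-1) coordinates
        print(labels[int(cls)], round(float(score), 2), box)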
Step 6: Run the TensorFlow Lite model
Run the real-time webcam detection script by issuing the following command from inside the /home/pi/tflite1 directory. (Before running the command, make sure the tflite1-env environment is active by checking that (tflite1-env) appears in front of the command prompt.) The TFLite_detection_webcam.py script will work with either a Picamera or a USB webcam.
python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model
If your model folder has a different name than “Sample_TFLite_model”, use that name instead. For example, I would use --modeldir=BirdSquirrelRaccoon_TFLite_model
to run my custom bird, squirrel, and raccoon detection model.
After a few moments of initializing, a window will appear showing the webcam feed. Detected objects will have bounding boxes and labels displayed on them in real time.
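The script also accepts a few optional arguments, such as a minimum confidence threshold; the flag below assumes the current version of the repository's script, so check python3 TFLite_detection_webcam.py --help if it differs:
python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model --threshold=0.6
Press q in the detection window when you want to close the script.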
Thanks to Evan (EdjeElectronics) for his excellent TensorFlow Lite on Raspberry Pi guide, which made this possible.
Let me know if you are interested in more research-focused topics. I am currently working with a forest preservation research group to monitor animal activity through GIS, and I would love to share how GIS is used to help protect reserves.