Intro
This week’s assignment for me is a little different from most other people’s in the class. I originally wanted to buy my own Arduino Nano 33 BLE Sense, as I wanted to keep it instead of returning the board to ER after the class. However, under the insane COVID-19 circumstances, I couldn’t get one in time from Amazon. Instead, I asked for Yining’s permission and guidance to pivot from the original assignment, which is built on the Arduino, to a Raspberry Pi.
My tasks would be:
- Set up the Pi
- Set up the touch screen
- Follow a tutorial to make the Pi do object recognition with a camera
Equipment



Setting up the Pi
Utilizing my previous experience with the Pi Zero W, I flashed the distro onto the SD card with Etcher quickly and easily.

I don’t have a computer monitor with me. Since I moved to San Francisco to live with my uncle and aunt, I tried my best not to mess up their TV setup while finishing what I had to accomplish for the assignment.

As I desperately wanted to connect my little touch screen instead of having the Pi occupy the TV, I immediately started installing the driver for the touch screen.

Following some blogs online, I struggled a bit to debug what was wrong with the screen, as it wasn’t showing anything.

After some more searching and debugging, I think using this tool solved my problem. I should come back to setting up the screen later, as installing the driver definitely causes the board to misbehave.



Make the Pi run a TensorFlow model (Part 1)
Originally I followed this tutorial recommended by Yining.

Frankly, working on a small screen was a terrible idea, so I ended up connecting a Bluetooth keyboard to both the Pi and the iPad. To save time I worked with two shells at once: one locally, and one from the iPad using an SSH app called Termius.
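Connecting from the iPad is just a normal SSH session; assuming the Raspbian defaults (user pi, mDNS hostname raspberrypi.local — yours may differ), it is a single command from Termius or any other client:
# user and hostname here are the Raspbian defaults, not necessarily yours
ssh pi@raspberrypi.local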
I followed the video and installed all the required packages and files. It took quite a long time.
Unluckily, it didn’t work out as easily as I expected.

I tried to solve the problem by downgrading OpenCV, which led to more problems.

I decided to pivot, and I found this guide, which also does object classification on a Pi, but with TensorFlow Lite this time. Anyhow, it is still good to understand how the original TensorFlow is put together in a Linux environment. The necessities are listed below (with a rough install sketch after the list):
- TensorFlow (of course)
- and a model
- LibAtlas
- Automatically Tuned Linear Algebra Software
- Some more dependencies
- pillow
- python imaging library
- lxml
- python XML and HTML library
- jupyter
- a python programming environment
- matplotlib
- python library for statistics / data visualization
- cython
- C-Extensions for python
- OpenCV
- used solely for displaying images
- a lot of image, video codec packages needed
- Protobuf
- API for interpreting Google’s Protocol Buffer data format
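For reference, installing that list by hand looks roughly like the sketch below. This is my own summary, with package names assumed from the guide (they vary between Raspbian releases), not a verbatim copy of it:
# LibAtlas (Automatically Tuned Linear Algebra Software)
sudo apt-get install libatlas-base-dev
# python dependencies: pillow, lxml, jupyter, matplotlib, cython
pip3 install pillow lxml jupyter matplotlib cython
# image/video codec packages that OpenCV needs, then OpenCV itself
sudo apt-get install libjpeg-dev libtiff5-dev libavcodec-dev libavformat-dev libswscale-dev
pip3 install opencv-python
# Protobuf compiler, for Google's Protocol Buffer data format
sudo apt-get install protobuf-compiler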
Make the Pi run a TensorFlow LITE model (Part 2)
Instead of just TensorFlow, I am now working with TensorFlow Lite, which is more lightweight, so theoretically the Pi would run it faster.
Since I know I can easily train a model with Teachable Machine, I skipped the model-training section and went straight to “Run TensorFlow Lite Object Detection Models” on the Pi, without a Google Coral TPU.
It was much simpler and less time-consuming than getting the original TensorFlow to run on the Pi. There were essentially fewer than 10 steps/commands to get the project running.
# update and upgrade before setting things up
sudo apt-get update
sudo apt-get dist-upgrade
# clone the project from the tutorial
git clone https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git
# rename the folder and navigate into it
mv TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi tflite1
cd tflite1
# install the virtual environment tool to prevent conflicts between versions of package libraries
sudo pip3 install virtualenv
# create a virtual environment
python3 -m venv tflite1-env
# activate the virtual environment
source tflite1-env/bin/activate
# install all the dependencies with the shell script prepared by the tutorial author
bash get_pi_requirements.sh
# download the MobileNet model from Google
wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
# unzip the model
unzip coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip -d Sample_TFLite_model
# final step: run the python code with the model
python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model

Despite the fact that it works and shows an image, the image on the little screen is completely meaningless to look at, so I have to switch back to using a TV.
Unluckily, I couldn’t properly switch the video output back to HDMI, even by installing another driver. I ended up doing a clean Raspbian install again, and starting from scratch…


# take a screenshot after 5 seconds, very nifty tool
scrot -d 5
Trying to understand the Python code
I had never coded a Python project before, so I am studying the code with just the shallow knowledge I acquired from some basic video tutorials, and hopefully I will get acquainted with Python along the way.
Some key components to highlight (a small sketch of how they fit together follows this list):
- use of cv2 (OpenCV)
- get video stream
- draw frames and text around objects
- use of interpreter from tflite_runtime
- an interpreter takes the model
- for allocating tensors
- get input (features)
- get output (labels)
- `floating_model = (input_details[0]['dtype'] == np.float32)`
- 32bit floating point tensor type
- single-precision
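To see how these pieces fit together before the heavy lifting below, here is a minimal sketch of my own, simplified from the tutorial’s script (the model path is the file unzipped earlier):
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

# load the model and allocate its tensors
interpreter = Interpreter(model_path='Sample_TFLite_model/detect.tflite')
interpreter.allocate_tensors()

# get input (features) and output (labels) details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]

# grab one frame from the webcam, convert BGR to RGB, and resize it to the model's input size
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
frame_resized = cv2.resize(frame_rgb, (width, height))
input_data = np.expand_dims(frame_resized, axis=0)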
# the actual heavy lifting
# Normalize pixel values if using a floating model (i.e. if model is non-quantized)
if floating_model:
    input_data = (np.float32(input_data) - input_mean) / input_std

# Perform the actual detection by running the model with the image as input
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# Retrieve detection results
boxes = interpreter.get_tensor(output_details[0]['index'])[0]   # Bounding box coordinates of detected objects
classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
scores = interpreter.get_tensor(output_details[2]['index'])[0]  # Confidence of detected objects

# Loop over all detections and draw detection box if confidence is above minimum threshold
for i in range(len(scores)):
    if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):
        # Get bounding box coordinates and draw box
        # Interpreter can return coordinates that are outside of image dimensions,
        # need to force them to be within image using max() and min()
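        # (my sketch, paraphrased from the tutorial repo: clamp the normalized
        # coordinates to the frame size with max()/min(), then draw the box;
        # imW/imH are the frame's width and height from the video stream)
        ymin = int(max(1, (boxes[i][0] * imH)))
        xmin = int(max(1, (boxes[i][1] * imW)))
        ymax = int(min(imH, (boxes[i][2] * imH)))
        xmax = int(min(imW, (boxes[i][3] * imW)))
        cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)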
So far the Python is pretty legible, and I can foresee that the main hurdle will be finding the right library and the right function for each coding problem.
Next Step
Understand how Python code controls GPIO pins and what libraries I need to control different sensors and parts. (I am only experienced in using Node.js to do PComp things on the Pi.)
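For reference, a minimal sketch of what controlling a pin might look like with the RPi.GPIO library that ships with Raspbian (the pin number 18 and the LED are my own arbitrary example):
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)    # use Broadcom pin numbering
GPIO.setup(18, GPIO.OUT)  # e.g. an LED wired to GPIO 18

# blink the LED five times, like the Node.js onoff examples I am used to
for _ in range(5):
    GPIO.output(18, GPIO.HIGH)
    time.sleep(0.5)
    GPIO.output(18, GPIO.LOW)
    time.sleep(0.5)

GPIO.cleanup()            # release the pins on exit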