Machine Learning for PComp WK 2: Whack-no-Mole

Intro

“Whack-no-Mole” is a physical computing project parodying the classic arcade game “Whac-A-Mole”, built with Teachable Machine, an easy-to-use machine learning tool, together with p5.js and Arduino.

Idea

Inspired by the class “Designing the Absurd” taught by Pedro Oliveira and the class “Hacking the Browser” taught by Cory Forsyth, I have recently been making absurd projects, sometimes not function-driven, or even working against the user. I wanted to extend the idea of a useless box, and the idea suddenly struck me when I first saw the possibilities that Teachable Machine provides.

A useless box
A scene from Silicon Valley, “Not Hotdog”

Whenever the user lifts the hammer and is ready to strike, the computer’s camera recognizes that there is a hammer, and a random mole shows its head above the ground. When the user strikes, the computer detects that the hammer has moved away from the camera and makes the mole hide under the ground again. “Technically”, the user will never hit a mole, as long as the computer is fast enough to detect the absence of the hammer.

For the physical computing side of the project, each “mole” would be built from a solenoid or a stepper motor with gears to raise it above the surface, plus a microswitch to detect a hit (despite the fact that a hit is rather impossible).

Development

Teachable Machine

First, I started an “Image Project” from the home page.

I first tried training the model with only one class for the hammer. However, it did not work well, as the model would always infer that a hammer was being detected.

I then added a second class, “No Hammer”. The model worked better this time; however, there were occasional false triggers for “Hammer” even when no hammer was being captured by the camera.

Finally, I added even more training samples, and I was satisfied with the overall accuracy of the model.

Once the model is uploaded, Teachable Machine provides a shareable model URL, ready for later use.

p5.js

https://editor.p5js.org/jasontsemf/sketches/HQ8TowuhU

Starting from Yining’s p5 sketch, I replaced myImageModelURL and portName with my own model URL and serial port name, and I managed to get the correct result displayed in p5.
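The relevant setup looks roughly like this (a minimal sketch assuming ml5.js and the p5.serialport library; the model URL and port name below are placeholders to replace with your own):

let classifier;
let video;
let serial;
let label = "waiting...";

// placeholders: replace with your own Teachable Machine model URL and serial port name
const myImageModelURL = "https://teachablemachine.withgoogle.com/models/XXXXXX/";
const portName = "/dev/cu.usbmodem14201";

function preload() {
  // load the image classification model exported from Teachable Machine
  classifier = ml5.imageClassifier(myImageModelURL + "model.json");
}

function setup() {
  createCanvas(320, 260);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();

  // open the serial connection to the Arduino
  serial = new p5.SerialPort();
  serial.open(portName);

  classifyVideo(); // start the classification loop
}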

The result, stored as a single byte, is then sent via serial and is available for the Arduino to retrieve.
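Concretely, the sending side can look like this: classify the video in a loop, and write 1 for “Hammer” and 0 for “No Hammer”, matching what the Arduino code below expects (again a sketch, assuming the class labels are exactly “Hammer” and “No Hammer”):

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label; // label with the highest confidence

  // encode the result as a single byte: 1 = hammer, 0 = no hammer
  serial.write(label === "Hammer" ? 1 : 0);

  classifyVideo(); // classify the next frame
}

function draw() {
  image(video, 0, 0);
  text(label, 10, 255); // show the current label under the video
}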

Arduino

I first tried printing the byte read from serial. Once I had ensured that the Arduino was successfully receiving the data, I coded the rest of the logic.

#define LED 12

int incomingByte = 0; // for incoming serial data

void setup() {
  Serial.begin(9600);
  pinMode(LED, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    incomingByte = Serial.read();
    if (incomingByte == 1) {
      // hammer detected: raise the mole (an LED stands in for the prototype)
      digitalWrite(LED, HIGH);
    } else {
      // no hammer: hide the mole
      digitalWrite(LED, LOW);
    }
  }
}

Prototype Demo

Limitation

I hit a limitation of this kind of image model when there are only two possible labels to classify. I suspect that anything that does not look like “No Hammer” is treated as “Hammer”, which explains why my hand alone, without a hammer in the picture, would be inferred as “Hammer”.
