
Snow Plow

computer vision robotics

Introduction

For my high school’s STEM capstone project, my group and I decided to design and prototype an autonomous snowplow. We chose this idea because there were no readily available products that solved the problem: existing robots were too costly (upwards of $5,000!), and snow-removal services still meant hard manual labor for a task that can easily be automated.

What we tried

Originally, we were deciding between a snowblower and a snowplow, and we settled on a snowplow for its simplicity and easy troubleshooting. However, when brainstorming our algorithms we ran into more challenges, including how the robot would communicate and be controlled.

Our solution

We eventually settled on using a Raspberry Pi camera to track AprilTags mounted on top of the robot and determine its pose from the camera data. We would then calculate the power to send to the left and right motors and transmit that data to the Arduino on the robot via MQTT.

Hindsight

While our solution did end up working, both camera quality and limited computing power created real roadblocks. If we were to start over from scratch, I wonder whether using an nRF module to transmit data over radio would give better results, or whether we should have completely remodeled our design and instead mounted a LiDAR sensor to create a single-body system.

Highlights

Computer Vision

One of our biggest accomplishments was optimizing (to an extent) the camera pipeline: by running the camera capture and tag detection on separate threads, we nearly tripled our camera FPS. Additionally, we chose to mount two AprilTags on the robot, allowing us to determine not only the robot’s position but also its heading angle.
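To give a sense of the setup, here’s a minimal sketch of that two-thread design, assuming OpenCV for capture and the pupil-apriltags library for detection (not necessarily the exact libraries we ran on the Pi); the tag IDs and the two-tag geometry are illustrative.

```python
# Sketch: grab frames on one thread, detect tags on another, so detection
# never waits on the camera. Assumes OpenCV and pupil-apriltags; tag 0 is
# the "front" tag and tag 1 the "back" tag (IDs are placeholders).
import math
import threading
import cv2
from pupil_apriltags import Detector

latest_frame = None
frame_lock = threading.Lock()

def capture_loop(cam_index=0):
    """Keep only the newest frame so detection always works on fresh data."""
    global latest_frame
    cap = cv2.VideoCapture(cam_index)
    while True:
        ok, frame = cap.read()
        if ok:
            with frame_lock:
                latest_frame = frame

def detection_loop():
    """Detect both tags and recover the robot's position and heading."""
    detector = Detector(families="tag36h11")
    while True:
        with frame_lock:
            frame = None if latest_frame is None else latest_frame.copy()
        if frame is None:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        tags = {t.tag_id: t.center for t in detector.detect(gray)}
        if 0 in tags and 1 in tags:
            (fx, fy), (bx, by) = tags[0], tags[1]
            x, y = (fx + bx) / 2, (fy + by) / 2       # position: midpoint of the tags
            theta = math.atan2(fy - by, fx - bx)      # heading: angle between the tags
            print(f"pose: ({x:.0f}, {y:.0f}) px, theta = {math.degrees(theta):.1f} deg")

threading.Thread(target=capture_loop, daemon=True).start()
detection_loop()
```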

Robot Navigation

We originally opted for the more complex and dynamic control equations

\[\omega = k_{angular}\,\Delta\theta\]
\[v = v_{desired} - k_{linear}\,\Delta\theta\]
\[v_{left} = v - \omega\]
\[v_{right} = v + \omega\]

where \(\omega\) and \(v\) are the angular and linear speeds respectively, and \(\Delta\theta = \theta_{desired} - \theta_{robot}\). We planned to tune both \(k_{angular}\) and \(k_{linear}\) through physical testing, and these equations would have produced a smooth path from the robot to the target point.

However, time got the best of us, and we ended up ignoring the linear-velocity part of the equations: we made the robot turn to face the point and then move forward. While it worked, I wish we had had more time to see the original path plan in action.
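For reference, here’s a rough sketch of both versions of the controller in Python; the gains and cruise speed are placeholder numbers, since we never got to tune them properly.

```python
# Sketch of the steering math; K_ANGULAR, K_LINEAR, and V_DESIRED are
# placeholder values that would have been tuned on the physical robot.
K_ANGULAR = 2.0
K_LINEAR = 1.5
V_DESIRED = 0.5

def smooth_controller(theta_robot, theta_desired):
    """The planned controller: blend turning and driving for a smooth path."""
    d_theta = theta_desired - theta_robot
    omega = K_ANGULAR * d_theta
    v = V_DESIRED - K_LINEAR * d_theta
    return v - omega, v + omega        # (v_left, v_right)

def turn_then_drive(theta_robot, theta_desired, tolerance=0.05):
    """What we actually shipped: rotate in place until aligned, then go straight."""
    d_theta = theta_desired - theta_robot
    if abs(d_theta) > tolerance:
        omega = K_ANGULAR * d_theta
        return -omega, omega           # spin toward the target heading
    return V_DESIRED, V_DESIRED        # aligned enough: drive forward
```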

MQTT and Data Transfer

Data transfer was a huge issue in our project, as we were unsure which system we wanted to use. We decided on MQTT in the end and made the Raspberry Pi the publisher and the Arduino the subscriber. However, after the presentation, as I was working on the project a little more, I ported the program over to ROS, and it ran a lot better. In hindsight, I wish I had known about ROS earlier, but MQTT worked well enough.

We decided to transfer only two numbers to reduce the computational load on the Arduino. Essentially, the Raspberry Pi was the “Big Boss”, doing the bulk of the calculations, and the Arduino was the “Little Boss”, simply driving the motors based on the values the Pi gave it.
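As an illustration, this is roughly what the Pi-side publisher looked like in spirit, using paho-mqtt; the broker address, topic name, and the comma-separated “left,right” message format here are assumptions, not necessarily the exact ones we used.

```python
# Sketch of the Pi-side publisher (paho-mqtt 1.x-style constructor); the broker
# address, topic, and message format are illustrative assumptions.
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"        # assumed broker address on the local network
TOPIC = "snowplow/motors"      # assumed topic the Arduino subscribes to

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

def send_motor_powers(left, right):
    """Publish only the two motor power values as a tiny comma-separated string."""
    client.publish(TOPIC, f"{left:.2f},{right:.2f}")

# Example: output from the controller goes straight to the robot.
send_motor_powers(0.4, 0.6)
```

On the Arduino side, the subscriber would then just parse the two values out of the message and drive the motors accordingly.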

While two numbers isn’t much data, it still took time to send wirelessly, and this delay caused some failures at certain points. I wonder whether an nRF radio module would have solved that issue.

Conclusion

Overall, I enjoyed the project, and I learned a lot not just about computer vision and path planning, but also about multi-robot systems and countless small bits of robotics. Additionally, I’m grateful for the team I worked with, as I wouldn’t have come close to a finished project on my own. Bouncing ideas around and discussing the project was, in my opinion, even more valuable than the development itself. As I enter my next year at the University of Michigan, I’m looking forward to taking these lessons with me and growing even more!

presentation picture
