
Tuning Parameters

Motion detection has tuning parameters

DOING DATA SCIENCE FROM SCRATCH TASK BY TASK

If you have been reading my column here on Towards Data Science, you will know that I am on a mission: to count the number of cars passing my house using Computer Vision and Motion Detection. My last article explored motion detection events, and we even built a Line Chart visual from scratch. After several days of data collection, I ended up with an average of 200 short clips each day. A small script creates a movie from the video files, with the daily film running to about 50 minutes of viewing. It wasn't any fun watching the trees move and the grass grow. There were a lot of false-positive detections, and in this post I want to describe how I went about fixing those. Motion Detection, like Machine Learning Algorithms, has tuning parameters.

Tuning in the Doorbell

Tuning out about 200 false-positives per day turns out not to be as simple as a brute-force grid search using sklearn. Will Koehrsen wrote about Hyperparameter Tuning the Random Forest in Python and gave us all a great explanation of the topic.

from sklearn.model_selection import RandomizedSearchCV

RandomizedSearchCV is an elegant approach, but I just cannot see how such a technique could help me with Motion Detection from real-time cameras out in the wild. I added a little photo below of one of my cameras in action.
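For context, here is a minimal sketch of what a randomized search looks like in practice. The estimator, parameter values, and the X_train/y_train names are placeholders for illustration, not settings from my project.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Illustrative search space; the values are examples only
param_distributions = {
    "n_estimators": [100, 200, 400],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 2, 4],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions=param_distributions,
    n_iter=10,   # sample 10 random parameter combinations
    cv=3,        # score each with 3-fold cross-validation
    random_state=42,
)
# search.fit(X_train, y_train)   # X_train/y_train are placeholders
# print(search.best_params_)

The trouble is that each "trial" of a motion-detection parameter set means hours or days of watching a camera in the wild, with no labelled ground truth to score against.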

Broadly speaking, I use three different strategies with my Doorbell. Those are:-

  • A PIR approach. Continuously loop, read the PIR sensor, and use that to trigger a video capture with the PiCamera module. This approach is provided by the PiHut, and they have sample code available; a tidied-up version of the loop is below. There are some disadvantages to this model: PIR devices can throw a lot of false-positives, the Python code is blocking, and the solution could be slow with high frame rates in a busy PIR detection zone.
import time
import RPi.GPIO as GPIO

PIR_PIN = 4  # BCM pin the PIR output is wired to (example value)
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)
while True:
    if GPIO.input(PIR_PIN):
        print("Motion Detected!")  # and go ahead and record the video
        time.sleep(2)  # a 2-second delay: anything that happens now is missed
    print("Ready")
    time.sleep(1)  # poll the sensor roughly once per second
  • Install the Motion-project library and configure it yourself by hand. Trying to configure a complex service on a small screen using a text editor isn't ideal, and I found it difficult to really engage with the topic; an excerpt of the kind of settings involved follows this list.
sudo nano /etc/motion/motion.conf
  • Install MotionEyeOS and use a friendly User Interface to help with tuning. Heretofore, I wasn't a fan of this approach: in my mind, it required me to dedicate a machine (or machines) to just running MotionEye. I hate the lock-in effect of some User Interfaces, so I naturally shy away from these types of solutions.
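To give a flavour of the hand-editing in the second option, here is a short excerpt of the sort of motion detection settings involved. The values shown are illustrative and close to the shipped defaults, not my final tuned configuration.

# Excerpt from /etc/motion/motion.conf (illustrative values)
threshold 1500              # number of changed pixels needed to flag motion
minimum_motion_frames 1     # consecutive motion frames required to start an event
event_gap 60                # seconds of stillness before an event is closed
framerate 15                # frames per second to capture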

My 200 false-positives per day were driven by my self-installation of the Motion-project library and accepting the defaults. Far worse than the false-positives was that I also missed the true-positives that I should have been getting. Having watched several days of videos, I was always left frustrated by control events (me driving away and returning) which would never show up in the video stream. How could the system see a blade of grass blowing in the wind but miss a massive black car leaving the driveway and returning? It was frustrating not to see events that I created to test the system. But how to tune things up? Read on! Doing Data Science from scratch is involved, and there are no shortcuts.

Evaluating the PIR approach

Since I use Raspberry Pi computers, I can easily create an image on a micro SD card to reflect any desired mode of operation. If I wish the doorbell to work with a PIR, I can simply insert my PIR image; otherwise, I can insert a MotionEyeOS card and just boot up. I do love the small Raspberry Pi computer for this reason: it is extremely flexible.

My approach here was:-

  • Clone the PiHut repository, which provides a small Python script. I changed the script to save the videos to a specific folder.
  • Create a service definition based on that script (a minimal sketch of such a unit file follows this list). I generally use a previous service as a template.
  • Copy the service file: sudo cp doorbell.service /etc/systemd/system
  • Enable the service: sudo systemctl enable doorbell.service
  • Position the doorbell for an experiment.
  • Reboot and leave the system to run for 12 hours.
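For reference, a unit file along these lines will run the script at boot. This is a minimal sketch: the script path, filename, and user are assumptions, not my exact deployment.

# /etc/systemd/system/doorbell.service (sketch; paths and user are examples)
[Unit]
Description=PIR doorbell video capture
After=multi-user.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/doorbell/pir_doorbell.py
WorkingDirectory=/home/pi/doorbell
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target

After copying and enabling it, sudo systemctl start doorbell.service brings the service up without waiting for a reboot.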

Findings

The system ran for 12 hours without issues. A PIR sensor has a limited detection zone, and over the 12 hours there were only ten videos, which is a tremendous improvement on 200 false-positives. The Python script is pretty basic, though, and it failed to capture the control events I introduced to test robustness. Similar to my frustrations with the Motion-project library, deliberate attempts to trigger the system revealed massive blind spots, and it was easy to spoof the camera.

time.sleep(2)  # nature and events can happen in 2 seconds, and those triggers will be missed. This is the blind spot.

Reviewing the ten short video clips, again combined into a single movie by a script, demonstrated the benefits of the PIR system. Now there were no images of trees blowing in the wind or even light changes. Using the PIR provides a much more specific perspective, but you do need a well-focused PIR zone, and you likely need a couple of PIR sensors covering various angles to give good coverage. You cannot afford to use time.sleep() or other wait mechanisms if you want to detect movement events correctly; blocking the script's thread is also the wrong choice. At least I now have some comfort in that I understand why a sensor could fail to see specific incidents: introducing time delays between detection events is not a good idea. Perhaps with some refinements to the script, such as the interrupt-driven sketch below, we could deploy the PIR approach to counting cars. We would want to avoid privacy issues and distracting drivers from their focus on safe driving.
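As a sketch of what "no sleeping, no blocking" could look like, RPi.GPIO can fire a callback on the sensor's rising edge instead of being polled in a loop. The pin number and callback body here are placeholders, not code from my doorbell.

import time
import RPi.GPIO as GPIO

PIR_PIN = 4  # BCM pin the PIR output is wired to (example value)
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)

def on_motion(channel):
    # Runs on a background thread each time the PIR goes high;
    # start the video capture here instead of printing.
    print("Motion detected on channel", channel)

# Interrupt-driven detection: no polling loop, no time.sleep() blind spot
GPIO.add_event_detect(PIR_PIN, GPIO.RISING, callback=on_motion, bouncetime=200)
try:
    while True:
        time.sleep(60)  # main thread idles; events are still caught by the callback
except KeyboardInterrupt:
    GPIO.cleanup()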

Evaluating MotionEyeOS

MotionEyeOS is an image available for small single-board computers such as the ASUS Tinker Board or Raspberry Pi. "motionEyeOS is a Linux distribution that turns a single-board computer into a video surveillance system. The OS is based on BuildRoot and uses motion as a backend and motionEye for the frontend." from the creator. MotionEyeOS, therefore, extends my work with the motion library but adds a pretty decent front-end user interface. Using the UI, we can now explore the tuning parameters.

Here is a screenshot of the main detection parameters.

Another example of the parameters available.

The exciting thing is how easily you can create both a Privacy mask and a Detection mask using the User Interface.

With the system running for a full three days, I have far fewer detection events: only 8, 10, and 6 incidents for December 15 through December 17, with the doorbell in its natural place at the front door. Integrating MotionEyeOS into the project seems like a good option. As illustrated in the screenshot below, there are plenty of ways to interact with the images and videos.

Closing

There were a lot of false-positive detections in my traffic-counting experiment, and I set out to describe my method for fixing those. Motion Detection, like Machine Learning Algorithms, has tuning parameters; indeed, motion detection is built on algorithms with tuning parameters of their own. After several days of tweaking the settings, I can see some more positive results. There are now far fewer false-positives, and more of the control events are getting picked up (true-positives). A single-camera system will have blind spots and will miss incidents when the 'motion gap' is set too large. My previous setting was 30 seconds, and with that setting, I missed every second vehicle going up and down.
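For anyone running motion directly rather than through the MotionEyeOS interface, that 'motion gap' corresponds, as far as I can tell, to the event_gap setting in motion.conf; the value below is illustrative.

# /etc/motion/motion.conf
event_gap 5   # was 30: two cars passing ten seconds apart now register as two events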

Thankfully, once we set aside the parameters that control the MotionEyeOS system itself and the camera-quality tuning vectors, there aren't that many parameters to tune. I hope you enjoyed reading this article and the column. I am on a mission to learn Computer Vision, count passing traffic, and generally keep my skills fresh and sharp.

