Basic Motion Detection and Tracking with Python and OpenCV
Last updated on July 8, 2021.
That son of a bitch. I knew he took my last beer.
These are words a man should never, ever have to say. But I muttered them to myself in an exasperated sigh of disgust as I closed the door to my fridge.
You see, I had just spent over 12 hours writing content for the upcoming PyImageSearch Gurus course. My brain was fried, practically leaking out my ears like half-cooked scrambled eggs. And after calling it quits for the night, all I wanted was to relax and watch my all-time favorite movie, Jurassic Park, while sipping an ice cold Finestkind IPA from Smuttynose, a brewery I have become quite fond of as of late.
But that son of a bitch James had come over last night and drank my last beer.
Well, allegedly.
I couldn't really prove anything. In reality, I didn't actually see him drink the beer, as my face was buried in my laptop, fingers hovering above the keyboard, feverishly pounding out tutorials and articles. But I had a feeling he was the culprit. He is my only (ex-)friend who drinks IPAs.
So I did what any man would do.
I mounted a Raspberry Pi to the top of my kitchen cabinets to automatically detect if he tried to pull that beer stealing shit again:
Excessive?
Possibly.
But I take my beer seriously. And if James tries to steal my beer again, I'll catch him red-handed.
- Update July 2021: Added new sections on alternative background subtraction and motion detection algorithms we can use with OpenCV.
Looking for the source code to this post?
Jump Right to the Downloads Section

A 2-part series on motion detection
This is the first post in a two-part series on building a motion detection and tracking system for home surveillance.
The rest of this article will detail how to build a basic motion detection and tracking system for home surveillance using computer vision techniques. This example will work with both pre-recorded videos and live streams from your webcam; however, we'll be developing this system on our laptops/desktops.
In the second post in this series I'll show you how to update the code to work with your Raspberry Pi and camera board, and how to extend your home surveillance system to capture any detected motion and upload it to your personal Dropbox.
And maybe at the end of all this we can catch James red-handed…
A little bit about background subtraction
Background subtraction is critical in many computer vision applications. We use it to count the number of cars passing through a toll booth. We use it to count the number of people walking in and out of a store.
And we use it for motion detection.
Before we get started coding in this post, let me say that there are many, many ways to perform motion detection, tracking, and analysis in OpenCV. Some are very simple. And others are very complicated. The two primary methods are forms of Gaussian Mixture Model-based foreground and background segmentation:
- An improved adaptive background mixture model for real-time tracking with shadow detection by KaewTraKulPong et al., available through the cv2.BackgroundSubtractorMOG function.
- Improved adaptive Gaussian mixture model for background subtraction by Zivkovic, and Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction, also by Zivkovic, available through the cv2.BackgroundSubtractorMOG2 function.
And in newer versions of OpenCV we have Bayesian (probability) based foreground and background segmentation, implemented from Godbehere et al.'s 2012 paper, Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation. We can find this implementation in the cv2.createBackgroundSubtractorGMG function (we'll be waiting for OpenCV 3 to fully play with this function though).
All of these methods are concerned with segmenting the background from the foreground (and they even provide mechanisms for us to discern between actual motion and just shadowing and small lighting changes)!
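Just to make the API concrete, here is a minimal sketch (my own illustration, not the code we'll build in this post) of driving the Zivkovic MOG2 subtractor. Note that in OpenCV 3+ the constructor is named cv2.createBackgroundSubtractorMOG2, and the video path below is just a placeholder:

# minimal sketch: GMM-based background/foreground segmentation with the
# built-in MOG2 subtractor (OpenCV 3+ naming); the video path is a placeholder
import cv2

cap = cv2.VideoCapture("videos/example_01.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
    detectShadows=True)

while True:
    (grabbed, frame) = cap.read()
    if not grabbed:
        break

    # apply() updates the background model and returns the foreground mask;
    # with detectShadows=True, shadows show up as gray (127) rather than white
    fgMask = subtractor.apply(frame)
    cv2.imshow("Frame", frame)
    cv2.imshow("Foreground Mask", fgMask)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()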
So why is this so important? And why do we care which pixels belong to the foreground and which pixels are part of the background?
Well, in motion detection, we tend to make the following assumption:
The background of our video stream is largely static and unchanging over consecutive frames of a video. Therefore, if we can model the background, we can monitor it for substantial changes. If there is a substantial change, we can detect it; this change normally corresponds to motion in our video.
Now obviously in the real world this assumption can easily fail. Due to shadowing, reflections, lighting conditions, and any other possible change in the environment, our background can look quite different in various frames of a video. And if the background appears to be different, it can throw our algorithms off. That's why the most successful background subtraction/foreground detection systems use fixed, mounted cameras and controlled lighting conditions.
The methods I mentioned above, while very powerful, are also computationally expensive. And since our end goal is to deploy this system to a Raspberry Pi at the end of this two-part series, it's best that we stick to simple approaches. We'll return to these more powerful methods in future blog posts, but for the time being we are going to keep it simple and efficient.
In the rest of this blog post, I'm going to detail (arguably) the most basic motion detection and tracking system you can build. It won't be perfect, but it will be able to run on a Pi and still deliver good results.
Basic motion detection and tracking with Python and OpenCV
Alright, are you ready to help me develop a home surveillance system to catch that beer stealing jackass?
Open up an editor, create a new file, name it motion_detector.py, and let's get coding:
# import the necessary packages
from imutils.video import VideoStream
import argparse
import datetime
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
    vs = VideoStream(src=0).start()
    time.sleep(2.0)

# otherwise, we are reading from a video file
else:
    vs = cv2.VideoCapture(args["video"])

# initialize the first frame in the video stream
firstFrame = None
Lines 2-7 import our necessary packages. All of these should look pretty familiar, except perhaps the imutils package, which is a set of convenience functions that I have created to make basic image processing tasks easier. If you do not already have imutils installed on your system, you can install it via pip: pip install imutils.
Next up, we'll parse our command line arguments on Lines 10-13. We'll define two switches here. The first, --video, is optional. It simply defines a path to a pre-recorded video file that we can detect motion in. If you do not supply a path to a video file, then OpenCV will use your webcam to detect motion.
We'll also define --min-area, which is the minimum size (in pixels) for a region of an image to be considered actual "motion". As I'll discuss later in this tutorial, we'll often find small regions of an image that have changed substantially, likely due to noise or changes in lighting conditions. In reality, these small regions are not actual motion at all, so we'll define a minimum size of a region to combat and filter out these false-positives.
Lines 16-22 handle grabbing a reference to our vs object. In the case that a video file path is not supplied (Lines 16-18), we'll grab a reference to the webcam and wait for it to warm up. And if a video file is supplied, then we'll create a pointer to it on Lines 21 and 22.
Lastly, we'll end this code snippet by defining a variable called firstFrame.
Any guesses as to what firstFrame is?
If you guessed that it stores the first frame of the video file/webcam stream, you're right.
Assumption: The first frame of our video file will contain no motion and just background. Therefore, we can model the background of our video stream using only the first frame of the video.
Obviously we are making a pretty big assumption here. But again, our goal is to run this system on a Raspberry Pi, so we can't get too complicated. And as you'll see in the results section of this post, we are able to easily detect motion while tracking a person as they walk around the room.
# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied
    # text
    frame = vs.read()
    frame = frame if args.get("video", None) is None else frame[1]
    text = "Unoccupied"

    # if the frame could not be grabbed, then we have reached the end
    # of the video
    if frame is None:
        break

    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # if the first frame is None, initialize it
    if firstFrame is None:
        firstFrame = gray
        continue
So now that we have a reference to our video file/webcam stream, we can start looping over each of the frames on Line 28.
A call to vs.read() on Line 31 returns a frame that we ensure we are grabbing properly on Line 32.
We'll also define a string named text and initialize it to indicate that the room we are monitoring is "Unoccupied". If there is indeed activity in the room, we can update this string.
And in the case that a frame is not successfully read from the video file, we'll break from the loop on Lines 37 and 38.
Now we can start processing our frame and preparing it for motion analysis (Lines 41-43). We'll first resize it down to have a width of 500 pixels; there is no need to process the large, raw images straight from the video stream. We'll also convert the image to grayscale since color has no bearing on our motion detection algorithm. Finally, we'll apply Gaussian blurring to smooth our images.
It's important to understand that even consecutive frames of a video stream will not be identical!
Due to tiny variations in digital camera sensors, no two frames will be 100% the same; some pixels will almost certainly have different intensity values. To account for this, we apply Gaussian smoothing to average pixel intensities across a 21 x 21 region (Line 43). This helps smooth out high frequency noise that could throw our motion detection algorithm off.
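If you want to see that sensor noise for yourself, here is a rough sketch (a standalone experiment, not part of motion_detector.py) that grabs two frames of a static scene from your webcam and compares the raw pixel difference against the difference after the same 21 x 21 blur:

# rough sketch: measure how much Gaussian blurring suppresses frame-to-frame
# sensor noise; point your webcam at a completely static scene first
import time
import cv2
import imutils
from imutils.video import VideoStream

vs = VideoStream(src=0).start()
time.sleep(2.0)

# grab two consecutive frames of the (static) scene
f1 = imutils.resize(vs.read(), width=500)
f2 = imutils.resize(vs.read(), width=500)
vs.stop()

g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)

# total absolute difference before and after the 21 x 21 Gaussian blur
raw = int(cv2.absdiff(g1, g2).sum())
blurred = int(cv2.absdiff(cv2.GaussianBlur(g1, (21, 21), 0),
    cv2.GaussianBlur(g2, (21, 21), 0)).sum())
print("raw difference: {}, blurred difference: {}".format(raw, blurred))

You should see the blurred difference come out noticeably smaller than the raw one; that gap is exactly the high frequency noise we want to suppress before comparing frames.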
As I mentioned above, we need to model the background of our image somehow. Again, we'll make the assumption that the first frame of the video stream contains no motion and is a good example of what our background looks like. If firstFrame is not initialized, we'll store it for reference and continue on to processing the next frame of the video stream (Lines 46-48).
Here's an example of the first frame of an example video:
The above frame satisfies the assumption that the first frame of the video is simply the static background; no motion is taking place.
Given this static background image, we're now ready to actually perform motion detection and tracking:
    # compute the absolute difference between the current frame and
    # first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours
    # on thresholded image
    thresh = cv2.dilate(thresh, None, iterations=2)
    cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"
Now that we have our background modeled via the firstFrame variable, we can use it to compute the difference between the initial frame and subsequent new frames from the video stream.
Computing the difference between two frames is a simple subtraction, where we take the absolute value of their corresponding pixel intensity differences (Line 52):
delta = |background_model - current_frame|
An example of a frame delta can be seen below:
Notice how the background of the image is clearly black. However, regions that contain motion (such as the region of myself walking through the room) are much lighter. This implies that larger frame deltas indicate that motion is taking place in the image.
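As a tiny worked example of that delta (made-up pixel values, purely for illustration), consider a 2 x 2 patch where one pixel truly changed and another only jittered:

# made-up 2 x 2 patch: one pixel changed a lot (motion), one barely (noise)
import numpy as np
import cv2

background = np.array([[200, 200],
                       [200, 200]], dtype="uint8")
current = np.array([[200,  90],
                    [203, 200]], dtype="uint8")

delta = cv2.absdiff(background, current)
print(delta)
# [[  0 110]
#  [  3   0]]
# the 110 will survive a threshold of 25 (motion); the 3 will be discarded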
We'll then threshold the frameDelta on Line 53 to reveal regions of the image that only have significant changes in pixel intensity values. If the delta is less than 25, we discard the pixel and set it to black (i.e. background). If the delta is greater than 25, we'll set it to white (i.e. foreground). An example of our thresholded delta image can be seen below:
Again, note that the background of the image is black, whereas the foreground (and where the motion is taking place) is white.
Given this thresholded image, it's simple to apply contour detection to find the outlines of these white regions (Lines 58-60).
We start looping over each of the contours on Line 63, where we'll filter out the small, irrelevant contours on Lines 65 and 66.
If the contour area is larger than our supplied --min-area, we'll draw the bounding box surrounding the foreground and motion region on Lines 70 and 71. We'll also update our text status string to indicate that the room is "Occupied".
    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
        (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame and record if the user presses a key
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
vs.stop() if args.get("video", None) is None else vs.release()
cv2.destroyAllWindows()
The remainder of this example simply wraps everything up. We draw the room status on the image in the top-left corner, followed by a timestamp (to make it feel like "real" security footage) on the bottom-left.
Lines 81-83 display the results of our work, allowing us to visualize whether any motion was detected in our video, along with the frame delta and thresholded image so we can debug our script.
Note: If you download the code to this post and intend to apply it to your own video files, you'll likely need to tune the values for cv2.threshold and the --min-area argument to obtain the best results for your lighting conditions.
Finally, Lines 91 and 92 clean up and release the video stream pointer.
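On that tuning note, one hypothetical tweak (not included in the downloaded code) is to expose the hard-coded threshold of 25 as its own command line argument, so you can adjust it per scene without editing the script:

# hypothetical tweak: make the pixel-delta threshold tunable per scene
ap.add_argument("-t", "--threshold", type=int, default=25,
    help="minimum pixel delta to be marked as foreground")
args = vars(ap.parse_args())

# ...then inside the frame loop, swap the hard-coded 25 for the argument:
thresh = cv2.threshold(frameDelta, args["threshold"], 255,
    cv2.THRESH_BINARY)[1]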
Results
Obviously I want to make sure that our motion detection system is working before James, the beer stealer, pays me a visit again; we'll save that for Part 2 of this series. To test out our motion detection system using Python and OpenCV, I have created two video files.
The first, example_01.mp4, monitors the front door of my apartment and detects when the door opens. The second, example_02.mp4, was captured using a Raspberry Pi mounted to my kitchen cabinets. It looks down on the kitchen and living room, detecting motion as people move and walk around.
Let's give our simple detector a try. Open up a terminal and execute the following command:
$ python motion_detector.py --video videos/example_01.mp4
Below is a .gif of a few still frames from the motion detection:
Notice how no motion is detected until the door opens; then we are able to detect myself walking through the door. You can see the full video here:
Now, what about when I mount the camera such that it's looking down on the kitchen and living room? Let's find out. Just issue the following command:
$ python motion_detector.py --video videos/example_02.mp4
A sampling of the results from the second video file can be seen below:
And again, here is the full video of our motion detection results:
So as you can see, our motion detection system is performing fairly well despite how simplistic it is! We are able to detect as I am entering and leaving a room without a problem.
However, to be realistic, the results are far from perfect. We get multiple bounding boxes even though there is only one person moving around the room, which is far from ideal. And we can clearly see that small changes to the lighting, such as shadows and reflections on the wall, trigger false-positive motion detections.
To combat this, we can lean on the more powerful background subtraction methods in OpenCV which can actually account for shadowing and small amounts of reflection (I'll be covering the more advanced background subtraction/foreground detection methods in future blog posts).
But for the meantime, consider our end goal.
This system, while developed on our laptop/desktop systems, is meant to be deployed to a Raspberry Pi where the computational resources are very limited. Because of this, we need to keep our motion detection methods simple and fast. An unfortunate downside to this is that our motion detection system is not perfect, but it still does a fairly good job for this particular project.
Finally, if you want to perform motion detection on your own raw video stream from your webcam, just leave off the --video switch:
$ python motion_detector.py
Alternative motion detection algorithms in OpenCV
The motion detection algorithm we implemented here today, while simple, is unfortunately very sensitive to any changes in the input frames.
This is primarily due to the fact that we are grabbing the very first frame from our camera sensor, treating it as our background, and then comparing the background to every subsequent frame, looking for any changes. If a change is detected, we record it as motion.
However, this method can quickly fall apart if you are working with varying lighting conditions.
For instance, suppose you are monitoring the garage outside your house for intruders. Since your garage is outside, lighting conditions will change due to rain, clouds, the movement of the sun, night, etc.
If you were to choose a single static frame and treat it as your background in such a condition, then it's likely that within hours (and maybe even minutes, depending on the situation) the brightness of the entire outdoor scene would change and thus cause false-positive motion detections.
The way you get around this problem is to maintain a rolling average of the past N frames and treat this "averaged frame" as your background. You then compare the averaged set of frames to the current frame, looking for substantial differences.
The following tutorial will teach you how to implement the method I just discussed.
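In the meantime, here is a minimal sketch of that rolling-average idea, assuming the same vs object and preprocessing (resize, grayscale, 21 x 21 blur) as the script above; a weighted running average takes the place of firstFrame as the background model:

# minimal sketch: weighted running average as the background model
avg = None

while True:
    # grab and preprocess the frame exactly as before
    frame = vs.read()
    frame = frame if args.get("video", None) is None else frame[1]
    if frame is None:
        break
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # initialize the running average with the first frame
    if avg is None:
        avg = gray.copy().astype("float")
        continue

    # update the running average, then diff the current frame against it;
    # the rest of the pipeline (threshold, dilate, contours) stays the same
    cv2.accumulateWeighted(gray, avg, 0.5)
    frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(avg))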
Alternatively, OpenCV implements a number of background subtraction algorithms that you can use:
- OpenCV: How to Use Background Subtraction Methods
- Background Subtraction with OpenCV and BGS Libraries
What'south next? I recommend PyImageSearch Academy.
Form data:
35+ total classes • 39h 44m video • Last updated: Apr 2022
★★★★★ 4.84 (128 Ratings) • 13,800+ Students Enrolled
I strongly believe that if you had the right teacher yous could master computer vision and deep learning.
Practise y'all recollect learning computer vision and deep learning has to exist time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in figurer science?
That's non the case.
All you need to master reckoner vision and deep learning is for someone to explicate things to you in simple, intuitive terms. And that's exactly what I do. My mission is to change education and how circuitous Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next cease should be PyImageSearch Academy, the about comprehensive computer vision, deep learning, and OpenCV grade online today. Here you'll learn how to successfully and confidently utilize computer vision to your piece of work, research, and projects. Bring together me in computer vision mastery.
Inside PyImageSearch University you lot'll notice:
- ✓ 35+ courses on essential computer vision, deep learning, and OpenCV topics
- ✓ 35+ Certificates of Completion
- ✓ 39+ hours of on-demand video
- ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
- ✓ Pre-configured Jupyter Notebooks in Google Colab
- ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
- ✓ Access to centralized code repos for all 450+ tutorials on PyImageSearch
- ✓ Easy one-click downloads for code, datasets, pre-trained models, etc.
- ✓ Access on mobile, laptop, desktop, etc.
Click here to join PyImageSearch University
Summary
In this blog post we found out that my friend James is a beer stealer. What an asshole.
And in order to catch him red-handed, we have decided to build a motion detection and tracking system using Python and OpenCV. While basic, this system is capable of taking video streams and analyzing them for motion while obtaining fairly reasonable results given the limitations of the method we utilized.
The end goal of this system is to deploy it to a Raspberry Pi, so we did not leverage some of the more advanced background subtraction methods in OpenCV. Instead, we relied on a simple yet reasonably effective assumption: that the first frame of our video stream contains the background we want to model and nothing more.
Under this assumption we were able to perform background subtraction, detect motion in our images, and draw a bounding box surrounding the region of the image that contains motion.
In the second part of this series on motion detection, we'll be updating this code to run on the Raspberry Pi.
We'll also be integrating with the Dropbox API, allowing us to monitor our home surveillance system and receive real-time updates whenever our system detects motion.
Stay tuned!
Download the Source Code and FREE 17-page Resource Guide
Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!
Source: https://pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/