In this developer code pattern, we will deploy an application that uses neural network models to analyze Real-Time Streaming Protocol (RTSP) video streams with OpenCV and Darknet.
Many surveillance cameras are installed, but they cannot be closely monitored throughout the day. Because events are more likely to occur while no operator is watching, many significant events go undetected, even when they are recorded. Users can't be expected to sift through many hours of video footage, especially if they're not sure what they're looking for.
This project aims to alleviate this problem by using deep-learning algorithms to detect motion and identify objects in a video feed. These algorithms can be applied to both live streams and previously recorded video. After each video frame has been analyzed, the labeled screenshot and its corresponding metadata are uploaded to a Cloudant® database. This allows an operator to run complex queries and analytics against the collected data. Example queries include selecting all screenshots in which a person was detected at camera 3 on the previous Monday, or getting the total count of cars detected last Saturday.
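To make the first example query concrete, here is a sketch of a Cloudant Query (Mango) selector for it. The field names (`camera`, `objects`, `timestamp`) and the date range are illustrative assumptions, not the pattern's actual document schema:

```python
# Illustrative Cloudant Query (Mango) selector: every screenshot in which
# a "person" was detected at camera 3 during one example day. The field
# names and date values are placeholders; adapt them to the real schema.
selector = {
    "camera": 3,
    # match documents whose "objects" array contains "person"
    "objects": {"$elemMatch": {"$eq": "person"}},
    # restrict to a single day via an ISO-8601 timestamp range
    "timestamp": {
        "$gte": "2021-06-07T00:00:00Z",
        "$lt": "2021-06-08T00:00:00Z",
    },
}
```

With the official `cloudant` Python client, such a selector would typically be passed to `db.get_query_result(selector)`.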
When you have completed this code pattern, you will understand how to:
- Connect to an RTSP video stream via Python and OpenCV
- Use OpenCV and NumPy to process video frames and determine when significant motion has occurred
- Identify objects in a photograph or video using a pre-built deep-learning model
- Connect a motion-detection script to an RTSP stream or video file
- If motion is detected, capture a screenshot and forward it to a Node.js server hosted locally or in the IBM Cloud container service
- Analyze the screenshot using the Darknet/YOLO object detection algorithm
- Upload the labeled screenshot and its associated metadata (time, camera channel) to a Cloudant database
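The last two steps can be sketched as follows: turning raw YOLO detections into class labels, then packaging them as a document for Cloudant. The row layout matches Darknet/YOLO detection output, but the helper names and the document's field names are illustrative assumptions, not the pattern's exact code or schema:

```python
import datetime

import numpy as np


def detections_to_labels(outputs, class_names, conf_threshold=0.5):
    """Turn raw YOLO output rows into class labels.

    Each row from a Darknet/YOLO detection layer has the layout
    [center_x, center_y, width, height, objectness, score_0, ..., score_N];
    a detection is kept when objectness * best class score passes the
    threshold.
    """
    labels = []
    for row in outputs:
        scores = row[5:]
        class_id = int(np.argmax(scores))
        if row[4] * scores[class_id] >= conf_threshold:
            labels.append(class_names[class_id])
    return labels


def make_document(camera_id, labels, image_name):
    """Build the metadata record stored alongside a labeled screenshot.
    Field names here are illustrative, not the pattern's real schema."""
    return {
        "camera": camera_id,
        "timestamp": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
        "objects": sorted(set(labels)),
        "counts": {name: labels.count(name) for name in set(labels)},
        "image": image_name,
    }
```

Storing per-label counts in each document is what makes aggregate queries like "total cars detected last Saturday" cheap to answer later.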
See the README for detailed instructions.