Compare commits

...

53 Commits

Author SHA1 Message Date
Blake Blackshear
e1c4aa94f4 track and report all detected object types 2019-12-14 15:18:21 -06:00
Blake Blackshear
5c01720567 Update README.md 2019-12-12 08:08:32 -06:00
Blake Blackshear
262f45c8bc Update sponsorship option 2019-12-11 06:35:17 -06:00
tubalainen
22bb17b2fd Filename updated but not the reference 2019-12-09 06:01:27 -06:00
Blake Blackshear
3a3afe14bf change the ffmpeg config for global defaults and overrides 2019-12-08 16:03:23 -06:00
Blake Blackshear
01f058a482 clarify optional properties 2019-12-08 16:03:23 -06:00
Blake Blackshear
d899ef158e fix datestamp positioning 2019-12-08 16:03:23 -06:00
Blake Blackshear
39d64f7ba7 add health check and handle bad camera names 2019-12-08 16:03:23 -06:00
Blake Blackshear
f148eb5a7b add some comments for regions 2019-12-08 16:03:23 -06:00
Blake Blackshear
297e2f1c0c allow mqtt client_id to be set for multi frigate setups 2019-12-08 16:03:23 -06:00
Blake Blackshear
e818744d81 print the frame time on the image 2019-12-08 08:55:54 -06:00
Blake Blackshear
ceedfae993 add max person area 2019-12-08 07:17:18 -06:00
Blake Blackshear
e13563770d allow full customization of input 2019-12-08 07:06:52 -06:00
Blake Blackshear
a659019d1a move config example 2019-12-08 07:06:52 -06:00
blakeblackshear
ba71927d53 allow setting custom output params and setting the log level for ffmpeg 2019-08-25 08:54:19 -05:00
blakeblackshear
04fed31eac increase watchdog timeout to 10 seconds 2019-08-25 08:54:19 -05:00
blakeblackshear
ebaa8fac01 tweak input params and gracefully kill ffmpeg 2019-08-25 08:54:19 -05:00
blakeblackshear
2ec45cd1b6 send the best person frame over mqtt for faster updates in homeassistant 2019-08-25 08:54:19 -05:00
blakeblackshear
700bd1e3ef use a thread to capture frames from the subprocess so it can be killed properly 2019-07-30 19:11:22 -05:00
Alexis Birkill
c9e9f7a735 Fix comparison of object x-coord against mask (#52) 2019-07-30 19:11:22 -05:00
blakeblackshear
aea4dc8724 a few fixes 2019-07-30 19:11:22 -05:00
blakeblackshear
12d5007b90 add required packages for VAAPI 2019-07-30 19:11:22 -05:00
blakeblackshear
8970e73f75 comment formatting and comment out mask in example config 2019-07-30 19:11:22 -05:00
blakeblackshear
1ba006b24f add some comments to the sample config 2019-07-30 19:11:22 -05:00
blakeblackshear
4a58f16637 tweak the label position 2019-07-30 19:11:22 -05:00
blakeblackshear
436b876b24 add support for ffmpeg hwaccel params and better mask handling 2019-07-30 19:11:22 -05:00
blakeblackshear
a770ab7f69 specify a client id for frigate 2019-07-30 19:11:22 -05:00
blakeblackshear
806acaf445 update dockerignore and debug option 2019-07-30 19:11:22 -05:00
Kyle Niewiada
c653567cc1 Add area labels to bounding boxes (#47)
* Add object size to the bounding box
Remove script from Dockerfile
Fix framerate command
Move default value for framerate
update dockerfile
dockerfile changes
Add person_area label to surrounding box
Update dockerfile
ffmpeg config bug
Add `person_area` label to `best_person` frame
Resolve debug view showing area label for non-persons
Add object size to the bounding box
Add object size to the bounding box
* Move object area outside of conditional to work with all object types
2019-07-30 19:11:22 -05:00
blakeblackshear
8fee8f86a2 take_frame config example 2019-07-30 19:11:22 -05:00
blakeblackshear
59a4b0e650 add ability to process every nth frame 2019-07-30 19:11:22 -05:00
blakeblackshear
834a3df0bc added missing scripts 2019-07-30 19:11:22 -05:00
blakeblackshear
c41b104997 extra ffmpeg params to reduce latency 2019-07-30 19:11:22 -05:00
blakeblackshear
7028b05856 add a benchmark script 2019-07-30 19:11:22 -05:00
blakeblackshear
2d22a04391 reduce verbosity of ffmpeg 2019-07-30 19:11:22 -05:00
blakeblackshear
baa587028b use a regular subprocess for ffmpeg, refactor bounding box drawing 2019-07-30 19:11:22 -05:00
blakeblackshear
2b51dc3e5b experimental: running ffmpeg directly and capturing raw frames 2019-07-30 19:11:22 -05:00
blakeblackshear
9f8278ea8f working odroid build, still needs hwaccel 2019-07-30 19:11:22 -05:00
Blake Blackshear
56b9c754f5 Update README.md 2019-06-18 06:19:13 -07:00
Blake Blackshear
5c4f5ef3f0 Create FUNDING.yml 2019-06-18 06:15:05 -07:00
Blake Blackshear
8c924896c5 Merge pull request #36 from drcrimzon/patch-1
Add MQTT connection error handling
2019-05-15 07:10:53 -05:00
Mike Wilkinson
2c2f0044b9 Remove redundant error check 2019-05-14 11:09:57 -04:00
Mike Wilkinson
874e9085a7 Add MQTT connection error handling 2019-05-14 08:34:14 -04:00
Blake Blackshear
e791d6646b Merge pull request #34 from blakeblackshear/watchdog
0.1.2
2019-05-11 07:43:09 -05:00
blakeblackshear
3019b0218c make the threshold configurable per region. fixes #31 2019-05-11 07:39:27 -05:00
blakeblackshear
6900e140d5 add a watchdog to the capture process to detect silent failures. fixes #27 2019-05-11 07:16:15 -05:00
Blake Blackshear
911c1b2bfa Merge pull request #32 from tubalainen/patch-2
Clarification on username and password for MQTT
2019-05-11 07:14:19 -05:00
Blake Blackshear
f4587462cf Merge pull request #33 from tubalainen/patch-3
Update of the home assistant integration example
2019-05-11 07:14:01 -05:00
tubalainen
cac1faa8ac Update of the home assistant integration example
sensor to binary_sensor
device_class type "moving" does not exist, update to "motion"
2019-05-10 16:47:40 +02:00
tubalainen
9525bae5a3 Clarification on username and password for MQTT 2019-05-10 16:36:22 +02:00
blakeblackshear
dbcfd109f6 fix missing import 2019-05-10 06:19:39 -05:00
Blake Blackshear
f95d8b6210 Merge pull request #26 from blakeblackshear/mask
add the ability to mask the standing location of a person
2019-05-01 06:43:32 -05:00
blakeblackshear
4dacf02ef9 add the ability to mask the standing location of a person 2019-04-30 20:35:22 -05:00
18 changed files with 750 additions and 332 deletions

.dockerignore
View File

@@ -1 +1,6 @@
README.md
README.md
diagram.png
.gitignore
debug
config/
*.pyc

1
.github/FUNDING.yml vendored Normal file
View File

@@ -0,0 +1 @@
github: blakeblackshear

2
.gitignore vendored
View File

@@ -1,2 +1,4 @@
*.pyc
debug
.vscode
config/config.yml

Dockerfile
View File

@@ -1,71 +1,70 @@
FROM ubuntu:16.04
FROM ubuntu:18.04
# Install system packages
RUN apt-get -qq update && apt-get -qq install --no-install-recommends -y python3 \
python3-dev \
python-pil \
python-lxml \
python-tk \
ARG DEVICE
# Install packages for apt repo
RUN apt-get -qq update && apt-get -qq install --no-install-recommends -y \
apt-transport-https \
ca-certificates \
curl \
wget \
gnupg-agent \
dirmngr \
software-properties-common \
&& rm -rf /var/lib/apt/lists/*
COPY scripts/install_odroid_repo.sh .
RUN if [ "$DEVICE" = "odroid" ]; then \
sh /install_odroid_repo.sh; \
fi
RUN apt-get -qq update && apt-get -qq install --no-install-recommends -y \
python3 \
# OpenCV dependencies
ffmpeg \
build-essential \
cmake \
git \
libgtk2.0-dev \
pkg-config \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libtbb2 \
libtbb-dev \
cmake \
unzip \
pkg-config \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libjasper-dev \
libdc1394-22-dev \
x11-apps \
wget \
vim \
ffmpeg \
unzip \
libusb-1.0-0-dev \
python3-setuptools \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libxvidcore-dev \
libx264-dev \
libgtk-3-dev \
libatlas-base-dev \
gfortran \
python3-dev \
# Coral USB Python API Dependencies
libusb-1.0-0 \
python3-pip \
python3-pil \
python3-numpy \
zlib1g-dev \
libgoogle-glog-dev \
swig \
libunwind-dev \
libc++-dev \
libc++abi-dev \
build-essential \
libc++1 \
libc++abi1 \
libunwind8 \
libgcc1 \
# VAAPI drivers for Intel hardware accel
libva-drm2 libva2 i965-va-driver vainfo \
&& rm -rf /var/lib/apt/lists/*
# Install core packages
RUN wget -q -O /tmp/get-pip.py --no-check-certificate https://bootstrap.pypa.io/get-pip.py && python3 /tmp/get-pip.py
RUN pip install -U pip \
numpy \
pillow \
matplotlib \
notebook \
Flask \
imutils \
paho-mqtt \
PyYAML
# Install tensorflow models object detection
RUN GIT_SSL_NO_VERIFY=true git clone -q https://github.com/tensorflow/models /usr/local/lib/python3.5/dist-packages/tensorflow/models
RUN wget -q -P /usr/local/src/ --no-check-certificate https://github.com/google/protobuf/releases/download/v3.5.1/protobuf-python-3.5.1.tar.gz
# Download & build protobuf-python
RUN cd /usr/local/src/ \
&& tar xf protobuf-python-3.5.1.tar.gz \
&& rm protobuf-python-3.5.1.tar.gz \
&& cd /usr/local/src/protobuf-3.5.1/ \
&& ./configure \
&& make \
&& make install \
&& ldconfig \
&& rm -rf /usr/local/src/protobuf-3.5.1/
# Download & build OpenCV
# TODO: use multistage build to reduce image size:
# https://medium.com/@denismakogon/pain-and-gain-running-opencv-application-with-golang-and-docker-on-alpine-3-7-435aa11c7aec
# https://www.merixstudio.com/blog/docker-multi-stage-builds-python-development/
RUN wget -q -P /usr/local/src/ --no-check-certificate https://github.com/opencv/opencv/archive/4.0.1.zip
RUN cd /usr/local/src/ \
&& unzip 4.0.1.zip \
@@ -76,32 +75,35 @@ RUN cd /usr/local/src/ \
&& cmake -D CMAKE_INSTALL_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local/ .. \
&& make -j4 \
&& make install \
&& ldconfig \
&& rm -rf /usr/local/src/opencv-4.0.1
# Download and install EdgeTPU libraries
RUN wget -q -O edgetpu_api.tar.gz --no-check-certificate http://storage.googleapis.com/cloud-iot-edge-pretrained-models/edgetpu_api.tar.gz
# Download and install EdgeTPU libraries for Coral
RUN wget https://dl.google.com/coral/edgetpu_api/edgetpu_api_latest.tar.gz -O edgetpu_api.tar.gz --trust-server-names \
&& tar xzf edgetpu_api.tar.gz
RUN tar xzf edgetpu_api.tar.gz \
&& cd python-tflite-source \
&& cp -p libedgetpu/libedgetpu_x86_64.so /lib/x86_64-linux-gnu/libedgetpu.so \
&& cp edgetpu/swig/compiled_so/_edgetpu_cpp_wrapper_x86_64.so edgetpu/swig/_edgetpu_cpp_wrapper.so \
&& cp edgetpu/swig/compiled_so/edgetpu_cpp_wrapper.py edgetpu/swig/ \
&& python3 setup.py develop --user
COPY scripts/install_edgetpu_api.sh edgetpu_api/install.sh
RUN cd edgetpu_api \
&& /bin/bash install.sh
# Copy a python 3.6 version
RUN cd /usr/local/lib/python3.6/dist-packages/edgetpu/swig/ \
&& ln -s _edgetpu_cpp_wrapper.cpython-35m-arm-linux-gnueabihf.so _edgetpu_cpp_wrapper.cpython-36m-arm-linux-gnueabihf.so
# symlink the model and labels
RUN wget https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite -O mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite --trust-server-names
RUN wget https://dl.google.com/coral/canned_models/coco_labels.txt -O coco_labels.txt --trust-server-names
RUN ln -s mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite /frozen_inference_graph.pb
RUN ln -s /coco_labels.txt /label_map.pbtext
# Minimize image size
RUN (apt-get autoremove -y; \
apt-get autoclean -y)
# symlink the model and labels
RUN ln -s /python-tflite-source/edgetpu/test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite /frozen_inference_graph.pb
RUN ln -s /python-tflite-source/edgetpu/test_data/coco_labels.txt /label_map.pbtext
# Set TF object detection available
ENV PYTHONPATH "$PYTHONPATH:/usr/local/lib/python3.5/dist-packages/tensorflow/models/research:/usr/local/lib/python3.5/dist-packages/tensorflow/models/research/slim"
RUN cd /usr/local/lib/python3.5/dist-packages/tensorflow/models/research && protoc object_detection/protos/*.proto --python_out=.
WORKDIR /opt/frigate/
ADD frigate frigate/
COPY detect_objects.py .
COPY benchmark.py .
CMD ["python3", "-u", "detect_objects.py"]

README.md
View File

@@ -1,7 +1,7 @@
# Frigate - Realtime Object Detection for RTSP Cameras
# Frigate - Realtime Object Detection for IP Cameras
**Note:** This version requires the use of a [Google Coral USB Accelerator](https://coral.withgoogle.com/products/accelerator/)
Uses OpenCV and Tensorflow to perform realtime object detection locally for RTSP cameras. Designed for integration with HomeAssistant or others via MQTT.
Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras. Designed for integration with HomeAssistant or others via MQTT.
- Leverages multiprocessing and threads heavily with an emphasis on realtime over processing every frame
- Allows you to define specific regions (squares) in the image to look for objects
@@ -30,8 +30,9 @@ docker run --rm \
--privileged \
-v /dev/bus/usb:/dev/bus/usb \
-v <path_to_config_dir>:/config:ro \
-v /etc/localtime:/etc/localtime:ro \
-p 5000:5000 \
-e RTSP_PASSWORD='password' \
-e FRIGATE_RTSP_PASSWORD='password' \
frigate:latest
```
@@ -44,35 +45,58 @@ Example docker-compose:
image: frigate:latest
volumes:
- /dev/bus/usb:/dev/bus/usb
- /etc/localtime:/etc/localtime:ro
- <path_to_config>:/config
ports:
- "5000:5000"
environment:
RTSP_PASSWORD: "password"
FRIGATE_RTSP_PASSWORD: "password"
```
A `config.yml` file must exist in the `config` directory. See example [here](config/config.yml).
A `config.yml` file must exist in the `config` directory. See example [here](config/config.example.yml) and device specific info can be found [here](docs/DEVICES.md).
Access the mjpeg stream at `http://localhost:5000/<camera_name>` and the best person snapshot at `http://localhost:5000/<camera_name>/best_person.jpg`
Access the mjpeg stream at `http://localhost:5000/<camera_name>` and the best snapshot for any object type at `http://localhost:5000/<camera_name>/<object_name>/best.jpg`
## Integration with HomeAssistant
```
camera:
- name: Camera Last Person
platform: generic
still_image_url: http://<ip>:5000/<camera_name>/best_person.jpg
platform: mqtt
topic: frigate/<camera_name>/person/snapshot
- name: Camera Last Car
platform: mqtt
topic: frigate/<camera_name>/car/snapshot
sensor:
binary_sensor:
- name: Camera Person
platform: mqtt
state_topic: "frigate/<camera_name>/objects"
value_template: '{{ value_json.person }}'
device_class: moving
state_topic: "frigate/<camera_name>/person"
device_class: motion
availability_topic: "frigate/available"
automation:
- alias: Alert me if a person is detected while armed away
trigger:
platform: state
entity_id: binary_sensor.camera_person
from: 'off'
to: 'on'
condition:
- condition: state
entity_id: alarm_control_panel.home_alarm
state: armed_away
action:
- service: notify.user_telegram
data:
message: "A person was detected."
data:
photo:
- url: http://<ip>:5000/<camera_name>/person/best.jpg
caption: A person was detected.
```
## Tips
- Lower the framerate of the RTSP feed on the camera to reduce the CPU usage for capturing the feed
- Lower the framerate of the video feed on the camera to reduce the CPU usage for capturing the feed
## Future improvements
- [x] Remove motion detection for now

20
benchmark.py Normal file
View File

@@ -0,0 +1,20 @@
import statistics
import numpy as np
from edgetpu.detection.engine import DetectionEngine
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = '/frozen_inference_graph.pb'
# Load the edgetpu engine and labels
engine = DetectionEngine(PATH_TO_CKPT)
frame = np.zeros((300,300,3), np.uint8)
flattened_frame = np.expand_dims(frame, axis=0).flatten()
detection_times = []
for x in range(0, 1000):
objects = engine.DetectWithInputTensor(flattened_frame, threshold=0.1, top_k=3)
detection_times.append(engine.get_inference_time())
print("Average inference time: " + str(statistics.mean(detection_times)))

BIN
config/back-mask.bmp Normal file

Binary file not shown. (Size: 1.8 MiB)

128
config/config.example.yml Normal file
View File

@@ -0,0 +1,128 @@
web_port: 5000
mqtt:
host: mqtt.server.com
topic_prefix: frigate
# client_id: frigate # Optional -- set to override default client id of 'frigate' if running multiple instances
# user: username # Optional -- Uncomment for use
# password: password # Optional -- Uncomment for use
#################
# Default ffmpeg args. Optional and can be overwritten per camera.
# Should work with most RTSP cameras that send h264 video
# Built from the properties below with:
# "ffmpeg" + global_args + input_args + "-i" + input + output_args
#################
# ffmpeg:
# global_args:
# - -hide_banner
# - -loglevel
# - panic
# hwaccel_args: []
# input_args:
# - -avoid_negative_ts
# - make_zero
# - -fflags
# - nobuffer
# - -flags
# - low_delay
# - -strict
# - experimental
# - -fflags
# - +genpts+discardcorrupt
# - -vsync
# - drop
# - -rtsp_transport
# - tcp
# - -stimeout
# - '5000000'
# - -use_wallclock_as_timestamps
# - '1'
# output_args:
# - -vf
# - mpdecimate
# - -f
# - rawvideo
# - -pix_fmt
# - rgb24
####################
# Global object configuration. Applies to all cameras and regions
# unless overridden at the camera/region levels.
# Keys must be valid labels. By default, the model uses coco (https://dl.google.com/coral/canned_models/coco_labels.txt).
# All labels from the model are reported over MQTT. These values are used to filter out false positives.
####################
objects:
person:
min_area: 5000
max_area: 100000
threshold: 0.5
cameras:
back:
ffmpeg:
################
# Source passed to ffmpeg after the -i parameter. Supports anything compatible with OpenCV and FFmpeg.
# Environment variables that begin with 'FRIGATE_' may be referenced in {}
################
input: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
#################
# These values will override default values for just this camera
#################
# global_args: []
# hwaccel_args: []
# input_args: []
# output_args: []
################
## Optional mask. Must be the same dimensions as your video feed.
## The mask works by looking at the bottom center of the bounding box for the detected
## person in the image. If that pixel in the mask is a black pixel, it ignores it as a
## false positive. In my mask, the grass and driveway visible from my backdoor camera
## are white. The garage doors, sky, and trees (anywhere it would be impossible for a
## person to stand) are black.
################
# mask: back-mask.bmp
################
# Allows you to limit the framerate within frigate for cameras that do not support
# custom framerates. A value of 1 tells frigate to look at every frame, 2 every 2nd frame,
# 3 every 3rd frame, etc.
################
take_frame: 1
objects:
person:
min_area: 5000
max_area: 100000
threshold: 0.5
################
# size: size of the region in pixels
# x_offset/y_offset: position of the upper left corner of your region (top left of image is 0,0)
# min_person_area (optional): minimum width*height of the bounding box for the detected person
# max_person_area (optional): maximum width*height of the bounding box for the detected person
# threshold (optional): The minimum decimal percentage (50% hit = 0.5) for the confidence from tensorflow
# Tips: All regions are resized to 300x300 before detection because the model is trained on that size.
# Resizing regions takes CPU power. Ideally, all regions should be as close to 300x300 as possible.
# Defining a region that goes outside the bounds of the image will result in errors.
################
regions:
- size: 350
x_offset: 0
y_offset: 300
objects:
car:
threshold: 0.2
- size: 400
x_offset: 350
y_offset: 250
objects:
person:
min_area: 2000
- size: 400
x_offset: 750
y_offset: 250
objects:
person:
min_area: 2000
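
The mask comment above describes a pixel lookup at the bottom center of each bounding box. A minimal sketch of that check, assuming a grayscale mask loaded with OpenCV at the path from the example and a detected object with pixel coordinates (illustrative names, not frigate's exact code):

```python
import cv2

# Load the mask as grayscale: black (0) pixels mark places where a person cannot stand.
# The path mirrors the commented example above and is an assumption.
mask = cv2.imread("/config/back-mask.bmp", cv2.IMREAD_GRAYSCALE)

def is_masked(obj, mask):
    # Bottom center of the bounding box, clamped to the image bounds.
    y = min(int(obj['ymax']), mask.shape[0] - 1)
    x = min(int((obj['xmax'] - obj['xmin']) / 2.0) + obj['xmin'], mask.shape[1] - 1)
    # A black pixel there means the detection is treated as a false positive and ignored.
    return mask[y][x] == 0
```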

config/config.yml
View File

@@ -1,49 +0,0 @@
web_port: 5000
mqtt:
host: mqtt.server.com
topic_prefix: frigate
cameras:
back:
rtsp:
user: viewer
host: 10.0.10.10
port: 554
# values that begin with a "$" will be replaced with environment variable
password: $RTSP_PASSWORD
path: /cam/realmonitor?channel=1&subtype=2
regions:
- size: 350
x_offset: 0
y_offset: 300
min_person_area: 5000
- size: 400
x_offset: 350
y_offset: 250
min_person_area: 2000
- size: 400
x_offset: 750
y_offset: 250
min_person_area: 2000
back2:
rtsp:
user: viewer
host: 10.0.10.10
port: 554
# values that begin with a "$" will be replaced with environment variable
password: $RTSP_PASSWORD
path: /cam/realmonitor?channel=1&subtype=2
regions:
- size: 350
x_offset: 0
y_offset: 300
min_person_area: 5000
- size: 400
x_offset: 350
y_offset: 250
min_person_area: 2000
- size: 400
x_offset: 750
y_offset: 250
min_person_area: 2000

detect_objects.py
View File

@@ -17,6 +17,32 @@ MQTT_PORT = CONFIG.get('mqtt', {}).get('port', 1883)
MQTT_TOPIC_PREFIX = CONFIG.get('mqtt', {}).get('topic_prefix', 'frigate')
MQTT_USER = CONFIG.get('mqtt', {}).get('user')
MQTT_PASS = CONFIG.get('mqtt', {}).get('password')
MQTT_CLIENT_ID = CONFIG.get('mqtt', {}).get('client_id', 'frigate')
# Set the default FFmpeg config
FFMPEG_CONFIG = CONFIG.get('ffmpeg', {})
FFMPEG_DEFAULT_CONFIG = {
'global_args': FFMPEG_CONFIG.get('global_args',
['-hide_banner','-loglevel','panic']),
'hwaccel_args': FFMPEG_CONFIG.get('hwaccel_args',
[]),
'input_args': FFMPEG_CONFIG.get('input_args',
['-avoid_negative_ts', 'make_zero',
'-fflags', 'nobuffer',
'-flags', 'low_delay',
'-strict', 'experimental',
'-fflags', '+genpts+discardcorrupt',
'-vsync', 'drop',
'-rtsp_transport', 'tcp',
'-stimeout', '5000000',
'-use_wallclock_as_timestamps', '1']),
'output_args': FFMPEG_CONFIG.get('output_args',
['-vf', 'mpdecimate',
'-f', 'rawvideo',
'-pix_fmt', 'rgb24'])
}
GLOBAL_OBJECT_CONFIG = CONFIG.get('objects', {})
WEB_PORT = CONFIG.get('web_port', 5000)
DEBUG = (CONFIG.get('debug', '0') == '1')
@@ -25,9 +51,18 @@ def main():
# connect to mqtt and setup last will
def on_connect(client, userdata, flags, rc):
print("On connect called")
if rc != 0:
if rc == 3:
print ("MQTT Server unavailable")
elif rc == 4:
print ("MQTT Bad username or password")
elif rc == 5:
print ("MQTT Not authorized")
else:
print ("Unable to connect to MQTT: Connection refused. Error code: " + str(rc))
# publish a message to signal that the service is running
client.publish(MQTT_TOPIC_PREFIX+'/available', 'online', retain=True)
client = mqtt.Client()
client = mqtt.Client(client_id=MQTT_CLIENT_ID)
client.on_connect = on_connect
client.will_set(MQTT_TOPIC_PREFIX+'/available', payload='offline', qos=1, retain=True)
if not MQTT_USER is None:
@@ -41,7 +76,7 @@ def main():
cameras = {}
for name, config in CONFIG['cameras'].items():
cameras[name] = Camera(name, config, prepped_frame_queue, client, MQTT_TOPIC_PREFIX)
cameras[name] = Camera(name, FFMPEG_DEFAULT_CONFIG, GLOBAL_OBJECT_CONFIG, config, prepped_frame_queue, client, MQTT_TOPIC_PREFIX)
prepped_queue_processor = PreppedQueueProcessor(
cameras,
@@ -56,21 +91,32 @@ def main():
# create a flask app that encodes frames a mjpeg on demand
app = Flask(__name__)
@app.route('/<camera_name>/best_person.jpg')
def best_person(camera_name):
best_person_frame = cameras[camera_name].get_best_person()
if best_person_frame is None:
best_person_frame = np.zeros((720,1280,3), np.uint8)
ret, jpg = cv2.imencode('.jpg', best_person_frame)
response = make_response(jpg.tobytes())
response.headers['Content-Type'] = 'image/jpg'
return response
@app.route('/')
def ishealthy():
# return a health check
return "Frigate is running. Alive and healthy!"
@app.route('/<camera_name>/<label>/best.jpg')
def best(camera_name, label):
if camera_name in cameras:
best_frame = cameras[camera_name].get_best(label)
if best_frame is None:
best_frame = np.zeros((720,1280,3), np.uint8)
ret, jpg = cv2.imencode('.jpg', best_frame)
response = make_response(jpg.tobytes())
response.headers['Content-Type'] = 'image/jpg'
return response
else:
return f'Camera named {camera_name} not found', 404
@app.route('/<camera_name>')
def mjpeg_feed(camera_name):
# return a multipart response
return Response(imagestream(camera_name),
mimetype='multipart/x-mixed-replace; boundary=frame')
if camera_name in cameras:
# return a multipart response
return Response(imagestream(camera_name),
mimetype='multipart/x-mixed-replace; boundary=frame')
else:
return f'Camera named {camera_name} not found', 404
def imagestream(camera_name):
while True:
@@ -87,4 +133,4 @@ def main():
camera.join()
if __name__ == '__main__':
main()
main()
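
The `/<camera_name>/<label>/best.jpg` route above can be exercised with a few lines of standard-library Python. This is a hypothetical usage sketch: it assumes frigate is reachable on the default `web_port` of 5000 and has a camera named `back` configured.

```python
import urllib.request

# Hypothetical usage: fetch the best person snapshot from a camera named "back".
url = "http://localhost:5000/back/person/best.jpg"
with urllib.request.urlopen(url) as resp, open("best_person.jpg", "wb") as out:
    out.write(resp.read())
```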

74
docs/DEVICES.md Normal file
View File

@@ -0,0 +1,74 @@
# Configuration Examples
### Default (most RTSP cameras)
This is the default ffmpeg command and should work with most RTSP cameras that send h264 video
```yaml
ffmpeg:
global_args:
- -hide_banner
- -loglevel
- panic
hwaccel_args: []
input_args:
- -avoid_negative_ts
- make_zero
- -fflags
- nobuffer
- -flags
- low_delay
- -strict
- experimental
- -fflags
- +genpts+discardcorrupt
- -vsync
- drop
- -rtsp_transport
- tcp
- -stimeout
- '5000000'
- -use_wallclock_as_timestamps
- '1'
output_args:
- -vf
- mpdecimate
- -f
- rawvideo
- -pix_fmt
- rgb24
```
### RTMP Cameras
The input parameters need to be adjusted for RTMP cameras
```yaml
ffmpeg:
input_args:
- -avoid_negative_ts
- make_zero
- -fflags
- nobuffer
- -flags
- low_delay
- -strict
- experimental
- -fflags
- +genpts+discardcorrupt
- -vsync
- drop
- -use_wallclock_as_timestamps
- '1'
```
### Hardware Acceleration
Intel Quicksync
```yaml
ffmpeg:
hwaccel_args:
- -hwaccel
- vaapi
- -hwaccel_device
- /dev/dri/renderD128
- -hwaccel_output_format
- yuv420p
```
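
Each of these snippets only sets some of the argument groups; the full command is assembled in the order used by `start_ffmpeg()` in `frigate/video.py`. A rough sketch of that assembly, with shortened placeholder argument lists:

```python
# Sketch only: shortened placeholder argument lists, not a complete configuration.
global_args = ['-hide_banner', '-loglevel', 'panic']
hwaccel_args = ['-hwaccel', 'vaapi', '-hwaccel_device', '/dev/dri/renderD128']
input_args = ['-rtsp_transport', 'tcp', '-stimeout', '5000000']
output_args = ['-f', 'rawvideo', '-pix_fmt', 'rgb24']
ffmpeg_input = 'rtsp://viewer:password@10.0.10.10:554/cam/realmonitor'  # placeholder

ffmpeg_cmd = (['ffmpeg'] + global_args + hwaccel_args + input_args +
              ['-i', ffmpeg_input] + output_args + ['pipe:'])
print(" ".join(ffmpeg_cmd))
```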

frigate/mqtt.py
View File

@@ -1,33 +1,46 @@
import json
import cv2
import threading
from collections import Counter, defaultdict
class MqttObjectPublisher(threading.Thread):
def __init__(self, client, topic_prefix, objects_parsed, detected_objects):
def __init__(self, client, topic_prefix, objects_parsed, detected_objects, best_frames):
threading.Thread.__init__(self)
self.client = client
self.topic_prefix = topic_prefix
self.objects_parsed = objects_parsed
self._detected_objects = detected_objects
self.best_frames = best_frames
def run(self):
last_sent_payload = ""
current_object_status = defaultdict(lambda: 'OFF')
while True:
# initialize the payload
payload = {}
# wait until objects have been parsed
with self.objects_parsed:
self.objects_parsed.wait()
# add all the person scores in detected objects
# make a copy of detected objects
detected_objects = self._detected_objects.copy()
person_score = sum([obj['score'] for obj in detected_objects if obj['name'] == 'person'])
# if the person score is more than 100, set person to ON
payload['person'] = 'ON' if int(person_score*100) > 100 else 'OFF'
# send message for objects if different
new_payload = json.dumps(payload, sort_keys=True)
if new_payload != last_sent_payload:
last_sent_payload = new_payload
self.client.publish(self.topic_prefix+'/objects', new_payload, retain=False)
# total up all scores by object type
obj_counter = Counter()
for obj in detected_objects:
obj_counter[obj['name']] += obj['score']
# report on detected objects
for obj_name, total_score in obj_counter.items():
new_status = 'ON' if int(total_score*100) > 100 else 'OFF'
if new_status != current_object_status[obj_name]:
current_object_status[obj_name] = new_status
self.client.publish(self.topic_prefix+'/'+obj_name, new_status, retain=False)
# send the snapshot over mqtt as well
if not self.best_frames.best_frames[obj_name] is None:
ret, jpg = cv2.imencode('.jpg', self.best_frames.best_frames[obj_name])
if ret:
jpg_bytes = jpg.tobytes()
self.client.publish(self.topic_prefix+'/'+obj_name+'/snapshot', jpg_bytes, retain=True)
# expire any objects that are ON and no longer detected
expired_objects = [obj_name for obj_name, status in current_object_status.items() if status == 'ON' and not obj_name in obj_counter]
for obj_name in expired_objects:
self.client.publish(self.topic_prefix+'/'+obj_name, 'OFF', retain=False)
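
To watch what MqttObjectPublisher sends, a small paho-mqtt subscriber is enough. A minimal sketch, assuming a broker at `mqtt.server.com` and the default topic_prefix of `frigate`:

```python
import paho.mqtt.client as mqtt

# Object states arrive on frigate/<camera>/<object> as ON/OFF text;
# snapshots arrive on frigate/<camera>/<object>/snapshot as raw JPEG bytes.
def on_message(client, userdata, msg):
    if msg.topic.endswith('/snapshot'):
        print(msg.topic + ': ' + str(len(msg.payload)) + ' bytes of JPEG')
    else:
        print(msg.topic + ': ' + msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect('mqtt.server.com', 1883)
client.subscribe('frigate/#')
client.loop_forever()
```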

frigate/object_detection.py
View File

@@ -38,19 +38,18 @@ class PreppedQueueProcessor(threading.Thread):
frame = self.prepped_frame_queue.get()
# Actual detection.
objects = self.engine.DetectWithInputTensor(frame['frame'], threshold=0.5, top_k=3)
objects = self.engine.DetectWithInputTensor(frame['frame'], threshold=0.5, top_k=5)
# print(self.engine.get_inference_time())
# parse and pass detected objects back to the camera
parsed_objects = []
for obj in objects:
box = obj.bounding_box.flatten().tolist()
parsed_objects.append({
'region_id': frame['region_id'],
'frame_time': frame['frame_time'],
'name': str(self.labels[obj.label_id]),
'score': float(obj.score),
'xmin': int((box[0] * frame['region_size']) + frame['region_x_offset']),
'ymin': int((box[1] * frame['region_size']) + frame['region_y_offset']),
'xmax': int((box[2] * frame['region_size']) + frame['region_x_offset']),
'ymax': int((box[3] * frame['region_size']) + frame['region_y_offset'])
'box': obj.bounding_box.flatten().tolist()
})
self.cameras[frame['camera_name']].add_objects(parsed_objects)
@@ -59,7 +58,7 @@ class PreppedQueueProcessor(threading.Thread):
class FramePrepper(threading.Thread):
def __init__(self, camera_name, shared_frame, frame_time, frame_ready,
frame_lock,
region_size, region_x_offset, region_y_offset,
region_size, region_x_offset, region_y_offset, region_id,
prepped_frame_queue):
threading.Thread.__init__(self)
@@ -71,6 +70,7 @@ class FramePrepper(threading.Thread):
self.region_size = region_size
self.region_x_offset = region_x_offset
self.region_y_offset = region_y_offset
self.region_id = region_id
self.prepped_frame_queue = prepped_frame_queue
def run(self):
@@ -88,13 +88,11 @@ class FramePrepper(threading.Thread):
cropped_frame = self.shared_frame[self.region_y_offset:self.region_y_offset+self.region_size, self.region_x_offset:self.region_x_offset+self.region_size].copy()
frame_time = self.frame_time.value
# convert to RGB
cropped_frame_rgb = cv2.cvtColor(cropped_frame, cv2.COLOR_BGR2RGB)
# Resize to 300x300 if needed
if cropped_frame_rgb.shape != (300, 300, 3):
cropped_frame_rgb = cv2.resize(cropped_frame_rgb, dsize=(300, 300), interpolation=cv2.INTER_LINEAR)
if cropped_frame.shape != (300, 300, 3):
cropped_frame = cv2.resize(cropped_frame, dsize=(300, 300), interpolation=cv2.INTER_LINEAR)
# Expand dimensions since the model expects images to have shape: [1, 300, 300, 3]
frame_expanded = np.expand_dims(cropped_frame_rgb, axis=0)
frame_expanded = np.expand_dims(cropped_frame, axis=0)
# add the frame to the queue
if not self.prepped_frame_queue.full():
@@ -103,6 +101,7 @@ class FramePrepper(threading.Thread):
'frame_time': frame_time,
'frame': frame_expanded.flatten().copy(),
'region_size': self.region_size,
'region_id': self.region_id,
'region_x_offset': self.region_x_offset,
'region_y_offset': self.region_y_offset
})

frigate/objects.py
View File

@@ -2,7 +2,8 @@ import time
import datetime
import threading
import cv2
from object_detection.utils import visualization_utils as vis_util
import numpy as np
from . util import draw_box_with_label
class ObjectCleaner(threading.Thread):
def __init__(self, objects_parsed, detected_objects):
@@ -35,16 +36,15 @@ class ObjectCleaner(threading.Thread):
self._objects_parsed.notify_all()
# Maintains the frame and person with the highest score from the most recent
# motion event
class BestPersonFrame(threading.Thread):
# Maintains the frame and object with the highest score
class BestFrames(threading.Thread):
def __init__(self, objects_parsed, recent_frames, detected_objects):
threading.Thread.__init__(self)
self.objects_parsed = objects_parsed
self.recent_frames = recent_frames
self.detected_objects = detected_objects
self.best_person = None
self.best_frame = None
self.best_objects = {}
self.best_frames = {}
def run(self):
while True:
@@ -55,42 +55,30 @@ class BestPersonFrame(threading.Thread):
# make a copy of detected objects
detected_objects = self.detected_objects.copy()
detected_people = [obj for obj in detected_objects if obj['name'] == 'person']
# get the highest scoring person
new_best_person = max(detected_people, key=lambda x:x['score'], default=self.best_person)
# if there isnt a person, continue
if new_best_person is None:
continue
# if there is no current best_person
if self.best_person is None:
self.best_person = new_best_person
# if there is already a best_person
else:
now = datetime.datetime.now().timestamp()
# if the new best person is a higher score than the current best person
# or the current person is more than 1 minute old, use the new best person
if new_best_person['score'] > self.best_person['score'] or (now - self.best_person['frame_time']) > 60:
self.best_person = new_best_person
for obj in detected_objects:
if obj['name'] in self.best_objects:
now = datetime.datetime.now().timestamp()
# if the object is a higher score than the current best score
# or the current object is more than 1 minute old, use the new object
if obj['score'] > self.best_objects[obj['name']]['score'] or (now - self.best_objects[obj['name']]['frame_time']) > 60:
self.best_objects[obj['name']] = obj
else:
self.best_objects[obj['name']] = obj
# make a copy of the recent frames
recent_frames = self.recent_frames.copy()
if not self.best_person is None and self.best_person['frame_time'] in recent_frames:
best_frame = recent_frames[self.best_person['frame_time']]
best_frame = cv2.cvtColor(best_frame, cv2.COLOR_BGR2RGB)
# draw the bounding box on the frame
vis_util.draw_bounding_box_on_image_array(best_frame,
self.best_person['ymin'],
self.best_person['xmin'],
self.best_person['ymax'],
self.best_person['xmax'],
color='red',
thickness=2,
display_str_list=["{}: {}%".format(self.best_person['name'],int(self.best_person['score']*100))],
use_normalized_coordinates=False)
# convert back to BGR
self.best_frame = cv2.cvtColor(best_frame, cv2.COLOR_RGB2BGR)
for name, obj in self.best_objects.items():
if obj['frame_time'] in recent_frames:
best_frame = recent_frames[obj['frame_time']] #, np.zeros((720,1280,3), np.uint8))
label = "{}: {}% {}".format(name,int(obj['score']*100),int(obj['area']))
draw_box_with_label(best_frame, obj['xmin'], obj['ymin'],
obj['xmax'], obj['ymax'], label)
# print a timestamp
time_to_show = datetime.datetime.fromtimestamp(obj['frame_time']).strftime("%m/%d/%Y %H:%M:%S")
cv2.putText(best_frame, time_to_show, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, fontScale=.8, color=(255, 255, 255), thickness=2)
self.best_frames[name] = cv2.cvtColor(best_frame, cv2.COLOR_RGB2BGR)

frigate/util.py
View File

@@ -1,5 +1,26 @@
import numpy as np
import cv2
# convert shared memory array into numpy array
def tonumpyarray(mp_arr):
return np.frombuffer(mp_arr.get_obj(), dtype=np.uint8)
return np.frombuffer(mp_arr.get_obj(), dtype=np.uint8)
def draw_box_with_label(frame, x_min, y_min, x_max, y_max, label):
color = (255,0,0)
cv2.rectangle(frame, (x_min, y_min),
(x_max, y_max),
color, 2)
font_scale = 0.5
font = cv2.FONT_HERSHEY_SIMPLEX
# get the width and height of the text box
size = cv2.getTextSize(label, font, fontScale=font_scale, thickness=2)
text_width = size[0][0]
text_height = size[0][1]
line_height = text_height + size[1]
# set the text start position
text_offset_x = x_min
text_offset_y = 0 if y_min < line_height else y_min - (line_height+8)
# make the coords of the box with a small padding of two pixels
textbox_coords = ((text_offset_x, text_offset_y), (text_offset_x + text_width + 2, text_offset_y + line_height))
cv2.rectangle(frame, textbox_coords[0], textbox_coords[1], color, cv2.FILLED)
cv2.putText(frame, label, (text_offset_x, text_offset_y + line_height - 3), font, fontScale=font_scale, color=(0, 0, 0), thickness=2)

frigate/video.py
View File

@@ -5,59 +5,14 @@ import cv2
import threading
import ctypes
import multiprocessing as mp
from object_detection.utils import visualization_utils as vis_util
from . util import tonumpyarray
import subprocess as sp
import numpy as np
from collections import defaultdict
from . util import tonumpyarray, draw_box_with_label
from . object_detection import FramePrepper
from . objects import ObjectCleaner, BestPersonFrame
from . objects import ObjectCleaner, BestFrames
from . mqtt import MqttObjectPublisher
# fetch the frames as fast as possible and store current frame in a shared memory array
def fetch_frames(shared_arr, shared_frame_time, frame_lock, frame_ready, frame_shape, rtsp_url):
# convert shared memory array into numpy and shape into image array
arr = tonumpyarray(shared_arr).reshape(frame_shape)
# start the video capture
video = cv2.VideoCapture()
video.open(rtsp_url)
# keep the buffer small so we minimize old data
video.set(cv2.CAP_PROP_BUFFERSIZE,1)
bad_frame_counter = 0
while True:
# check if the video stream is still open, and reopen if needed
if not video.isOpened():
success = video.open(rtsp_url)
if not success:
time.sleep(1)
continue
# grab the frame, but dont decode it yet
ret = video.grab()
# snapshot the time the frame was grabbed
frame_time = datetime.datetime.now()
if ret:
# go ahead and decode the current frame
ret, frame = video.retrieve()
if ret:
# Lock access and update frame
with frame_lock:
arr[:] = frame
shared_frame_time.value = frame_time.timestamp()
# Notify with the condition that a new frame is ready
with frame_ready:
frame_ready.notify_all()
bad_frame_counter = 0
else:
print("Unable to decode frame")
bad_frame_counter += 1
else:
print("Unable to grab a frame")
bad_frame_counter += 1
if bad_frame_counter > 100:
video.release()
video.release()
# Stores 2 seconds worth of frames when motion is detected so they can be used for other threads
class FrameTracker(threading.Thread):
def __init__(self, shared_frame, frame_time, frame_ready, frame_lock, recent_frames):
@@ -92,40 +47,97 @@ class FrameTracker(threading.Thread):
if (now - k) > 2:
del self.recent_frames[k]
def get_frame_shape(rtsp_url):
def get_frame_shape(source):
# capture a single frame and check the frame shape so the correct array
# size can be allocated in memory
video = cv2.VideoCapture(rtsp_url)
video = cv2.VideoCapture(source)
ret, frame = video.read()
frame_shape = frame.shape
video.release()
return frame_shape
def get_rtsp_url(rtsp_config):
if (rtsp_config['password'].startswith('$')):
rtsp_config['password'] = os.getenv(rtsp_config['password'][1:])
return 'rtsp://{}:{}@{}:{}{}'.format(rtsp_config['user'],
rtsp_config['password'], rtsp_config['host'], rtsp_config['port'],
rtsp_config['path'])
def get_ffmpeg_input(ffmpeg_input):
frigate_vars = {k: v for k, v in os.environ.items() if k.startswith('FRIGATE_')}
return ffmpeg_input.format(**frigate_vars)
class CameraWatchdog(threading.Thread):
def __init__(self, camera):
threading.Thread.__init__(self)
self.camera = camera
def run(self):
while True:
# wait a bit before checking
time.sleep(10)
if (datetime.datetime.now().timestamp() - self.camera.frame_time.value) > 300:
print("last frame is more than 5 minutes old, restarting camera capture...")
self.camera.start_or_restart_capture()
time.sleep(5)
# Thread to read the stdout of the ffmpeg process and update the current frame
class CameraCapture(threading.Thread):
def __init__(self, camera):
threading.Thread.__init__(self)
self.camera = camera
def run(self):
frame_num = 0
while True:
if self.camera.ffmpeg_process.poll() != None:
print("ffmpeg process is not running. exiting capture thread...")
break
raw_image = self.camera.ffmpeg_process.stdout.read(self.camera.frame_size)
if len(raw_image) == 0:
print("ffmpeg didnt return a frame. something is wrong. exiting capture thread...")
break
frame_num += 1
if (frame_num % self.camera.take_frame) != 0:
continue
with self.camera.frame_lock:
self.camera.frame_time.value = datetime.datetime.now().timestamp()
self.camera.current_frame[:] = (
np
.frombuffer(raw_image, np.uint8)
.reshape(self.camera.frame_shape)
)
# Notify with the condition that a new frame is ready
with self.camera.frame_ready:
self.camera.frame_ready.notify_all()
class Camera:
def __init__(self, name, config, prepped_frame_queue, mqtt_client, mqtt_prefix):
def __init__(self, name, ffmpeg_config, global_objects_config, config, prepped_frame_queue, mqtt_client, mqtt_prefix):
self.name = name
self.config = config
self.detected_objects = []
self.recent_frames = {}
self.rtsp_url = get_rtsp_url(self.config['rtsp'])
self.ffmpeg = config.get('ffmpeg', {})
self.ffmpeg_input = get_ffmpeg_input(self.ffmpeg['input'])
self.ffmpeg_global_args = self.ffmpeg.get('global_args', ffmpeg_config['global_args'])
self.ffmpeg_hwaccel_args = self.ffmpeg.get('hwaccel_args', ffmpeg_config['hwaccel_args'])
self.ffmpeg_input_args = self.ffmpeg.get('input_args', ffmpeg_config['input_args'])
self.ffmpeg_output_args = self.ffmpeg.get('output_args', ffmpeg_config['output_args'])
camera_objects_config = config.get('objects', {})
self.take_frame = self.config.get('take_frame', 1)
self.regions = self.config['regions']
self.frame_shape = get_frame_shape(self.rtsp_url)
self.frame_shape = get_frame_shape(self.ffmpeg_input)
self.frame_size = self.frame_shape[0] * self.frame_shape[1] * self.frame_shape[2]
self.mqtt_client = mqtt_client
self.mqtt_topic_prefix = '{}/{}'.format(mqtt_prefix, self.name)
# compute the flattened array length from the shape of the frame
flat_array_length = self.frame_shape[0] * self.frame_shape[1] * self.frame_shape[2]
# create shared array for storing the full frame image data
self.shared_frame_array = mp.Array(ctypes.c_uint8, flat_array_length)
# create a numpy array for the current frame in initialize to zeros
self.current_frame = np.zeros(self.frame_shape, np.uint8)
# create shared value for storing the frame_time
self.shared_frame_time = mp.Value('d', 0.0)
self.frame_time = mp.Value('d', 0.0)
# Lock to control access to the frame
self.frame_lock = mp.Lock()
# Condition for notifying that a new frame is ready
@@ -133,120 +145,197 @@ class Camera:
# Condition for notifying that objects were parsed
self.objects_parsed = mp.Condition()
# shape current frame so it can be treated as a numpy image
self.shared_frame_np = tonumpyarray(self.shared_frame_array).reshape(self.frame_shape)
# create the process to capture frames from the RTSP stream and store in a shared array
self.capture_process = mp.Process(target=fetch_frames, args=(self.shared_frame_array,
self.shared_frame_time, self.frame_lock, self.frame_ready, self.frame_shape, self.rtsp_url))
self.capture_process.daemon = True
self.ffmpeg_process = None
self.capture_thread = None
# for each region, create a separate thread to resize the region and prep for detection
self.detection_prep_threads = []
for region in self.config['regions']:
for index, region in enumerate(self.config['regions']):
region_objects = region.get('objects', {})
# build objects config for region
objects_with_config = set().union(global_objects_config.keys(), camera_objects_config.keys(), region_objects.keys())
merged_objects_config = defaultdict(lambda: {})
for obj in objects_with_config:
merged_objects_config[obj] = {**global_objects_config.get(obj,{}), **camera_objects_config.get(obj, {}), **region_objects.get(obj, {})}
region['objects'] = merged_objects_config
self.detection_prep_threads.append(FramePrepper(
self.name,
self.shared_frame_np,
self.shared_frame_time,
self.current_frame,
self.frame_time,
self.frame_ready,
self.frame_lock,
region['size'], region['x_offset'], region['y_offset'],
region['size'], region['x_offset'], region['y_offset'], index,
prepped_frame_queue
))
# start a thread to store recent motion frames for processing
self.frame_tracker = FrameTracker(self.shared_frame_np, self.shared_frame_time,
self.frame_tracker = FrameTracker(self.current_frame, self.frame_time,
self.frame_ready, self.frame_lock, self.recent_frames)
self.frame_tracker.start()
# start a thread to store the highest scoring recent person frame
self.best_person_frame = BestPersonFrame(self.objects_parsed, self.recent_frames, self.detected_objects)
self.best_person_frame.start()
# start a thread to store the highest scoring recent frames for monitored object types
self.best_frames = BestFrames(self.objects_parsed, self.recent_frames, self.detected_objects)
self.best_frames.start()
# start a thread to expire objects from the detected objects list
self.object_cleaner = ObjectCleaner(self.objects_parsed, self.detected_objects)
self.object_cleaner.start()
# start a thread to publish object scores (currently only person)
mqtt_publisher = MqttObjectPublisher(self.mqtt_client, self.mqtt_topic_prefix, self.objects_parsed, self.detected_objects)
# start a thread to publish object scores
mqtt_publisher = MqttObjectPublisher(self.mqtt_client, self.mqtt_topic_prefix, self.objects_parsed, self.detected_objects, self.best_frames)
mqtt_publisher.start()
# create a watchdog thread for capture process
self.watchdog = CameraWatchdog(self)
# load in the mask for object detection
if 'mask' in self.config:
self.mask = cv2.imread("/config/{}".format(self.config['mask']), cv2.IMREAD_GRAYSCALE)
else:
self.mask = None
if self.mask is None:
self.mask = np.zeros((self.frame_shape[0], self.frame_shape[1], 1), np.uint8)
self.mask[:] = 255
def start_or_restart_capture(self):
if not self.ffmpeg_process is None:
print("Terminating the existing ffmpeg process...")
self.ffmpeg_process.terminate()
try:
print("Waiting for ffmpeg to exit gracefully...")
self.ffmpeg_process.wait(timeout=30)
except sp.TimeoutExpired:
print("FFmpeg didnt exit. Force killing...")
self.ffmpeg_process.kill()
self.ffmpeg_process.wait()
print("Waiting for the capture thread to exit...")
self.capture_thread.join()
self.ffmpeg_process = None
self.capture_thread = None
# create the process to capture frames from the input stream and store in a shared array
print("Creating a new ffmpeg process...")
self.start_ffmpeg()
print("Creating a new capture thread...")
self.capture_thread = CameraCapture(self)
print("Starting a new capture thread...")
self.capture_thread.start()
def start_ffmpeg(self):
ffmpeg_cmd = (['ffmpeg'] +
self.ffmpeg_global_args +
self.ffmpeg_hwaccel_args +
self.ffmpeg_input_args +
['-i', self.ffmpeg_input] +
self.ffmpeg_output_args +
['pipe:'])
print(" ".join(ffmpeg_cmd))
self.ffmpeg_process = sp.Popen(ffmpeg_cmd, stdout = sp.PIPE, bufsize=self.frame_size)
def start(self):
self.capture_process.start()
self.start_or_restart_capture()
# start the object detection prep threads
for detection_prep_thread in self.detection_prep_threads:
detection_prep_thread.start()
self.watchdog.start()
def join(self):
self.capture_process.join()
self.capture_thread.join()
def get_capture_pid(self):
return self.capture_process.pid
return self.ffmpeg_process.pid
def add_objects(self, objects):
if len(objects) == 0:
return
for obj in objects:
if obj['name'] == 'person':
person_area = (obj['xmax']-obj['xmin'])*(obj['ymax']-obj['ymin'])
# find the matching region
region = None
for r in self.regions:
if (
obj['xmin'] >= r['x_offset'] and
obj['ymin'] >= r['y_offset'] and
obj['xmax'] <= r['x_offset']+r['size'] and
obj['ymax'] <= r['y_offset']+r['size']
):
region = r
break
# find the matching region
region = self.regions[obj['region_id']]
# Compute some extra properties
obj.update({
'xmin': int((obj['box'][0] * region['size']) + region['x_offset']),
'ymin': int((obj['box'][1] * region['size']) + region['y_offset']),
'xmax': int((obj['box'][2] * region['size']) + region['x_offset']),
'ymax': int((obj['box'][3] * region['size']) + region['y_offset'])
})
# Compute the area
obj['area'] = (obj['xmax']-obj['xmin'])*(obj['ymax']-obj['ymin'])
object_name = obj['name']
if object_name in region['objects']:
obj_settings = region['objects'][object_name]
# if the min area is larger than the
# detected object, don't add it to detected objects
if obj_settings.get('min_area',-1) > obj['area']:
continue
# if the min person area is larger than the
# detected person, don't add it to detected objects
if region and region['min_person_area'] > person_area:
# if the detected object is larger than the
# max area, don't add it to detected objects
if obj_settings.get('max_area', region['size']**2) < obj['area']:
continue
# if the score is lower than the threshold, skip
if obj_settings.get('threshold', 0) > obj['score']:
continue
# compute the coordinates of the object and make sure
# the location isnt outside the bounds of the image (can happen from rounding)
y_location = min(int(obj['ymax']), len(self.mask)-1)
x_location = min(int((obj['xmax']-obj['xmin'])/2.0)+obj['xmin'], len(self.mask[0])-1)
# if the object is in a masked location, don't add it to detected objects
if self.mask[y_location][x_location] == [0]:
continue
self.detected_objects.append(obj)
with self.objects_parsed:
self.objects_parsed.notify_all()
def get_best_person(self):
return self.best_person_frame.best_frame
def get_best(self, label):
return self.best_frames.best_frames.get(label)
def get_current_frame_with_objects(self):
# make a copy of the current detected objects
detected_objects = self.detected_objects.copy()
# lock and make a copy of the current frame
with self.frame_lock:
frame = self.shared_frame_np.copy()
frame = self.current_frame.copy()
frame_time = self.frame_time.value
# convert to RGB for drawing
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
# draw the bounding boxes on the screen
for obj in detected_objects:
vis_util.draw_bounding_box_on_image_array(frame,
obj['ymin'],
obj['xmin'],
obj['ymax'],
obj['xmax'],
color='red',
thickness=2,
display_str_list=["{}: {}%".format(obj['name'],int(obj['score']*100))],
use_normalized_coordinates=False)
label = "{}: {}% {}".format(obj['name'],int(obj['score']*100),int(obj['area']))
draw_box_with_label(frame, obj['xmin'], obj['ymin'], obj['xmax'], obj['ymax'], label)
for region in self.regions:
color = (255,255,255)
cv2.rectangle(frame, (region['x_offset'], region['y_offset']),
(region['x_offset']+region['size'], region['y_offset']+region['size']),
color, 2)
# print a timestamp
time_to_show = datetime.datetime.fromtimestamp(frame_time).strftime("%m/%d/%Y %H:%M:%S")
cv2.putText(frame, time_to_show, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, fontScale=.8, color=(255, 255, 255), thickness=2)
# convert back to BGR
# convert to BGR
frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
return frame

scripts/install_edgetpu_api.sh
View File

@@ -0,0 +1,50 @@
#!/bin/bash
set -e
CPU_ARCH=$(uname -m)
OS_VERSION=$(uname -v)
echo "CPU_ARCH ${CPU_ARCH}"
echo "OS_VERSION ${OS_VERSION}"
if [[ "${CPU_ARCH}" == "x86_64" ]]; then
echo "Recognized as Linux on x86_64."
LIBEDGETPU_SUFFIX=x86_64
HOST_GNU_TYPE=x86_64-linux-gnu
elif [[ "${CPU_ARCH}" == "armv7l" ]]; then
echo "Recognized as Linux on ARM32 platform."
LIBEDGETPU_SUFFIX=arm32
HOST_GNU_TYPE=arm-linux-gnueabihf
elif [[ "${CPU_ARCH}" == "aarch64" ]]; then
echo "Recognized as generic ARM64 platform."
LIBEDGETPU_SUFFIX=arm64
HOST_GNU_TYPE=aarch64-linux-gnu
fi
if [[ -z "${HOST_GNU_TYPE}" ]]; then
echo "Your platform is not supported."
exit 1
fi
echo "Using maximum operating frequency."
LIBEDGETPU_SRC="libedgetpu/libedgetpu_${LIBEDGETPU_SUFFIX}.so"
LIBEDGETPU_DST="/usr/lib/${HOST_GNU_TYPE}/libedgetpu.so.1.0"
# Runtime library.
echo "Installing Edge TPU runtime library [${LIBEDGETPU_DST}]..."
if [[ -f "${LIBEDGETPU_DST}" ]]; then
echo "File already exists. Replacing it..."
rm -f "${LIBEDGETPU_DST}"
fi
cp -p "${LIBEDGETPU_SRC}" "${LIBEDGETPU_DST}"
ldconfig
echo "Done."
# Python API.
WHEEL=$(ls edgetpu-*-py3-none-any.whl 2>/dev/null)
if [[ $? == 0 ]]; then
echo "Installing Edge TPU Python API..."
python3 -m pip install --no-deps "${WHEEL}"
echo "Done."
fi

scripts/install_odroid_repo.sh
View File

@@ -0,0 +1,5 @@
#!/bin/bash
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys D986B59D
echo "deb http://deb.odroid.in/5422-s bionic main" > /etc/apt/sources.list.d/odroid.list