The logic we will build is the following: regularly capture snapshots from the infrared camera, normalize them, and store them somewhere. Label the images (person detected / no person detected) and train a model on them. Deploy the model to the Raspberry Pi and run it periodically against newly captured images to detect whether anyone is in the room.
Now that we have all the hardware and software ready, let's configure the system to periodically capture camera images and store them locally. We will use these images later to train our model.
A cron is basically a procedure that runs some custom actions on a regular schedule. Let's add a cron to config.yaml that takes a picture from the sensor and stores it in a local directory. First, create a directory on the Raspberry Pi where the images will be stored.
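For example, assuming the same path that the cron configuration below writes to:

mkdir -p ~/datasets/people_detect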
Then add the cron logic to config.yaml:
cron.ThermalCameraSnapshotCron:
    cron_expression: '* * * * *'
    actions:
        - action: camera.ir.mlx90640.capture_image
          args:
              image_file: ~/datasets/people_detect/${int(__import__('time').time())}.jpg
              grayscale: true
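The ${...} block in image_file is evaluated as a Python expression when the action runs, so every snapshot gets a unique, epoch-timestamped file name. In plain Python the expression boils down to:

import time

# Same expression as in the cron's image_file template: the current epoch time as an integer
filename = f'{int(time.time())}.jpg'   # e.g. '1712345678.jpg'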
Once you have captured enough images, copy them to your computer, label them, and train the model. Now comes the boring part: manually labelling each image as positive (person present) or negative (no person). I used a script to make this task less tedious: it lets you label the images interactively while viewing them, and it moves each one to the correct target directory. Install the dependencies on your local machine and clone the repository.
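As a rough idea of what such a labelling helper can look like, here is a minimal sketch (this is not the author's script; the source path, the positive/negative folder names and the y/n keys are illustrative assumptions). It displays each capture, asks for a key on the terminal, and moves the file into a class sub-folder.

import os
import shutil
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

src_dir = os.path.expanduser('~/datasets/people_detect')   # unlabelled captures
dst_dirs = {
    'y': os.path.join(src_dir, 'positive'),   # person detected
    'n': os.path.join(src_dir, 'negative'),   # no person detected
}

for d in dst_dirs.values():
    os.makedirs(d, exist_ok=True)

for name in sorted(os.listdir(src_dir)):
    path = os.path.join(src_dir, name)
    if not (name.endswith('.jpg') and os.path.isfile(path)):
        continue

    # Show the capture and ask for a label
    plt.imshow(mpimg.imread(path), cmap='gray')
    plt.title(name)
    plt.show(block=False)
    plt.pause(0.1)

    label = input(f'{name} - person present? [y/n, anything else to skip]: ').strip().lower()
    plt.close()

    if label in dst_dirs:
        shutil.move(path, os.path.join(dst_dirs[label], name))

A folder-per-class layout like this is also what the Keras code below expects.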
With the dataset labelled, the images can be loaded with Keras' ImageDataGenerator, reserving 30% of them for the test set:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# 30% of the images go into the test set
generator = ImageDataGenerator(rescale=1./255, validation_split=0.3)

# dataset_dir is the root of the labelled images, with one sub-folder per class
train_data = generator.flow_from_directory(dataset_dir,
                                           target_size=image_size,
                                           batch_size=batch_size,
                                           subset='training',
                                           class_mode='categorical',
                                           color_mode='grayscale')

test_data = generator.flow_from_directory(dataset_dir,
                                          target_size=image_size,
                                          batch_size=batch_size,
                                          subset='validation',
                                          class_mode='categorical',
                                          color_mode='grayscale')
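The two generators can then be fed straight into a Keras model. As a minimal sketch (not the author's actual architecture; the layer sizes and epoch count are illustrative assumptions), a small fully-connected classifier could be trained on them like this:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense

model = Sequential([
    Flatten(input_shape=image_size + (1,)),   # grayscale frames -> flat vector
    Dense(64, activation='relu'),
    Dense(2, activation='softmax'),           # two classes: person / no person
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_data, validation_data=test_data, epochs=10)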