In recent years, with the continuous development of the transportation industry, car ownership has kept rising. With so many vehicles on the road, collisions, rear-end crashes, and similar incidents occur frequently, and serious cases turn into traffic accidents. Investigation and research have found that large buses currently account for a disproportionate share of accidents, not only because of their size, but also because their own safety supervision systems have not been fully established. This raises the likelihood of bus accidents and poses a very serious threat to people's lives and property.
On March 7, 2017, the Ministry of Transport issued a notice on implementing the transportation industry standard "Safety Technical Conditions for Operating Passenger Cars" (JT/T 1094-2016). Article 4.1.5 of the standard's technical requirements stipulates that operating buses longer than 9 m must be equipped with a lane departure warning system (LDWS) and a compliant forward collision warning (FCW) function, and Article 5 of the transition requirements specifies a 13-month transition period for implementing the standard.
As urban economies keep developing and cities keep expanding, transportation companies are also growing rapidly, and public transport safety is receiving more and more attention at the policy level. Although vehicles are equipped with on-board video monitoring systems that allow real-time preview and dispatching, driving safety still cannot be guaranteed: drivers' operating practices and driving behavior go unsupervised. Traffic accidents caused by speeding, fatigued driving, traffic violations, smoking while driving, inattention, phone use, and other bad behaviors account for as much as 95% of the total; vehicle wear runs 5-10 times normal, and fuel consumption rises by 10%-50%.
This project is open source. It covers only a fairly simple piece of the requirement analysis, detecting eye fatigue; I will not open-source the other requirements for now. You can extend it according to your own business needs. This project is mainly meant as a simple guide to get you thinking.
OpenCV ships pre-trained models for face detection (haarcascade_frontalface_default.xml) and eye detection (haarcascade_eye.xml), which we can call directly. Loading them is very simple:
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
So-called real-time detection can be understood simply as processing a video. A video is a sequence of frames, and each frame can be treated as a picture. To detect whether a person is fatigued, we examine the state of their eyes over a short period, that is, analyze how the eye region changes from frame to frame, and apply a standard for judging fatigue. Such standards have been tested in practice.
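One widely used "tested standard" of this kind is PERCLOS: the percentage of frames in a recent time window in which the eyes are closed. The sketch below is illustrative only; the window size and threshold are hypothetical values I chose for the example, not calibrated ones (real systems tune them against ground-truth drowsiness data):

```python
from collections import deque

def make_perclos(window=30, threshold=0.4):
    """Track the fraction of 'eyes closed' frames over a sliding window.

    window and threshold are illustrative, uncalibrated values.
    Returns an update function that takes one boolean per video frame
    and returns True when the fatigue alarm should fire.
    """
    history = deque(maxlen=window)

    def update(eyes_closed):
        history.append(1 if eyes_closed else 0)
        perclos = sum(history) / len(history)
        return perclos >= threshold

    return update

# Feed one boolean per video frame:
update = make_perclos(window=10, threshold=0.4)
alarms = [update(closed) for closed in [False] * 6 + [True] * 4]
print(alarms[-1])  # -> True: 4 of the last 10 frames were closed
```

Compared with the simple consecutive-frame counter used later in this article, a windowed ratio like this is less sensitive to a single missed eye detection in one frame.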
To open the camera in real time, use the following function:
cap = cv2.VideoCapture(0)
Passing 0 selects the default (first) camera. Since my notebook has only one built-in camera, I pass 0; passing 1 would select a second camera if one were attached. Of course, you can also pass a video file path here instead, if you want to check whether the driver in a recorded video is fatigued rather than detect in real time.
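Since `cv2.VideoCapture` accepts either an integer camera index or a file path, it can be handy to accept both from the command line. The helper name and behavior below are this article's own convention (a hypothetical utility, not part of OpenCV):

```python
def parse_source(arg):
    """Turn a command-line argument into a cv2.VideoCapture source.

    "0", "1", ... -> integer camera index; anything else is treated
    as a video file path.
    """
    return int(arg) if arg.isdigit() else arg

# Usage (assuming cv2 is imported):
# cap = cv2.VideoCapture(parse_source("0"))         # default camera
# cap = cv2.VideoCapture(parse_source("trip.mp4"))  # recorded drive
```

This matters because `cv2.VideoCapture("0")` with a string does not open camera 0; the argument type decides which overload you get.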
I have already covered getting started with video processing in OpenCV in my introduction to opencv video processing; you can modify and extend this project based on that.
import cv2
from functools import wraps
from pygame import mixer
import time
I assume you don't need to be taught how to install these modules; if your basics are shaky, please see my basics column: python Basic course.
lastsave = 0  # timestamp of the last count reset

def counter(func):
    @wraps(func)
    def tmp(*args, **kwargs):
        tmp.count += 1
        global lastsave
        if time.time() - lastsave > 3:
            # The unit is seconds: reset the count if more than
            # 3 seconds have passed since the last reset.
            lastsave = time.time()
            tmp.count = 0
        return func(*args, **kwargs)
    tmp.count = 0
    return tmp
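To see what the decorator actually does, here is a self-contained demonstration: the wrapper stores a call count on itself (`tmp.count`), and resets it whenever more than 3 seconds have passed since the last reset. On the very first call `lastsave` is still 0, so the count is reset immediately:

```python
import time
from functools import wraps

lastsave = 0  # timestamp of the last count reset

def counter(func):
    @wraps(func)
    def tmp(*args, **kwargs):
        tmp.count += 1
        global lastsave
        if time.time() - lastsave > 3:  # reset window: 3 seconds
            lastsave = time.time()
            tmp.count = 0
        return func(*args, **kwargs)
    tmp.count = 0
    return tmp

@counter
def closed():
    pass

closed()             # first call: lastsave == 0, window expired, count resets to 0
closed(); closed()   # two more calls within the window
print(closed.count)  # -> 2: two calls since the last reset
```

So `closed.count` answers "how many times were the eyes seen closed within the last few seconds", which is what the fatigue check later relies on.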
When the eyes are detected as closed, we output a message:
@counter
def closed():
    print("Eye Closed")
Similarly, we define an output function for open eyes:
def openeye():
    print("Eye is Open")
We also define a function for the alarm sound:
def sound():
    mixer.init()
    mixer.music.load('sound.mp3')
    mixer.music.play()
We use a while loop for real-time detection: convert each frame to grayscale, let detectMultiScale locate the targets, and then draw rectangles around the eyes. If you have the basics, this is not hard to follow.
while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        # detectMultiScale returns an array, so test its length,
        # not identity with an empty tuple.
        if len(eyes) > 0:
            for (ex, ey, ew, eh) in eyes:
                cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
            openeye()
        else:
            closed()
            if closed.count == 4:
                print("driver is sleeping")
                sound()
    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # press Esc to quit
        break
Overall, the structure used here to judge fatigue is not very rigorous and looks a bit rough. This project is only meant as a guide; if you want to turn it into a graduation project or similar, this alone is definitely not enough.
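One common way to make it more rigorous, as a sketch rather than part of this project: instead of treating each Haar eye hit/miss as open/closed, compute the eye aspect ratio (EAR) of Soukupová and Čech from six eye landmarks (e.g. from the dlib 68-point model; the landmark detection itself is assumed here, only the math is shown):

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks p1..p6, ordered around the eye
    as in the dlib 68-point model. EAR drops toward 0 as the eye closes:

        EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Synthetic example: a wide-open eye vs. a nearly shut one.
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
shut_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]
print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(shut_eye))  # -> True
```

Thresholding a continuous EAR value over consecutive frames is far more robust than counting frames in which the Haar cascade happened to miss the eyes, which also fires when the driver simply turns their head.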
The complete code:

# coding=utf-8
import cv2
from functools import wraps
from pygame import mixer
import time

lastsave = 0  # timestamp of the last count reset

def counter(func):
    @wraps(func)
    def tmp(*args, **kwargs):
        tmp.count += 1
        global lastsave
        if time.time() - lastsave > 3:
            # The unit is seconds: reset the count if more than
            # 3 seconds have passed since the last reset.
            lastsave = time.time()
            tmp.count = 0
        return func(*args, **kwargs)
    tmp.count = 0
    return tmp

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
cap = cv2.VideoCapture(0)

@counter
def closed():
    print("Eye Closed")

def openeye():
    print("Eye is Open")

def sound():
    mixer.init()
    mixer.music.load('sound.mp3')
    mixer.music.play()

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        # detectMultiScale returns an array, so test its length.
        if len(eyes) > 0:
            for (ex, ey, ew, eh) in eyes:
                cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
            openeye()
        else:
            closed()
            if closed.count == 4:
                print("driver is sleeping")
                sound()
    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()