Real-Time Face Liveness Detection with Python, Keras and OpenCV


Most of the face recognition algorithms and research papers you can find on the internet are vulnerable to photo attacks. These methods work very well at detecting and recognizing faces in images, videos and webcam streams, but they cannot tell a live face apart from a face on a photo, because they operate on 2D frames.

Now imagine we want to build a face-recognition door opener. The system should distinguish known faces from unknown ones, so that only authorized people get in. Even so, it would be trivial for a malicious person to gain access simply by showing a photo of an authorized person. This is where 3D sensors come in, like Apple's FaceID. But what if we don't have a 3D sensor?

Example of a face photo (Barack Obama)

The goal of this article is to implement a face liveness detection algorithm based on blink detection, in order to block photo attacks. The algorithm works in real time through a webcam and displays the person's name only if they blink. The program runs as follows:

  1. Detect a face in each frame coming from the webcam.
  2. For each detected face, detect the eyes.
  3. For each detected eye, determine whether it is open or closed.
  4. If at some point the eyes are detected open, then closed, then open again, we conclude the person blinked and the program displays their name (in the door-opener scenario, we would authorize the person to enter).
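The blink rule in step 4 can be sketched as a simple pattern check over per-frame eye states. The helper below is hypothetical, not part of the article's code; it just illustrates the open-closed-open idea on a list of booleans:

```python
# Hypothetical sketch of step 4: an eye that goes
# open -> closed -> open within a short window counts as a blink.
def has_blinked(states, max_closed=3):
    """states: list of booleans, True = eye open in that frame."""
    closed_run = 0
    saw_open_before = False
    for is_open in states:
        if is_open:
            # an open frame after a short closed run completes a blink
            if saw_open_before and 0 < closed_run <= max_closed:
                return True
            saw_open_before = True
            closed_run = 0
        elif saw_open_before:
            closed_run += 1
    return False

print(has_blinked([True, True, False, False, True]))  # True: a blink
print(has_blinked([True, True, True]))                # False: eyes never closed
```

The `max_closed` cap matters: eyes closed for many consecutive frames look more like a photo held up with eyes shut, or a detection failure, than a blink.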

For face detection and recognition you need to install the face_recognition library, which provides very useful deep-learning-based methods for finding and identifying faces in an image. In particular, face_locations, face_encodings and compare_faces are its three most useful functions. face_locations can detect faces using either of two methods: Histogram of Oriented Gradients (HOG) or a Convolutional Neural Network (CNN); this article uses HOG. face_encodings is a pretrained CNN that encodes a face image into a vector of 128 features. This embedding vector should carry enough information to distinguish two different people. Finally, compare_faces computes the distance between two embedding vectors. It lets us recognize a face extracted from a webcam frame by comparing its embedding vector with all the encoded faces in our dataset: the closest vectors should belong to the same person.
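Under the hood, compare_faces boils down to a Euclidean distance between 128-d encodings, thresholded at a tolerance (0.6 by default in face_recognition). A minimal NumPy sketch of that idea, using random toy vectors in place of real encodings:

```python
import numpy as np

# Sketch of the distance test behind face_recognition.compare_faces:
# Euclidean distance between 128-d encodings, thresholded at a tolerance.
def compare_faces_sketch(known_encodings, candidate, tolerance=0.6):
    distances = np.linalg.norm(np.asarray(known_encodings) - candidate, axis=1)
    return [bool(d <= tolerance) for d in distances]

# Toy 128-d vectors standing in for real face encodings
rng = np.random.default_rng(0)
person_a = rng.normal(size=128)
person_b = person_a + 10.0      # far away: a different "person"
candidate = person_a + 0.01     # very close to person_a

print(compare_faces_sketch([person_a, person_b], candidate))  # [True, False]
```

The 0.6 tolerance is face_recognition's default; tightening it trades false accepts for false rejects.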

1. Encoding the dataset of known faces

The algorithm will recognize me and Barack Obama; I picked about 10 photos of each. Below is the Python code that processes and encodes the dataset of known faces.

def process_and_encode(images):
    known_encodings = []
    known_names = []
    print("[LOG] Encoding dataset ...")
    for image_path in tqdm(images):
        # Load the image
        image = cv2.imread(image_path)
        # Convert it from BGR to RGB
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

        # Detect the face and get its location (bounding-box coordinates)
        boxes = face_recognition.face_locations(image, model='hog')
        # Encode the face into a 128-d embeddings vector
        encoding = face_recognition.face_encodings(image, boxes)
        # The person's name is the name of the folder the image comes from
        name = image_path.split(os.path.sep)[-2]
        if len(encoding) > 0:
            known_encodings.append(encoding[0])
            known_names.append(name)
    return {"encodings": known_encodings, "names": known_names}

Now that we have the encodings of every person we want to identify, we can try to recognize faces through the webcam. But before that, we need to distinguish a face photo from a live face.

2. Face liveness detection

As a reminder, our goal is to detect an eye blink at some point in time. I trained a convolutional neural network to classify whether an eye is open or closed. The chosen model is LeNet-5, trained on the Closed Eyes In The Wild (CEW) dataset (http://parnec.nuaa.edu.cn/xtan/data/ClosedEyeDatabases.html), which consists of about 4,800 eye images of size 24x24. The full Python code for the model is below:

import os
from PIL import Image
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import AveragePooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.models import model_from_json
from keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = 24

def collect():
    train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        horizontal_flip=True,
    )
    val_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        horizontal_flip=True,
    )
    train_generator = train_datagen.flow_from_directory(
        directory="dataset/train",
        target_size=(IMG_SIZE, IMG_SIZE),
        color_mode="grayscale",
        batch_size=32,
        class_mode="binary",
        shuffle=True,
        seed=42
    )
    val_generator = val_datagen.flow_from_directory(
        directory="dataset/val",
        target_size=(IMG_SIZE, IMG_SIZE),
        color_mode="grayscale",
        batch_size=32,
        class_mode="binary",
        shuffle=True,
        seed=42
    )
    return train_generator, val_generator

def save_model(model):
    model_json = model.to_json()
    with open("model.json", "w") as json_file:
        json_file.write(model_json)
    # serialize the weights to HDF5
    model.save_weights("model.h5")

def load_model():
    json_file = open('model.json', 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    loaded_model = model_from_json(loaded_model_json)
    # load the weights into the new model
    loaded_model.load_weights("model.h5")
    loaded_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return loaded_model

def train(train_generator, val_generator):
    STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
    STEP_SIZE_VALID = val_generator.n // val_generator.batch_size
    print('[LOG] Initialize Neural Network')

    model = Sequential()
    model.add(Conv2D(filters=6, kernel_size=(3, 3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 1)))
    model.add(AveragePooling2D())
    model.add(Conv2D(filters=16, kernel_size=(3, 3), activation='relu'))
    model.add(AveragePooling2D())
    model.add(Flatten())
    model.add(Dense(units=120, activation='relu'))
    model.add(Dense(units=84, activation='relu'))
    model.add(Dense(units=1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit_generator(generator=train_generator,
                        steps_per_epoch=STEP_SIZE_TRAIN,
                        validation_data=val_generator,
                        validation_steps=STEP_SIZE_VALID,
                        epochs=20
                        )
    save_model(model)

def predict(img, model):
    # Convert to grayscale and resize to the network's input size.
    # (The original used scipy.misc.imresize, which has since been
    # removed from SciPy, so PIL's resize is used here instead.)
    img = Image.fromarray(img, 'RGB').convert('L')
    img = np.array(img.resize((IMG_SIZE, IMG_SIZE))).astype('float32')
    img /= 255
    img = img.reshape(1, IMG_SIZE, IMG_SIZE, 1)
    prediction = model.predict(img)
    if prediction < 0.1:
        prediction = 'closed'
    elif prediction > 0.9:
        prediction = 'open'
    else:
        prediction = 'idk'
    return prediction

def evaluate(X_test, y_test):
    model = load_model()
    print('Evaluate model')
    loss, acc = model.evaluate(X_test, y_test, verbose=0)
    print(acc * 100)

if __name__ == '__main__':
    train_generator, val_generator = collect()
    train(train_generator, val_generator)
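As a quick sanity check on the architecture above, the feature-map sizes can be traced by hand: a 3x3 "valid" convolution shrinks each side by 2, and 2x2 average pooling halves it (with floor division).

```python
# Shape walk-through of the LeNet-5 variant on a 24x24x1 input.
def conv3x3(side):
    return side - 2      # 3x3 "valid" convolution

def pool2x2(side):
    return side // 2     # 2x2 pooling, stride 2

side = 24
side = conv3x3(side)     # Conv2D(6)  -> 22x22x6
side = pool2x2(side)     # AvgPool    -> 11x11x6
side = conv3x3(side)     # Conv2D(16) -> 9x9x16
side = pool2x2(side)     # AvgPool    -> 4x4x16
flat = side * side * 16
print(flat)              # 256 units feed the 120-84-1 dense head
```

So Flatten() produces 256 features, which the 120-84-1 dense layers reduce to a single sigmoid output for the binary open/closed decision.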

When evaluated, this model reached an accuracy of 94%.

Every time we detect an eye, we predict its state with our model and keep track of the eye-state history. Detecting a blink then becomes very simple with the function below, which tries to find an open-closed-open pattern in the eye-state history.

def isBlinking(history, maxFrames):
    """ @history: A string containing the eye-state history,
        where '1' means the eyes were open and '0' closed.
        @maxFrames: The maximal number of successive frames where the eyes are closed """
    for i in range(maxFrames):
        pattern = '1' + '0'*(i+1) + '1'
        if pattern in history:
            return True
    return False
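A few example histories make the pattern matching concrete (the function is restated here only so the snippet runs standalone):

```python
# isBlinking restated for a standalone snippet: it looks for
# open -> closed (up to maxFrames frames) -> open, i.e. the
# substring '1' + '0'*k + '1' with 1 <= k <= maxFrames.
def isBlinking(history, maxFrames):
    for i in range(maxFrames):
        pattern = '1' + '0' * (i + 1) + '1'
        if pattern in history:
            return True
    return False

print(isBlinking('111011', 3))  # True: one closed frame between open frames
print(isBlinking('110001', 2))  # False: closed for 3 frames, more than maxFrames
print(isBlinking('000111', 3))  # False: no open frame before the closure
```

Capping the closed run at maxFrames is what separates a genuine blink from a long eye closure or a photo with closed eyes.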

3. Live face recognition

We now have almost all the pieces needed to build the face recognition algorithm. The last one is a way to detect faces and eyes in real time; I use OpenCV's pretrained Haar-cascade classifiers for these tasks.

def detect_and_display(model, video_capture, face_detector, open_eyes_detector, left_eye_detector, right_eye_detector, data, eyes_detected):
    frame = video_capture.read()
    # resize the frame to speed up processing
    frame = cv2.resize(frame, (0, 0), fx=0.6, fy=0.6)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Detect faces
    faces = face_detector.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(50, 50),
        flags=cv2.CASCADE_SCALE_IMAGE
    )
    # for each detected face
    for (x, y, w, h) in faces:
        # Encode the face into a 128-d embeddings vector
        encoding = face_recognition.face_encodings(rgb, [(y, x+w, y+h, x)])[0]
        # Compare the vector with all known face encodings
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        # For now we don't know the person's name
        name = "Unknown"
        # If there is at least one match:
        if True in matches:
            matchedIdxs = [i for (i, b) in enumerate(matches) if b]
            counts = {}
            for i in matchedIdxs:
                name = data["names"][i]
                counts[name] = counts.get(name, 0) + 1
            # The known encoding with the largest number of matches
            # gives the detected face's name
            name = max(counts, key=counts.get)
        face = frame[y:y+h, x:x+w]
        gray_face = gray[y:y+h, x:x+w]
        eyes = []

        # Eye detection
        # First check whether the eyes are open (glasses taken into account)
        open_eyes_glasses = open_eyes_detector.detectMultiScale(
            gray_face,
            scaleFactor=1.1,
            minNeighbors=5,
            minSize=(30, 30),
            flags=cv2.CASCADE_SCALE_IMAGE
        )
        # if open_eyes_glasses detects two eyes, they are open
        if len(open_eyes_glasses) == 2:
            eyes_detected[name] += '1'
            for (ex, ey, ew, eh) in open_eyes_glasses:
                cv2.rectangle(face, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

        # otherwise try detecting the eyes with the left_eye and
        # right_eye detectors, which can detect open and closed eyes
        else:
            # separate the face into left and right halves
            left_face = frame[y:y+h, x+int(w/2):x+w]
            left_face_gray = gray[y:y+h, x+int(w/2):x+w]
            right_face = frame[y:y+h, x:x+int(w/2)]
            right_face_gray = gray[y:y+h, x:x+int(w/2)]
            # Detect the left eye
            left_eye = left_eye_detector.detectMultiScale(
                left_face_gray,
                scaleFactor=1.1,
                minNeighbors=5,
                minSize=(30, 30),
                flags=cv2.CASCADE_SCALE_IMAGE
            )
            # Detect the right eye
            right_eye = right_eye_detector.detectMultiScale(
                right_face_gray,
                scaleFactor=1.1,
                minNeighbors=5,
                minSize=(30, 30),
                flags=cv2.CASCADE_SCALE_IMAGE
            )
            eye_status = '1'  # we suppose the eyes are open
            # For each eye, check whether it is closed.
            # If one is closed, we conclude the eyes are closed.
            for (ex, ey, ew, eh) in right_eye:
                color = (0, 255, 0)
                pred = predict(right_face[ey:ey+eh, ex:ex+ew], model)
                if pred == 'closed':
                    eye_status = '0'
                    color = (0, 0, 255)
                cv2.rectangle(right_face, (ex, ey), (ex+ew, ey+eh), color, 2)
            for (ex, ey, ew, eh) in left_eye:
                color = (0, 255, 0)
                pred = predict(left_face[ey:ey+eh, ex:ex+ew], model)
                if pred == 'closed':
                    eye_status = '0'
                    color = (0, 0, 255)
                cv2.rectangle(left_face, (ex, ey), (ex+ew, ey+eh), color, 2)
            eyes_detected[name] += eye_status
        # Each time, check whether the person has blinked;
        # if so, display their name
        if isBlinking(eyes_detected[name], 3):
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            # Display the name
            y = y - 15 if y - 15 > 15 else y + 15
            cv2.putText(frame, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 0), 2)
    return frame

The function above is the Python code for detecting and recognizing real faces. Its parameters are:

  • model: our open/closed eye classifier
  • video_capture: the video stream
  • face_detector: a Haar-cascade face classifier, using haarcascade_frontalface_alt.xml
  • open_eyes_detector: a Haar-cascade open-eye classifier, using haarcascade_eye_tree_eyeglasses.xml
  • left_eye_detector: a Haar-cascade left-eye classifier, using haarcascade_lefteye_2splits.xml, which can detect both open and closed eyes
  • right_eye_detector: a Haar-cascade right-eye classifier, using haarcascade_righteye_2splits.xml, which can detect both open and closed eyes
  • data: a dictionary of known encodings and known names
  • eyes_detected: a dictionary holding each person's eye-state history
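The eyes_detected dictionary is just a per-name string of '1'/'0' frames; using collections.defaultdict(str) (as the main program below does) makes appending safe even for a name seen for the first time. A minimal standalone illustration, with made-up names:

```python
from collections import defaultdict

# Per-person eye-state history: '1' = open, '0' = closed.
# defaultdict(str) starts every new name at the empty string,
# so += works on the very first frame a person is seen.
eyes_detected = defaultdict(str)

for name, status in [("obama", "1"), ("obama", "0"),
                     ("unknown", "1"), ("obama", "1")]:
    eyes_detected[name] += status

print(eyes_detected["obama"])    # '101' -> contains a blink pattern
print(eyes_detected["unknown"])  # '1'
```

A plain dict would raise KeyError on the first `+=` for a new name; the defaultdict removes that bookkeeping entirely.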

Code walkthrough:

  • On lines 2-4, we grab a frame from the webcam stream and resize it to speed up computation.
  • On line 10, we detect faces in the frame, and on line 21 we encode each one into a 128-d vector.
  • On lines 23-38, we compare this vector with the known face encodings and determine the person's name by counting the matches; the candidate with the most matches is chosen.
  • Starting at line 45, we try to detect eyes inside the face box. First we try to detect open eyes with open_eyes_detector. If the detector succeeds, then on line 54 we append '1' to the eye-state history, meaning the eyes are open, since this detector cannot detect closed eyes. Otherwise, if the first classifier fails (either because the eyes are closed or simply because it missed them), we try the left_eye and right_eye detectors. To do so, we split the face into left and right halves and feed each to the corresponding classifier.
  • Starting at line 92, we extract the eye regions and use the previously trained model to predict whether the eyes are closed. If one eye is detected as closed, we assume both are and append '0' to the eye-state history; otherwise we conclude the eyes are open.
  • Finally, on line 110, the isBlinking() function checks for a blink; if the person has blinked, we display their name.

4. The rest of the program

import os
import cv2
import face_recognition
import numpy as np
from tqdm import tqdm
from collections import defaultdict
from imutils.video import VideoStream
from eye_status import *

def init():
    face_cascPath = 'haarcascade_frontalface_alt.xml'
    # face_cascPath = 'lbpcascade_frontalface.xml'
    open_eye_cascPath = 'haarcascade_eye_tree_eyeglasses.xml'
    left_eye_cascPath = 'haarcascade_lefteye_2splits.xml'
    right_eye_cascPath = 'haarcascade_righteye_2splits.xml'
    dataset = 'faces'
    face_detector = cv2.CascadeClassifier(face_cascPath)
    open_eyes_detector = cv2.CascadeClassifier(open_eye_cascPath)
    left_eye_detector = cv2.CascadeClassifier(left_eye_cascPath)
    right_eye_detector = cv2.CascadeClassifier(right_eye_cascPath)
    print("[LOG] Opening webcam ...")
    video_capture = VideoStream(src=0).start()
    model = load_model()
    print("[LOG] Collecting images ...")
    images = []
    for direc, _, files in tqdm(os.walk(dataset)):
        for file in files:
            if file.endswith("jpg"):
                images.append(os.path.join(direc, file))
    return (model, face_detector, open_eyes_detector, left_eye_detector, right_eye_detector, video_capture, images)

if __name__ == "__main__":
    (model, face_detector, open_eyes_detector, left_eye_detector, right_eye_detector, video_capture, images) = init()
    data = process_and_encode(images)
    eyes_detected = defaultdict(str)
    while True:
        frame = detect_and_display(model, video_capture, face_detector, open_eyes_detector, left_eye_detector, right_eye_detector, data, eyes_detected)
        cv2.imshow("Face Liveness Detector", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cv2.destroyAllWindows()
    video_capture.stop()
