CN118864793A - A method, device, computer equipment and storage medium for collecting vehicle cabin images
- Publication number
- CN118864793A (application CN202411337301.3A)
- Authority
- CN
- China
- Prior art keywords
- cabin image
- mouth
- face
- determining
- image
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/145—Illumination specially adapted for pattern recognition, e.g. using gratings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/167—Detection; Localisation; Normalisation using comparisons between temporally consecutive images
Abstract
The present application provides a method, a device, computer equipment and a storage medium for collecting vehicle cabin images. Cabin images are collected in real time by a driver monitoring camera, and whether there is a face in the current cabin image is detected. If there is a face in the current cabin image, face recognition and key point localization are performed on the current cabin image to obtain the key points of the face in the current cabin image and the occlusion state of each key point. The light metering area of the current cabin image is determined according to the occlusion states of the key points of the face in the current cabin image; the target adjustment direction and the target adjustment amount of the exposure time of the driver monitoring camera are determined according to the pixel grayscale of the light metering area of the current cabin image; the exposure time of the driver monitoring camera is adjusted according to the target adjustment direction and the target adjustment amount; and the cabin image at the next moment is collected by the driver monitoring camera with the adjusted exposure time. The method improves the clarity of the collected cabin images.
Description
Technical Field
The present invention relates to the technical field of intelligent cockpits, and in particular to a method, a device, computer equipment and a storage medium for acquiring vehicle cabin images.
Background Art
A near-infrared (NIR) driver monitoring camera is a camera dedicated to monitoring the driver's state inside a vehicle (Driver Monitoring System, DMS). It uses the near-infrared spectrum to acquire images, which allows it to provide clear images under a wide range of lighting conditions, including at night and in backlit scenes, and is therefore essential for ensuring driver attention and safety.
In the prior art, when a near-infrared driver monitoring camera is used to collect cabin images, the camera usually captures the cabin based on a preset exposure time. However, it has been found that, because vehicles are driven in different environments, the illumination inside the cabin may change and become too bright, too dark or uneven. Images captured with a fixed preset exposure time may therefore be over-exposed, under-exposed or unevenly exposed, so that a clear and usable cabin image cannot be obtained, or the clarity of the collected cabin image is reduced.
Summary of the Invention
In view of this, an object of the present invention is to provide a method, a device, computer equipment and a storage medium for acquiring vehicle cabin images, so as to improve the clarity of the acquired cabin images.
In a first aspect, an embodiment of the present application provides a method for acquiring cabin images, the method comprising:
collecting cabin images in real time through a driver monitoring camera, and detecting whether there is a face in the current cabin image;
if there is a face in the current cabin image, performing face recognition and key point localization on the current cabin image to obtain the key points of the face in the current cabin image and the occlusion state of each key point;
determining the light metering area of the current cabin image according to the occlusion states of the key points of the face in the current cabin image;
determining the target adjustment direction and the target adjustment amount of the exposure time of the driver monitoring camera according to the pixel grayscale of the light metering area of the current cabin image;
adjusting the exposure time of the driver monitoring camera according to the target adjustment direction and the target adjustment amount;
collecting the cabin image at the next moment through the driver monitoring camera with the adjusted exposure time.
Optionally, performing face recognition and key point localization on the current cabin image to obtain the key points of the face in the current cabin image and the occlusion state of each key point includes:
inputting the current cabin image into a face recognition model to obtain a face bounding box containing a face in the current cabin image;
inputting the face area contained in the face bounding box into a face key point localization model to obtain the key points of the face in the current cabin image and the occlusion state of each key point.
Optionally, determining the light metering area of the current cabin image according to the occlusion states of the key points of the face in the current cabin image includes:
determining the occlusion state of the eyes according to the occlusion states of the eye key points indicating the eyes in the current cabin image;
determining the occlusion state of the mouth according to the occlusion states of the mouth periphery key points indicating the mouth and the jaw in the current cabin image;
determining the light metering area of the current cabin image according to the occlusion state of the eyes and the occlusion state of the mouth.
Optionally, determining the occlusion state of the eyes according to the occlusion states of the eye key points indicating the eyes in the current cabin image includes:
judging whether the ratio of the number of occluded eye key points, i.e. eye key points whose occlusion state is occluded, to the first total number of eye key points exceeds a first preset threshold;
if the ratio of the number of occluded eye key points to the first total number exceeds the first preset threshold, determining that the occlusion state of the eyes is occluded;
if the ratio of the number of occluded eye key points to the first total number does not exceed the first preset threshold, determining that the occlusion state of the eyes is unoccluded.
Optionally, determining the occlusion state of the mouth according to the occlusion states of the mouth periphery key points indicating the mouth and the jaw in the current cabin image includes:
judging whether the ratio of the number of occluded mouth periphery key points, i.e. mouth periphery key points whose occlusion state is occluded, to the second total number of mouth periphery key points exceeds a second preset threshold;
if the ratio of the number of occluded mouth periphery key points to the second total number exceeds the second preset threshold, determining that the occlusion state of the mouth is occluded;
if the ratio of the number of occluded mouth periphery key points to the second total number does not exceed the second preset threshold, determining that the occlusion state of the mouth is unoccluded.
Optionally, determining the light metering area of the current cabin image according to the occlusion state of the eyes and the occlusion state of the mouth includes:
if the occlusion state of the eyes is occluded but the occlusion state of the mouth is unoccluded, determining the light metering area as the mouth area;
if the occlusion state of the eyes is unoccluded but the occlusion state of the mouth is occluded, determining the light metering area as the eye area;
if the occlusion states of the eyes and the mouth are both occluded, or the occlusion states of the eyes and the mouth are both unoccluded, determining the light metering area as the face area.
Optionally, determining the target adjustment direction and the target adjustment amount of the exposure time of the driver monitoring camera according to the pixel grayscale of the light metering area of the current cabin image includes:
pre-configuring a first parameter r and a second parameter gray_theshold;
computing the percentiles of the pixel grayscales within the light metering rectangle to obtain a percentile statistics result;
judging whether P(r) in the percentile statistics result exceeds the second parameter gray_theshold, where P(r) is the value of the r-th percentile;
if P(r) in the percentile statistics result exceeds the second parameter gray_theshold, determining the target adjustment direction as decrease;
if P(r) in the percentile statistics result does not exceed the second parameter gray_theshold, determining the target adjustment direction as increase;
determining the target adjustment amount based on a PID control algorithm.
In a second aspect, an embodiment of the present application provides a cabin image acquisition device, the device comprising:
an image detection module, configured to collect cabin images in real time through a driver monitoring camera and detect whether there is a face in the current cabin image;
an occlusion state determination module, configured to, if there is a face in the current cabin image, perform face recognition and key point localization on the current cabin image to obtain the key points of the face in the current cabin image and the occlusion state of each key point;
a light metering area determination module, configured to determine the light metering area of the current cabin image according to the occlusion states of the key points of the face in the current cabin image;
an adjustment strategy determination module, configured to determine the target adjustment direction and the target adjustment amount of the exposure time of the driver monitoring camera according to the pixel grayscale of the light metering area of the current cabin image;
an exposure time adjustment module, configured to adjust the exposure time of the driver monitoring camera according to the target adjustment direction and the target adjustment amount;
an image acquisition module, configured to collect the cabin image at the next moment through the driver monitoring camera with the adjusted exposure time.
Optionally, performing face recognition and key point localization on the current cabin image to obtain the key points of the face in the current cabin image and the occlusion state of each key point includes:
inputting the current cabin image into a face recognition model to obtain a face bounding box containing a face in the current cabin image;
inputting the face area contained in the face bounding box into a face key point localization model to obtain the key points of the face in the current cabin image and the occlusion state of each key point.
Optionally, determining the light metering area of the current cabin image according to the occlusion states of the key points of the face in the current cabin image includes:
determining the occlusion state of the eyes according to the occlusion states of the eye key points indicating the eyes in the current cabin image;
determining the occlusion state of the mouth according to the occlusion states of the mouth periphery key points indicating the mouth and the jaw in the current cabin image;
determining the light metering area of the current cabin image according to the occlusion state of the eyes and the occlusion state of the mouth.
Optionally, determining the occlusion state of the eyes according to the occlusion states of the eye key points indicating the eyes in the current cabin image includes:
judging whether the ratio of the number of occluded eye key points, i.e. eye key points whose occlusion state is occluded, to the first total number of eye key points exceeds a first preset threshold;
if the ratio of the number of occluded eye key points to the first total number exceeds the first preset threshold, determining that the occlusion state of the eyes is occluded;
if the ratio of the number of occluded eye key points to the first total number does not exceed the first preset threshold, determining that the occlusion state of the eyes is unoccluded.
Optionally, determining the occlusion state of the mouth according to the occlusion states of the mouth periphery key points indicating the mouth and the jaw in the current cabin image includes:
judging whether the ratio of the number of occluded mouth periphery key points, i.e. mouth periphery key points whose occlusion state is occluded, to the second total number of mouth periphery key points exceeds a second preset threshold;
if the ratio of the number of occluded mouth periphery key points to the second total number exceeds the second preset threshold, determining that the occlusion state of the mouth is occluded;
if the ratio of the number of occluded mouth periphery key points to the second total number does not exceed the second preset threshold, determining that the occlusion state of the mouth is unoccluded.
Optionally, determining the light metering area of the current cabin image according to the occlusion state of the eyes and the occlusion state of the mouth includes:
if the occlusion state of the eyes is occluded but the occlusion state of the mouth is unoccluded, determining the light metering area as the mouth area;
if the occlusion state of the eyes is unoccluded but the occlusion state of the mouth is occluded, determining the light metering area as the eye area;
if the occlusion states of the eyes and the mouth are both occluded, or the occlusion states of the eyes and the mouth are both unoccluded, determining the light metering area as the face area.
Optionally, determining the target adjustment direction and the target adjustment amount of the exposure time of the driver monitoring camera according to the pixel grayscale of the light metering area of the current cabin image includes:
pre-configuring a first parameter r and a second parameter gray_theshold;
computing the percentiles of the pixel grayscales within the light metering rectangle to obtain a percentile statistics result;
judging whether P(r) in the percentile statistics result exceeds the second parameter gray_theshold, where P(r) is the value of the r-th percentile;
if P(r) in the percentile statistics result exceeds the second parameter gray_theshold, determining the target adjustment direction as decrease;
if P(r) in the percentile statistics result does not exceed the second parameter gray_theshold, determining the target adjustment direction as increase;
determining the target adjustment amount based on a PID control algorithm.
In a third aspect, an embodiment of the present application provides a computer device, comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the computer device is running, the processor and the memory communicate via the bus, and when the machine-readable instructions are executed by the processor, the steps of the cabin image acquisition method described in any optional implementation of the first aspect are performed.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the cabin image acquisition method described in any optional implementation of the first aspect are performed.
The technical solution provided by the present application includes, but is not limited to, the following beneficial effects:
The present application collects cabin images in real time through a driver monitoring camera and detects whether there is a face in the current cabin image; if there is a face, face recognition and key point localization are performed on the current cabin image to obtain the key points of the face and the occlusion state of each key point. In this way, the driver monitoring camera, before any exposure-time adjustment, captures an original cabin image used to decide whether the exposure time needs to be adjusted, and the key points of the face in this original image and their occlusion states are detected, providing a reference for determining the subsequent exposure-time adjustment strategy.
Then, the present application determines the light metering area of the current cabin image according to the occlusion states of the key points of the face, and determines the target adjustment direction and the target adjustment amount of the exposure time of the driver monitoring camera according to the pixel grayscale of the light metering area. The image region used for adjusting the exposure time can thus be determined, and the exposure adjustment strategy can be derived from the grayscale data within that region.
Finally, the exposure time of the driver monitoring camera is adjusted according to the target adjustment direction and the target adjustment amount, and the cabin image at the next moment is collected by the driver monitoring camera with the adjusted exposure time. This enables a reasonable adjustment of the exposure time of the driver monitoring camera and thereby improves the clarity of the cabin images collected by it.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings only show certain embodiments of the present invention and should therefore not be regarded as limiting its scope. For those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flow chart of a cabin image acquisition method provided by Embodiment 1 of the present invention;
FIG. 2 shows a flow chart of a key point occlusion state detection method provided by Embodiment 1 of the present invention;
FIG. 3 shows a flow chart of a light metering area determination method provided by Embodiment 1 of the present invention;
FIG. 4 shows a flow chart of an eye occlusion state detection method provided by Embodiment 1 of the present invention;
FIG. 5 shows a flow chart of a mouth occlusion state detection method provided by Embodiment 1 of the present invention;
FIG. 6 shows a flow chart of a specific light metering area determination method provided by Embodiment 1 of the present invention;
FIG. 7 shows a flow chart of an exposure time adjustment strategy determination method provided by Embodiment 1 of the present invention;
FIG. 8 shows a schematic structural diagram of a cabin image acquisition device provided by Embodiment 2 of the present invention;
FIG. 9 shows a schematic structural diagram of a computer device provided by Embodiment 3 of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments, as generally described and shown in the drawings herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment 1
To facilitate understanding of the present application, Embodiment 1 is described in detail below with reference to the flow chart of the cabin image acquisition method provided by Embodiment 1 of the present invention shown in FIG. 1.
Referring to FIG. 1, FIG. 1 shows a flow chart of a cabin image acquisition method provided by Embodiment 1 of the present invention, wherein the method comprises steps S101 to S106:
S101: Collect cabin images in real time through the driver monitoring camera, and detect whether there is a face in the current cabin image.
Specifically, the cabin image collected in real time is input into a face detection model to obtain a detection result indicating whether there is a face in the current cabin image.
S102: If there is a face in the current cabin image, perform face recognition and key point localization on the current cabin image to obtain the key points of the face in the current cabin image and the occlusion state of each key point.
Specifically, if there is a face in the current cabin image, the current cabin image is input into a face recognition model to obtain detection boxes indicating faces; the region indicated by each detection box is cropped out to obtain a face region image; and the face region image is input into a key point localization model to obtain the position of each key point of the face in the current cabin image (the standard 68 facial key points) and the occlusion state of each key point.
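As a minimal sketch of this two-stage step (the patent does not name specific models, so `face_detector` and `landmark_model` below are assumed callables with illustrative interfaces), each detected face box is cropped and passed to a 68-point landmark model that also reports a per-key-point occlusion flag:

```python
import numpy as np

def detect_face_keypoints(frame, face_detector, landmark_model):
    """Face detection followed by 68-point localization with occlusion flags.

    Assumed interfaces (not from the patent):
      face_detector(frame) -> list of (x, y, w, h) face boxes
      landmark_model(crop) -> (points, occluded), where points is a (68, 2)
      array of key point coordinates in the crop and occluded is a (68,)
      boolean array that is True where a key point is occluded.
    """
    results = []
    for (x, y, w, h) in face_detector(frame):
        crop = frame[y:y + h, x:x + w]
        points, occluded = landmark_model(crop)
        points = points + np.array([x, y])   # map back to full-image coordinates
        results.append({"box": (x, y, w, h), "points": points, "occluded": occluded})
    return results
```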
S103: Determine the light metering area of the current cabin image according to the occlusion states of the key points of the face in the current cabin image.
Specifically, the key points are grouped according to the region each key point indicates, yielding a key point group for each facial part. For example, key points 1 to 17 of the standard 68 facial key points indicate the jaw, so these 17 key points form the jaw key point group; key points 49 to 68 indicate the mouth, so these 20 key points form the mouth key point group; and key points 37 to 48 indicate the eyes, so these 12 key points form the eye key point group.
The occlusion state of the facial part corresponding to each key point group is determined according to the number of occluded key points in that group, and the light metering area of the current cabin image is determined according to the occlusion states of the different facial parts.
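The grouping described above can be written down directly as index ranges over the standard 68-point layout (the key point numbers in the text are 1-based; the 0-based slices below are the only assumption):

```python
# 1-based key point numbers from the text, expressed as 0-based index lists.
JAW_IDX   = list(range(0, 17))    # key points 1-17: jaw line
EYE_IDX   = list(range(36, 48))   # key points 37-48: both eyes
MOUTH_IDX = list(range(48, 68))   # key points 49-68: mouth
# The "mouth periphery" group used later combines the mouth and jaw groups.
MOUTH_PERIPHERY_IDX = MOUTH_IDX + JAW_IDX   # 20 + 17 = 37 key points
```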
S104: Determine the target adjustment direction and the target adjustment amount of the exposure time of the driver monitoring camera according to the pixel grayscale of the light metering area of the current cabin image.
Specifically, the grayscale value of each pixel within the light metering area of the current cabin image is collected (value range 0 to 255, where 0 is pure black and 255 is the brightest). The target adjustment direction and the target adjustment amount for adjusting the exposure time of the driver monitoring camera are determined according to the characteristics of the distribution of these grayscale values; the target adjustment direction is either increase or decrease, and the target adjustment amount is a length of time.
S105: Adjust the exposure time of the driver monitoring camera according to the target adjustment direction and the target adjustment amount.
Specifically, the exposure time of the driver monitoring camera is adjusted from the initial value used when the current cabin image was captured in step S101: the initial value is changed by the target adjustment amount in the target adjustment direction to obtain the final value.
S106: Collect the cabin image at the next moment through the driver monitoring camera with the adjusted exposure time.
Specifically, the cabin image at the next moment is collected by the driver monitoring camera with the adjusted exposure time (corresponding to the final value of the exposure time in step S105), so that automatic exposure of the driver monitoring camera is achieved.
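Taken together, steps S101 to S106 form a per-frame control loop. The sketch below shows one way to arrange it; the camera methods (`capture_frame`, `set_exposure_us`) and the three injected helpers are assumptions for illustration, not an actual camera or patent API:

```python
def auto_exposure_loop(camera, detect_faces, select_region, decide_adjustment,
                       exposure_us=400.0):
    """One iteration per frame: capture (S101/S106), key points and occlusion
    states (S102), light metering area (S103), adjustment direction and amount
    (S104), exposure update (S105)."""
    camera.set_exposure_us(exposure_us)
    while True:
        frame = camera.capture_frame()
        faces = detect_faces(frame)                            # key points + occlusion states
        region = select_region(frame, faces)                   # light metering rectangle
        direction, amount = decide_adjustment(frame, region)   # +1 increase / -1 decrease
        exposure_us = max(1.0, exposure_us + direction * amount)
        camera.set_exposure_us(exposure_us)                    # used for the next frame
```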
In an optional implementation, referring to FIG. 2, which shows a flow chart of a key point occlusion state detection method provided by Embodiment 1 of the present invention, performing face recognition and key point localization on the current cabin image to obtain the key points of the face in the current cabin image and the occlusion state of each key point includes steps S201 to S202:
S201: Input the current cabin image into a face recognition model to obtain a face bounding box containing a face in the current cabin image.
S202: Input the face area contained in the face bounding box into a face key point localization model to obtain the key points of the face in the current cabin image and the occlusion state of each key point.
Specifically, the face recognition model and the face key point localization model are obtained in advance by training on face image data sets.
In an optional implementation, referring to FIG. 3, which shows a flow chart of a light metering area determination method provided by Embodiment 1 of the present invention, determining the light metering area of the current cabin image according to the occlusion states of the key points of the face in the current cabin image includes steps S301 to S303:
S301: Determine the occlusion state of the eyes according to the occlusion states of the eye key points indicating the eyes in the current cabin image.
Specifically, the eye key points indicating the eyes are key points 37 to 48 of the standard 68 facial key points.
S302: Determine the occlusion state of the mouth according to the occlusion states of the mouth periphery key points indicating the mouth and the jaw in the current cabin image.
Specifically, the key points indicating the mouth are key points 49 to 68 of the standard 68 facial key points, and the key points indicating the jaw are key points 1 to 17 of the standard 68 facial key points.
S303: Determine the light metering area of the current cabin image according to the occlusion state of the eyes and the occlusion state of the mouth.
Specifically, a face is usually occluded when the person is wearing glasses or a mask: glasses occlude the eyes, and a mask occludes the mouth. Therefore, wearing both glasses and a mask leaves both the eyes and the mouth occluded; wearing neither leaves both unoccluded; wearing glasses but no mask leaves the eyes occluded and the mouth unoccluded; and wearing a mask but no glasses leaves the eyes unoccluded and the mouth occluded. Based on these possible occlusion situations, the light metering area of the current cabin image is determined according to the combination of the occlusion state of the eyes and the occlusion state of the mouth.
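The four combinations can be condensed into a small decision table; the sketch below uses plain string labels for the regions chosen in steps S601 to S603:

```python
def region_from_occlusion(eyes_occluded: bool, mouth_occluded: bool) -> str:
    """Map the (eyes, mouth) occlusion combination to a light metering region label."""
    if eyes_occluded and not mouth_occluded:
        return "mouth"   # e.g. glasses but no mask
    if mouth_occluded and not eyes_occluded:
        return "eyes"    # e.g. mask but no glasses
    return "face"        # both occluded, or both unoccluded
```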
In an optional implementation, referring to FIG. 4, which shows a flow chart of an eye occlusion state detection method provided by Embodiment 1 of the present invention, determining the occlusion state of the eyes according to the occlusion states of the eye key points indicating the eyes in the current cabin image includes steps S401 to S403:
S401: Judge whether the ratio of the number of occluded eye key points, i.e. eye key points whose occlusion state is occluded, to the first total number of eye key points exceeds the first preset threshold.
Specifically, the number of eye key points whose occlusion state is occluded is counted, the ratio of this number to the first total number of eye key points is calculated, and the ratio is compared with the first preset threshold. For example, if the number of occluded eye key points is 3 and the first total number of eye key points is 12, the ratio is 25%.
S402: If the ratio of the number of occluded eye key points to the first total number exceeds the first preset threshold, determine that the occlusion state of the eyes is occluded.
Specifically, if the ratio exceeds the first preset threshold, the eyes are very likely occluded, so the occlusion state of the eyes is determined to be occluded. For example, when the first preset threshold is 20% and the calculated ratio is 25%, the occlusion state of the eyes is determined to be occluded.
S403: If the ratio of the number of occluded eye key points to the first total number does not exceed the first preset threshold, determine that the occlusion state of the eyes is unoccluded.
Specifically, if the ratio does not exceed the first preset threshold, the eyes are very likely unoccluded, so the occlusion state of the eyes is determined to be unoccluded. For example, when the first preset threshold is 30% and the calculated ratio is 25%, the occlusion state of the eyes is determined to be unoccluded.
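A minimal sketch of the ratio test in steps S401 to S403; the same helper applies unchanged to the mouth periphery key points of steps S501 to S503, only with the second preset threshold:

```python
def part_is_occluded(occluded_flags, part_indices, threshold):
    """True if the share of occluded key points in a facial part exceeds the threshold.

    occluded_flags: per-key-point booleans for the whole face (e.g. 68 entries)
    part_indices:   indices of the part's key points (eyes or mouth periphery)
    threshold:      preset ratio, e.g. 0.2 for a 20% threshold
    """
    occluded = sum(1 for i in part_indices if occluded_flags[i])
    return occluded / len(part_indices) > threshold

# Example from the text: 3 of 12 eye key points occluded gives a ratio of 25%,
# which exceeds a 20% threshold, so the eyes are treated as occluded.
```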
In an optional implementation, referring to FIG. 5, which shows a flow chart of a mouth occlusion state detection method provided by Embodiment 1 of the present invention, determining the occlusion state of the mouth according to the occlusion states of the mouth periphery key points indicating the mouth and the jaw in the current cabin image includes steps S501 to S503:
S501: Judge whether the ratio of the number of occluded mouth periphery key points, i.e. mouth periphery key points whose occlusion state is occluded, to the second total number of mouth periphery key points exceeds the second preset threshold.
Specifically, the mouth periphery key points are the key points indicating the mouth and the jaw, and the occluded mouth periphery key points are the mouth and jaw key points whose occlusion state is occluded. The number of occluded mouth and jaw key points is counted, the ratio of this number to the second total number of mouth periphery key points is calculated, and the ratio is compared with the second preset threshold. For example, if the number of occluded mouth periphery key points is 10 and the second total number of mouth periphery key points is 37, the ratio is about 27%.
S502: If the ratio of the number of occluded mouth periphery key points to the second total number exceeds the second preset threshold, determine that the occlusion state of the mouth is occluded.
Specifically, if the ratio exceeds the second preset threshold, the mouth periphery is very likely occluded, so the occlusion state of the mouth is determined to be occluded. For example, when the second preset threshold is 20% and the calculated ratio is 27%, the occlusion state of the mouth is determined to be occluded.
S503: If the ratio of the number of occluded mouth periphery key points to the second total number does not exceed the second preset threshold, determine that the occlusion state of the mouth is unoccluded.
Specifically, if the ratio does not exceed the second preset threshold, the mouth periphery is very likely unoccluded, so the occlusion state of the mouth is determined to be unoccluded. For example, when the second preset threshold is 95% and the calculated ratio is 27%, the occlusion state of the mouth is determined to be unoccluded.
In an optional implementation, referring to FIG. 6, which shows a flow chart of a specific light metering area determination method provided by Embodiment 1 of the present invention, determining the light metering area of the current cabin image according to the occlusion state of the eyes and the occlusion state of the mouth includes steps S601 to S603:
S601: If the occlusion state of the eyes is occluded but the occlusion state of the mouth is unoccluded, determine the light metering area as the mouth area.
Specifically, if the eyes are occluded but the mouth is unoccluded, the unoccluded mouth area is used as the light metering area, or the lower half of the face is used as the light metering area, or the lower two-thirds of the face is used as the light metering area.
S602: If the occlusion state of the eyes is unoccluded but the occlusion state of the mouth is occluded, determine the light metering area as the eye area.
Specifically, if the eyes are unoccluded but the mouth is occluded, the unoccluded eye area is used as the light metering area, or the upper half of the face is used as the light metering area, or the upper third of the face is used as the light metering area.
S603: If the occlusion states of the eyes and the mouth are both occluded, or both unoccluded, determine the light metering area as the face area.
Specifically, if the eyes and the mouth are both occluded, or both unoccluded, the entire face area is used as the light metering area.
If there is no face in the current cabin image, the face area of the cabin image most recently captured within a short period (within 5 s) is used as the light metering area. If none of the cabin images captured within that short period (within 5 s) contains a face either, a preset area is used as the light metering area.
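A sketch of the region selection including the no-face fallback; the preset rectangle and the state variables are illustrative names, with only the 5 s window taken from the text:

```python
import time

PRESET_REGION = (100, 100, 200, 200)   # illustrative default rectangle (x, y, w, h)
FACE_MEMORY_S = 5.0                    # "short period" window from the text

_last_face_region = None
_last_face_time = 0.0

def select_metering_region(face_region, now=None):
    """Prefer the current face region; otherwise reuse the most recent face
    region seen within the last 5 s; otherwise fall back to a preset area."""
    global _last_face_region, _last_face_time
    now = time.monotonic() if now is None else now
    if face_region is not None:
        _last_face_region, _last_face_time = face_region, now
        return face_region
    if _last_face_region is not None and now - _last_face_time <= FACE_MEMORY_S:
        return _last_face_region
    return PRESET_REGION
```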
In an optional implementation, referring to FIG. 7, which shows a flow chart of an exposure time adjustment strategy determination method provided by Embodiment 1 of the present invention, determining the target adjustment direction and the target adjustment amount of the exposure time of the driver monitoring camera according to the pixel grayscale of the light metering area of the current cabin image includes steps S701 to S706:
S701: Pre-configure the first parameter r and the second parameter gray_theshold.
S702: Compute the percentiles of the pixel grayscales within the light metering rectangle to obtain a percentile statistics result.
S703: Judge whether P(r) in the percentile statistics result exceeds the second parameter gray_theshold, where P(r) is the value of the r-th percentile.
S704: If P(r) in the percentile statistics result exceeds the second parameter gray_theshold, determine the target adjustment direction as decrease.
S705: If P(r) in the percentile statistics result does not exceed the second parameter gray_theshold, determine the target adjustment direction as increase.
Specifically, two parameters r and gray_theshold are preset. If P(r) of the percentile statistics result is greater than gray_theshold, the exposure time is decreased by configuring the sensor register; otherwise the exposure time is increased (the sensor of a driver monitoring camera provides an interface function for adjusting the exposure time, and calling this function adjusts the exposure time).
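A sketch of the percentile test in steps S701 to S705. The parameter names r and gray_theshold follow the text; the example values are placeholders, and +1/-1 is used here to denote increasing or decreasing the exposure time:

```python
import numpy as np

def exposure_direction(gray_roi, r=90, gray_theshold=180):
    """Return -1 (decrease exposure time) if the r-th percentile of the light
    metering rectangle's grayscales exceeds gray_theshold, otherwise +1 (increase).

    gray_roi: 2-D array of pixel grayscales (0-255) inside the metering rectangle.
    r=90 and gray_theshold=180 are illustrative defaults, not values from the patent.
    """
    p_r = np.percentile(gray_roi, r)   # P(r), the value of the r-th percentile
    return -1 if p_r > gray_theshold else +1
```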
S706: Determine the target adjustment amount based on a PID control algorithm.
Specifically, a PID control algorithm is used to calculate the target adjustment amount Δu(k), where Δu(k) = k_p·(e_k − e_{k−1}) + k_i·e_k + k_d·(e_k − 2e_{k−1} + e_{k−2}); here k_p is the proportional control coefficient at the k-th step, k_i is the integral control coefficient at the k-th step, k_d is the differential control coefficient at the k-th step, and e_k is the deviation at the k-th step. The proportional, integral and differential coefficients are set piecewise according to the exposure time: using different coefficients in different exposure time intervals yields better control performance and improves stability and convergence speed. The exposure time intervals are set to 0-100 us, 100-200 us, 200-400 us, 400-800 us, and above 800 us.
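A sketch of the incremental PID update above with per-exposure-band coefficients. The band boundaries follow the text; the gain values are placeholders, and the deviation e_k would typically be the gap between the measured grayscale statistic and its target:

```python
# Exposure-time bands (us) from the text; the (kp, ki, kd) values are placeholders.
BANDS = [(0, 100), (100, 200), (200, 400), (400, 800), (800, float("inf"))]
GAINS = [(0.5, 0.05, 0.1), (0.8, 0.08, 0.1), (1.0, 0.10, 0.2),
         (1.5, 0.12, 0.2), (2.0, 0.15, 0.3)]

class IncrementalPID:
    """Delta u(k) = kp*(e_k - e_{k-1}) + ki*e_k + kd*(e_k - 2*e_{k-1} + e_{k-2})."""

    def __init__(self):
        self.e_prev1 = 0.0   # e_{k-1}
        self.e_prev2 = 0.0   # e_{k-2}

    def update(self, e_k, exposure_us):
        # Pick the gain set for the current exposure-time band.
        kp, ki, kd = next(g for (lo, hi), g in zip(BANDS, GAINS)
                          if lo <= exposure_us < hi)
        delta = (kp * (e_k - self.e_prev1)
                 + ki * e_k
                 + kd * (e_k - 2 * self.e_prev1 + self.e_prev2))
        self.e_prev2, self.e_prev1 = self.e_prev1, e_k
        return delta
```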
Embodiment 2
Referring to FIG. 8, FIG. 8 shows a schematic structural diagram of a cabin image acquisition device provided by Embodiment 2 of the present invention, wherein the device comprises:
an image detection module 801, configured to collect cabin images in real time through a driver monitoring camera and detect whether there is a face in the current cabin image;
an occlusion state determination module 802, configured to, if there is a face in the current cabin image, perform face recognition and key point localization on the current cabin image to obtain the key points of the face in the current cabin image and the occlusion state of each key point;
a light metering area determination module 803, configured to determine the light metering area of the current cabin image according to the occlusion states of the key points of the face in the current cabin image;
an adjustment strategy determination module 804, configured to determine the target adjustment direction and the target adjustment amount of the exposure time of the driver monitoring camera according to the pixel grayscale of the light metering area of the current cabin image;
an exposure time adjustment module 805, configured to adjust the exposure time of the driver monitoring camera according to the target adjustment direction and the target adjustment amount;
an image acquisition module 806, configured to collect the cabin image at the next moment through the driver monitoring camera with the adjusted exposure time.
在一个可选的实施方案中,所述对当前车舱图像进行人脸识别和关键点定位得到当前车舱图像中的人脸的各关键点以及各关键点的遮挡状态,包括:In an optional implementation, the step of performing face recognition and key point positioning on the current cabin image to obtain key points of the face in the current cabin image and the occlusion status of each key point includes:
将当前车舱图像输入至人脸识别模型得到当前车舱图像中包含人脸的人脸外接框;Input the current cabin image into the face recognition model to obtain a face bounding box containing a face in the current cabin image;
将所述人脸外接框包含的人脸区域输入至人脸关键点定位模型中得到当前车舱图像中的人脸的各关键点以及各关键点的遮挡状态。The face area included in the face circumscribed frame is input into the face key point positioning model to obtain the key points of the face in the current cabin image and the occlusion status of each key point.
在一个可选的实施方案中,所述根据当前车舱图像中的人脸的各关键点的遮挡状态确定出当前车舱图像的测光区域,包括:In an optional implementation manner, determining the photometric area of the current cabin image according to the occlusion state of each key point of the face in the current cabin image includes:
根据当前车舱图像中的指示眼部的各眼部关键点的遮挡状态确定出所述眼部的遮挡状态;Determining the occlusion state of the eye according to the occlusion state of each eye key point indicating the eye in the current cabin image;
根据当前车舱图像中的指示嘴部和下颚的各嘴周关键点的遮挡状态确定出所述嘴部的遮挡状态;Determining the occlusion state of the mouth according to the occlusion states of key points around the mouth indicating the mouth and the jaw in the current cabin image;
根据所述眼部的遮挡状态和所述嘴部的遮挡状态确定出当前车舱图像的测光区域。A photometric area of the current cabin image is determined according to the occlusion state of the eyes and the occlusion state of the mouth.
在一个可选的实施方案中,所述根据当前车舱图像中的指示眼部的各眼部关键点的遮挡状态确定出所述眼部的遮挡状态,包括:In an optional implementation, determining the occlusion state of the eye according to the occlusion state of each eye key point indicating the eye in the current cabin image includes:
判断各眼部关键点中遮挡状态为被遮挡的遮挡眼部关键点的数量与眼部关键点的第一总数量的占比是否超过第一预设阈值;Determine whether the ratio of the number of occluded eye key points whose occlusion state is occluded in each eye key point to the first total number of eye key points exceeds a first preset threshold;
若所述遮挡眼部关键点的数量与所述第一总数量的占比超过所述第一预设阈值,则确定出所述眼部的遮挡状态为被遮挡;If the ratio of the number of the eye-blocking key points to the first total number exceeds the first preset threshold, determining that the eye is blocked;
若所述遮挡眼部关键点的数量与所述第一总数量的占比未超过所述第一预设阈值,则确定出所述眼部的遮挡状态为未遮挡。If the ratio of the number of the blocked eye key points to the first total number does not exceed the first preset threshold, it is determined that the blockage state of the eye is not blocked.
在一个可选的实施方案中,所述根据当前车舱图像中的指示嘴部和下颚的各嘴周关键点的遮挡状态确定出所述嘴部的遮挡状态,包括:In an optional implementation, the determining the occlusion state of the mouth according to the occlusion states of each mouth periphery key point indicating the mouth and the lower jaw in the current cabin image includes:
判断各嘴周关键点中遮挡状态为被遮挡的遮挡嘴周关键点的数量与嘴周关键点的第二总数量的占比是否超过第二预设阈值;Determining whether the ratio of the number of occluded mouth periphery key points, namely the mouth periphery key points whose occlusion state is occluded, to the second total number of mouth periphery key points exceeds a second preset threshold;
若所述遮挡嘴周关键点的数量与所述第二总数量的占比超过所述第二预设阈值，则确定出所述嘴部的遮挡状态为被遮挡;If the ratio of the number of the occluded mouth periphery key points to the second total number exceeds the second preset threshold, determining that the occlusion state of the mouth is occluded;
若所述遮挡嘴周关键点的数量与所述第二总数量的占比未超过所述第二预设阈值，则确定出所述嘴部的遮挡状态为未遮挡。If the ratio of the number of the occluded mouth periphery key points to the second total number does not exceed the second preset threshold, determining that the occlusion state of the mouth is unoccluded.
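The eye and mouth decisions above share the same ratio test, so a single helper suffices in a sketch; which key point indices belong to the eyes or to the mouth and lower jaw depends on the key point scheme actually used and is left as an input here.

```python
def region_occluded(occluded, indices, threshold):
    """Return True when the share of occluded key points among `indices`
    (e.g. the eye key points, or the mouth/lower-jaw key points) exceeds
    the preset threshold."""
    flags = [bool(occluded[i]) for i in indices]
    return sum(flags) / len(flags) > threshold
```

With the per-point occlusion flags from the key point model, calls such as `region_occluded(occluded, EYE_INDICES, first_threshold)` and `region_occluded(occluded, MOUTH_INDICES, second_threshold)` reproduce the two decisions above; the index sets are defined, as placeholders, in the next sketch.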
在一个可选的实施方案中,所述根据所述眼部的遮挡状态和所述嘴部的遮挡状态确定出当前车舱图像的测光区域,包括:In an optional implementation, determining the photometric area of the current cabin image according to the occlusion state of the eyes and the occlusion state of the mouth includes:
若所述眼部的遮挡状态为被遮挡但所述嘴部的遮挡状态为未遮挡，则所述测光区域确定为嘴部区域;If the occlusion state of the eyes is occluded but the occlusion state of the mouth is unoccluded, the photometric area is determined to be the mouth area;
若所述眼部的遮挡状态为未遮挡但所述嘴部的遮挡状态为被遮挡，则所述测光区域确定为眼部区域;If the occlusion state of the eyes is unoccluded but the occlusion state of the mouth is occluded, the photometric area is determined to be the eye area;
若所述眼部和所述嘴部的遮挡状态均为被遮挡，或者所述眼部和所述嘴部的遮挡状态均为未遮挡，则将所述测光区域确定为人脸区域。If the occlusion states of the eyes and the mouth are both occluded, or both unoccluded, the photometric area is determined to be the face area.
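A sketch of this selection rule, building on the region_occluded helper above. The eye and mouth index sets follow the common 68-point landmark layout only as an example, and the thresholds are placeholders; none of these values come from the disclosure.

```python
import numpy as np

# Placeholder index sets; the concrete indices depend on the key point scheme
# of the positioning model (the common 68-point layout is assumed here).
EYE_INDICES = list(range(36, 48))                         # both eye contours
MOUTH_INDICES = list(range(48, 68)) + list(range(6, 11))  # mouth plus lower-jaw points

def bounding_rect(points):
    """Axis-aligned rectangle (x0, y0, x1, y1) enclosing a set of key points."""
    pts = np.asarray(points, dtype=float)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return int(x0), int(y0), int(x1), int(y1)

def select_metering_region(keypoints, occluded, face_box,
                           eye_threshold=0.5, mouth_threshold=0.5):
    """Choose the photometric area: mouth area, eye area, or the whole face box."""
    keypoints = np.asarray(keypoints, dtype=float)
    eyes_blocked = region_occluded(occluded, EYE_INDICES, eye_threshold)
    mouth_blocked = region_occluded(occluded, MOUTH_INDICES, mouth_threshold)

    if eyes_blocked and not mouth_blocked:
        return bounding_rect(keypoints[MOUTH_INDICES])  # mouth area
    if mouth_blocked and not eyes_blocked:
        return bounding_rect(keypoints[EYE_INDICES])    # eye area
    return face_box                                     # face area in the two remaining cases
```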
在一个可选的实施方案中,所述根据当前车舱图像的测光区域的像素灰度确定出所述驾驶员监控摄像头的曝光时间的目标调整方向和目标调整量,包括:In an optional implementation, determining the target adjustment direction and target adjustment amount of the exposure time of the driver monitoring camera according to the pixel grayscale of the photometric area of the current cabin image includes:
预先配置第一参数r和第二参数gray_threshold;Pre-configuring a first parameter r and a second parameter gray_threshold;
统计测光区域矩形内的像素灰度的百分位数，得到百分位数统计结果;Counting the percentiles of the pixel grayscale values within the rectangle of the photometric area to obtain a percentile statistics result;
判断所述百分位数统计结果中的P(r)是否超过所述第二参数gray_threshold，其中，P(r)为第r个百分位数的数值;Determining whether P(r) in the percentile statistics result exceeds the second parameter gray_threshold, where P(r) is the value of the r-th percentile;
若所述百分位数统计结果中的P(r)超过所述第二参数gray_threshold，则将所述目标调整方向确定为减少;If P(r) in the percentile statistics result exceeds the second parameter gray_threshold, determining the target adjustment direction to be a decrease;
若所述百分位数统计结果中的P(r)未超过所述第二参数gray_threshold，将所述目标调整方向确定为增加;If P(r) in the percentile statistics result does not exceed the second parameter gray_threshold, determining the target adjustment direction to be an increase;
基于PID控制算法确定出所述目标调整量。The target adjustment amount is determined based on a PID control algorithm.
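One way to read this step is sketched below: the r-th percentile of the grayscale values inside the metering rectangle is compared against gray_threshold to pick the direction, and a simple PID-style term on the resulting brightness error gives the amount. The gains, the stateful integral/derivative handling, and the assumption of a single-channel grayscale image are illustrative choices, not details taken from the disclosure.

```python
import numpy as np

def compute_exposure_adjustment(image, region, r, gray_threshold,
                                kp=0.05, ki=0.0, kd=0.0, state=None):
    """Sketch: percentile test for the direction, PID-style term for the amount."""
    x0, y0, x1, y1 = region
    gray = np.asarray(image[y0:y1, x0:x1], dtype=float)  # assumes a grayscale image
    p_r = np.percentile(gray, r)                         # P(r), value of the r-th percentile

    # Direction: decrease (-1) when the region is too bright, otherwise increase (+1).
    direction = -1 if p_r > gray_threshold else +1

    # Amount: a simple PID controller on the brightness error.
    error = abs(p_r - gray_threshold)
    state = state if state is not None else {"integral": 0.0, "prev_error": 0.0}
    state["integral"] += error
    derivative = error - state["prev_error"]
    state["prev_error"] = error
    amount = kp * error + ki * state["integral"] + kd * derivative

    return direction, amount
```

In practice the PID state would be carried across frames; the dictionary here only keeps the sketch short.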
实施例三Embodiment 3
基于同一申请构思,参见图9所示,图9示出了本发明实施例三所提供的一种计算机设备的结构示意图,其中,如图9所示,本申请实施例三所提供的一种计算机设备900包括:Based on the same application concept, referring to FIG. 9 , FIG. 9 shows a schematic diagram of the structure of a computer device provided in Embodiment 3 of the present invention. As shown in FIG. 9 , a computer device 900 provided in Embodiment 3 of the present application includes:
处理器901、存储器902和总线903，所述存储器902存储有所述处理器901可执行的机器可读指令，当计算机设备900运行时，所述处理器901与所述存储器902之间通过所述总线903进行通信，所述机器可读指令被所述处理器901运行时执行上述实施例一所示的车舱图像采集方法的步骤。A processor 901, a memory 902 and a bus 903, wherein the memory 902 stores machine-readable instructions executable by the processor 901; when the computer device 900 runs, the processor 901 communicates with the memory 902 via the bus 903, and when the machine-readable instructions are executed by the processor 901, the steps of the cabin image acquisition method shown in Embodiment 1 above are performed.
实施例四Embodiment 4
基于同一申请构思,本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器运行时执行上述实施例中任一项所述的车舱图像采集方法的步骤。Based on the same application concept, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored. When the computer program is executed by a processor, the steps of the cabin image acquisition method described in any one of the above embodiments are executed.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working process of the system and device described above can refer to the corresponding process in the aforementioned method embodiment, and will not be repeated here.
本发明实施例所提供的进行人脸识别模型训练的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行前面方法实施例中所述的方法,具体实现可参见方法实施例,在此不再赘述。The computer program product for training a face recognition model provided in an embodiment of the present invention includes a computer-readable storage medium storing program code. The instructions included in the program code can be used to execute the method described in the previous method embodiment. The specific implementation can be found in the method embodiment, which will not be repeated here.
本发明实施例所提供的人脸识别模型训练装置可以为设备上的特定硬件或者安装于设备上的软件或固件等。本发明实施例所提供的装置,其实现原理及产生的技术效果和前述方法实施例相同,为简要描述,装置实施例部分未提及之处,可参考前述方法实施例中相应内容。所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,前述描述的系统、装置和单元的具体工作过程,均可以参考上述方法实施例中的对应过程,在此不再赘述。The face recognition model training device provided in the embodiment of the present invention can be specific hardware on the device or software or firmware installed on the device. The implementation principle and technical effects of the device provided in the embodiment of the present invention are the same as those of the aforementioned method embodiment. For the sake of brief description, for the parts not mentioned in the device embodiment, reference can be made to the corresponding contents in the aforementioned method embodiment. Those skilled in the art can clearly understand that for the convenience and simplicity of description, the specific working processes of the systems, devices and units described above can all refer to the corresponding processes in the aforementioned method embodiment, and will not be repeated here.
在本发明所提供的实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the embodiments provided by the present invention, it should be understood that the disclosed devices and methods can be implemented in other ways. The device embodiments described above are only schematic. For example, the division of the units is only a logical function division. There may be other division methods in actual implementation. For example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some communication interfaces, and the indirect coupling or communication connection of devices or units can be electrical, mechanical or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
另外,在本发明提供的实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in the embodiment provided by the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, or the part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium, including several instructions to enable a computer device (which can be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention. The aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), disk or optical disk, etc., which can store program codes.
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释,此外,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。It should be noted that similar numbers and letters represent similar items in the following figures. Therefore, once an item is defined in one figure, it does not need to be further defined and explained in subsequent figures. In addition, the terms "first", "second", "third", etc. are only used to distinguish the description and are not to be understood as indicating or implying relative importance.
最后应说明的是：以上所述实施例，仅为本发明的具体实施方式，用以说明本发明的技术方案，而非对其限制，本发明的保护范围并不局限于此，尽管参照前述实施例对本发明进行了详细的说明，本领域的普通技术人员应当理解：任何熟悉本技术领域的技术人员在本发明揭露的技术范围内，其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化，或者对其中部分技术特征进行等同替换；而这些修改、变化或者替换，并不使相应技术方案的本质脱离本发明实施例技术方案的精神和范围，都应涵盖在本发明的保护范围之内。因此，本发明的保护范围应以权利要求的保护范围为准。Finally, it should be noted that the above embodiments are only specific implementations of the present invention, intended to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field may, within the technical scope disclosed by the present invention, still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent replacements of some of the technical features; such modifications, changes or replacements do not cause the essence of the corresponding technical solutions to deviate from the spirit and scope of the technical solutions of the embodiments of the present invention, and they should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411337301.3A CN118864793A (en) | 2024-09-25 | 2024-09-25 | A method, device, computer equipment and storage medium for collecting vehicle cabin images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118864793A true CN118864793A (en) | 2024-10-29 |
Family
ID=93177451
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411337301.3A (CN118864793A, pending) | A method, device, computer equipment and storage medium for collecting vehicle cabin images | 2024-09-25 | 2024-09-25 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118864793A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101202841A (en) * | 2006-12-11 | 2008-06-18 | 株式会社理光 | Camera device and exposure control method for camera device |
JP2009100252A (en) * | 2007-10-17 | 2009-05-07 | Mitsubishi Electric Corp | Imaging device |
CN105245786A (en) * | 2015-09-09 | 2016-01-13 | 厦门美图之家科技有限公司 | Self-timer method based on intelligent light measurement, self-timer system and photographing terminal |
WO2019237992A1 (en) * | 2018-06-15 | 2019-12-19 | Oppo广东移动通信有限公司 | Photographing method and device, terminal and computer readable storage medium |
CN113762136A (en) * | 2021-09-02 | 2021-12-07 | 北京格灵深瞳信息技术股份有限公司 | Face image occlusion judgment method and device, electronic equipment and storage medium |
WO2022062379A1 (en) * | 2020-09-22 | 2022-03-31 | 北京市商汤科技开发有限公司 | Image detection method and related apparatus, device, storage medium, and computer program |
CN114520880A (en) * | 2020-11-18 | 2022-05-20 | 华为技术有限公司 | Exposure parameter adjusting method and device |
WO2022134337A1 (en) * | 2020-12-21 | 2022-06-30 | 平安科技(深圳)有限公司 | Face occlusion detection method and system, device, and storage medium |
US20240135747A1 (en) * | 2022-04-08 | 2024-04-25 | Mashang Consumer Finance Co., Ltd. | Information processing method, computer device, and storage medium |
CN118097628A (en) * | 2024-01-18 | 2024-05-28 | 北京航空航天大学杭州创新研究院 | Driver fatigue detection method and device for face shielding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |