CN114743290A - Driving record control method and device and automobile - Google Patents
- Publication number
- CN114743290A (application number CN202210538505.8A)
- Authority
- CN
- China
- Prior art keywords
- video
- driver
- driving
- driving record
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C5/00—Registering or indicating the working of vehicles
- G07C5/08—Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
- G07C5/0841—Registering performance data
- G07C5/085—Registering performance data using electronic data carriers
- G07C5/0866—Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C5/00—Registering or indicating the working of vehicles
- G07C5/08—Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
- G07C5/0841—Registering performance data
- G07C5/0875—Registering performance data using magnetic data carriers
- G07C5/0891—Video recorder in combination with video camera
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
Abstract
The embodiment of the invention relates to the technical field of automobiles, and discloses a driving record control method, a driving record control device and an automobile. The method comprises the following steps: after the vehicle is started, acquiring real-time audio and video of a driver in the vehicle at the current moment; judging, based on the real-time audio and video, whether the driver has a target emotion at the current moment; and if so, adding a first tag associated with the target emotion to a first driving record segment, wherein the time span of the first driving record segment includes the current moment. With this technical scheme, the target emotion can be identified from the real-time audio and video, and a first driving record segment is generated based on the driver's target emotion; through the first tag, which is associated with the target emotion, the driving record video captured while the driver was in the target emotion can be quickly located, displayed and reviewed.
Description
Technical Field
The embodiment of the invention relates to the field of automobiles, in particular to a driving record control method and device and an automobile.
Background
The automobile data recorder records the traffic conditions in front of and around the vehicle during driving by circular (loop) video recording. When the vehicle brakes suddenly or a collision accident occurs, the recorder is triggered to automatically save the video clip covering the event, which prevents important content from being overwritten by the loop recording and allows the user to provide evidence for determining responsibility in a traffic accident by replaying the saved clip.
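The loop-recording and event-protection behavior described above can be sketched roughly as a ring buffer. This is an illustrative sketch only, with invented class and method names, not an implementation disclosed by the patent:

```python
from collections import deque

class DashcamBuffer:
    """Minimal sketch of circular (loop) recording: once capacity is
    reached the oldest clip is overwritten, unless a clip has been copied
    into protected storage (e.g. on emergency braking or a collision)."""

    def __init__(self, capacity=5):
        self.loop = deque(maxlen=capacity)  # oldest clip dropped automatically
        self.protected = []                 # clips saved from being overwritten

    def record(self, clip):
        self.loop.append(clip)

    def protect_current(self):
        # Triggered by emergency-braking / collision detection.
        if self.loop:
            self.protected.append(self.loop[-1])

buf = DashcamBuffer(capacity=3)
for clip in ["clip1", "clip2", "clip3", "clip4"]:
    buf.record(clip)
buf.protect_current()
print(list(buf.loop))   # ['clip2', 'clip3', 'clip4'] (clip1 overwritten)
print(buf.protected)    # ['clip4']
```

The `deque(maxlen=...)` gives the overwrite-oldest semantics of loop recording for free; the protected list models the separately saved event clips.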
Prolonged irregular driving by other road users can provoke road rage in the driver of the current vehicle; aggressive driving, or the two vehicles weaving and racing, may then cause a traffic accident. In such a scenario the user can easily review the driving record at the moment of a collision or emergency braking, but to review the driving record from the road-rage period itself, the user must spend considerable time carefully browsing the recorded driving record video.
Disclosure of Invention
In view of the above problems, the present application provides a driving record control method, a driving record control device and an automobile, to solve the prior-art problem that, when a driver experiences road rage or another specific emotion while driving, the user cannot directly locate the corresponding driving record segment in the video recorded by the driving recorder and must spend considerable time searching for it manually.
According to an aspect of an embodiment of the present invention, there is provided a driving record control method, including: after the vehicle is started, acquiring real-time audio and video of a driver in the vehicle at the current moment; judging whether the driver has a target emotion at the current moment based on the real-time audio and video; and if so, adding a first tag for associating the target emotion to a first driving record segment, wherein the first driving record segment comprises the current moment.
In an alternative approach, adding a first tag for associating the target emotion to a first driving recording segment, further comprises: acquiring the position information of the vehicle at the current moment; generating the first tag including the current time, the location information, and the target emotion; adding the first tag to the first driving record segment; wherein the time span of the first driving record segment is centered at the current time.
In an optional manner, the real-time audio and video includes an image of the driver, and the determining that the driver has the target emotion at the current moment based on the real-time audio and video further includes: identifying the driver image included in the real-time audio and video; and when a facial expression or a body movement included in the driver image meets a preset condition, determining that the driver has the target emotion at the current moment.
In an optional manner, the real-time audio and video includes the driver's voice, and the determining that the driver has the target emotion at the current moment based on the real-time audio and video further includes: identifying the driver's voice included in the real-time audio and video; and when the decibel level of the driver's voice, or a keyword contained in the driver's voice, meets a preset condition, determining that the driver has the target emotion at the current moment.
In an optional manner, after the adding of the first tag for associating the target emotion to the first driving record segment, the method further comprises: generating a first in-vehicle audio and video segment for corroborating the first driving record segment, wherein the time span of the first in-vehicle audio and video segment includes the current moment, and its duration is the same as that of the first driving record segment; and storing the first driving record segment and the first in-vehicle audio and video segment locally.
In an optional manner, after the determining result is yes, the method further includes: sending a first signaling for adding a label to the first driving record segment to a driving record component; the first signaling is used for enabling the driving recording component to determine a span parameter of the duration of the first driving recording segment, so that the first driving recording segment is determined in the recorded video of the driving recording component based on the span parameter and the current moment.
In an optional manner, after the adding of the first tag for associating the target emotion to the first driving record segment, the method further comprises: receiving a driving record retrieval instruction input by a user; and when a keyword included in the driving record retrieval instruction matches the target emotion, controlling a user interface to display the first tag and the corresponding first driving record segment.
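The retrieval step in this optional manner can be sketched as a simple keyword match between the user's retrieval instruction and the stored tags. This is a minimal illustration with invented names; the patent does not specify a matching algorithm:

```python
def search_driving_records(tags, query):
    """Return the tags (and thus the associated driving record segments)
    whose emotion keyword appears in the user's retrieval instruction."""
    q = query.lower()
    return [t for t in tags if t["emotion"].lower() in q]

# Illustrative stored tags: each points at a record segment.
tags = [
    {"emotion": "angry", "time": "08:32", "segment": "seg_001"},
    {"emotion": "happy", "time": "09:10", "segment": "seg_002"},
]
hits = search_driving_records(tags, "show my angry driving records")
print([t["segment"] for t in hits])  # ['seg_001']
```

A real system would also match the time and location information carried in the tag, as the second embodiment describes.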
In an alternative, the target emotion is an angry emotion.
According to another aspect of the embodiments of the present invention, there is provided a driving record control apparatus including: an audio and video module, used for acquiring the real-time audio and video of a driver in the vehicle at the current moment after the vehicle is started; a central control module, used for judging whether the driver has a target emotion at the current moment based on the real-time audio and video; and an execution module, used for adding a first tag for associating the target emotion to a first driving record segment if the judgment result is yes, wherein the first driving record segment comprises the current moment.
According to another aspect of an embodiment of the present invention, there is provided an automobile including: the system comprises a driving recording component, a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus; the driving recording component is used for acquiring a driving recording video; the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation of the driving record control method.
According to another aspect of the embodiments of the present invention, there is provided a computer-readable storage medium, wherein at least one executable instruction is stored, and when the executable instruction runs on a driving record control device/automobile, the driving record control device/automobile executes the operation of the driving record control method according to any one of the above contents.
Beneficial effects of the present application: by acquiring the real-time audio and video in the vehicle, the target emotion of the driver can be identified through analysis of that audio and video; by constructing the first tag, the user can quickly locate and review, within the recorded video of the driving recording component, the driving record video captured while the driver was in the target emotion. A first driving record segment is generated based on the driver's target emotion, and the first tag associated with the target emotion and the corresponding first driving record segment are quickly located and displayed according to the target emotion, so the driving record video from when the driver was in the target emotion can be quickly checked.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. To make the technical means of the embodiments more clearly understood and implementable according to the description, and to make the above and other objects, features and advantages of the embodiments more readily apparent, the detailed description of the invention is provided below.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a flow chart of a first embodiment of a driving record control method of the invention;
fig. 2 shows a flow chart of a second embodiment of the driving record control method of the present invention;
fig. 3 shows a flow chart of a third embodiment of the driving record control method of the present invention;
fig. 4 shows a flow chart of a fourth embodiment of the driving record control method of the present invention;
fig. 5 shows a flowchart of a fifth embodiment of the driving record control method of the present invention;
fig. 6 shows a flowchart of a sixth embodiment of the driving record control method of the present invention;
FIG. 7A is a schematic view of an automobile cockpit and a driving recorder user interface in accordance with an embodiment of the present invention;
FIG. 7B is a schematic view of an automobile cockpit and a driving recorder user interface in accordance with another embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating the structure of an embodiment of the present invention showing a first driving record fragment;
fig. 9 shows a schematic structural diagram of an embodiment of the automobile of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein.
Fig. 1 shows a flow chart of a first embodiment of the driving record control method according to the present invention; the method may be performed by a driving record control device/automobile. As shown in fig. 1, the method comprises the following steps:
step 110: and after the vehicle is started, acquiring the real-time video and audio of a driver in the vehicle at the current moment.
The real-time audio and video may include video images and sound at the current moment. The video can be captured by a front camera configured on the driving recording component, or by a camera configured in the vehicle cab; the sound can be captured by a microphone arranged on the driving recording component, or by a microphone arranged in the vehicle cab.
After the vehicle is started, the driving recording component is typically configured to begin recording off-board video, which may include road conditions in front of and around the vehicle. The real-time audio and video is in-vehicle audio and video, and can include the driver, the front passenger and the cockpit environment at the current moment. If the driving recording component is provided with a front camera and a microphone, the real-time audio and video including the driver at the current moment can be acquired through the driving recording component. The driving record video is typically an off-board video, but may also include in-vehicle video. If the driving recording component is not provided with a front camera and a microphone, the real-time audio and video can be obtained through a vehicle-mounted camera and a vehicle-mounted microphone arranged in the vehicle cockpit; through the combination of the front camera of the driving recording component and a vehicle-mounted microphone configured in the vehicle cockpit; or through the combination of the microphone of the driving recording component and a vehicle-mounted camera configured in the vehicle cockpit.
Step 120: and judging whether the driver has a target emotion at the current moment or not based on the real-time audio and video.
The target emotion may be implemented as an angry emotion. After the real-time audio and video of the driver at the current moment is acquired, the images and/or the sound contained in it are analyzed to judge whether the driver is in an angry emotional state at the current moment.
For example, the images in the real-time audio and video are analyzed: the angry emotion can be judged by comparing the eyebrow spacing and eyebrow length with recorded normal-state data, by comparing the area of both eyes with recorded normal-state data, by comparing the nostril diameter with recorded normal-state data, or by comparing the nose bridge length with recorded normal-state data. Alternatively, the sound in the real-time audio and video is analyzed, and the angry emotion is judged from the amplitude of the decibel change or from keywords detected in the voice. The angry emotion can also be judged from multidimensional indicators combining decibel characteristics, keyword characteristics, body-movement characteristics and expression characteristics.
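As a rough illustration of the image-based judgment described above, the following sketch compares measured facial metrics against recorded normal-state (baseline) data. The metric names and threshold percentages are assumptions made for illustration, not values fixed by the patent:

```python
def is_angry(baseline, current):
    """Judge an angry emotion by comparing measured facial metrics
    with recorded normal-state (baseline) data."""
    if current["eyebrow_spacing"] <= baseline["eyebrow_spacing"] * 0.9:
        return True  # eyebrows drawn together (frowning)
    if current["eye_area"] >= baseline["eye_area"] * 1.2:
        return True  # eyes widened (glaring)
    if current["nostril_diameter"] >= baseline["nostril_diameter"] * 1.2:
        return True  # nostrils flared
    return False

# Illustrative measurements (arbitrary units).
baseline = {"eyebrow_spacing": 30.0, "eye_area": 400.0, "nostril_diameter": 8.0}
calm = {"eyebrow_spacing": 29.5, "eye_area": 410.0, "nostril_diameter": 8.1}
angry = {"eyebrow_spacing": 25.0, "eye_area": 500.0, "nostril_diameter": 10.0}
print(is_angry(baseline, calm), is_angry(baseline, angry))  # False True
```

A production system would fuse these cues with the sound and body-movement indicators rather than treat any single metric as decisive.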
It should be noted that, for ease of describing the technical solution of the invention, the following takes the target emotion as an angry emotion without repeating this point. Although the driving record control method is explained using the angry emotion as an example, the target emotion may also be implemented as other emotions such as happiness, melancholy or sadness; the method can analyze the driver's facial features, body features and voice features to judge different target emotions.
Step 130: and if so, adding a first tag for associating the target emotion to a first driving record segment, wherein the first driving record segment comprises the current moment.
The first driving record segment is the content of a specific time period in the off-board video recorded by the driving recording component during driving, and the specific time period includes the current moment at which the driver is determined to be in an angry emotional state. When the driving recording component is configured with a front camera and a microphone, the first driving record segment can also include in-vehicle video of the same time period.
When the target emotion is implemented as an angry emotion, the first tag may be associated with the angry emotion; that is, after the first tag is added to the first driving record segment, the user can directly search for the first tag, or directly locate the position of the first driving record segment in the recorded video by searching for the keyword "angry". When the driver is judged to be in an angry emotional state again, the driving recording component is controlled to generate a corresponding second driving record segment, so that the user can quickly locate and check the driving record videos corresponding to different angry-emotion periods through the different tags associated with the angry emotion.
In the implementation of this driving record control method, by acquiring the real-time audio and video in the vehicle, the target emotion of the driver can be identified through analysis of that audio and video; by constructing the first tag, the user can quickly locate and review, in the recorded video of the driving recording component, the driving record video captured while the driver was in the target emotion.
Fig. 2 shows a flow chart of a second embodiment of the driving record control method according to the present invention; the method may be performed by a driving record control device/automobile. As shown in fig. 2, the method comprises the following steps:
step 210: and after the vehicle is started, acquiring the real-time video and audio of a driver in the vehicle at the current moment.
The real-time video and audio may include video and sound at the current time. After the vehicle is started, the tachograph means is typically configured to begin recording off-board video, which may include road conditions in front of the vehicle and around the vehicle. The real-time audio and video is an in-vehicle video and can comprise a driver, a co-passenger and a cockpit environment at the current moment. If the driving recording component is provided with the front camera and the microphone, the real-time video including the driver at the current moment can be acquired through the driving recording component. The driving recording video is usually an outside-vehicle video, but may also include an inside-vehicle video.
Step 220: and judging whether the driver has a target emotion at the current moment or not based on the real-time audio and video.
After the real-time audio and video of the driver at the current moment is acquired, the images and/or the sound contained in it are analyzed to judge whether the driver is in an angry emotional state at the current moment. For example, the images in the real-time audio and video are analyzed: the angry emotion can be judged by comparing the eyebrow spacing and eyebrow length with recorded normal-state data, by comparing the area of both eyes with recorded normal-state data, by comparing the nostril diameter with recorded normal-state data, or by comparing the nose bridge length with recorded normal-state data. Alternatively, the sound in the real-time audio and video is analyzed, and the angry emotion is judged from the amplitude of the decibel change or from keywords detected in the voice. The angry emotion can also be judged from multidimensional indicators combining decibel characteristics, keyword characteristics, body-movement characteristics and expression characteristics.
Step 230: and if so, acquiring the position information of the vehicle at the current moment.
When the driver is judged to be in an angry emotional state based on the real-time in-vehicle audio and video, the current position information of the vehicle can be acquired through the vehicle-mounted positioning system, a driving recording component with a positioning function, or the vehicle-mounted navigation system, so as to record the road name, position coordinates, nearby landmark buildings and the like of the vehicle at the moment the angry emotion occurs.
Step 240: generating the first tag including the current time, the location information, and the target emotion.
The first tag may include the current moment at which the user is determined to be in an angry emotional state, i.e., time information; the acquired vehicle position information, i.e., geographic information; and the keyword "angry", i.e., emotion information. When the first tag displays text content including the time information, the geographic information and the emotion information, the user can grasp the main content of the first driving record segment just by reading the first tag.
It should be noted that the first tag may further include other content, such as the video length or a thumbnail of the video clip, or may combine all or part of the above content with other content to generate first tags of different content, so as to meet users' personalized requirements.
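The tag construction of steps 230 and 240 can be illustrated as follows; the field names and display format are invented for this sketch and are not prescribed by the patent:

```python
from dataclasses import dataclass

@dataclass
class DrivingRecordTag:
    timestamp: str  # current moment (time information)
    location: str   # road name / coordinates (geographic information)
    emotion: str    # e.g. "angry" (emotion information)

    def text(self):
        # Text shown in the user interface so the tag summarizes
        # the main content of the associated record segment.
        return f"{self.timestamp} | {self.location} | {self.emotion}"

tag = DrivingRecordTag("2022-05-10 08:32", "Main Street", "angry")
print(tag.text())  # 2022-05-10 08:32 | Main Street | angry
```

Extra fields such as video length or a thumbnail reference could be added to the dataclass in the same way to serve the personalized variants mentioned above.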
Step 250: adding the first tag to the first driving record segment; wherein the time span of the first driving recording segment is centered around the current time.
After the first tag associated with the angry emotion is generated, it can be added to the first driving record segment of the driving record video. In the user interface, the user can search for the first tag and jump directly to the position of the first driving record segment in the recorded video; alternatively, the first tag and the corresponding first driving record segment can be quickly located by retrieving the "angry" keyword, the time information or the location information included in the first tag.
The user may set the time span of the first driving record segment, for example a span covering 5 minutes before and after the current moment at which the angry emotion is determined. The span can be preset according to actual needs, and its parameter can be reconfigured when the duration of the first driving record segment needs to be extended or shortened. The time span of the first driving record segment necessarily contains the current moment, and the segment is usually generated centered on the current moment at which the angry emotion is determined.
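The "time span centered on the current moment" rule of step 250 can be sketched as follows; the 5-minute default mirrors the example above, and the function name is illustrative:

```python
from datetime import datetime, timedelta

def segment_bounds(current, span_minutes=5):
    """Return (start, end) of a driving record segment whose time span
    is centered on the moment the target emotion was determined."""
    half = timedelta(minutes=span_minutes)
    return current - half, current + half

moment = datetime(2022, 5, 10, 8, 32, 15)
start, end = segment_bounds(moment)
print(start, end)  # 2022-05-10 08:27:15 2022-05-10 08:37:15
```

Widening or narrowing the segment, as the text describes, is then just a matter of changing the `span_minutes` parameter.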
When the user is in an angry emotion state again, the driving recording part generates a corresponding second driving recording segment and a corresponding second label, so that the user can quickly locate and play back driving recording videos corresponding to different angry emotion time periods by checking and retrieving different labels.
In the implementation of this driving record control method, by acquiring the vehicle position information, a location attribute can be added to the first tag; by constructing the first tag to include the current moment, the position information and the target emotion, the user can quickly grasp the content of the driving record segment from the tag content, and the tag can be retrieved, and the driving record segment located, through multiple kinds of keywords.
Fig. 3 shows a flow chart of a third embodiment of the driving record control method according to the present invention; the method may be performed by a driving record control device/automobile. As shown in fig. 3, the method comprises the following steps:
step 310: and after the vehicle is started, acquiring the real-time video and audio of a driver in the vehicle at the current moment.
The real-time video and audio may include video and sound at the current time. After the vehicle is started, the tachograph means is typically configured to begin recording off-board video, which may include road conditions in front of the vehicle and around the vehicle. The real-time audio and video is an in-vehicle video and can comprise a driver, a passenger and a cockpit environment at the current moment. If the driving recording component is provided with the front camera and the microphone, the real-time video including the driver at the current moment can be acquired through the driving recording component. The driving recording video is typically an off-board video, but may also include an in-board video.
Step 320: And identifying the driver image included in the real-time audio and video.
The real-time audio and video acquired at the current moment during driving may include the driver, other passengers and the cockpit environment. Based on image analysis techniques, the current driver image can be identified; it may include a body image showing limb movements and a facial image showing facial expressions.
Step 330: and when the facial expression or the limb action included in the driver image meets a preset condition, judging that the driver has the target emotion at the current moment.
The preset condition for triggering the determination of an angry emotion from the facial expression may be implemented, for example, as: the eyebrow spacing and eyebrow length decreasing by 10% or more from their normal values and remaining so for 5 seconds or more; the area of both eyes exceeding the normal area by more than 20% for 5 seconds or more; the nostril diameter fluctuating periodically with a measured maximum more than 20% above the normal value; the nose bridge length decreasing by 20% or more from its normal average and holding for 3 seconds; or the arm movement frequency increasing by 30% or more above its normal average and holding for 3 seconds. Although the embodiments of the present invention exemplify various technical means for determining an angry emotion, the invention is not limited to these specific means.
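The "maintained for N seconds" form of these preset conditions can be sketched as a check over a trailing window of metric samples. The sample rate, metric values and thresholds below are illustrative assumptions:

```python
def condition_held(samples, predicate, hold_seconds, sample_rate_hz=10):
    """True if `predicate` is satisfied by every sample in the trailing
    window of `hold_seconds`, i.e. the deviation was 'maintained'."""
    needed = hold_seconds * sample_rate_hz
    window = samples[-needed:]
    return len(window) >= needed and all(predicate(s) for s in window)

# Example: eyebrow spacing decreased by 10% or more, held for 5 seconds.
normal_spacing = 30.0
frowning = lambda v: v <= normal_spacing * 0.9
spacings = [29.8] * 20 + [26.5] * 50  # 2 s normal, then 5 s of frowning
print(condition_held(spacings, frowning, hold_seconds=5))  # True
```

The same helper works for the eye-area, nose-bridge and arm-movement conditions by swapping in the appropriate predicate and hold time.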
Step 340: and if so, adding a first tag for associating the target emotion to a first driving record segment, wherein the first driving record segment comprises the current moment.
The first driving record segment is the content of a specific time period in the off-board video recorded by the driving recording component during driving, and the specific time period includes the current moment at which the driver is determined to be in an angry emotional state. When the driving recording component is configured with a front camera and a microphone, the first driving record segment can also include in-vehicle video of the same time period. When the target emotion is implemented as an angry emotion, the first tag may be associated with the angry emotion; that is, after the first tag is added to the first driving record segment, the user can directly search for the first tag, or directly locate the position of the first driving record segment in the recorded video by searching for the keyword "angry". The user may set the time span of the first driving record segment, for example a span covering 5 minutes before and after the current moment at which the angry emotion is determined.
In an optional mode, the step of judging whether the driver has the target emotion at the current moment based on the real-time audio and video proceeds as follows: first, the driver's voice is identified from the real-time audio and video; then, when the decibel level of the driver's voice, or a keyword contained in it, meets a preset condition, the driver is judged to have the target emotion at the current moment.
Besides judging an angry emotion from images, the judgment can also be based on sound analysis. For example, the driver may be judged to be in an angry emotional state when the decibel swing of the driver's voice within 2 seconds exceeds the normal swing by more than 30%; or keywords expressing anger may be extracted from the sound included in the real-time audio and video, from which the driver is judged to be in an angry state; or the frequency of the driver's voice may be analysed to make the same judgment. Although the embodiments of the present invention exemplify various technical means for determining an angry emotion, the present invention is not limited to any specific means of doing so.
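The sound-based judgement can be sketched in the same spirit. The 2-second window and 30% margin come from the text; the sampling rate, the keyword list, and the pre-recognised transcript are assumptions made for illustration:

```python
def anger_by_sound(db_samples, normal_swing, transcript,
                   keywords=("idiot", "get out of the way"), fps=10):
    """Judge anger from loudness swing or from keywords in recognised speech.

    `db_samples` is a chronological list of decibel readings at `fps` Hz;
    `normal_swing` is the driver's normal decibel variation amplitude;
    `transcript` is the speech-to-text output for the same interval.
    """
    window = 2 * fps                    # 2-second sliding window
    for i in range(len(db_samples) - window + 1):
        chunk = db_samples[i:i + window]
        swing = max(chunk) - min(chunk)
        if swing > 1.3 * normal_swing:  # more than 30% above normal swing
            return True
    # Fall back to anger-expressing keywords in the recognised speech
    return any(k in transcript.lower() for k in keywords)
```

A real system would obtain `transcript` from a speech-recognition engine and might also inspect pitch or spectral features, which this sketch omits.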
In the implementation of this driving record control method, the facial expressions and body movements of the driver while the vehicle is driving can be obtained by capturing images of the driver; the driver's voice characteristics and speech content can be obtained by capturing the driver's voice; and by analysing the facial expressions, body movements, voice characteristics and speech content, the target emotion of the driver can be determined.
Fig. 4 shows a flow chart of a fourth embodiment of the driving record control method of the present invention, which may be performed by a driving record control device or by the vehicle. As shown in fig. 4, the method comprises the following steps:
Step 410: after the vehicle is started, acquire the real-time audio and video of the driver in the vehicle at the current moment.
The real-time audio and video includes video and sound at the current moment. After the vehicle is started, the driving recording component is typically configured to begin recording exterior video, which may cover the road conditions in front of and around the vehicle. The real-time audio and video is in-vehicle material and may include the driver, any front-seat passenger and the cockpit environment at the current moment. If the driving recording component is equipped with a front-facing camera and a microphone, the real-time video including the driver at the current moment can be acquired through the driving recording component itself. The driving record video is typically exterior video, but may also include in-vehicle video.
Step 420: judge, based on the real-time audio and video, whether the driver has a target emotion at the current moment.
After the real-time audio and video of the driver at the current moment are obtained, the images contained in it are analysed, and/or the sound contained in it is analysed, to judge whether the driver is in an angry emotional state at the current moment. For example, the images in the real-time audio and video may be analysed, and an angry emotion judged by comparing the eyebrow spacing and eyebrow length, the area of both eyes, the nostril diameter or the nose bridge length against the recorded normal-state data; or the sound in the real-time audio and video may be analysed, and the driver's anger judged from the decibel swing of the sound or from keywords detected in it; or an angry emotion may be judged from multi-dimensional indicators such as sound decibel characteristics, keyword characteristics, body movement characteristics and expression characteristics.
Step 430: if the judgment result is yes, add a first tag associating the target emotion to a first driving record segment, wherein the time span of the first driving record segment includes the current moment.
The first driving record segment is the portion of the exterior video, recorded by the driving recording component while the vehicle is driving, that covers a specific time period, where the specific time period includes the current moment at which the driver is determined to be in an angry emotional state. When the driving recording component is equipped with a front-facing camera and a microphone, the first driving record segment may also include in-vehicle video for that time period. When the target emotion is specifically an angry emotion, the first tag may be associated with anger; that is, once the first tag has been added, the user can locate the first driving record segment in the recorded video either by searching for the tag directly or by searching for the keyword "anger". The user may set the time span of the first driving record segment, for example a 5-minute span centred on the current moment at which the angry emotion is determined.
Step 440: generate a first in-vehicle audio-video segment for corroborating the first driving record segment, wherein the time span of the first in-vehicle audio-video segment includes the current moment, and its duration is the same as that of the first driving record segment.
The first in-vehicle audio-video segment is in-vehicle video of the user during the period of angry emotion. Its duration can be the same as that of the first driving record segment, or longer or shorter; but its time span necessarily includes the current moment at which the user's anger is determined. The first driving record segment and the first in-vehicle audio-video segment can corroborate each other, allowing the conditions inside and outside the vehicle during the user's road rage to be analysed more comprehensively and providing evidence data for the investigation of any later accident. The first in-vehicle audio-video segment can be acquired through the front-facing camera and microphone of the driving recording component, or through a vehicle-mounted camera and microphone installed in the cab.
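The pairing of exterior segment, interior clip and tag described in Steps 430-440 can be sketched as follows. This is an illustrative data model only: the dictionary fields are assumptions, while the "Video 1-date-road-anger" tag pattern follows the display format the text attributes to fig. 7A, with "XX road" as the placeholder location from that figure.

```python
from datetime import datetime, timedelta

def make_tagged_segments(anger_time, span_minutes=5, location="XX road"):
    """Derive the exterior segment, a same-length interior clip, and the tag,
    all centred on the moment the angry emotion was determined."""
    half = timedelta(minutes=span_minutes)
    start, end = anger_time - half, anger_time + half
    tag = f"Video 1-{anger_time:%Y/%m/%d}-{location}-anger"
    exterior = {"start": start, "end": end, "source": "exterior", "tag": tag}
    # The interior clip corroborates the exterior one: same span, same tag.
    interior = {"start": start, "end": end, "source": "interior", "tag": tag}
    return exterior, interior
```

Giving both clips the same tag is what lets a later keyword search for "anger" surface the matched pair together.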
Step 450: store the first driving record segment and the first in-vehicle audio-video segment locally.
After the first tag is added to the first driving record segment, the segment can be stored locally, whether in the storage component of the driving recording component, in a storage component of the vehicle, or in a companion mobile phone APP, so that the first driving record segment is not overwritten when the driving recording component loop-records video.
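One plausible way to realise this overwrite protection is to copy the tagged clip out of the loop-recording directory into a separate local album, so the circular recorder never touches it. The paths and the copy-based strategy here are assumptions for illustration, not the patented mechanism:

```python
import shutil
from pathlib import Path

def protect_clip(clip_path, album_dir):
    """Copy a tagged clip into a protected album outside the loop folder."""
    album = Path(album_dir)
    album.mkdir(parents=True, exist_ok=True)
    dest = album / Path(clip_path).name
    shutil.copy2(clip_path, dest)   # preserves timestamps as well as content
    return dest
```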
The first driving record segment may be stored in a local album or a local folder. The first in-vehicle audio-video segment corroborating it may also be stored in the same local album or folder, or in a different one. As shown in fig. 7A, the first tag is displayed as: Video 1-2022/02/14-XX road-anger, and the second tag is displayed as: Video 1-2022/02/14-XX road-anger. The user interface for the first and second tags can be displayed through the driving recording component, the vehicle's central control screen or a mobile phone screen. The user interface may further include the duration of the first driving record segment corresponding to the first tag, playback controls, an animated preview of the video segment and other content; the animated preview highlights the image of the user at the moment of angry emotion, so as to emphasise the anger attribute of the first driving record segment corresponding to the first tag. By contrast, fig. 7B shows driving record segments generated by the driving recording component upon vehicle collision and emergency braking; their tags are not associated with any emotion, and the user cannot retrieve the two tags displayed in that interface, or their corresponding segments, by emotional characteristics.
In the implementation of this driving record control method, constructing the first in-vehicle audio-video segment provides mutual corroboration with the first driving record segment and supplements the accident investigation evidence data; saving the first driving record segment and the first in-vehicle audio-video segment in a local album prevents the video files from being overwritten by the loop recording of the driving recording component.
Fig. 5 shows a flow chart of a fifth embodiment of the driving record control method of the present invention, which may be performed by a driving record control device or by the vehicle. As shown in fig. 5, the method comprises the following steps:
Step 510: after the vehicle is started, acquire the real-time audio and video of the driver in the vehicle at the current moment.
The real-time audio and video includes video and sound at the current moment. After the vehicle is started, the driving recording component is typically configured to begin recording exterior video, which may cover the road conditions in front of and around the vehicle. The real-time audio and video is in-vehicle material and may include the driver, any front-seat passenger and the cockpit environment at the current moment. If the driving recording component is equipped with a front-facing camera and a microphone, the real-time video including the driver at the current moment can be acquired through the driving recording component itself. The driving record video is typically exterior video, but may also include in-vehicle video.
Step 520: judge, based on the real-time audio and video, whether the driver has a target emotion at the current moment.
After the real-time audio and video of the driver at the current moment are obtained, the images contained in it are analysed, and/or the sound contained in it is analysed, to judge whether the driver is in an angry emotional state at the current moment. For example, the images in the real-time audio and video may be analysed, and an angry emotion judged by comparing the eyebrow spacing and eyebrow length, the area of both eyes, the nostril diameter or the nose bridge length against the recorded normal-state data; or the sound in the real-time audio and video may be analysed, and the driver's anger judged from the decibel swing of the sound or from keywords detected in it; or an angry emotion may be judged from multi-dimensional indicators such as sound decibel characteristics, keyword characteristics, body movement characteristics and expression characteristics.
Step 530: if the judgment result is yes, send to the driving recording component a first signaling for tagging the first driving record segment; the first signaling causes the driving recording component to determine a span parameter for the duration of the first driving record segment, so that the first driving record segment can be located in the recorded video of the driving recording component based on the span parameter and the current moment.
When the driver is judged, based on the in-vehicle audio and video, to be in an angry emotional state at the current moment, the driving recording component is controlled to generate the first driving record segment: the in-vehicle system can be triggered to send the first signaling to the driving recording component, and the first signaling can be used to control the driving recording component to generate the first driving record segment.
After receiving the first signaling, the driving recording component determines the duration of the first driving record segment according to the span parameter preset by the user. For example, when the user sets the span parameter to 5 minutes, the driving recording component takes as the first driving record segment the video content within a 5-minute window centred on the current moment at which the user is determined to be angry. The span parameter can be chosen according to actual needs, and the present invention does not specifically limit its value.
Once the current moment at which the user's anger is judged and the span parameter can both be located in the recorded video of the driving recording component, the duration of the first driving record segment and its position in the recorded video are determined, and the first driving record segment can be generated.
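The way the span parameter and the detection moment jointly fix the segment's position can be shown in a few lines. This is an illustrative sketch: the offset-in-seconds representation and the clamping to the recorded range are assumptions, not taken from the text.

```python
def segment_bounds(recording_len_s, anger_offset_s, span_s=300):
    """Return (start, end) offsets of the first driving record segment,
    centred on the detection moment and clamped to the recorded range.

    recording_len_s -- total length of the recorder's video, in seconds
    anger_offset_s  -- offset of the anger-detection moment into the video
    span_s          -- user-set span parameter (default 5 minutes)
    """
    half = span_s / 2
    start = max(0, anger_offset_s - half)
    end = min(recording_len_s, anger_offset_s + half)
    return start, end
```

Clamping matters at the edges: if anger is detected shortly after recording begins, the segment simply starts at the beginning of the recording rather than before it.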
Step 540: add a first tag associating the target emotion to the first driving record segment, wherein the time span of the first driving record segment includes the current moment.
A first tag associated with the angry emotion is added to the first driving record segment after it is generated; the first tag may include the current moment, location information and a target-emotion keyword. The first driving record segment is the portion of the exterior video, recorded by the driving recording component while the vehicle is driving, that covers a specific time period, where the specific time period includes the current moment at which the driver is determined to be in an angry emotional state. When the driving recording component is equipped with a front-facing camera and a microphone, the first driving record segment may also include in-vehicle video for that time period. When the target emotion is specifically an angry emotion, the first tag may be associated with anger; that is, once the first tag has been added, the user can locate the first driving record segment in the recorded video either by searching for the tag directly or by searching for the keyword "anger". The user may set the time span of the first driving record segment, for example a 5-minute span centred on the current moment at which the angry emotion is determined.
In the implementation of this driving record control method, constructing the first signaling allows the driving recording component to be controlled to generate the first driving record segment in good time; constructing the span parameter determines both the duration of the first driving record segment and its position in the recorded video.
Fig. 6 shows a flow chart of a sixth embodiment of the driving record control method of the present invention, which may be performed by a driving record control device or by the vehicle. As shown in fig. 6, the method comprises the following steps:
Step 610: after the vehicle is started, acquire the real-time audio and video of the driver in the vehicle at the current moment.
The real-time audio and video includes video and sound at the current moment. After the vehicle is started, the driving recording component is typically configured to begin recording exterior video, which may cover the road conditions in front of and around the vehicle. The real-time audio and video is in-vehicle material and may include the driver, any front-seat passenger and the cockpit environment at the current moment. If the driving recording component is equipped with a front-facing camera and a microphone, the real-time video including the driver at the current moment can be acquired through the driving recording component itself. The driving record video is typically exterior video, but may also include in-vehicle video.
Step 620: judge, based on the real-time audio and video, whether the driver has a target emotion at the current moment.
After the real-time audio and video of the driver at the current moment are acquired, the images contained in it are analysed, and/or the sound contained in it is analysed, to judge whether the driver is in an angry emotional state at the current moment. For example, the images in the real-time audio and video may be analysed, and an angry emotion judged by comparing the eyebrow spacing and eyebrow length, the area of both eyes, the nostril diameter or the nose bridge length against the recorded normal-state data; or the sound in the real-time audio and video may be analysed, and the driver's anger judged from the decibel swing of the sound or from keywords detected in it; or an angry emotion may be judged from multi-dimensional indicators such as sound decibel characteristics, keyword characteristics, body movement characteristics and expression characteristics.
Step 630: if the judgment result is yes, add a first tag associating the target emotion to a first driving record segment, wherein the time span of the first driving record segment includes the current moment.
The first driving record segment is the portion of the exterior video, recorded by the driving recording component while the vehicle is driving, that covers a specific time period, where the specific time period includes the current moment at which the driver is determined to be in an angry emotional state. When the driving recording component is equipped with a front-facing camera and a microphone, the first driving record segment may also include in-vehicle video for that time period. When the target emotion is specifically an angry emotion, the first tag may be associated with anger; that is, once the first tag has been added, the user can locate the first driving record segment in the recorded video either by searching for the tag directly or by searching for the keyword "anger". The user may set the time span of the first driving record segment, for example a 5-minute span centred on the current moment at which the angry emotion is determined.
Step 640: receive a driving record retrieval instruction input by the user.
After the driving recording component has generated a number of driving record segments, the user can review and retrieve the segments generated during driving. The driving record retrieval instruction can be issued by voice command or by entering retrieval keywords on an interactive interface. The terminal implementing retrieval and display can be the driving recording component itself or a mobile phone APP paired with it. The user can also retrieve and display driving record segments through the vehicle's central control screen, which can obtain the segment data from the driving recording component.
Step 650: when a keyword included in the driving record retrieval instruction matches the target emotion, control the user interface to display the first tag and the corresponding first driving record segment.
After the driving record retrieval instruction input by the user is received, the keywords it contains are compared; if they match the angry emotion, the user interface is controlled to display all tags related to anger together with their corresponding driving record segments. When there is no match, the user interface can be controlled to display a prompt informing the user that no corresponding resource was found.
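The match-or-prompt behaviour of Steps 640-650 can be sketched as a simple case-insensitive substring search over the stored tags. The tag strings follow the "video-date-road-emotion" pattern the text attributes to fig. 7A; the not-found prompt wording is an assumption.

```python
def retrieve(tags, keyword):
    """Return all tags matching the retrieval keyword, or a not-found prompt."""
    hits = [t for t in tags if keyword.lower() in t.lower()]
    return hits if hits else ["No matching driving record found"]
```

Because emotion keywords are embedded in the tag itself, searching for "anger" surfaces exactly the emotion-tagged segments, while collision or emergency-braking segments (whose tags carry no emotion) are excluded, as the text notes for fig. 7B.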
For example, when the user issues the voice command "anger" to the in-vehicle central control screen, the screen will display a user interface as shown in fig. 7A. The user interface may include a tag display area, a video playback area and a control area.
The tag display area may include a first tag and a second tag; the first tag may be displayed as: Video 1-2022/02/14-XX road-anger, and the second tag may be displayed as: Video 1-2022/02/14-XX road-anger.
The video playback area can display the cover image of the currently selected driving record segment or an animated preview of it; the animated preview can highlight the image of the user at the moment of angry emotion, so as to emphasise the anger attribute of the driving record segment.
The control area may include playback controls for the selected driving record segment, such as play, fast forward, rewind, a timeline and settings.
The first driving record segment can also be displayed and played through the driving recording component, the vehicle system or a mobile phone APP, and can be shared with a third party or a third-party application.
In the implementation of this driving record control method, constructing the driving record retrieval instruction enables retrieval of driving record segments; by matching keywords of the retrieval instruction against the target emotion, all tags related to the target emotion can be displayed on the user interface, so that the user can retrieve driving record segments by target emotion.
Fig. 8 shows a schematic structural diagram of an embodiment of the driving record control device of the present invention. As shown in fig. 8, the apparatus 800 includes: the system comprises a video module 810, a central control module 820 and an execution module 830.
The video module is used for acquiring the real-time video of a driver in the vehicle at the current moment after the vehicle is started;
the central control module is used for judging whether the driver has a target emotion at the current moment or not based on the real-time audio and video;
and the execution module is used for adding a first tag used for associating the target emotion to a first driving recording segment if the judgment result is yes, wherein the first driving recording segment comprises the current moment.
In an optional mode, the central control module is used for acquiring the position information of the vehicle at the current moment;
the execution module is used for generating the first label comprising the current moment, the position information and the target emotion; adding the first tag to the first driving record segment; wherein the time span of the first driving recording segment includes the current time of day.
In an optional mode, the video and audio module is used for identifying a driver image included in the real-time video and audio;
and the central control module is used for judging that the driver has the target emotion at the current moment when the facial expression or the limb action included in the driver image meets a preset condition.
In an optional mode, the audio-video module is used for identifying the driver voice included by the real-time audio-video;
and the central control module is used for judging that the driver has the target emotion at the current moment when the decibel of the sound of the driver or the keyword contained by the sound meets a preset condition.
In an optional manner, the audio-video module is configured to generate a first in-vehicle audio-video segment for corroborating the first driving record segment, where the time span of the first in-vehicle audio-video segment includes the current moment, and the duration of the first in-vehicle audio-video segment is the same as that of the first driving record segment;
the execution module is used for storing the first driving recording fragment and the first in-vehicle video and audio fragment locally.
In an optional mode, the execution module is configured to send to the driving recording component a first signaling for tagging the first driving record segment; the first signaling causes the driving recording component to determine a span parameter for the duration of the first driving record segment, so as to locate the first driving record segment in the recorded video of the driving recording component based on the span parameter and the current moment.
In an optional mode, the central control module is used for receiving a driving record retrieval instruction input by a user;
and the central control module is used for controlling a user interface to display the first label and the corresponding first driving record fragment when the keyword included in the driving record retrieval instruction is matched with the target emotion.
In an alternative approach, the central control module treats the target emotion as an angry emotion.
By applying the technical scheme of the present invention: facial expressions and body movements can be obtained by capturing images of the driver; voice characteristics and speech content can be obtained by capturing the driver's voice; and the driver's target emotion can be recognised by analysing the facial expressions, body movements, voice characteristics and speech content. By constructing the first tag, the first driving record segment corresponding to it can be quickly located and reviewed in the recorded video; by constructing a first tag that includes the current moment, location information and the target emotion, the video content can be understood from the tag alone. By constructing the first in-vehicle audio-video segment, the first driving record segment gains corroborating evidence and the accident investigation evidence data is supplemented; storing the driving record segments in a local album prevents them from being overwritten. By constructing the first signaling, the first driving record segment can be generated in good time; by constructing the span parameter, its duration can be determined. By matching keywords, all tags related to the target emotion can be displayed: the first driving record segment can be generated promptly based on the driver's target emotion, and the first tag associated with that emotion, together with its corresponding segment, can be quickly located and displayed through a keyword retrieval that includes the target emotion, achieving the beneficial effect of quickly viewing the driving record video of the moments when the driver was in the target emotion.
Fig. 9 is a schematic structural diagram of an embodiment of an automobile according to the present invention, and the embodiment of the present invention does not limit the concrete implementation of the automobile.
As shown in fig. 9, the automobile may include: a tachograph means, a processor (processor)902, a Communications Interface (Communications Interface)904, a memory (memory)906, and a Communications bus 908.
Wherein: processor 902, communication interface 904, and memory 906 communicate with one another via a communication bus 908. A communication interface 904 for communicating with network elements of other devices, such as clients or other servers. The driving recording component is used for acquiring a driving recording video; the processor 902 is configured to execute the program 910, and may specifically execute the relevant steps in the embodiment of the driving record control method.
In particular, program 910 may include program code comprising computer-executable instructions.
The processor 902 may be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement an embodiment of the invention. The one or more processors included in the vehicle may be the same type of processor, such as one or more CPUs; or may be different types of processors such as one or more CPUs and one or more ASICs.
A memory 906 for storing a program 910. The memory 906 may include high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 910 may be specifically invoked by the processor 902 to cause the vehicle to perform the following operations:
after the vehicle is started, acquiring real-time video and audio of a driver in the vehicle at the current moment;
judging whether the driver has a target emotion at the current moment or not based on the real-time audio and video;
and if so, adding a first tag for associating the target emotion to a first driving record segment, wherein the first driving record segment comprises the current moment.
In an alternative manner, adding a first tag for associating the target emotion to a first driving recording segment, further comprises:
acquiring the position information of the vehicle at the current moment;
generating the first label comprising the current moment, the position information and the target emotion;
adding the first tag to the first driving record segment; wherein the time span of the first driving recording segment includes the current time of day.
In an optional manner, the real-time video includes an image of a driver, and the determining that the driver has the target emotion at the current time based on the real-time video further includes:
identifying a driver image included in the real-time audio and video;
and when the facial expression or the limb action included in the image of the driver meets a preset condition, judging that the driver has the target emotion at the current moment.
In an optional manner, the real-time audio and video includes a driver's voice, and the determining that the driver has the target emotion at the current time based on the real-time audio and video further includes:
identifying driver sounds included in the real-time audio and video;
and when the decibel of the sound of the driver or the keywords contained in the sound of the driver accord with preset conditions, judging that the driver has the target emotion at the current moment.
In an optional manner, after adding a first tag for associating the target emotion to a first driving recording segment, the method further comprises:
generating a first in-vehicle audio-video segment for corroborating the first driving record segment, wherein the time span of the first in-vehicle audio-video segment includes the current moment, and the duration of the first in-vehicle audio-video segment is the same as that of the first driving record segment;
and storing the first driving recording fragment and the first in-vehicle video and audio fragment locally.
In an optional manner, if the determination result is yes, the method further includes:
sending a first signaling for adding a label to the first driving record segment to a driving record component;
the first signaling is used for enabling the driving recording component to determine a span parameter of the duration of the first driving recording segment, so that the first driving recording segment is determined in the recorded video of the driving recording component based on the span parameter and the current moment.
In an optional manner, after the adding of the first tag for associating the target emotion to the first driving record segment, the method further comprises:
receiving a driving record retrieval instruction input by a user;
and when a keyword included in the driving record retrieval instruction matches the target emotion, controlling a user interface to display the first tag and the corresponding first driving record segment.
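The retrieval step can be sketched as matching query keywords against the emotion stored in each tag. The record layout (`tag`/`emotion`/`segment` fields) is an assumed shape for illustration.

```python
def search_records(records: list, query: str) -> list:
    """Return every tagged entry whose tag emotion matches a keyword
    in the user's retrieval instruction (field names assumed)."""
    keywords = set(query.lower().split())
    return [r for r in records if r["tag"]["emotion"].lower() in keywords]
```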
In an optional manner, the target emotion is an angry emotion.
By applying the technical scheme of the invention: acquiring the driver's image yields the facial expression and body actions, and acquiring the driver's voice yields its acoustic features and speech content; analyzing the facial expression, body actions, acoustic features and speech content allows the driver's target emotion to be recognized; the first tag allows the corresponding first driving record segment to be quickly located and reviewed in the recorded video; because the first tag includes the current moment, the location information and the target emotion, the video content can be understood from the tag alone; the first in-vehicle audio and video segment corroborates the first driving record segment and enriches accident-investigation evidence; storing the segments in a local album prevents them from being overwritten; the first signaling allows the first driving record segment to be generated in time; the span parameter determines the segment's duration; and keyword matching displays every tag associated with the target emotion. In this way, the first driving record segment is generated promptly based on the driver's target emotion, the first tag and its corresponding segment can be quickly located and displayed by a keyword search including the target emotion, and the beneficial effect of quickly viewing the driving record video captured while the driver was in the target emotion is achieved.
An embodiment of the present invention provides a computer-readable storage medium storing at least one executable instruction which, when run on a driving record control device/automobile, causes the driving record control device/automobile to execute the driving record control method of any of the method embodiments described above.
The executable instructions may specifically be adapted to cause the driving record control device/vehicle to perform the following operations:
after the vehicle is started, acquiring real-time audio and video of the driver in the vehicle at the current moment;
judging, based on the real-time audio and video, whether the driver has a target emotion at the current moment;
and if so, adding a first tag for associating the target emotion to a first driving record segment, wherein the time span of the first driving record segment includes the current moment.
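The three operations above form a simple detect-and-tag loop. The sketch below is an assumed decomposition: `detect` stands in for whichever emotion-recognition subsystem is used, and the loop emits (timestamp, emotion) pairs from which first tags would be built.

```python
def control_loop(samples, detect):
    """Overall flow after vehicle start-up: for each timestamped
    audio/video sample, run emotion detection; when a target emotion
    is detected, emit (timestamp, emotion) so a first tag can be
    added to the driving record segment containing that moment."""
    events = []
    for timestamp, sample in samples:
        emotion = detect(sample)      # returns an emotion label or None
        if emotion is not None:
            events.append((timestamp, emotion))
    return events
```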
In an optional manner, the adding of the first tag for associating the target emotion to the first driving record segment further comprises:
acquiring the position information of the vehicle at the current moment;
generating the first tag including the current moment, the position information, and the target emotion;
and adding the first tag to the first driving record segment, wherein the time span of the first driving record segment includes the current moment.
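A first tag carrying the three fields named above can be sketched as a small immutable record. The field types and the concrete example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FirstTag:
    timestamp: str    # the current moment
    location: tuple   # (lat, lon) from the vehicle's positioning system
    emotion: str      # the target emotion, e.g. "angry"

# Example tag; the concrete values are illustrative only.
tag = FirstTag("2022-05-17T10:32:00", (31.23, 121.47), "angry")
```

Because the tag bundles time, place, and emotion, the tagged video's content can be judged from the tag alone, without playback.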
In an optional manner, the real-time audio and video includes an image of the driver, and the determining, based on the real-time audio and video, whether the driver has the target emotion at the current moment further includes:
identifying the driver image included in the real-time audio and video;
and when a facial expression or a body action in the driver image meets a preset condition, determining that the driver has the target emotion at the current moment.
In an optional manner, the real-time audio and video includes the driver's voice, and the determining, based on the real-time audio and video, whether the driver has the target emotion at the current moment further includes:
identifying the driver's voice included in the real-time audio and video;
and when the decibel level of the driver's voice, or a keyword contained in it, meets a preset condition, determining that the driver has the target emotion at the current moment.
In an optional manner, after the adding of the first tag for associating the target emotion to the first driving record segment, the method further comprises:
generating a first in-vehicle audio and video segment for corroborating the first driving record segment, wherein the time span of the first in-vehicle audio and video segment includes the current moment and its duration is the same as that of the first driving record segment;
and storing the first driving record segment and the first in-vehicle audio and video segment locally.
In an optional manner, if the determination result is yes, the method further comprises:
sending, to a driving record component, a first signaling for adding a tag to the first driving record segment;
the first signaling causes the driving record component to determine a span parameter for the duration of the first driving record segment, so that the first driving record segment can be located in the component's recorded video based on the span parameter and the current moment.
In an optional manner, after the adding of the first tag for associating the target emotion to the first driving record segment, the method further comprises:
receiving a driving record retrieval instruction input by a user;
and when a keyword included in the driving record retrieval instruction matches the target emotion, controlling a user interface to display the first tag and the corresponding first driving record segment.
In an optional manner, the target emotion is an angry emotion.
By applying the technical scheme of the invention: acquiring the driver's image yields the facial expression and body actions, and acquiring the driver's voice yields its acoustic features and speech content; analyzing the facial expression, body actions, acoustic features and speech content allows the driver's target emotion to be recognized; the first tag allows the corresponding first driving record segment to be quickly located and reviewed in the recorded video; because the first tag includes the current moment, the location information and the target emotion, the video content can be understood from the tag alone; the first in-vehicle audio and video segment corroborates the first driving record segment and enriches accident-investigation evidence; storing the segments in a local album prevents them from being overwritten; the first signaling allows the first driving record segment to be generated in time; the span parameter determines the segment's duration; and keyword matching displays every tag associated with the target emotion. In this way, the first driving record segment is generated promptly based on the driver's target emotion, the first tag and its corresponding segment can be quickly located and displayed by a keyword search including the target emotion, and the beneficial effect of quickly viewing the driving record video captured while the driver was in the target emotion is achieved.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. In addition, embodiments of the present invention are not directed to any particular programming language.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. Similarly, in the above description of example embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and may furthermore be divided into a plurality of sub-modules or sub-units or sub-components. Such features and processes may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limited to the order of execution described, unless otherwise specified.
Claims (11)
1. A method for controlling a driving record, the method comprising:
after a vehicle is started, acquiring real-time video and audio of a driver in the vehicle at the current moment;
judging whether the driver has a target emotion at the current moment or not based on the real-time audio and video;
and if so, adding a first tag for associating the target emotion to a first driving record segment, wherein the time span of the first driving record segment includes the current moment.
2. The driving record control method according to claim 1, wherein the adding of the first tag for associating the target emotion to the first driving record segment further comprises:
acquiring the position information of the vehicle at the current moment;
generating the first tag including the current moment, the position information, and the target emotion;
and adding the first tag to the first driving record segment, wherein the time span of the first driving record segment is centered on the current moment.
3. The driving record control method according to claim 1, wherein the real-time audio and video includes an image of the driver, and the determining, based on the real-time audio and video, whether the driver has the target emotion at the current moment further includes:
identifying the driver image included in the real-time audio and video;
and when a facial expression or a body action in the driver image meets a preset condition, determining that the driver has the target emotion at the current moment.
4. The driving record control method according to claim 1 or 3, wherein the real-time audio and video includes the driver's voice, and the determining, based on the real-time audio and video, whether the driver has the target emotion at the current moment further includes:
identifying the driver's voice included in the real-time audio and video;
and when the decibel level of the driver's voice, or a keyword contained in it, meets a preset condition, determining that the driver has the target emotion at the current moment.
5. The driving record control method according to claim 1, wherein after the adding of the first tag for associating the target emotion to the first driving record segment, the method further comprises:
generating a first in-vehicle audio and video segment for corroborating the first driving record segment, wherein the time span of the first in-vehicle audio and video segment includes the current moment and its duration is the same as that of the first driving record segment;
and storing the first driving record segment and the first in-vehicle audio and video segment locally.
6. The driving record control method according to claim 1, wherein after the determining that the driver has the target emotion at the current moment, the method further comprises:
sending, to a driving record component, a first signaling for adding a tag to the first driving record segment; the first signaling causes the driving record component to determine a span parameter for the duration of the first driving record segment, so that the first driving record segment can be located in the component's recorded video based on the span parameter and the current moment.
7. The driving record control method according to claim 1, wherein after the adding of the first tag for associating the target emotion to the first driving record segment, the method further comprises:
receiving a driving record retrieval instruction input by a user;
and when a keyword included in the driving record retrieval instruction matches the target emotion, controlling a user interface to display the first tag and the corresponding first driving record segment.
8. The driving record control method according to any one of claims 1 to 7, wherein the target emotion is an angry emotion.
9. A driving record control apparatus, characterized in that the apparatus comprises:
a video module, used for acquiring the real-time audio and video of the driver in the vehicle at the current moment after the vehicle is started;
a central control module, used for judging, based on the real-time audio and video, whether the driver has a target emotion at the current moment;
and an execution module, used for adding, if the determination result is yes, a first tag for associating the target emotion to a first driving record segment, wherein the time span of the first driving record segment includes the current moment.
10. An automobile, comprising: the system comprises a driving recording component, a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the driving recording component is used for acquiring a driving recording video;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation of the driving record control method according to any one of claims 1-8.
11. A computer-readable storage medium, characterized in that the storage medium stores at least one executable instruction which, when run on a driving record control device/automobile, causes the driving record control device/automobile to perform the operations of the driving record control method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210538505.8A CN114743290A (en) | 2022-05-17 | 2022-05-17 | Driving record control method and device and automobile |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114743290A true CN114743290A (en) | 2022-07-12 |
Family
ID=82287494
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210538505.8A Pending CN114743290A (en) | 2022-05-17 | 2022-05-17 | Driving record control method and device and automobile |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114743290A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115695684A (en) * | 2022-09-28 | 2023-02-03 | 海尔优家智能科技(北京)有限公司 | Multimedia data editing method and device, storage medium and electronic device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106463048A (en) * | 2014-06-23 | 2017-02-22 | 丰田自动车株式会社 | On-vehicle emergency notification device |
CN110692093A (en) * | 2017-05-31 | 2020-01-14 | 北京嘀嘀无限科技发展有限公司 | Apparatus and method for recognizing driving behavior based on motion data |
CN110874874A (en) * | 2018-08-30 | 2020-03-10 | 上海卓酷科技有限公司 | Automatic driving data acquisition system and working method thereof |
CN112078588A (en) * | 2020-08-11 | 2020-12-15 | 大众问问(北京)信息科技有限公司 | Vehicle control method and device and electronic equipment |
CN113287298A (en) * | 2019-01-30 | 2021-08-20 | Jvc建伍株式会社 | Image processing device, image processing method, and image processing program |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20220712 |