
CN111866382A - Method for acquiring image, electronic device and computer readable storage medium - Google Patents


Info

Publication number
CN111866382A
Authority
CN
China
Prior art keywords
vehicle
person
camera
emotion
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010664822.5A
Other languages
Chinese (zh)
Inventor
雷亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qinggan Intelligent Technology Co Ltd
Original Assignee
Shanghai Qinggan Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qinggan Intelligent Technology Co Ltd filed Critical Shanghai Qinggan Intelligent Technology Co Ltd
Priority to CN202010664822.5A
Publication of CN111866382A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a method for acquiring images, an electronic device and a computer-readable storage medium. The method comprises: obtaining in-vehicle scene information; determining whether the in-vehicle scene information has a feature of generating a recording instruction; and, in response to determining that it does, generating the recording instruction to control a camera to acquire images. In this manner, scenes in the vehicle can be recorded efficiently and accurately, with a good user experience.

Description

Method for acquiring image, electronic device and computer readable storage medium
Technical Field
The present invention relates to the field of automatic shooting, and in particular, to a method, an electronic device, and a computer-readable storage medium for acquiring an image.
Background
Nowadays more and more families own private cars, and driving has become part of everyday life. At the same time, recording and sharing one's life anytime and anywhere has become a trend of the information-based, networked society. The prior art, however, makes it inconvenient to record scenes inside a vehicle at any moment. For example, a driver cannot conveniently photograph a scene or a moment in the vehicle while driving, and when other occupants take the picture, the shooting angle, viewing range and shooting effect are constrained by the limited space in the vehicle and are rarely ideal. In addition, the in-vehicle scenes worth recording are dynamic and fleeting, and appreciable reaction time passes between a person realizing that a scene should be captured and the camera actually being turned on, so the scenes people want to record are often hard to capture efficiently and accurately.
Disclosure of Invention
The invention aims to provide a method for acquiring images, an electronic device and a computer-readable storage medium that can record scenes in a vehicle automatically, efficiently and accurately, with a good user experience.
In order to solve the above technical problem, the present application provides a method for acquiring an image, including:
obtaining scene information in the vehicle;
determining whether the scene information in the vehicle has the characteristic of generating a recording instruction; and
in response to determining that the in-vehicle scene information has the feature of generating a recording instruction, generating the recording instruction to control a camera to acquire an image.
The in-vehicle scene information comprises one or more of images of the people in the vehicle, voice information, and multimedia content playing in the vehicle.
Wherein the determining whether the scene information in the vehicle has the characteristic of generating a recording instruction comprises:
determining whether the person in the vehicle is a preset person or not;
determining whether the emotion of a person in the vehicle is a preset emotion type; and
determining that the in-vehicle scene information has the feature of generating a recording instruction in response to determining that the person in the vehicle is the preset person and the emotion of the person in the vehicle is the preset emotion type.
Wherein the determining whether the person in the vehicle is the preset person comprises at least one of the following:
determining whether the person in the vehicle is the preset person or not based on a face recognition technology;
determining whether the person in the vehicle is the preset person or not based on voiceprint recognition technology; and
determining whether the person in the vehicle is the preset person based on account login information.
Wherein, the determining whether the emotion of the person in the vehicle is the preset emotion type comprises:
extracting emotion correlation characteristics in the scene information in the vehicle;
matching the emotion type corresponding to the emotion correlation characteristics; and
when the emotion type is a preset emotion type, determining that the emotion of the person in the vehicle is the preset emotion type.
Wherein, the method further comprises:
detecting a preset operation instruction input by a person in the vehicle, wherein the preset operation instruction comprises at least one of a voice operation instruction, a function key operation instruction and a gesture operation instruction which are used for indicating the camera to collect images; and
generating a recording instruction to control the camera to acquire an image in response to detecting the preset operation instruction.
Wherein, the generating a recording instruction to control the camera to acquire images comprises:
determining cameras to be activated based on the distribution of people in the vehicle, wherein the cameras comprise one or more of a first camera mounted above a rearview mirror, second cameras mounted at different positions in the vehicle, and a third camera mounted outside the vehicle body;
selecting the frame rate and the duration of the collected image; and
generating a recording instruction for instructing the camera to be activated to acquire an image based on the selected frame rate and duration.
Wherein the determining of the camera to be activated includes at least one of:
when the people in the vehicle are distributed in the front row and the rear row at the same time, determining that the first camera is the camera to be activated;
when the people in the vehicle are distributed in the front row, determining that a second camera arranged on the front row is the camera to be activated; and
when the person in the vehicle with the preset emotion type leaves the vehicle, determining that the third camera is the camera to be activated.
Wherein, the method further comprises:
selecting the frame rate and duration of image acquisition according to at least one of emotion type, shooting position, camera type, person type and number of persons.
Wherein, the method further comprises:
classifying and collecting the automatic recording segments formed by the collected images according to preset labels, wherein the preset labels comprise one or more of shooting time, places, people and emotion types; and
storing the automatic recording segment on a vehicle-mounted terminal device and/or sending the automatic recording segment to a server.
Wherein, the method further comprises:
acquiring images and/or voices of people in the vehicle;
matching a corresponding emotion type according to the image and/or voice of the person in the vehicle; and
when the emotion type is an emotion type to be adjusted, presenting the automatic recording segment corresponding to the emotion type to be adjusted.
Wherein the sending the automatic recording segment to a server comprises at least one of:
sending the automatic recording segments to a designated contact and/or a social platform;
receiving a sharing confirmation instruction, and sending the automatic recording segment according to the sharing confirmation instruction;
automatically sending the automatic recording segment.
The present application further provides an electronic device, comprising:
at least one processing unit;
at least one memory coupled to the at least one processing unit and storing instructions to be executed by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the device to perform the method for acquiring images described above.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a machine, implements a method for acquiring images as described above.
The present application relates to a method for acquiring images, an electronic device and a computer-readable storage medium. The method comprises: obtaining in-vehicle scene information; determining whether the in-vehicle scene information has a feature of generating a recording instruction; and, in response to determining that it does, generating the recording instruction to control a camera to acquire images. In this manner, scenes in the vehicle can be recorded efficiently and accurately, with a good user experience.
The foregoing is only an overview of the technical solutions of the present application. So that the technical means of the present application may be understood more clearly and implemented in accordance with this description, and so that the above and other objects, features and advantages of the present application may be more readily apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic application environment diagram of a method for acquiring an image according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of a method for acquiring an image according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present application is provided for illustrative purposes, and other advantages and capabilities of the present application will become apparent to those skilled in the art from the present disclosure.
In the following description, reference is made to the accompanying drawings that describe several embodiments of the application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Although the terms first, second, etc. may be used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
Fig. 1 is a schematic diagram of an application environment of a method for acquiring an image according to an embodiment of the present invention. As shown in fig. 1, the system architecture of this embodiment includes a vehicle 11 and a server 12, with a network providing the medium for the communication link between them; the network may include various types of connections, such as wired and/or wireless communication links. In the method for capturing an image of this embodiment, the camera 13 of the vehicle 11 may be controlled by the server 12 to perform image capturing. It should be understood that the numbers of vehicles 11, servers 12 and cameras 13 and their mounting positions in fig. 1 are merely illustrative; there may be any number of associated vehicles 11, servers 12 and cameras 13, as the implementation requires. The camera 13 on the vehicle 11 captures images according to the recording instruction sent by the server 12; the camera 13 may have a built-in microphone, may be connected to a microphone in the vehicle, or may combine several approaches. Recognition and analysis of the in-vehicle environment can be performed at the camera 13 itself; alternatively, the camera 13 is connected to the in-vehicle head unit and the analysis and judgment are performed there, or they are performed by the server 12 connected to the head unit, or in the cloud, or by a combination of these approaches.
Fig. 2 is a schematic flow chart of a method for acquiring an image according to an embodiment of the present invention. As shown in fig. 2, a method for acquiring an image according to an embodiment of the present invention includes:
step 201: obtaining scene information in the vehicle;
step 202: determining whether the scene information in the vehicle has the characteristic of generating a recording instruction;
step 203: in response to determining that the in-vehicle scene information has the feature of generating a recording instruction, generating the recording instruction to control the camera to acquire the image.
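For illustration only, the following Python sketch shows one way steps 201 to 203 could be wired together. All names, the trigger sets and the camera.record() interface are assumptions of this sketch, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SceneInfo:
    occupants: list   # identities recognized in the vehicle (step 201 output)
    emotion: str      # emotion type matched from the scene information

PRESET_PERSONS = {"family_member_1", "family_member_2"}  # assumed whitelist
PRESET_EMOTIONS = {"happy", "surprised"}                 # assumed trigger set

def has_recording_feature(scene: SceneInfo) -> bool:
    # Step 202: the scene information has the recording feature when a
    # preset person is present and shows a preset emotion type.
    preset_person_present = any(p in PRESET_PERSONS for p in scene.occupants)
    return preset_person_present and scene.emotion in PRESET_EMOTIONS

def acquire_images(scene: SceneInfo, camera) -> None:
    # Step 203: generate the recording instruction and pass it to the camera.
    if has_recording_feature(scene):
        camera.record()  # hypothetical camera controller API
```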
In this embodiment, the scene recording camera is activated automatically according to the in-vehicle scene information. Devices such as the vehicle-mounted camera and microphone can be kept on to continuously receive in-vehicle environment information, for example collecting images of people in the vehicle, voice information or multimedia playing content. Whether the collected in-vehicle scene information has the feature of generating a recording instruction is then judged, and if so, the camera that shoots the in-vehicle scene is activated. To obtain the in-vehicle scene information and capture images, a wide-angle camera can be mounted above the rearview mirror so that its viewing range covers the whole interior. Several cameras can also be mounted at different positions and spaces in the vehicle, so that a single camera records the scene at a specific position. Of course, in other embodiments a panoramic camera may also be mounted on the exterior of the vehicle body. The user may select, or the system may automatically determine, the camera to be activated according to the recording instruction generated from the in-vehicle scene information, so as to shoot the current in-vehicle scene and keep the record complete. For example, when people are distributed in both the front row and the rear row, the wide-angle camera on the rearview mirror is determined as the camera to be activated, to ensure that images of both rows are captured; when people sit only in the front row, only images of the front row are needed, so the camera mounted at the front row is determined as the camera to be activated; and when a person with the preset emotion type leaves the vehicle, the panoramic camera mounted on the vehicle body can be determined as the camera to be activated, so that images continue to be captured after the person gets out.
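As a minimal sketch of the camera-selection rules just described (the camera labels follow the text; the fallback branch is an assumption of the sketch):

```python
def select_camera_to_activate(front_occupied: bool, rear_occupied: bool,
                              preset_person_left: bool) -> str:
    if preset_person_left:
        return "third_camera"   # panoramic camera outside the vehicle body
    if front_occupied and rear_occupied:
        return "first_camera"   # wide-angle camera above the rearview mirror
    if front_occupied:
        return "second_camera"  # camera mounted at the front row
    return "first_camera"       # assumed default; not specified in the text
```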
In this embodiment, after the in-vehicle scene information is acquired, it is determined whether the information has the feature of generating a recording instruction. First, it is determined whether a person in the vehicle is a preset person. If so, it is then determined whether the emotion of the person is a preset emotion type. If it is, then, in response to the person being the preset person and the emotion being the preset emotion type, the in-vehicle scene information is determined to have the feature of generating a recording instruction.
Specifically, when determining whether a person in the vehicle is the preset person, face recognition technology may be used. For example, a face image of the preset person is entered in advance, and the person in the vehicle is then compared against it through face recognition. Whether the person is the preset person can also be determined through voiceprint recognition. A voiceprint is the spectrum of sound waves carrying speech information, displayed with an electro-acoustic instrument. In practice, the voiceprint information of the preset person is entered into the system in advance; the collected voiceprint of the person in the vehicle then goes through speech signal processing, voiceprint feature extraction, voiceprint modeling, voiceprint comparison and decision making to identify whether the person is the preset person. Whether the person is the preset person can also be determined from account login information. For example, after entering the vehicle, the person logs in to an account, and the login information is compared with the account information of the preset person. In this way, the scene recording camera is activated according to the identity recognized from the face, the voiceprint or the account login information. For example, a policy of automatically activating the scene camera can apply when a family member is detected; otherwise the camera can only be activated actively, for instance through a button, which improves safety and privacy.
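A sketch of the "at least one of" identity check, assuming each recognition channel reports an optional boolean or an account ID (all parameter names are hypothetical):

```python
from typing import Optional

def is_preset_person(face_match: Optional[bool] = None,
                     voiceprint_match: Optional[bool] = None,
                     account_id: Optional[str] = None,
                     preset_accounts: frozenset = frozenset({"family_account"})) -> bool:
    # Identity is confirmed when any available channel matches.
    checks = []
    if face_match is not None:
        checks.append(face_match)
    if voiceprint_match is not None:
        checks.append(voiceprint_match)
    if account_id is not None:
        checks.append(account_id in preset_accounts)
    return any(checks)
```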
Whether the emotion of a person in the vehicle is a preset emotion type is determined by extracting emotion-associated features from the in-vehicle scene information, matching the emotion type corresponding to those features, and checking whether the matched emotion type is one of the preset emotion types.
The recording instruction comprises a recording instruction based on speech emotion recognition, on facial expression recognition and/or on recognition of multimedia playing content. A computer analyzes and processes signals collected from sensors to infer a person's emotional state; this behavior is called emotion recognition. It can rely on physiological signals such as respiration, heart rate and body temperature, or on emotional behavior such as facial expressions, speech emotion and posture.
A speech emotion data set is an important basis for research on speech emotion recognition. By the type of emotion description, such data sets divide into discrete emotion databases, which use discrete labels (such as happy or sad) as emotion annotations, and dimensional emotion databases, which express emotion as continuous real-valued coordinates. Taking the generation of a recording instruction through speech recognition as an example, to judge whether speech has the feature of generating a recording instruction, a voice-emotion database containing correspondences between speech emotion-associated features and emotion types can be built in advance. Emotion-associated features in the speech, such as speech rate, pitch, volume, syllable duration and pauses between syllables, are then extracted, the emotion type corresponding to those features is matched in the voice-emotion database, and when the matched emotion type is a preset emotion type, the speech is judged to have the feature of generating a recording instruction. Based on speech emotion recognition, in-vehicle emotional environments such as happiness, surprise, anger and sadness are judged, and the scene recording camera is selectively activated to record the in-vehicle scene. For example, the system may be set to record only when the emotional environment is happy. The basic principle of speech emotion recognition is: build an emotional speech library, extract the emotional features in the speech signal (such as speech rate, pitch, volume, syllable duration and pauses between syllables), and match and identify the emotion the speech represents.
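A minimal sketch of the voice-emotion matching step, assuming the database stores reference prosodic features per emotion label and matching is done by nearest Euclidean distance; every feature value below is invented for illustration:

```python
import math

# Hypothetical voice-emotion database: emotion label -> reference features
# (speech rate in syllables/s, mean pitch in Hz, volume in dB).
VOICE_EMOTION_DB = {
    "happy":   (5.5, 260.0, 68.0),
    "sad":     (3.0, 180.0, 55.0),
    "angry":   (6.0, 290.0, 75.0),
    "neutral": (4.2, 210.0, 60.0),
}

def match_voice_emotion(rate: float, pitch: float, volume: float) -> str:
    # Match the extracted emotion-associated features against the database.
    sample = (rate, pitch, volume)
    return min(VOICE_EMOTION_DB,
               key=lambda label: math.dist(sample, VOICE_EMOTION_DB[label]))
```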
Likewise, when judging whether an image has the feature of generating a recording instruction through image recognition, an image-emotion database containing correspondences between image emotion-associated features and emotion types may be built in advance. Emotion-associated features in the image, such as the mouth opening ratio, the degree of eye openness, the inclination angle of the eyebrows and lip muscle movement, are then extracted and the corresponding emotion type is matched in the image-emotion database; when the matched emotion type is a preset emotion type, the image is judged to have the feature of generating a recording instruction. Based on facial-image emotion recognition, in-vehicle emotional environments such as happiness, surprise, anger and sadness are judged, and the scene recording camera is selectively activated to record the in-vehicle scene. For example, the system may be set to record only when the emotional environment is happy. The basic principle of facial-image emotion recognition is: establish facial emotion feature parameters, perform face recognition, and match the features; different emotions are distinguished, for example, by the mouth opening ratio, the degree of eye openness, the eyebrow inclination angle and lip muscle movement. In other embodiments, speech recognition and image recognition can be combined to generate the recording instruction, improving recognition accuracy.
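In the same spirit, a sketch mapping the facial features named above to an emotion type; every threshold here is an assumption, not taken from the disclosure:

```python
def match_face_emotion(mouth_open_ratio: float, eye_openness: float,
                       brow_angle_deg: float) -> str:
    # Illustrative threshold rules over the features named in the text.
    if mouth_open_ratio > 0.5 and eye_openness > 0.8:
        return "surprised"
    if mouth_open_ratio > 0.3 and brow_angle_deg >= 0:
        return "happy"
    if brow_angle_deg < -10:
        return "angry"
    return "neutral"
```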
In addition, a recording instruction can be generated from the in-vehicle multimedia playing content, which includes audio, video, images and the like. To judge whether the currently playing multimedia content has the feature of generating a recording instruction, local multimedia content can be labeled in advance by emotion type, with tags such as happy, sad, excited and angry; the emotion type of the playing content is then determined from its emotion tag. When the playing content carries no preset emotion tag, the corresponding emotion type can be identified from the textual information, the voice tone of the audio, or the facial expressions of the characters and the color composition and change patterns of the images the content contains. When the emotion type is a preset emotion type, the multimedia playing content is judged to have the feature of generating a recording instruction.
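A sketch of the tag-lookup-with-fallback logic; the library, its tags and the preset set are all hypothetical:

```python
MEDIA_EMOTION_TAGS = {"road_trip_song.mp3": "happy",
                      "rainy_ballad.mp3": "sad"}   # pre-labelled library (assumed)

def media_emotion_type(track, content_classifier=None):
    # Prefer the preset tag; otherwise fall back to content analysis
    # (lyrics, voice tone, colour composition), modelled here as a callback.
    tag = MEDIA_EMOTION_TAGS.get(track)
    if tag is None and content_classifier is not None:
        tag = content_classifier(track)
    return tag

def media_triggers_recording(track, preset=frozenset({"happy", "excited"})):
    # The playing content has the recording feature when its emotion type
    # is one of the preset emotion types.
    return media_emotion_type(track) in preset
```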
In one embodiment, a preset operation instruction input by a person in the vehicle is detected, where the preset operation instruction comprises at least one of a voice operation instruction, a function key operation instruction and a gesture operation instruction for instructing the camera to capture images; in response to detecting the preset operation instruction, a recording instruction is generated to control the camera to acquire images. That is, the camera is activated to shoot the in-vehicle scene upon receiving the operation instruction. Taking a voice password as an example, when a person in the vehicle says "record the scene", the scene recording camera is immediately turned on and starts recording. Specific function keys can also issue a recording command, such as a key on the steering wheel or a key near each seat (for example, beside the window adjustment buttons). Of course, a specific gesture made toward the camera can also activate the camera that shoots the in-vehicle scene.
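A sketch of dispatching the three operation channels; every trigger value (the password text, the key IDs, the gesture name) is an assumption:

```python
def handle_operation_instruction(kind: str, payload: str, camera) -> bool:
    # The three channels named in the text: voice password, function key, gesture.
    triggered = (
        (kind == "voice" and payload.strip().lower() == "record the scene") or
        (kind == "key" and payload in {"steering_wheel_key", "seat_side_key"}) or
        (kind == "gesture" and payload == "camera_gesture")
    )
    if triggered:
        camera.record()  # generate the recording instruction
    return triggered
```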
In an embodiment, after the frame rate and duration of image acquisition are selected, the camera is controlled to capture images at the selected frame rate for the selected duration, and the capturing camera can be switched as the position of the in-vehicle scene changes. The acquisition duration or frame rate can be set automatically according to the in-vehicle environment or fixed to a constant value. The frame rate and duration can be selected according to at least one of emotion type, shooting position, camera type, person type and number of persons. Frame rate is the frequency at which consecutive images, called frames, appear on a display. For example, the duration or frame rate can be chosen according to the type or number of people in the vehicle: different acquisition durations can be set for different occupants, or a function mapping the number of occupants to the acquisition frame rate can be preset, and the actual frame rate is computed from the occupant count at shooting time, to suit different in-vehicle shooting needs. Furthermore, the frame rate can be divided into three gears: low speed, normal and accelerated. A scene such as fast motion or crowds outside the vehicle is captured at the accelerated frame rate, a warmer, more romantic scene at the low-speed frame rate, and other scenes at the normal frame rate, so as to achieve the desired shooting effect.
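A sketch of the three-gear frame-rate selection; the fps values and the linear occupant-count term are placeholders for the unspecified presets:

```python
FRAME_RATE_GEARS = {"low": 15, "normal": 30, "accelerated": 60}  # fps, assumed

def select_frame_rate(scene_kind: str, occupant_count: int) -> int:
    if scene_kind == "fast_motion_outside":
        fps = FRAME_RATE_GEARS["accelerated"]
    elif scene_kind == "warm_romantic":
        fps = FRAME_RATE_GEARS["low"]
    else:
        fps = FRAME_RATE_GEARS["normal"]
    # Illustrative stand-in for the preset occupant-count function.
    return fps + 2 * max(0, occupant_count - 1)
```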
In one embodiment, the automatic recording segments formed from the captured images are classified and collected according to preset tags, where the preset tags comprise one or more of shooting time, place, person and emotion type. The segments are stored on the vehicle-mounted terminal device and/or sent to a server. When sent to the server, a segment can go to a designated contact and/or a social platform, or be sent according to a sharing confirmation instruction. In other embodiments, the content can be shared automatically once shooting ends, or the scenes recorded by the camera can be shared according to preset rules. For example, confirmation mode: after playback on the console, the user is asked whether to share with friends; sharing mode: recorded content is automatically shared with the selected friends; active mode: only content actively recorded via a key press is shared. Sharing can go through a bound social software account.
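A sketch of tag-based grouping plus the three sharing modes; the segment fields and mode names are assumptions of the sketch:

```python
from collections import defaultdict

def classify_segments(segments: list, tag: str) -> dict:
    # Group automatic recording segments by one preset tag
    # ('time', 'place', 'person' or 'emotion').
    groups = defaultdict(list)
    for seg in segments:
        groups[seg[tag]].append(seg)
    return dict(groups)

def should_share(mode: str, user_confirmed: bool = False,
                 key_triggered: bool = False) -> bool:
    if mode == "sharing":        # auto-share to the selected friends
        return True
    if mode == "confirmation":   # ask the user after console playback
        return user_confirmed
    if mode == "active":         # share only key-triggered recordings
        return key_triggered
    return False
```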
In one embodiment, when the content shot by the camera is stored to record the in-vehicle scene, it is classified and collected according to preset tags, which comprise one or more of shooting time, place, person and emotion type, and is then stored locally and/or in the cloud. An in-vehicle life photo album or video collection is generated from the content recorded by the scene recording camera, and the classification and collection rules can follow time, place, person (face recognition), in-vehicle scene (for example, happy scenes) and so on. The recorded content can be played back and viewed on the vehicle console, which is very convenient.
In one embodiment, the emotion of people in the vehicle can be recognized, and a video or photo collection is automatically played on the center console according to that emotion, so as to adjust it. For example, images and/or voice of the occupants are continuously collected and matched to a corresponding emotion type; when the matched type is an emotion type to be adjusted, the in-vehicle scene recordings corresponding to it are played. When the matched type is a negative emotion such as anger, sadness or gloom, in-vehicle recordings tagged with happy or joyful emotion types can be played automatically to adjust the in-vehicle atmosphere, giving a good user experience.
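A sketch of the mood-adjustment playback decision; the set of emotion types to adjust is an assumed example:

```python
EMOTIONS_TO_ADJUST = {"angry", "sad", "gloomy"}   # assumed negative set

def segments_to_play(matched_emotion: str, album: dict) -> list:
    # Play happy/joyful recordings on the center console when the matched
    # emotion type is one to be adjusted; otherwise play nothing.
    if matched_emotion in EMOTIONS_TO_ADJUST:
        return album.get("happy", []) + album.get("joyful", [])
    return []
```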
In summary, the method for acquiring images provided by the embodiments of the present invention comprises: obtaining in-vehicle scene information; determining whether the in-vehicle scene information has a feature of generating a recording instruction; and, in response to determining that it does, generating the recording instruction to control a camera to acquire images. In this manner, scenes in the vehicle can be recorded efficiently and accurately, with a good user experience.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device shown in fig. 3 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure. As shown in fig. 3, the present application further provides an electronic device 600 comprising a processor 601, which can perform the method of the embodiments of the present disclosure according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The processor 601 may comprise, for example, a general-purpose microprocessor (e.g., a CPU), an instruction-set processor and/or a related chipset, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)). The processor 601 may also include on-board memory for caching. The processor 601 may comprise a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the disclosure.
The RAM 603 stores the various programs and data necessary for the operation of the electronic device 600. The processor 601, the ROM 602 and the RAM 603 are connected to one another via a bus 604. The processor 601 performs the various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 602 and/or the RAM 603. Note that these programs may also be stored in one or more memories other than the ROM 602 and the RAM 603; the processor 601 may likewise perform the operations of the method flows by executing programs stored in such memories.
According to an embodiment of the disclosure, the electronic device 600 may also include an input/output (I/O) interface 605, which is likewise connected to the bus 604. The electronic device 600 may further include one or more of the following components connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. Further, a drive with a removable medium such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory may also be connected to the I/O interface 605 as necessary, so that a computer program read from it is installed into the storage section 608 as needed.
Method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable storage medium, the computer program containing program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609 and/or installed from a removable medium. When executed by the processor 601, the computer program performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units and the like described above may be implemented by computer program modules according to embodiments of the present disclosure.
Embodiments of the present application also provide a computer-readable storage medium, which may be embodied in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
The specific process of executing the above method steps in this embodiment is described in detail in the related description of the first embodiment, and is not described herein again.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (14)

1. A method for acquiring an image, comprising:
obtaining scene information in the vehicle;
determining whether the scene information in the vehicle has the characteristic of generating a recording instruction; and
in response to determining that the in-vehicle scene information has the feature of generating a recording instruction, generating the recording instruction to control a camera to acquire an image.
2. The method of claim 1, wherein the in-vehicle scene information comprises one or more of images of in-vehicle occupants, voice information, and in-vehicle multimedia playback content.
3. The method of claim 1, wherein the determining whether the in-vehicle scene information has the feature of generating a recording instruction comprises:
determining whether the person in the vehicle is a preset person;
determining whether the emotion of a person in the vehicle is a preset emotion type; and
determining that the in-vehicle scene information has the feature of generating a recording instruction in response to determining that the person in the vehicle is the preset person and the emotion of the person in the vehicle is the preset emotion type.
4. The method of claim 3, wherein the determining whether the in-vehicle occupant is the pre-set occupant comprises at least one of:
determining whether the person in the vehicle is the preset person or not based on a face recognition technology;
determining whether the person in the vehicle is the preset person or not based on voiceprint recognition technology; and
determining whether the person in the vehicle is the preset person based on account login information.
5. The method of claim 3, wherein the determining whether the emotion of the person in the vehicle is of the preset emotion type comprises:
extracting emotion correlation characteristics in the scene information in the vehicle;
matching the emotion type corresponding to the emotion correlation characteristics; and
when the emotion type is a preset emotion type, determining that the emotion of the person in the vehicle is the preset emotion type.
6. The method of claim 1, further comprising:
detecting a preset operation instruction input by a person in the vehicle, wherein the preset operation instruction comprises at least one of a voice operation instruction, a function key operation instruction and a gesture operation instruction which are used for indicating the camera to collect images; and
generating a recording instruction to control the camera to acquire an image in response to detecting the preset operation instruction.
7. The method of claim 1 or 6, wherein the generating recording instructions to control the camera to capture images comprises:
determining cameras to be activated based on the distribution of people in the vehicle, wherein the cameras comprise one or more of a first camera mounted above a rearview mirror, second cameras mounted at different positions in the vehicle, and a third camera mounted outside the vehicle body;
selecting the frame rate and the duration of the collected image; and
generating a recording instruction for instructing the camera to be activated to acquire an image based on the selected frame rate and duration.
8. The method of claim 7, wherein the determining the camera to be activated comprises at least one of:
when the people in the vehicle are distributed in the front row and the rear row at the same time, determining that the first camera is the camera to be activated;
when the people in the vehicle are distributed in the front row, determining that a second camera arranged at the front row is the camera to be activated; and
when the person in the vehicle with the preset emotion type leaves the vehicle, determining that the third camera is the camera to be activated.
9. The method of claim 7, wherein the method further comprises:
selecting the frame rate and duration of image acquisition according to at least one of emotion type, shooting position, camera type, person type and number of persons.
10. The method of claim 1, wherein the method further comprises:
classifying and collecting the automatic recording segments formed by the collected images according to preset labels, wherein the preset labels comprise one or more of shooting time, places, people and emotion types; and
storing the automatic recording segment on a vehicle-mounted terminal device and/or sending the automatic recording segment to a server.
11. The method of claim 10, wherein the method further comprises:
acquiring images and/or voices of people in the vehicle;
matching a corresponding emotion type according to the image and/or voice of the person in the vehicle; and
when the emotion type is the emotion type to be adjusted, presenting the automatic recording segment corresponding to the emotion type to be adjusted.
12. The method of claim 10, wherein said sending said automatically recorded segment to a server comprises at least one of:
sending the automatic recording segments to a designated contact and/or a social platform;
receiving a sharing confirmation instruction, and sending the automatic recording segment according to the sharing confirmation instruction;
automatically sending the automatic recording segment.
13. An electronic device, comprising:
at least one processing unit;
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, which when executed by the at least one processing unit, cause the apparatus to perform the steps of the method for acquiring an image of any of claims 1 to 12.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a machine, carries out the method for acquiring an image according to any one of claims 1 to 12.
CN202010664822.5A 2020-07-10 2020-07-10 Method for acquiring image, electronic device and computer readable storage medium Pending CN111866382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010664822.5A CN111866382A (en) 2020-07-10 2020-07-10 Method for acquiring image, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010664822.5A CN111866382A (en) 2020-07-10 2020-07-10 Method for acquiring image, electronic device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111866382A (en) 2020-10-30

Family

ID=72984257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010664822.5A Pending CN111866382A (en) 2020-07-10 2020-07-10 Method for acquiring image, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111866382A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820402A (en) * 2021-01-22 2022-07-29 本田技研工业(中国)投资有限公司 Image processing method, apparatus, system, electronic device, and computer-readable storage medium
CN114928706A (en) * 2022-03-28 2022-08-19 岚图汽车科技有限公司 Application method and device based on in-vehicle scene

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180005057A1 (en) * 2016-07-01 2018-01-04 Hyundai Motor Company Apparatus and method for capturing face image of decreased reflection on spectacles in vehicle
CN108366199A (en) * 2018-02-01 2018-08-03 海尔优家智能科技(北京)有限公司 A kind of image-pickup method, device, equipment and computer readable storage medium
CN111277755A (en) * 2020-02-12 2020-06-12 广州小鹏汽车科技有限公司 Photographing control method and system and vehicle

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180005057A1 (en) * 2016-07-01 2018-01-04 Hyundai Motor Company Apparatus and method for capturing face image of decreased reflection on spectacles in vehicle
CN108366199A (en) * 2018-02-01 2018-08-03 海尔优家智能科技(北京)有限公司 A kind of image-pickup method, device, equipment and computer readable storage medium
CN111277755A (en) * 2020-02-12 2020-06-12 广州小鹏汽车科技有限公司 Photographing control method and system and vehicle

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820402A (en) * 2021-01-22 2022-07-29 本田技研工业(中国)投资有限公司 Image processing method, apparatus, system, electronic device, and computer-readable storage medium
CN114820402B (en) * 2021-01-22 2024-11-05 本田技研工业(中国)投资有限公司 Image processing method, device, system, electronic device and computer-readable storage medium
CN114928706A (en) * 2022-03-28 2022-08-19 岚图汽车科技有限公司 Application method and device based on in-vehicle scene

Similar Documents

Publication Publication Date Title
CN110826370B (en) Method and device for identifying identity of person in vehicle, vehicle and storage medium
CN109302486B (en) Method and system for pushing music according to environment in vehicle
CN107825429A (en) Interface and method
JP2017007652A (en) Method for recognizing a speech context for speech control, method for determining a speech control signal for speech control, and apparatus for executing the method
CN113780062A (en) A vehicle intelligent interaction method, storage medium and chip based on emotion recognition
JP7469467B2 (en) Digital human-based vehicle interior interaction method, device, and vehicle
Kashevnik et al. Multimodal corpus design for audio-visual speech recognition in vehicle cabin
JP2014096632A (en) Imaging system
JP2019158975A (en) Utterance system
CN111866382A (en) Method for acquiring image, electronic device and computer readable storage medium
CN114760417A (en) Image shooting method and device, electronic equipment and storage medium
JP2019185117A (en) Atmosphere estimating device
US11450209B2 (en) Vehicle and method for controlling thereof
CN111813491A (en) An anthropomorphic interaction method, device and car of an in-vehicle assistant
JP2018133696A (en) In-vehicle device, content providing system, and content providing method
CN111736700B (en) Digital human-based cabin interaction method, device and vehicle
CN115205917A (en) Man-machine interaction method and electronic equipment
JP7068156B2 (en) Information processing equipment and programs
KR102689884B1 (en) Vehicle and control method for the same
JP2018180424A (en) Speech recognition apparatus and speech recognition method
JP2021111046A (en) Recording controller and recording control program
CN117842022A (en) Driving safety control method and device for artificial intelligent cabin, vehicle and medium
CN111429882A (en) Method and device for playing voice and electronic equipment
CN110908576A (en) Vehicle system/vehicle application display method and device and electronic equipment
CN116486383A (en) Smoking behavior recognition method, smoking detection model, device, vehicle, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2020-10-30