CN114511834B - Method and device for determining prompt information, electronic equipment and storage medium - Google Patents
- Publication number
- CN114511834B (application CN202011284279.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- detection
- frame
- vehicle
- target vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Traffic Control Systems (AREA)
Abstract
Embodiments of the present disclosure relate to a method and device for determining prompt information, an electronic device, and a storage medium. The method includes: acquiring road images captured by an image acquisition device; within a preset detection period, taking the first frame of road image captured by the image acquisition device as a detection image and identifying a detection frame of a target vehicle from it; taking the other frames of road images captured by the image acquisition device within the detection period as tracking images, and, for each frame of tracking image, determining the detection frame of the target vehicle in that tracking image and the wheel ground line parameters of the target vehicle based on the detection frames of the target vehicle identified before that tracking image and on the tracking image itself; and determining whether to remind the guided vehicle to pay attention to the target vehicle based on the ground equation of the road in front of the guided vehicle determined within the detection period, the detection frames of the target vehicle corresponding to the tracking images within the detection period, and the wheel ground line parameters of the target vehicle, thereby reducing the possibility of accidents and assisting the driver in driving.
Description
Technical Field
Embodiments of the present disclosure relate to the technical field of intelligent driving, and in particular to a method and device for determining prompt information, an electronic device, and a non-transitory computer-readable storage medium.
Background
In the technical field of intelligent driving, whether for unmanned driving or assisted driving, detecting the behavior of other vehicles around the host vehicle (such as cutting in, merging, and the like) in time, judging whether such behavior affects the driving of the host vehicle, and then taking corresponding avoidance measures are important links in ensuring safe driving and reducing driving accidents.
To detect the behavior of other vehicles around the host vehicle, those vehicles must first be detected. At present, this is done by acquiring images of the surroundings of the host vehicle with an image sensor of the host vehicle and then using a target detection algorithm to determine the types of the targets in the images and the two-dimensional detection frame information of the targets, where the two-dimensional detection frame information represents the positions of the targets in the images.
However, even if a vehicle and its position in the image are detected, the behavior of that vehicle cannot be further inferred, and it cannot be determined whether that behavior affects the driving of the host vehicle.
The above description of the discovery process of the problem is merely for aiding in understanding the technical solution of the present disclosure, and does not represent an admission that the above is prior art.
Disclosure of Invention
To address at least one problem with the prior art, at least one embodiment of the present disclosure provides a method, apparatus, electronic device, and non-transitory computer readable storage medium for determining hint information.
In a first aspect, an embodiment of the present disclosure provides a method for determining a prompt message, where the method includes:
acquiring a road image of a road in front of a guided vehicle, captured by an image acquisition device mounted on the guided vehicle;
within a preset detection period, taking the first frame of road image captured by the image acquisition device as a detection image, and identifying, from the detection image, a detection frame of at least one target vehicle located around the guided vehicle;
taking the other frames of road images captured by the image acquisition device within the detection period as tracking images, and, for each frame of tracking image, determining the detection frame of the target vehicle in that tracking image and the wheel ground line parameters of the target vehicle based on the detection frames of the target vehicle identified before that tracking image and on the tracking image itself; and
determining whether to remind the guided vehicle to pay attention to the target vehicle based on the ground equation of the road in front of the guided vehicle, the detection frames of the target vehicle corresponding to the tracking images, and the wheel ground line parameters of the target vehicle, all determined within the detection period.
In a second aspect, an embodiment of the present disclosure further proposes an apparatus for determining a hint information, where the apparatus includes:
an acquisition unit, configured to acquire a road image of a road in front of a guided vehicle, captured by an image acquisition device mounted on the guided vehicle;
a vehicle detection unit, configured to, within a preset detection period, take the first frame of road image captured by the image acquisition device as a detection image and identify, from the detection image, a detection frame of at least one target vehicle located around the guided vehicle;
a vehicle tracking unit, configured to take the other frames of road images captured by the image acquisition device within the detection period as tracking images, and, for each frame of tracking image, determine the detection frame of the target vehicle in that tracking image and the wheel ground line parameters of the target vehicle based on the detection frames of the target vehicle identified before that tracking image and on the tracking image itself; and
a determination prompting unit, configured to determine whether to remind the guided vehicle to pay attention to the target vehicle based on the ground equation of the road in front of the guided vehicle determined within the detection period, the detection frames of the target vehicle corresponding to the tracking images within the detection period, and the wheel ground line parameters of the target vehicle.
In a third aspect, the disclosed embodiments also provide an electronic device comprising a processor and a memory, the processor being configured to perform the steps of the method according to the first aspect by invoking a program or instructions stored in the memory.
In a fourth aspect, embodiments of the present disclosure also propose a non-transitory computer-readable storage medium storing a program or instructions for causing a computer to perform the steps of the method according to the first aspect.
In a fifth aspect, the presently disclosed embodiments also propose a computer program product, wherein the computer program product comprises a computer program stored in a non-transitory computer readable storage medium, at least one processor of the computer reading and executing the computer program from the storage medium, such that the computer performs the steps of the method according to the first aspect.
It can be seen that, in at least one embodiment of the present disclosure, a road image of the road in front of the guided vehicle is acquired; target vehicle detection is performed on the road image used as a detection image to obtain a detection frame of the target vehicle; target vehicle tracking is performed on the road images used as tracking images to obtain the detection frames of the target vehicle in the tracking images and the wheel ground line parameters of the target vehicle; and whether to remind the guided vehicle to pay attention to the target vehicle is then determined from the detection frames, the wheel ground line parameters, and the ground equation of the road in front of the guided vehicle. This reduces the possibility of accidents and assists the driver in driving.
Drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings used in the embodiments or in the description of the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present disclosure, and that those of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is an exemplary architecture diagram of a guided vehicle provided by an embodiment of the present disclosure;
FIG. 2 is an exemplary block diagram of an intelligent driving system provided by an embodiment of the present disclosure;
FIG. 3 is an exemplary block diagram of an apparatus for determining hint information provided by embodiments of the present disclosure;
FIG. 4 is an exemplary block diagram of an electronic device provided by an embodiment of the present disclosure;
FIG. 5 is an exemplary flow chart of a method of determining hint information provided by embodiments of the present disclosure;
FIG. 6 is an exemplary application scenario diagram provided by an embodiment of the present disclosure;
FIG. 7 is a schematic projection view of a wheel ground line in the application scenario shown in FIG. 6.
Detailed Description
In order that the above-recited objects, features and advantages of the present disclosure may be more clearly understood, a more particular description of the disclosure will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It is to be understood that the described embodiments are some, but not all, of the embodiments of the present disclosure. The specific embodiments described herein are to be considered in an illustrative rather than a restrictive sense. All other embodiments derived by a person of ordinary skill in the art based on the described embodiments of the present disclosure fall within the scope of the present disclosure.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Embodiments of the present disclosure provide a method and device for determining prompt information, an electronic device, and a non-transitory computer-readable storage medium. A road image of the road in front of a guided vehicle is acquired; target vehicle detection is performed on the road image used as a detection image to obtain a detection frame of the target vehicle; target vehicle tracking is performed on the road images used as tracking images to obtain the detection frames of the target vehicle in the tracking images and the wheel ground line parameters of the target vehicle; and whether to remind the guided vehicle to pay attention to the target vehicle is determined from the detection frames of the target vehicle, the wheel ground line parameters, and the ground equation of the road in front of the guided vehicle, thereby reducing the possibility of accidents and assisting the driver in driving.
The guided vehicle mentioned in the embodiments of the present disclosure may be an intelligent driving vehicle, that is, a vehicle carrying an intelligent driving system of any level, including, for example, an unmanned driving system, an assisted driving system, a highly autonomous driving system, a fully autonomous driving system, and the like.
Embodiments of the present disclosure may be applied to an electronic device carrying an intelligent driving system. In some embodiments, the electronic device may be a device mounted on the guided vehicle. In some embodiments, the electronic device may be an off-board device, for example one used to test intelligent driving algorithms.
Embodiments of the present disclosure may be applied to different scenarios, for example AR (Augmented Reality) navigation, where a driver may be visually alerted to a target vehicle. It should be noted that the application scenarios described here are merely examples, and it is obvious to those skilled in the art that the present disclosure may be applied to other similar scenarios without inventive effort.
In order to make the description clearer, the method, the device, the electronic device or the non-transitory computer readable storage medium for determining the prompt information will be described by taking a guided vehicle as an example in the embodiments of the disclosure.
Fig. 1 is an exemplary overall architecture diagram of a guided vehicle provided in an embodiment of the present disclosure. As shown in FIG. 1, the guided vehicle includes a sensor group, an intelligent driving system 100, a vehicle underlying execution system, and other components that may be used to drive the vehicle and control its operation, such as a brake pedal, a steering wheel, and an accelerator pedal.
The sensor group is used to collect data of the environment outside the vehicle and to detect position data of the vehicle. The sensor group includes, for example, but is not limited to, at least one of an image acquisition device (e.g., a camera), a laser radar (lidar), a millimeter-wave radar, an ultrasonic radar, a GPS (Global Positioning System), and an IMU (Inertial Measurement Unit).
In some embodiments, the sensor group is further configured to collect dynamics data of the vehicle, and accordingly further includes, for example, but is not limited to, at least one of a wheel speed sensor, a speed sensor, an acceleration sensor, a steering wheel angle sensor, and a front wheel steering angle sensor.
The intelligent driving system 100 is configured to acquire the sensing data of the sensor group, including but not limited to images, video, laser point clouds, millimeter-wave data, GPS information, vehicle states, and the like. In some embodiments, the intelligent driving system 100 performs environment perception and vehicle positioning based on the sensing data to generate perception information and a vehicle pose; performs planning and decision-making based on the perception information and the vehicle pose to generate planning and decision information; and generates vehicle control instructions based on the planning and decision information and issues them to the vehicle underlying execution system.
In some embodiments, intelligent driving system 100 may be a software system, a hardware system, or a combination of software and hardware systems. For example, the intelligent driving system 100 is a software system running on an operating system, and the in-vehicle hardware system is a hardware system supporting the running of the operating system.
In some embodiments, the intelligent driving system 100 may interact with a cloud server. In some embodiments, the intelligent driving system 100 interacts with the cloud server through a wireless communication network (e.g., a wireless communication network including, but not limited to, a GPRS network, a Zigbee network, a Wifi network, a 3G network, a 4G network, a 5G network, etc.).
In some embodiments, the cloud server is used to interact with the vehicle. The cloud server can send environment information, positioning information, control information and other information needed in the intelligent driving process of the vehicle to the vehicle. In some embodiments, the cloud server may receive sensing data from a vehicle end, vehicle state information, vehicle driving information, and related information of a vehicle request. In some embodiments, the cloud server may remotely control the vehicle based on user settings or vehicle requests. In some embodiments, the cloud server may be a server or a server group. The server farm may be centralized, or may be distributed. In some embodiments, the cloud server may be local or remote.
The vehicle underlying execution system is configured to receive the vehicle control instructions and control the vehicle to run based on them. In some embodiments, the vehicle underlying execution system includes, but is not limited to, a steering system, a braking system, and a drive system. In some embodiments, the vehicle underlying execution system may further include an underlying controller configured to parse the vehicle control instructions and issue them to the corresponding systems, such as the steering system, the braking system, and the drive system.
In some embodiments, the guided vehicle may also include a vehicle CAN bus, not shown in FIG. 1, that connects to the vehicle underlying execution system. Information exchanged between the intelligent driving system 100 and the vehicle underlying execution system is transferred over the vehicle CAN bus.
Fig. 2 is an exemplary block diagram of an intelligent driving system 200 provided in an embodiment of the present disclosure. In some embodiments, intelligent driving system 200 may be implemented as intelligent driving system 100 in fig. 1 or as part of intelligent driving system 100 for controlling vehicle travel.
As shown in FIG. 2, the intelligent driving system 200 may be divided into a plurality of modules, which may include, for example, a perception module 201, a planning module 202, a control module 203, a prompting module 204, and some other modules that may be used for intelligent driving.
The sensing module 201 is used for sensing and positioning environment. In some embodiments, the sensing module 201 is configured to obtain sensor data, V2X (Vehicle to X) data, high-precision map, and the like, and perform environment sensing and positioning based on at least one of the above data, to generate sensing information and positioning information. Wherein the perceived information may include, but is not limited to, at least one of obstacle information, road signs/markers, pedestrian/vehicle information, and travelable areas. The positioning information includes a vehicle pose.
The planning module 202 is used to make path planning and decisions. In some embodiments, the planning module 202 generates planning and decision information based on the perception information and positioning information generated by the perception module 201. In some embodiments, the planning module 202 may also generate planning and decision information in conjunction with at least one of V2X data, high-precision maps, and the like. Wherein the planning information may include, but is not limited to, planning a path, etc., and the decision information may include, but is not limited to, at least one of behavior (including, but not limited to, following, passing, stopping, detouring, etc.), vehicle heading, vehicle speed, desired acceleration of the vehicle, desired steering wheel angle, etc.
The control module 203 is configured to generate control instructions for the vehicle underlying execution system based on the planning and decision information and to issue the control instructions so that the vehicle underlying execution system controls the vehicle to run. The control instructions may include, but are not limited to, steering wheel steering, lateral control instructions, longitudinal control instructions, and the like.
The prompting module 204 is configured to perform target vehicle detection on a road image used as a detection image to obtain a detection frame of the target vehicle; to perform target vehicle tracking on road images used as tracking images to obtain the detection frames of the target vehicle in the tracking images and the wheel ground line parameters of the target vehicle; and to determine whether to remind the guided vehicle to pay attention to the target vehicle according to the detection frames of the target vehicle, the wheel ground line parameters, and the ground equation of the road in front of the guided vehicle, thereby reducing the possibility of accidents and assisting the driver in driving.
In some embodiments, the functions of the prompt module 204 may be integrated into the perception module 201, the planning module 202, or the control module 203, or may be configured as a module independent from the intelligent driving system 200, and the prompt module 204 may be a software module, a hardware module, or a module combining software and hardware. For example, the hint module 204 is a software module running on an operating system, and the on-board hardware system is a hardware system that supports the running of the operating system.
Fig. 3 is an exemplary block diagram of an apparatus 300 for determining hint information according to embodiments of the present disclosure. In some embodiments, the apparatus 300 for determining hint information may be implemented as the hint module 204 or a portion of the hint module 204 of FIG. 2.
As shown in fig. 3, the apparatus 300 for determining hint information may include, but is not limited to, an acquisition unit 301, a vehicle detection unit 302, a vehicle tracking unit 303, and a determination hint unit 304.
Acquisition unit 301
An acquisition unit 301 for acquiring environmental information in front of the vehicle. In some embodiments, the acquisition unit 301 acquires a road image of a road ahead of the guided vehicle captured by an image capturing apparatus mounted on the guided vehicle.
It is understood that, in addition to the road itself, the road image includes environmental information such as vehicles and pedestrians traveling or staying on the road, buildings on both sides of the road, and signboards. The road image may be understood as an image of the environment in front of the guided vehicle that contains road information.
In some embodiments, the image acquisition frame rate of the image acquisition device is 20 frames per second, that is, the image acquisition device captures 20 frames of road images per second. It is understood that this embodiment is only illustrative and does not limit the specific value of the image acquisition frame rate, which those skilled in the art may set according to actual needs.
In some embodiments, the image capturing device may use a camera that is commonly used in the market, and the type and model of the camera may be selected according to actual needs, which is not limited in this embodiment.
Vehicle detection unit 302
The vehicle detection unit 302 is configured to perform target vehicle detection on a road image as a detection image, and obtain a detection frame of the target vehicle. In some embodiments, the vehicle detection unit 302 uses the first frame of road image captured by the image capturing device as a detection image in a preset detection period, and identifies a detection frame of at least one target vehicle located around the guided vehicle from the detection image.
In some embodiments, the preset detection period may be 500 ms, that is, the image detection frame rate is 2 frames per second, so 2 frames of road images per second serve as detection images. It is understood that this embodiment is only illustrative and does not limit the specific value of the detection period, which those skilled in the art may set according to actual needs.
In some embodiments, the vehicle detection unit 302 may input the detection image into the target detection network, resulting in a set of target information output by the target detection network, each of the set of target information including a target type and a target detection box. The vehicle detection unit 302 may screen out target information of which target type is a vehicle from the target information set, and obtain a detection frame of at least one target vehicle around the guided vehicle.
The input of the target detection network is an image, and the output is the types and detection frames of the different targets in the image. A detection frame (bounding box) is a two-dimensional box that represents the position of a target in the image.
In some embodiments, the target detection network may be trained to output the detection boxes of the target vehicle directly, i.e., the input to the target detection network is a detection image and the output is the detection boxes of all vehicles in the detection image. In the embodiment, the detection frame comprises a full frame, a tail frame and a head frame, wherein the full frame is a rectangular frame capable of framing the whole vehicle of the target vehicle, the tail frame is a rectangular frame capable of framing the tail of the target vehicle, and the head frame is a rectangular frame capable of framing the head of the target vehicle. In some embodiments, the target detection network is trained using a deep learning target detection toolbox mmdetection.
In some embodiments, the target detection network may employ different networks; for example, it may be an SSD (Single Shot MultiBox Detector) network or another target detection network from the deep learning field.
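By way of illustration only, the following Python sketch shows one way the screening step described above could look: a generic detector returns a target information set, and only the entries whose target type is vehicle are kept. The `detect_objects` callable and the field names are hypothetical assumptions and are not part of the disclosed embodiments.

```python
# Illustrative sketch only: screen vehicle-type targets out of a generic detector's output.
# `detect_objects` stands for the target detection network (e.g., an SSD or mmdetection
# model); its exact interface and the field names are assumptions.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image pixels

@dataclass
class TargetInfo:
    target_type: str   # e.g., "vehicle", "pedestrian"
    box: Box           # two-dimensional detection frame

def detect_vehicles(detection_image, detect_objects: Callable[..., List[TargetInfo]]) -> List[Box]:
    """Run the target detection network and keep only the vehicle detection frames."""
    targets = detect_objects(detection_image)               # target information set
    return [t.box for t in targets if t.target_type == "vehicle"]
```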
Vehicle tracking unit 303
The vehicle tracking unit 303 is configured to perform target vehicle tracking on the road images used as tracking images, so as to obtain the detection frame of the target vehicle in each tracking image and the wheel ground line parameters of the target vehicle. The wheel ground line is the line connecting the contact points between the wheels and the ground. In some embodiments, the vehicle tracking unit 303 uses the other frames of road images captured by the image acquisition device within the detection period as tracking images and, for each frame of tracking image, determines the detection frame of the target vehicle in that tracking image and the wheel ground line parameters of the target vehicle based on the detection frames of the target vehicle identified before that tracking image and on the tracking image itself. The wheel ground line parameters may be understood as the slope and intercept of the wheel ground line.
In some embodiments, if the image acquisition frame rate of the image acquisition device is 20 frames per second and the image detection frame rate of the vehicle detection unit 302 is 2 frames per second, then 9 frames of road images serve as tracking images in each detection period, that is, the vehicle tracking unit 303 performs target vehicle tracking on 18 frames of road images per second.
In some embodiments, detection or tracking is performed in real time. For example, each time the image acquisition device captures a frame of road image, detection or tracking is performed once: if that frame of road image serves as a detection image, the vehicle detection unit 302 performs target vehicle detection on it; if it serves as a tracking image, the vehicle tracking unit 303 performs target vehicle tracking on it. The vehicle tracking unit 303 determines the detection frame of the target vehicle in that road image and the wheel ground line parameters of the target vehicle based on the detection frames of the target vehicle identified before that road image and on the road image itself.
In some embodiments, detection and tracking are performed in batches. For example, the image acquisition device captures multiple frames of road images within a preset detection period, and these frames are detected and tracked in batches within the detection period: the vehicle detection unit 302 performs target vehicle detection with the first frame of road image in the detection period as the detection image, and the vehicle tracking unit 303 performs target vehicle tracking with the other frames of road images in the detection period as tracking images. For each frame of tracking image, the vehicle tracking unit 303 determines the detection frame of the target vehicle in that tracking image and the wheel ground line parameters of the target vehicle based on the detection frames of the target vehicle identified before that tracking image and on the tracking image itself.
In some embodiments, the vehicle tracking unit 303 may determine the set of crop frames corresponding to the detection image based on the detection frames of the at least one target vehicle in the detection image. In some embodiments, for any tracking image other than the frame immediately following a detection image, the vehicle tracking unit 303 may determine the set of crop frames corresponding to that tracking image based on the set of detection frames corresponding to the most recent detection image preceding it (with no other detection image in between) and on the sets of detection frames corresponding to all tracking images between that detection image and the current tracking image.
In some embodiments, for any detection frame in the detection image, the vehicle tracking unit 303 takes the center position of that detection frame as the center position of the corresponding crop frame. In some embodiments, for any tracking image, the vehicle tracking unit 303 may predict the center position of the crop frame corresponding to that tracking image based on the center position of the detection frame of a target vehicle in the most recent detection image preceding it (with no other detection image in between) and the center positions of the detection frames of the same target vehicle in all tracking images between that detection image and the current tracking image.
For example, within one detection period, the center position of the crop frame for the frame-2 road image (a tracking image) is the center position of the target vehicle's detection frame in the frame-1 road image (the detection image); the center position of the crop frame for the frame-3 road image (a tracking image) is predicted from the center positions of the same target vehicle's detection frames in frames 1 and 2; and the center position of the crop frame for the frame-4 road image (a tracking image) is predicted from the center positions of the same target vehicle's detection frames in frames 1, 2, and 3.
The prediction may be performed by connecting the center positions of the target vehicle's detection frames in the different frames and extrapolating along the extension of that line.
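By way of illustration only, the extrapolation described above might be sketched as follows; treating the per-frame displacement as constant is just one reading of "extrapolating along the extension line", and the function name is hypothetical.

```python
# Illustrative sketch only: predict the next crop-frame center by extrapolating along the
# line through the centers of the same target vehicle's detection frames in earlier frames.
from typing import List, Tuple

def predict_next_center(centers: List[Tuple[float, float]]) -> Tuple[float, float]:
    """centers: detection-frame centers of one target vehicle in frames 1..k of the current
    detection period; returns the predicted crop-frame center for frame k + 1."""
    if len(centers) == 1:
        return centers[0]                        # frame 2 reuses the detection-image center
    (x0, y0), (xk, yk) = centers[0], centers[-1]
    n = len(centers) - 1                         # number of frame intervals spanned
    dx, dy = (xk - x0) / n, (yk - y0) / n        # average per-frame displacement
    return (xk + dx, yk + dy)                    # one more step along the extension line
```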
In some embodiments, for any detection frame in the detection image, the vehicle tracking unit 303 enlarges both the length and the width of the detection frame by a preset multiple and uses them as the length and width of the corresponding crop frame, thereby obtaining the crop frame corresponding to that detection frame; this crop frame is used to crop the tracking image that immediately follows the detection image. The length and width of the detection frame are enlarged so that the cropped image obtained by cropping the next tracking image with the crop frame still contains the target vehicle, which facilitates tracking the target vehicle while reducing the amount of data to be processed.
In some embodiments, for any other tracking image, the vehicle tracking unit 303 enlarges both the length and the width of the detection frame of the same target vehicle in the previous tracking image by the preset multiple and uses them as the length and width of the crop frame corresponding to the current tracking image, thereby obtaining the corresponding crop frame.
In some embodiments, the preset multiple is 1.2, 1.5, or 1.6. It is understood that this embodiment is only illustrative and does not limit the specific value of the preset multiple, which those skilled in the art may set according to actual needs. It is also understood that the processing of the vehicle tracking unit 303 is similar for any detection image and will not be described again.
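A minimal sketch of the crop-frame construction is given below, assuming an (x1, y1, x2, y2) box convention; the 1.5 factor is just one of the example multiples above, and clamping to the image boundary is left out for brevity.

```python
# Illustrative sketch only: build a crop frame from a center position and a detection-frame
# size enlarged by a preset multiple.
from typing import Tuple

def make_crop_frame(center: Tuple[float, float],
                    det_size: Tuple[float, float],
                    multiple: float = 1.5) -> Tuple[float, float, float, float]:
    """Returns (x1, y1, x2, y2) of the crop frame used to crop the next tracking image."""
    cx, cy = center
    w, h = det_size[0] * multiple, det_size[1] * multiple   # enlarge length and width
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```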
In some embodiments, the vehicle tracking unit 303 may determine the set of cropped images corresponding to each frame of tracking image based on the set of crop frames corresponding to the detection image. In some embodiments, the vehicle tracking unit 303 may determine the set of cropped images corresponding to the tracking image immediately following the detection image based on the set of crop frames corresponding to the detection image. In some embodiments, for any other tracking image, the vehicle tracking unit 303 may determine the set of crop frames corresponding to that tracking image based on the set of detection frames corresponding to the most recent detection image preceding it (with no other detection image in between) and on the sets of detection frames corresponding to all tracking images between that detection image and the current tracking image, and then crop that tracking image with its set of crop frames to obtain the corresponding set of cropped images.
In some embodiments, the vehicle tracking unit 303 may input the set of cropped images corresponding to each frame of tracking image into the vehicle tracking network to obtain, as output of the vehicle tracking network, the detection frame of the target vehicle in each frame of tracking image and the wheel ground line parameters of the target vehicle. The wheel ground line parameters are the slope and intercept of the wheel ground line in the cropped image.
In some embodiments, the vehicle tracking network may employ a CNN (Convolutional Neural Network) regression network, or any tracking network used in computer vision. In some embodiments, the vehicle tracking network may be deployed with the deep neural network inference framework MNN.
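By way of illustration only, the tracking step could be pictured as below: each cropped image is fed to a regression network that outputs a refined detection frame plus the slope and intercept of the wheel ground line in crop coordinates, which are then shifted back into full-image coordinates. The `tracking_net` callable and its output layout are assumptions; the disclosure only specifies a CNN regression network (which may run on MNN).

```python
# Illustrative sketch only: run the vehicle tracking network on each cropped image and map
# its outputs (detection frame and ground-line slope/intercept in crop coordinates) back to
# full-image coordinates. `tracking_net` and its output layout are hypothetical.
from typing import Callable, List, Tuple

Crop = Tuple[float, float, float, float]        # (x1, y1, x2, y2) of a crop frame

def track_in_frame(tracking_net: Callable, tracking_image, crop_frames: List[Crop]):
    results = []
    for (x1, y1, x2, y2) in crop_frames:
        crop = tracking_image[int(y1):int(y2), int(x1):int(x2)]   # assumes a NumPy-like image
        (bx1, by1, bx2, by2), slope, intercept = tracking_net(crop)
        full_box = (bx1 + x1, by1 + y1, bx2 + x1, by2 + y1)       # back to image coordinates
        # a line v = slope*u + b in crop coordinates becomes v = slope*u + b' in the image,
        # with b' = b + y1 - slope * x1
        results.append((full_box, slope, intercept + y1 - slope * x1))
    return results
```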
Determination prompting unit 304
The determination prompting unit 304 is configured to determine whether to remind the guided vehicle to pay attention to the target vehicle, thereby reducing the possibility of accidents and assisting the driver in driving. In some embodiments, the determination prompting unit 304 may determine whether the target vehicle is a dangerous vehicle based on the ground equation of the road in front of the guided vehicle determined within the detection period, the detection frames of the target vehicle corresponding to the tracking images within the detection period, and the wheel ground line parameters of the target vehicle; if so, it determines to remind the guided vehicle to pay attention to the target vehicle. Dangerous vehicles are vehicles that affect the driving behavior of the host vehicle (the guided vehicle) and are likely to cause traffic accidents, including, for example, merging vehicles, cutting-in vehicles, and vehicles encountered when the host vehicle turns or travels toward the front left or front right.
For example, if the image acquisition frame rate of the image acquisition device is 20 frames per second and the image detection frame rate of the vehicle detection unit 302 is 2 frames per second, then in each detection period 1 frame of road image serves as the detection image and 9 frames serve as tracking images, and the determination prompting unit 304 determines whether to remind the guided vehicle to pay attention to the target vehicle by combining the detection and tracking results of these 10 frames.
In some embodiments, the determination prompting unit 304 may derive the ground equation of the road in front of the guided vehicle within the detection period by means of IPM (Inverse Perspective Mapping).
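As an illustrative aside, one common flat-road formulation expresses the ground plane in the camera coordinate system directly from the camera extrinsics; the sketch below shows that formulation under stated assumptions and is not the specific IPM procedure of the embodiments.

```python
# Illustrative sketch only: a flat-road ground-plane equation n_c^T X_c + d_c = 0 in camera
# coordinates, derived from the extrinsics. Assumptions: the road is planar, the world frame
# has its z axis up with the ground at z_w = 0, and points map as X_c = R @ X_w + t.
import numpy as np

def ground_plane_in_camera(R: np.ndarray, t: np.ndarray):
    n_w = np.array([0.0, 0.0, 1.0])   # ground normal in the world frame
    n_c = R @ n_w                     # normal expressed in the camera frame
    d_c = -float(n_c @ t)             # offset so that n_c^T X_c + d_c = 0 holds on the road
    return n_c, d_c
```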
In some embodiments, the determination prompting unit 304 may determine the position and orientation of the target vehicle based on the detection frames of the target vehicle corresponding to the tracking images within the detection period and the wheel ground line parameters of the target vehicle. In some embodiments, the determination prompting unit 304 may convert the wheel ground line parameters from the image coordinate system to the world coordinate system based on the intrinsic and extrinsic parameters of the image sensor to obtain the orientation of the target vehicle. In some embodiments, the determination prompting unit 304 may convert the detection frames of the target vehicle corresponding to the tracking images within the detection period from the image coordinate system to the world coordinate system based on the intrinsic and extrinsic parameters of the image sensor to obtain the position of the target vehicle.
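By way of illustration only, such a conversion can be pictured as back-projecting two points of the wheel ground line through the camera intrinsics onto the ground plane and taking the direction between them as the target vehicle's orientation; the intrinsics matrix K, the plane (n, d), and the sampled columns are assumptions, not the claimed conversion.

```python
# Illustrative sketch only: back-project two points of the wheel ground line (v = m*u + b in
# the image) onto the ground plane n^T X + d = 0 in camera coordinates, and take the
# direction between them as the target vehicle's heading. K is the camera intrinsics matrix.
import numpy as np

def backproject_to_ground(u: float, v: float, K: np.ndarray,
                          n: np.ndarray, d: float) -> np.ndarray:
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray, points X = s * ray
    s = -d / float(n @ ray)                          # intersect the ray with the ground plane
    return s * ray

def vehicle_orientation(m: float, b: float, K: np.ndarray, n: np.ndarray, d: float,
                        u0: float = 400.0, u1: float = 600.0) -> np.ndarray:
    p0 = backproject_to_ground(u0, m * u0 + b, K, n, d)
    p1 = backproject_to_ground(u1, m * u1 + b, K, n, d)
    heading = p1 - p0
    return heading / np.linalg.norm(heading)         # unit heading vector on the ground
```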
In some embodiments, the determination prompting unit 304 may determine whether to remind the guided vehicle to pay attention to the target vehicle based on the ground equation of the road in front of the guided vehicle determined within the detection period, the wheel ground line parameters of the target vehicle corresponding to the tracking images within the detection period, the position and orientation of the target vehicle, and the position and orientation of the host vehicle.
In some embodiments, for any wheel ground line parameters, if the wheel ground line parameters are not parallel to the ground equation in the world coordinate system, the determination prompting unit 304 predicts the travel track of the target vehicle based on the position and orientation of the target vehicle corresponding to those wheel ground line parameters, predicts the travel track of the host vehicle based on the position and orientation of the host vehicle, and then determines whether to remind the guided vehicle to pay attention to the target vehicle based on the travel track of the target vehicle and the travel track of the host vehicle.
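By way of illustration only, the final decision might be sketched as extrapolating both travel tracks with a constant-velocity model and reminding the driver when they come closer than a safety margin; the horizon, margin, and motion model are illustrative assumptions and not the claimed criterion.

```python
# Illustrative sketch only: extrapolate the travel tracks of the target vehicle and the host
# vehicle with a constant-velocity model and alert if the minimum predicted separation drops
# below a safety margin. The 3 s horizon and 2 m margin are assumptions.
import numpy as np

def should_alert(target_pos, target_heading, target_speed,
                 ego_pos, ego_heading, ego_speed,
                 horizon_s: float = 3.0, dt: float = 0.1, min_gap_m: float = 2.0) -> bool:
    t = np.arange(0.0, horizon_s, dt)[:, None]
    target_track = np.asarray(target_pos) + t * target_speed * np.asarray(target_heading)
    ego_track = np.asarray(ego_pos) + t * ego_speed * np.asarray(ego_heading)
    gaps = np.linalg.norm(target_track - ego_track, axis=1)
    return bool(gaps.min() < min_gap_m)   # remind the driver when the tracks nearly intersect
```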
Based on the travel track of the target vehicle and the travel track of the host vehicle, the determination prompting unit 304 may determine whether the target vehicle is a vehicle that affects the driving behavior of the host vehicle (the guided vehicle) and is likely to cause a traffic accident, for example a merging vehicle, a cutting-in vehicle, or a vehicle encountered when the host vehicle turns or travels toward the front left or front right.
In some embodiments, after determining to remind the guided vehicle to pay attention to the target vehicle, the determination prompting unit 304 issues the reminder visually, audibly, or both, so that the driver can notice it in time.
In some embodiments, the division of the units in the device 300 for determining prompt information is only one way of dividing logical functions; other divisions may be used in practice. For example, at least two of the acquisition unit 301, the vehicle detection unit 302, the vehicle tracking unit 303, and the determination prompting unit 304 may be implemented as a single unit, and any of them may also be divided into several sub-units. It is understood that each unit or sub-unit can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Those skilled in the art may implement the described functionality in different ways for each particular application.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. The electronic device may support the operation of the intelligent driving system. The electronic device may be an on-board device of the guided vehicle, or may be an off-board device.
As shown in fig. 4, the electronic device comprises at least one processor 401, at least one memory 402 and at least one communication interface 403. The various components in the electronic device are coupled together by a bus system 404. A communication interface 403 for information transmission with an external device. It is appreciated that the bus system 404 serves to facilitate connected communications between these components. The bus system 404 includes a power bus, a control bus, and a status signal bus in addition to the data bus. The various buses are labeled as bus system 404 in fig. 4 for clarity of illustration.
It will be appreciated that the memory 402 in this embodiment can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
In some implementations, the memory 402 stores the following elements: executable units or data structures, or subsets or extended sets thereof, an operating system, and application programs.
The operating system includes various system programs, such as a framework layer, a core library layer, and a driver layer, and is used to implement various basic tasks and process hardware-based tasks. The application programs include various applications, such as a media player and a browser, and are used to implement various application tasks. A program implementing the method for determining prompt information provided by the embodiments of the present disclosure may be included in an application program.
In the embodiment of the present disclosure, the processor 401 is configured to execute the steps of the embodiments of the method for determining the prompt information provided by the embodiment of the present disclosure by calling a program or an instruction stored in the memory 402, specifically, a program or an instruction stored in an application program.
The method for determining prompt information provided by the embodiments of the present disclosure may be applied to, or implemented by, the processor 401. The processor 401 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The steps of the method for determining prompt information provided in the embodiments of the present disclosure may be directly embodied in the execution of a hardware decoding processor, or may be executed by a combination of hardware and software units in the decoding processor. The software elements may be located in a random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory 402 and the processor 401 reads the information in the memory 402 and in combination with its hardware performs the steps of the method.
Fig. 5 is an exemplary flowchart of a method for determining hint information according to embodiments of the present disclosure. The execution subject of the method is an electronic device, and in some embodiments, the execution subject of the method may also be an intelligent driving system supported by the electronic device. For convenience of description, the following embodiment describes a flow of a method for determining prompt information by using an electronic device as an execution subject.
As shown in fig. 5, in step 501, the electronic apparatus acquires a road image of a road ahead of a guided vehicle captured by an image capturing apparatus mounted on the guided vehicle.
It is understood that, in addition to the road itself, the road image includes environmental information such as vehicles and pedestrians traveling or staying on the road, buildings on both sides of the road, and signboards. The road image may be understood as an image of the environment in front of the guided vehicle that contains road information.
In step 502, the electronic device uses the first frame of road image captured by the image capturing device as a detection image in a preset detection period, and identifies a detection frame of at least one target vehicle located around the guided vehicle from the detection image.
In some embodiments, the electronic device may input the detection image into the target detection network to obtain a target information set output by the target detection network, where each piece of target information in the set includes a target type and a target detection frame; it may then screen out the target information whose target type is vehicle from the target information set to obtain a detection frame of at least one target vehicle around the guided vehicle.
The input of the target detection network is an image, and the output is the types and detection frames of the different targets in the image. A detection frame (bounding box) is a two-dimensional box that represents the position of a target in the image.
In some embodiments, the target detection network may be trained to output the detection boxes of the target vehicle directly, i.e., the input to the target detection network is a detection image and the output is the detection boxes of all vehicles in the detection image. In the embodiment, the detection frame comprises a full frame, a tail frame and a head frame, wherein the full frame is a rectangular frame capable of framing the whole vehicle of the target vehicle, the tail frame is a rectangular frame capable of framing the tail of the target vehicle, and the head frame is a rectangular frame capable of framing the head of the target vehicle. In some embodiments, the target detection network is trained using a deep learning target detection toolbox mmdetection.
In some embodiments, the target detection network may employ different networks; for example, it may be an SSD (Single Shot MultiBox Detector) network or another target detection network from the deep learning field.
In step 503, the electronic device uses the other frames of road images captured by the image acquisition device within the detection period as tracking images and, for each frame of tracking image, determines the detection frame of the target vehicle in that tracking image and the wheel ground line parameters of the target vehicle based on the detection frames of the target vehicle identified before that tracking image and on the tracking image itself. The wheel ground line is the line connecting the contact points between the wheels and the ground. The wheel ground line parameters may be understood as the slope and intercept of the wheel ground line.
In some embodiments, the electronic device determines a set of crop frames corresponding to the detection image based on the detection frames of the at least one target vehicle in the detection image, then determines the set of cropped images corresponding to each frame of tracking image based on that set of crop frames, and inputs the set of cropped images corresponding to each frame of tracking image into the vehicle tracking network to obtain, as output of the vehicle tracking network, the detection frame of the target vehicle in each frame of tracking image and the wheel ground line parameters of the target vehicle. The wheel ground line parameters are the slope and intercept of the wheel ground line in the cropped image.
In some embodiments, for any detection frame in the detection image, the electronic device takes the center position of that detection frame as the center position of the corresponding crop frame, and enlarges both the length and the width of the detection frame by a preset multiple to use as the length and width of the crop frame, thereby obtaining the crop frame corresponding to that detection frame; this crop frame is used to crop the tracking image that immediately follows the detection image, and the set of cropped images corresponding to that tracking image is then determined from the set of crop frames corresponding to the detection image. The preset multiple is, for example, 1.2, 1.5, or 1.6; it is understood that this embodiment is only illustrative and does not limit the specific value of the preset multiple, which those skilled in the art may set according to actual needs.
In some embodiments, for any tracking image other than the frame immediately following a detection image, the electronic device determines the set of crop frames corresponding to that tracking image based on the set of detection frames corresponding to the most recent detection image preceding it and on the sets of detection frames corresponding to all tracking images between that detection image and the current tracking image, and then crops that tracking image with its set of crop frames to obtain the corresponding set of cropped images.
In some embodiments, for any tracking image other than the frame immediately following a detection image, the electronic device predicts the center position of the crop frame corresponding to that tracking image based on the center position of the detection frame of the target vehicle in the most recent detection image preceding it and the center positions of the detection frames of the same target vehicle in all tracking images between that detection image and the current tracking image, and then enlarges both the length and the width of the detection frame of the same target vehicle in the previous tracking image by the preset multiple to use as the length and width of the crop frame corresponding to the current tracking image, thereby obtaining the corresponding crop frame. The preset multiple is, for example, 1.2, 1.5, or 1.6; it is understood that this embodiment is only illustrative and does not limit the specific value of the preset multiple, which those skilled in the art may set according to actual needs.
In some embodiments, the electronic device may input the clipping image set corresponding to each frame of tracking image into the vehicle tracking network, so as to obtain a detection frame of the target vehicle in each frame of tracking image and a wheel grounding line parameter of the target vehicle output by the vehicle tracking network.
In some embodiments, the vehicle tracking network may employ a CNN (Convolutional Neural Network) regression network, or any tracking network used in computer vision. In some embodiments, the vehicle tracking network may be deployed with the deep neural network inference framework MNN.
In some embodiments, the electronic device detects or tracks in real time. For example, each time the image acquisition device shoots a frame of road image, the electronic device performs detection or tracking once, that is, if the frame of road image is taken as a detection image, the electronic device performs target vehicle detection on the frame of road image, and if the frame of road image is taken as a tracking image, the electronic device performs target vehicle tracking on the frame of road image. The electronic equipment determines the detection frame of the target vehicle in the frame road image and the wheel grounding line parameters of the target vehicle based on the detection frame of the target vehicle which is identified before the frame road image and the frame road image.
In some embodiments, the electronic device detects and tracks in batches. For example, when the image acquisition device captures multiple frames of road images within a preset detection period, the electronic device processes those frames as a batch: the first road image in the detection period is treated as the detection image and undergoes target vehicle detection, while the remaining road images in the period are treated as tracking images and undergo target vehicle tracking. For each tracking image, the electronic device determines the detection frame of the target vehicle and the wheel grounding line parameters of the target vehicle based on that tracking image and the detection frames of the target vehicle identified in earlier frames.
In step 504, the electronic device determines whether to alert the lead vehicle to pay attention to the target vehicle based on the ground equation of the road ahead of the lead vehicle determined in the detection period, the detection frame of the target vehicle corresponding to the tracking image in the detection period, and the wheel ground line parameter of the target vehicle.
For example, if the image acquisition frame rate of the image acquisition device is 20 frames per second and the image detection frame rate is 2 frames per second, then each detection period contains 1 road image used as the detection image and 9 road images used as tracking images, and the electronic device determines whether to remind the lead vehicle to pay attention to the target vehicle by combining the detection and tracking results of these 10 frames.
In some embodiments, the electronic device may derive the ground equation of the road ahead of the lead vehicle during the detection period by means of IPM (inverse perspective mapping).
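As a hedged sketch of the IPM idea referenced above (not this disclosure's exact ground-equation derivation), the snippet below maps four image points with known ground-plane positions to a metric top-down view; OpenCV is used only for illustration, and the point coordinates and metres-to-pixels scale are made-up placeholders.

```python
# Illustrative IPM via a homography between image points and ground-plane points.
import cv2
import numpy as np

img_pts = np.float32([[420, 480], [860, 480], [1200, 700], [80, 700]])  # pixels (placeholders)
ground_pts = np.float32([[0, 30], [3.5, 30], [3.5, 10], [0, 10]]) * 20  # metres scaled to 20 px/m

H = cv2.getPerspectiveTransform(img_pts, ground_pts)  # 3x3 homography, flat-road assumption

def to_bird_view(image: np.ndarray, size=(200, 800)) -> np.ndarray:
    """Warp the front-camera image onto the ground plane (top-down view)."""
    return cv2.warpPerspective(image, H, size)
```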
In some embodiments, the electronic device determines the position and orientation of the target vehicle based on the detection frames of the target vehicle corresponding to the tracking images in the detection period and the wheel grounding line parameters of the target vehicle, and detects the position and orientation of the lead vehicle. It then determines whether to remind the lead vehicle to pay attention to the target vehicle based on the ground equation of the road ahead of the lead vehicle determined in the detection period, the wheel grounding line parameters of the target vehicle corresponding to the tracking images in the detection period, the position and orientation of the target vehicle, and the position and orientation of the lead vehicle.
In some embodiments, the electronic device converts the wheel grounding line parameters from the image coordinate system to the world coordinate system based on the intrinsic and extrinsic parameters of the image acquisition device to obtain the orientation of the target vehicle, and likewise converts the detection frames of the target vehicle corresponding to the tracking images in the detection period from the image coordinate system to the world coordinate system to obtain the position of the target vehicle.
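A minimal sketch of one common way to perform such an image-to-world conversion, assuming a pinhole camera with intrinsics K, extrinsics (R, t), and a known ground plane; this disclosure only states that intrinsic and extrinsic parameters are used, so the specific camera and plane model below is an assumption for illustration.

```python
# Back-project a pixel onto a ground plane n·X + d = 0 expressed in the world frame.
import numpy as np

def pixel_to_ground(u, v, K, R, t, plane_n, plane_d):
    """Intersect the viewing ray of pixel (u, v) with the plane n·X + d = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction in the camera frame
    ray_w = R.T @ ray_cam                               # ray direction in the world frame
    cam_center = -R.T @ t                               # camera center in the world frame
    s = -(plane_n @ cam_center + plane_d) / (plane_n @ ray_w)
    return cam_center + s * ray_w                       # 3-D point on the ground plane

# Two wheel-ground-line pixels converted this way give a direction on the ground,
# from which the target vehicle's orientation can be read off.
```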
In some embodiments, for any wheel grounding line parameter, if in the world coordinate system the wheel grounding line is not parallel to the ground given by the ground equation, the electronic device predicts the travel track of the target vehicle based on the position and orientation of the target vehicle corresponding to that wheel grounding line parameter, predicts the travel track of the lead vehicle based on the position and orientation of the lead vehicle, and determines whether to remind the lead vehicle to pay attention to the target vehicle based on the travel track of the target vehicle and the travel track of the lead vehicle.
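As an illustration of the trajectory-based decision described above, the sketch below extrapolates both vehicles with a constant-velocity straight-line model and alerts when the predicted tracks come closer than a threshold; the motion model, horizon, and threshold are assumptions for illustration, not values taken from this disclosure.

```python
import numpy as np

def predict_track(pos, heading_rad, speed, horizon=3.0, dt=0.1):
    """Constant-velocity straight-line prediction; returns an (N, 2) array of positions."""
    steps = np.arange(0.0, horizon, dt)
    direction = np.array([np.cos(heading_rad), np.sin(heading_rad)])
    return np.asarray(pos, dtype=float) + np.outer(steps * speed, direction)

def should_alert(target_track, ego_track, min_gap=2.0):
    """Alert if the two predicted tracks ever come within min_gap metres of each other."""
    gaps = np.linalg.norm(target_track - ego_track, axis=1)
    return bool(np.min(gaps) < min_gap)

# Example: a target cutting across the ego lane triggers an alert.
ego = predict_track((0.0, 0.0), heading_rad=np.pi / 2, speed=10.0)
tgt = predict_track((5.0, 15.0), heading_rad=np.pi, speed=6.0)
print(should_alert(tgt, ego))
```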
In some embodiments, based on the travel track of the target vehicle and the travel track of the lead vehicle, the electronic device may determine whether the target vehicle is a vehicle that affects the driving behavior of the lead vehicle and is liable to cause a traffic accident, for example a lane-changing (parallel-line) vehicle, a vehicle cutting in, or a vehicle encountered when the lead vehicle turns or when the target vehicle travels ahead to the left or right, and so on.
In some embodiments, after determining to remind the lead vehicle to pay attention to the target vehicle, the electronic device issues the reminder visually, audibly, or as a combination of the two, so that the driver can notice the reminder in time.
In some embodiments, the image acquisition device captures road images at an image acquisition frame rate of 20 frames per second. With a preset detection period of 500 milliseconds, the image detection frame rate is 2 frames per second, that is, 2 road images per second are used as detection images. Each detection period then contains 9 road images used as tracking images, so the electronic device can track the target vehicle in 18 road images per second.
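The frame budget in this example can be checked with a few lines of arithmetic (values taken from the paragraph above):

```python
capture_fps = 20                 # image acquisition frame rate
detection_period_s = 0.5         # preset detection period (500 ms)
frames_per_period = int(capture_fps * detection_period_s)            # 10 frames per period
detections_per_second = int(1 / detection_period_s)                  # 2 detection images per second
tracking_frames_per_second = capture_fps - detections_per_second     # 18 tracking images per second
```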
In some embodiments, within one detection period, the center position of the crop frame for the 2nd road image (used as a tracking image) is the center position of the target vehicle's detection frame in the 1st road image (used as the detection image). The center position of the crop frame for the 3rd road image (a tracking image) is predicted from the center positions of the same target vehicle's detection frames in the 1st and 2nd frames, and the center position of the crop frame for the 4th road image (a tracking image) is predicted from the center positions of the same target vehicle's detection frames in the 1st, 2nd, and 3rd frames. The prediction may be performed by connecting the center positions of the target vehicle's detection frames in the different frames and extrapolating along the line connecting them, as in the sketch below.
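A minimal sketch of the extrapolation just described, assuming a simple linear trend of the box center over the frame index (the disclosure only requires extrapolating along the line connecting the centers):

```python
import numpy as np

def predict_next_center(centers):
    """centers: list of (x, y) box centers of the same target in earlier frames."""
    pts = np.asarray(centers, dtype=float)
    if len(pts) == 1:
        return tuple(pts[0])                  # 2nd frame: reuse the detection-frame center
    frames = np.arange(len(pts))
    fx = np.polyfit(frames, pts[:, 0], 1)     # linear trend of x over frame index
    fy = np.polyfit(frames, pts[:, 1], 1)     # linear trend of y over frame index
    nxt = len(pts)
    return (np.polyval(fx, nxt), np.polyval(fy, nxt))

# E.g. frame-3 crop center from the frame-1 and frame-2 centers of the same vehicle:
predict_next_center([(320, 240), (328, 238)])   # -> approximately (336, 236)
```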
With reference to the method for determining prompt information provided in the above embodiments, and taking as an example the case where target vehicle detection is performed every f frames (i.e., on frames 0, f, 2f, …), the flow of the method for determining prompt information is described in the following 5 steps:
1. Detect the target vehicle with the target detection network in frames 0, f, …, nf. The target detection network is an SSD (Single Shot multibox Detector) network from the field of deep learning.
2. Based on the detection frames obtained in frames 0, f, …, nf, crop the images of frames 1, f+1, …, nf+1 and feed the cropped images into the vehicle tracking network to obtain the corresponding full frames in frames 1, f+1, …, nf+1, while regressing the slope and intercept of the wheel grounding line. The vehicle tracking network is a CNN regression network.
3. Estimate the center positions of the full frames for frames 2, f+2, …, nf+2 from the full frames obtained in frames 0 and 1, f and f+1, …, nf and nf+1, and expand the lengths and widths of the full frames of frames 1, f+1, …, nf+1 by the preset multiple. Crop the images of frames 2, f+2, …, nf+2 accordingly and feed them into the vehicle tracking network to obtain the corresponding full frames in frames 2, f+2, …, nf+2, while regressing the wheel grounding line parameters of the target vehicle. The wheel grounding line parameters are the slope and intercept of the wheel grounding line in the cropped image.
4. Repeat step 3 for frames 3 to f−1, f+3 to 2f−1, …, nf+3 to (n+1)f−1 to obtain the corresponding full frames and wheel grounding line parameters.
5. Obtain the ground equation of the road ahead of the lead vehicle in the detection period by IPM, determine the position and orientation of the target vehicle in a three-dimensional coordinate system based on the full frames of the target vehicle corresponding to the tracking images in the detection period and the wheel grounding line parameters of the target vehicle, detect the position and orientation of the lead vehicle in the three-dimensional coordinate system, and determine whether to remind the lead vehicle to pay attention to the target vehicle based on the ground equation, the wheel grounding line parameters, the position and orientation of the target vehicle, and the position and orientation of the lead vehicle.
Fig. 6 is an exemplary application scenario diagram provided by an embodiment of the present disclosure; based on the method for determining prompt information provided in the foregoing embodiments, the detection frames of the different target vehicles in Fig. 6 and the wheel grounding lines of the different target vehicles can be obtained. Fig. 7 is a schematic projection view of the wheel grounding lines in the application scenario shown in Fig. 6, generated by transforming Fig. 6 in an IPM manner. Referring to Fig. 7, the white lines are the wheel grounding lines, and the black regions can be understood as unrelated to the wheel grounding lines.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but those skilled in the art can appreciate that the disclosed embodiments are not limited by the order of actions described, as some steps may occur in other orders or concurrently in accordance with the disclosed embodiments. In addition, those skilled in the art will appreciate that the embodiments described in the specification are all optional embodiments.
The embodiments of the present disclosure further provide a non-transitory computer-readable storage medium storing a program or instructions that cause a computer to perform the steps of the embodiments of the method for determining prompt information; to avoid repetition, the description is not repeated here.
The embodiments of the present disclosure also provide a computer program product. The computer program product includes a computer program stored in a non-transitory computer-readable storage medium; at least one processor of the computer reads the computer program from the storage medium and executes it, so that the computer performs the steps of the embodiments of the method for determining prompt information, which are not described here again to avoid repetition.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of additional identical elements in a process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the disclosure and form different embodiments.
Those skilled in the art will appreciate that the descriptions of the various embodiments are each focused on, and that portions of one embodiment that are not described in detail may be referred to as related descriptions of other embodiments.
Although embodiments of the present disclosure have been described with reference to the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the disclosure, and such modifications and variations fall within the scope defined by the appended claims.
Claims (13)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011284279.2A CN114511834B (en) | 2020-11-17 | 2020-11-17 | Method and device for determining prompt information, electronic equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114511834A CN114511834A (en) | 2022-05-17 |
| CN114511834B true CN114511834B (en) | 2025-10-31 |
Family
ID=81547251
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011284279.2A Active CN114511834B (en) | 2020-11-17 | 2020-11-17 | Method and device for determining prompt information, electronic equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114511834B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116363631B (en) * | 2023-05-19 | 2023-09-05 | 小米汽车科技有限公司 | Three-dimensional target detection method and device and vehicle |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102737236A (en) * | 2012-07-06 | 2012-10-17 | 北京大学 | Method for automatically acquiring vehicle training sample based on multi-modal sensor data |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9696723B2 (en) * | 2015-06-23 | 2017-07-04 | GM Global Technology Operations LLC | Smart trailer hitch control using HMI assisted visual servoing |
| JP6473684B2 (en) * | 2015-11-11 | 2019-02-20 | 日立建機株式会社 | Wheel slip angle estimating apparatus and method |
| CN111220197B (en) * | 2016-09-12 | 2022-02-22 | 上海沃尔沃汽车研发有限公司 | Test system and test method for lane line deviation alarm system |
| US10196058B2 (en) * | 2016-11-28 | 2019-02-05 | drive.ai Inc. | Method for influencing entities at a roadway intersection |
- 2020-11-17 CN CN202011284279.2A patent/CN114511834B/en active Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN114511834A (en) | 2022-05-17 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |