CN114353818A - Target object following method, apparatus, device, medium, and computer program product - Google Patents
- Publication number
- CN114353818A (application CN202111668239.2A)
- Authority
- CN
- China
- Prior art keywords
- target object
- vehicle
- following
- motion track
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The application discloses a target object following method, apparatus, device, medium, and computer program product. The method includes: in a vehicle target object following mode, acquiring an image sequence of a target object during its movement; determining a set of motion track points of the target object from the image sequence; determining a set of quasi-motion track points for the vehicle from the target object's motion track points and relative position information; planning a driving path for the vehicle from the vehicle's quasi-motion track points; and following the target object along the driving path. According to the embodiments of the application, the vehicle can automatically follow the target object.
Description
Technical Field
The present application belongs to the field of target tracking technologies, and in particular, relates to a target object following method, apparatus, device, medium, and computer program product.
Background
With economic, scientific, and technological development, people's expectations for quality of life keep rising. When a user takes a long-distance trip or goes off-road, driving can be tiring; if the user gets out on a flat stretch of road to take photos, the user may want the vehicle to actively follow along, so that there is no need to walk back to the parking spot before setting off again.
Therefore, how to make a vehicle automatically follow a target object has become an urgent technical problem.
Disclosure of Invention
The target object following method, apparatus, device, medium, and computer program product provided by the embodiments of the application enable a vehicle to automatically follow a target object.
In a first aspect, an embodiment of the present application provides a target object following method applied to a vehicle, including:
in a vehicle target object following mode, acquiring an image sequence of a target object during its movement;
determining a motion track point set of the target object according to the image sequence;
determining a quasi-motion track point set of a vehicle according to the motion track point set of the target object and relative position information, wherein the relative position information indicates the relative position between the vehicle and the target object;
planning a driving path of the vehicle according to the quasi-motion track point set of the vehicle;
and following the target object according to the driving path.
In some embodiments, the determining a set of motion trajectory points of the target object according to the sequence of images includes:
determining the motion track points of the target object according to the image sequence and a target object following regression model.
In some embodiments, the target object following regression model comprises a first feature extraction network and a second feature extraction network in parallel, and a fully connected layer;
determining the motion track points of the target object according to the image sequence and the target object following regression model specifically includes:
extracting a first target object feature from a first frame image through the first feature extraction network, and extracting a second target object feature from a second frame image through the second feature extraction network, wherein the first frame image precedes the second frame image in the image sequence;
inputting the first target object feature and the second target object feature into the fully connected layer, and determining the relative motion trend of the target object through the fully connected layer;
determining a coordinate point of a preset feature point of a bounding box of the target object in the second frame image according to the relative motion trend;
and determining a motion track point of the target object in the second frame image according to the coordinate point of the preset feature point of the bounding box.
In some embodiments, the method further includes: extracting a coordinate point of an Oriented FAST and Rotated BRIEF (ORB) feature point of the target object in the second frame image;
converting the coordinate point of the ORB feature point into a first pixel coordinate point in the pixel coordinate system according to a first coordinate conversion relation between coordinate systems;
the determining a motion track point of the target object in the second frame image according to the coordinate point of the preset feature point of the bounding box includes:
converting the coordinate point of the preset feature point of the bounding box into a second pixel coordinate point in the pixel coordinate system according to a second coordinate conversion relation between coordinate systems;
and generating a motion track point of the target object in the second frame image according to the first pixel coordinate point and the second pixel coordinate point.
In some embodiments, planning the driving path of the vehicle according to the quasi-motion track point set of the vehicle includes:
when the number of track points in the quasi-motion track point set of the vehicle is greater than a preset value, planning the driving path of the vehicle according to the quasi-motion track point set of the vehicle.
In some embodiments, after following the target object according to the driving path, the method further includes:
while following the target object, upon detecting an obstacle in the driving path, stopping following the target object and sending first prompt information to a target terminal, wherein the target terminal is in communication connection with the vehicle, and the first prompt information is used to indicate that the vehicle has stopped following the target object.
In some embodiments, before acquiring the image sequence of the target object during the movement, the method further comprises:
acquiring an image to be identified acquired by a camera;
and identifying the target object according to the image to be identified.
In some embodiments, before acquiring the image to be recognized captured by the camera, the method further includes:
receiving second prompt information sent by the target terminal;
and starting a vehicle target object following mode according to the second prompt message.
In a second aspect, an embodiment of the present application provides a target object following apparatus, including:
the acquisition module is used for acquiring an image sequence of a target object in a motion process under the condition of a vehicle target object following mode;
the first determining module is used for determining a motion track point set of the target object according to the image sequence;
the second determining module is used for determining a quasi-motion track point set of the vehicle according to the motion track point set and relative position information of the target object, wherein the relative position information is the relative position information between the vehicle and the target object;
the planning module is used for planning the driving path of the vehicle according to the quasi-motion track point set of the vehicle;
and the following module is used for following the target object according to the driving path.
In a third aspect, an embodiment of the present application provides a vehicle, including:
the camera is used for acquiring an image sequence of a target object in the motion process under the condition of a vehicle target object following mode;
a processor for performing the target object following method as described in any of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a target object following device, where the device includes: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the target object following method described in any embodiment of the present application.
In a fifth aspect, the present application provides a computer storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement a target object following method as described in any embodiment of the present application.
In a sixth aspect, the present application provides a computer program product, and when executed by a processor of an electronic device, the instructions of the computer program product cause the electronic device to execute a target object following method as described in any embodiment of the present application.
According to the target object following method, apparatus, device, medium, and computer program product provided by the embodiments of the application, in a vehicle target object following mode, a motion track point set of a target object is first determined from an image sequence of the target object during its movement, and a quasi-motion track point set of the vehicle is then determined from that motion track point set and relative position information. The driving path planned from the quasi-motion track point set therefore matches the motion path of the target object, so the vehicle can accurately follow the target object along the driving path, realizing automatic following of the target object by the vehicle.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a coordinate system transformation provided in an embodiment of the present application;
FIG. 2 is a schematic flowchart of a target object following method according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of another target object following method provided in the embodiments of the present application;
FIG. 4 is a schematic flow chart illustrating a target object following method according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart illustrating a target object following method according to an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of another target object following method provided in an embodiment of the present application;
fig. 7a is a schematic flowchart of a target object following method in an application scenario according to an embodiment of the present application;
FIG. 7b is a schematic diagram of a vehicle following a target object in an application scenario provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a target object following method provided in an embodiment of the present application;
fig. 9 is a schematic diagram of a target object following method and apparatus provided in an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below. To make the objects, technical solutions, and advantages of the present application clearer, the present application is further described in detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and not limiting. It will be apparent to those skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by way of example.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Before explaining technical solutions provided by the embodiments of the present application, in order to facilitate understanding of the embodiments of the present application, specific terms are first introduced in the present application.
Image coordinate system: a planar rectangular coordinate system whose origin is the center of the rectangular image captured by the camera, with coordinate axes extending along the directions of the image borders.
Pixel coordinate system: a planar rectangular coordinate system whose origin is a corner point of the rectangular image captured by the camera, with coordinate axes extending in the same directions as those of the image coordinate system.
Camera coordinate system: a three-dimensional Cartesian coordinate system whose origin is a preset point representing the position of the vehicle.
As an example, the relationship between the image coordinate system and the pixel coordinate system may be as shown in FIG. 1, where the x-axis and y-axis are the coordinate axes of the image coordinate system, the u-axis and v-axis are the coordinate axes of the pixel coordinate system, O1 represents the origin of the image coordinate system, and O0 represents the origin of the pixel coordinate system. It will be appreciated that the coordinate scales of the pixel coordinate system and the image coordinate system are generally not the same.
As described in the background, sometimes a user desires that a vehicle automatically follow himself, so that the user does not need to return to a parking spot again when he wants to continue driving the vehicle.
In order to solve the prior art problems, embodiments of the present application provide a target object following method, apparatus, device, medium, and computer program product for a vehicle.
First, a target object following method provided in an embodiment of the present application is described below.
Fig. 2 shows a schematic flowchart of a target object following method provided in an embodiment of the present application, where the method includes:
s110, under the condition of a vehicle target object following mode, acquiring an image sequence of a target object in the motion process.
And S120, determining a motion track point set of the target object according to the image sequence.
And S130, determining a quasi-motion track point set of the vehicle according to the motion track point set and the relative position information of the target object, wherein the relative position information is the relative position information between the vehicle and the target object.
And S140, planning the running path of the vehicle according to the quasi-motion track point set of the vehicle.
And S150, following the target object according to the driving path.
In the embodiment of the application, in the vehicle target object following mode, after the motion track point set of the target object is determined from the image sequence of the target object during its movement, the quasi-motion track point set of the vehicle is determined from that motion track point set and the relative position information. The driving path planned from the quasi-motion track point set then matches the motion track of the target object, so the vehicle can accurately follow the target object along the driving path, realizing automatic following of the target object by the vehicle.
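The steps S110-S150 can be sketched end to end as follows. All function names, the stub track-point extraction, and the fixed lateral offset are illustrative assumptions for the sketch, not the patent's implementation.

```python
# Illustrative sketch of the S110-S150 pipeline. All names and the
# fixed offset are assumptions for illustration, not the patent's code.

def track_points_from_images(image_sequence):
    # S120: one (x, y) motion track point per frame
    # (stub: each "image" here is already reduced to a point)
    return list(image_sequence)

def quasi_track_points(track_points, offset):
    # S130: shift each target track point by the vehicle-target offset
    dx, dy = offset
    return [(x + dx, y + dy) for x, y in track_points]

def plan_path(points, min_points=3):
    # S140: only plan once enough quasi track points have accumulated
    if len(points) < min_points:
        return None
    return points  # a real planner would connect and smooth these

def follow(image_sequence, offset=(0.0, -5.0)):
    pts = track_points_from_images(image_sequence)   # S110/S120
    quasi = quasi_track_points(pts, offset)          # S130
    return plan_path(quasi)                          # S140; S150 follows this path
```

The sketch returns no path until a minimum number of quasi-motion track points is available, mirroring the "greater than a preset value" condition described in the embodiments.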
For convenience of description, the specific procedure of the target object following method is explained below with the target object following device as the execution subject.
In some embodiments, in S110, the target object may be a user of the current vehicle. When the vehicle is in the vehicle target object following mode, it tracks and follows the user; the target object following device obtains the image sequence of the user during movement from images captured by a camera mounted on the vehicle.
In order to improve the accuracy of automatic following, as shown in the flowchart of another target object following method in fig. 3, in some embodiments the method may further include, before acquiring the image sequence of the target object during movement, S210-S220:
and S210, acquiring the image to be recognized acquired by the camera.
In some embodiments, the target object following device takes the current image captured by a camera mounted on the vehicle as the image to be recognized in S210. The manner in which the camera captures images is not limited in the embodiments of the present application: images may be captured at a certain time interval, for example one image every w milliseconds, or obtained by processing video data captured by the camera in real time.
In some embodiments, the camera may be a side view camera mounted on the vehicle.
S220, identifying the target object according to the image to be identified.
In some embodiments, in S220, the target object following device performs face recognition on the user through a face recognition model to determine the identity of the user, ensuring that the vehicle subsequently follows that specific user.
As an example, the target object following device may identify the user in the image to be identified after training a face recognition model based on Libfacedetection, an open-source image face detection library built on a Convolutional Neural Network (CNN).
In the embodiment of the application, the target object is identified before following begins, ensuring that the specific object is followed, avoiding following the wrong object, and improving the accuracy of vehicle following.
In order to meet users' needs for vehicle following in different application scenarios, as shown in the flowchart of a target object following method in fig. 4, in some embodiments the method may further include, before acquiring the image to be recognized captured by the camera, S310-S320:
s310: and receiving second prompt information sent by the target terminal.
In some embodiments, in S310, the target terminal is a mobile terminal of the user. When the user needs vehicle following and selects starting the vehicle target object following mode on the mobile terminal, the mobile terminal sends to the target object following device second prompt information instructing it to start the vehicle target object following mode, and the target object following device receives this information. For example, the user may start the vehicle target object following mode via a mobile phone after drinking, to avoid drunk driving, or when the user wants to stop and take photos during a trip.
S320: and starting a vehicle target object following mode according to the second prompt message.
In some embodiments, the target object following device starts the vehicle target object following mode upon receiving the second prompt information in S320.
In the embodiment of the application, the vehicle target object following mode is started only after the second prompt information sent by the target terminal is received; that is, when the user wants the vehicle to follow, the user can control the vehicle through the target terminal to start following, which meets users' needs for vehicle following in different application scenarios.
In some embodiments, in S120, the image sequence includes a plurality of images, and the position of the user changes from each image to the next. The target object following device represents the user as a point and records the position changes of the user on the same image, obtaining the set of motion track points of the user during movement.
To improve the acquisition efficiency of the motion trajectory point, in some embodiments, S120 may include:
determining the motion track points of the target object according to the image sequence and the target object following regression model.
In this embodiment, the target object following device inputs each frame of image in the image sequence into the target object following regression model, and processes the image through the target object following regression model to output the motion track point set of the user.
In the embodiment of the application, the motion track points of the target object are determined from the image sequence during the movement of the target object and the target object following regression model; since the model determines the track points autonomously, the acquisition efficiency of the motion track points is improved.
In order to improve the acquisition efficiency of the motion track points, in some embodiments, the target object following regression model comprises a first feature extraction network and a second feature extraction network in parallel, and a full connection layer,
determining a motion trajectory of the target object according to the image sequence and a target object following regression model may include:
extracting a first target object feature of a first frame image through a first feature extraction network, and extracting a second target object feature of a second frame image through a second feature extraction network, wherein the first frame image is an image in the image sequence before the second frame image;
in some embodiments, the second frame image is the current frame in the image sequence. The target object following device inputs the frame preceding the current frame into the first feature extraction network and the current frame into the second feature extraction network; the two networks process their inputs in parallel, and after the same kind of target object features are extracted from each image, the extracted features are input into the fully connected layer.
In some embodiments, the target object feature includes feature information indicating the target object and feature information of an object in an environment surrounding the target object.
Inputting the first target object feature and the second target object feature into the fully connected layer, and determining the relative motion trend of the target object through the fully connected layer;
in some embodiments, before inputting the first target object feature and the second target object feature into the fully-connected layer, the method may further include fusing the first target object feature and the second target object feature, and inputting the fused target object feature into the fully-connected layer.
As an example, the target object following regression model may be a CNN-based model, and the first and second feature extraction networks may be two CNN-based sub-models within it: one is responsible for extracting features for identifying the user in the current frame, and the other for extracting those features in the previous frame. After features are extracted from the two adjacent frames, they are fused and passed to the fully connected layer, which predicts the movement trend of the user from the fused features and outputs the relative motion trend of the user in the current frame with respect to the previous frame.
Determining a coordinate point of a preset feature point of a boundary frame of the target object in the second frame image according to the relative motion trend;
in some embodiments, the preset feature point is a point in a preselected bounding box. According to the predicted relative motion trend of the user in the current frame, the target object following device uses the fully connected layer to define a bounding box covering the range within which the user may move under that trend, and outputs the coordinates of the preset feature point of the bounding box in the image coordinate system.
As one example, the preset feature point may be a center point of the bounding box.
and determining a motion track point of the target object in the second frame image according to the coordinate point of the preset feature point of the bounding box.
In the embodiment of the application, the two adjacent frames are processed by the first feature extraction network, the second feature extraction network, and the fully connected layer to obtain the relative motion trend of the target object; the bounding box of the target object in the second frame image is then determined from that trend, defining the displacement range of the target in the second frame image. The motion track point is then determined directly from the coordinate point of the preset feature point of the bounding box, without complicated calculation after the target object moves, which improves the efficiency of obtaining motion track points.
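As a toy illustration of this two-branch-plus-fully-connected structure, the sketch below stands in simple centroid "features" for the CNN branches and hand-picked linear weights for the fully connected layer; every name and number is an assumption for illustration, not the patent's trained model.

```python
# Toy sketch of the parallel two-branch + fully-connected structure.
# Real branches would be CNNs; a centroid "feature" stands in, and the
# FC weights below are illustrative, not trained values.

def extract_features(frame):
    # stand-in feature extractor: centroid of the target's pixel coords
    xs = [p[0] for p in frame]
    ys = [p[1] for p in frame]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def fully_connected(fused, weights):
    # one linear layer: maps fused features to a relative motion trend
    return tuple(sum(w * f for w, f in zip(row, fused)) for row in weights)

def relative_motion_trend(prev_frame, curr_frame):
    f1 = extract_features(prev_frame)   # first branch: previous frame
    f2 = extract_features(curr_frame)   # second branch: current frame
    fused = f1 + f2                     # feature fusion by concatenation
    # weights chosen so the "trend" is simply the centroid displacement
    weights = [(-1, 0, 1, 0), (0, -1, 0, 1)]
    return fully_connected(fused, weights)
```

With these weights the predicted trend reduces to the displacement of the target's centroid between the two frames, which is the kind of relative motion the fully connected layer is described as outputting.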
In order to improve the accuracy of automatic following of the vehicle, in some embodiments, as shown in the flowchart of a target object following method in fig. 5, the method may further include S410-S440:
and S410, extracting coordinate points of the ORB feature points of the target object in the second frame of image.
And S420, converting the coordinate point of the ORB characteristic point into a first pixel coordinate point in a pixel coordinate system according to the coordinate conversion relation among coordinate systems.
And S430, converting the coordinate point of the preset feature point of the boundary frame into a second pixel coordinate point in a pixel coordinate system according to the coordinate conversion relation among the coordinate systems.
And S440, generating a motion track point of the target object in the second frame image according to the first pixel coordinate point and the second pixel coordinate point.
In the embodiment of the application, the motion track point of the target object in the second frame image is generated from the second pixel coordinate point of the preset feature point of the bounding box and the first pixel coordinate point of the ORB feature point of the target object in the second frame image. That is, on the basis of the motion range defined by the bounding box in the second frame image, the ORB feature point is further combined to obtain the motion track point, which improves the accuracy of calculating the motion track point and in turn the accuracy of vehicle following.
In some embodiments, in S410, the target object following device extracts the ORB feature point of the user from the light and dark values of the pixels in the second frame image, and then calculates the position of the ORB feature point relative to the origin of the camera coordinate system through a pose estimation algorithm, obtaining the coordinate point of the ORB feature point in the camera coordinate system. Pose estimation algorithms are prior art and are not described here.
In some embodiments, in S420, the first coordinate conversion relation is the relation for converting a three-dimensional coordinate system into a two-dimensional coordinate system; through this relation, the target object following device converts the coordinate point of the ORB feature point in the camera coordinate system into the first pixel coordinate point in the pixel coordinate system. Converting a three-dimensional coordinate system into a two-dimensional coordinate system is prior art and is not described here.
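The patent does not spell out this three-dimensional-to-two-dimensional conversion; one common instance of projecting a camera-coordinate point to pixel coordinates is the pinhole camera model sketched below. The intrinsic parameters fx, fy, cx, cy are assumed example values, not values from the patent.

```python
# One common instance of converting a 3-D camera-coordinate point to a
# 2-D pixel coordinate: the pinhole camera model. The intrinsics
# (fx, fy, cx, cy) are camera-dependent assumptions for illustration.

def camera_to_pixel(point_3d, fx, fy, cx, cy):
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    u = fx * X / Z + cx   # horizontal pixel coordinate
    v = fy * Y / Z + cy   # vertical pixel coordinate
    return (u, v)
```

For example, a point one unit right and half a unit down at depth two projects near the right-lower quadrant of the image for typical intrinsics.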
In some embodiments, in S430, the second coordinate conversion relation is the relation for converting the image coordinate system into the pixel coordinate system; through this relation, the target object following device converts the coordinate point of the preset feature point of the bounding box in the image coordinate system into the second pixel coordinate point in the pixel coordinate system.
In some embodiments, referring to fig. 1, the conversion from the image coordinate system to the pixel coordinate system may be given by Formula 1, which converts the coordinate point of the preset feature point of the bounding box in the image coordinate system into the second pixel coordinate point in the pixel coordinate system:

u = x/dx + u0, v = y/dy + v0 (Formula 1)

where u represents the abscissa in the pixel coordinate system; v represents the ordinate in the pixel coordinate system; u0 and v0 respectively represent the number of horizontal and vertical pixels, measured in the image coordinate system, between the origin of the image coordinate system and the origin of the pixel coordinate system; and dx and dy denote the unit length occupied by one pixel along the x-axis and y-axis directions of the image coordinate system, respectively.
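A minimal sketch of this image-to-pixel conversion, using the standard relation u = x/dx + u0, v = y/dy + v0 that is consistent with the symbol definitions above; the parameter values passed in any call are camera-dependent and purely illustrative.

```python
# Sketch of the image-coordinate to pixel-coordinate conversion.
# u0, v0 locate the image-coordinate origin in pixels; dx, dy are the
# physical unit lengths per pixel. All values used are illustrative.

def image_to_pixel(x, y, u0, v0, dx, dy):
    u = x / dx + u0   # horizontal pixel index
    v = y / dy + v0   # vertical pixel index
    return (u, v)
```

The image-coordinate origin (0, 0) maps exactly to the pixel offset (u0, v0), matching the figure's relation between O1 and O0.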
In some embodiments, in S440, the target object following device generates a motion trajectory point of the user in the current image based on the first pixel coordinate point and the second pixel coordinate point, and adds the motion trajectory point to the motion trajectory point set.
In some embodiments, in S130, the relative position information is a preset coordinate offset in the pixel coordinate system, after obtaining the motion track point of the user in the current image, the target object following device offsets the coordinate of the motion track point in the pixel coordinate system according to the preset coordinate offset, where the motion track point after being offset is a quasi motion track point of the vehicle, and adds the quasi motion track point to the quasi motion track point set.
As one example, the preset coordinate offset may be a preset distance in a horizontal direction between the vehicle and the user.
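Steps S440 and S130 above can be sketched together. The text does not specify how the first (ORB) and second (bounding-box) pixel coordinate points are fused into a trajectory point, so a simple average of the ORB centroid and the bounding-box point is used here purely as an illustration; the preset coordinate offset is likewise a hypothetical value.

```python
# Hedged sketch of S440/S130: fuse pixel points into a user trajectory
# point, then shift it by the preset coordinate offset to obtain the
# vehicle's quasi-motion trajectory point. The fusion rule and offset are
# assumptions, not taken from the text.

def motion_trajectory_point(orb_pixel_points, box_pixel_point):
    """Fuse ORB feature pixel points and the bounding-box pixel point into
    one motion trajectory point of the user (centroid averaged with box)."""
    cu = sum(u for u, _ in orb_pixel_points) / len(orb_pixel_points)
    cv = sum(v for _, v in orb_pixel_points) / len(orb_pixel_points)
    return ((cu + box_pixel_point[0]) / 2, (cv + box_pixel_point[1]) / 2)

PRESET_OFFSET = (-40.0, 0.0)  # hypothetical horizontal offset, in pixels

def quasi_motion_point(track_point, offset=PRESET_OFFSET):
    """Shift a user trajectory point by the preset coordinate offset to
    obtain the vehicle's quasi-motion trajectory point."""
    return (track_point[0] + offset[0], track_point[1] + offset[1])

track_set, quasi_set = [], []
p = motion_trajectory_point([(100, 200), (110, 210)], (105, 205))
track_set.append(p)
quasi_set.append(quasi_motion_point(p))
```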
In some embodiments, in S140, the target object following device, in combination with the ORB feature points extracted from the image of the current frame, converts each quasi-motion track point in the quasi-motion track point set of the vehicle from the pixel coordinate system into a two-dimensional local map based on the camera coordinate system, connects the quasi-motion track points to obtain the quasi-motion track of the vehicle, and then plans the driving path of the vehicle based on the quasi-motion track. The specific elements included in the driving path are determined according to actual conditions and are not limited in the embodiment of the present application; for example, the driving path may include, but is not limited to, information such as the following speed, the driving direction, and the driving track of the vehicle. The two-dimensional local map may be the image of the current frame or the image of a previous frame.
In order to improve the accuracy of vehicle following, in some embodiments, planning the driving path of the vehicle according to the quasi-motion track point set of the vehicle may include:
and when the number of the track points in the quasi-motion track point set of the vehicle is greater than a preset numerical value, planning the running path of the vehicle according to the quasi-motion track point set of the vehicle.
In some embodiments, the preset value may be 100.
In the embodiment of the application, a certain number of track points are accumulated first, and only then is the driving path of the vehicle planned according to the quasi-motion track of the vehicle. That is, planning of the driving path begins only after the target object has moved for a period of time, so that the driving path fits the motion track of the user more closely, and the accuracy of vehicle following is improved.
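The planning trigger described above can be sketched as follows. Projection into the local map is abstracted away, and "planning" here just connects consecutive points; the threshold of 100 follows the example value in the text.

```python
# Sketch: plan the driving path only after more than a preset number of
# quasi-motion track points have accumulated. Returning segments between
# consecutive points stands in for the real path planner, which is not
# detailed in the text.

PRESET_COUNT = 100

def plan_driving_path(quasi_points):
    """Return segments connecting consecutive quasi-motion track points,
    or None while too few points have accumulated."""
    if len(quasi_points) <= PRESET_COUNT:
        return None  # keep accumulating before planning
    return [(quasi_points[i], quasi_points[i + 1])
            for i in range(len(quasi_points) - 1)]

path = plan_driving_path([(float(i), 0.0) for i in range(150)])
```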
In order to improve the safety of automatic following of the vehicle, in some embodiments, as shown in the flowchart of fig. 6, the target object following method provided in an embodiment of the present application may further include, after following the target object according to the driving path, S510:
S510: in the process of following the target object, when an obstacle is detected in the driving path, stopping following the target object, and sending first prompt information to a target terminal, wherein the target terminal and the vehicle are in communication connection, and the first prompt information is used for prompting that the vehicle stops following the target object.
In some embodiments, in S510, the target terminal may be a mobile terminal of the user. The target object following device detects the road condition in real time during the process of following the user, stops vehicle following immediately after detecting an obstacle that obstructs the vehicle from advancing along the driving path, and sends prompt information to the mobile terminal of the user indicating that the vehicle has stopped following. The manner of detecting the obstacle is not limited in the embodiment of the application: the target object following device may scan the surrounding environment of the vehicle through physical wave signals such as Bluetooth, radar, infrared or ultrasonic waves, or may detect the surrounding environment by combining artificial intelligence with images acquired by a camera of the vehicle. It is understood that obstacles include, but are not limited to, living beings in the driving path, such as pedestrians in front of the vehicle, and objects that impede the progress of the vehicle, such as walls, road studs, and the like.
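A minimal sketch of S510, with `detect_obstacle` and `notify` standing in for the sensing (radar/ultrasonic/camera) and vehicle-to-terminal messaging described above; both callbacks and the message text are hypothetical:

```python
# Sketch of the safety behaviour in S510: stop following and notify the
# user's terminal as soon as an obstacle is detected in the driving path.

def follow_loop(frames, detect_obstacle, notify):
    """Follow along the planned path frame by frame; return False if
    following was stopped because of an obstacle, True otherwise."""
    for frame in frames:
        if detect_obstacle(frame):
            notify("vehicle stopped following: obstacle detected")
            return False
        # ... otherwise keep driving along the planned path ...
    return True

messages = []
completed = follow_loop([1, 2, 3], lambda f: f == 2, messages.append)
```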
In some embodiments, the user may also select to terminate the vehicle target object following mode at the mobile terminal when the user does not want the vehicle to follow, and the mobile terminal sends the prompt information to the target object following device. The target object following device stops following the user upon receiving the prompt message.
In the embodiment of the application, following of the target object is stopped when an obstacle is detected, and at the same time prompt information indicating that following has stopped is sent to the target object, so that the target object can be reminded that the vehicle has encountered an obstacle and is no longer following. On the one hand, stopping following immediately when an obstacle is detected can prevent the obstacle from scratching the vehicle; on the other hand, when the target object or another object is in front of the vehicle, it can be detected as an obstacle and following stopped, preventing the vehicle from colliding with the target object or other objects, thereby improving the safety of the vehicle during automatic following.
In an application scenario, as shown in fig. 7a and 7b, taking the camera as a side-view camera of the vehicle as an example, the user A, while driving the vehicle, wants to stop and walk for a while, and at this time needs the vehicle to follow. The target object following device first detects whether the mobile terminal of the user A is communicatively connected with the vehicle; if not, the vehicle target object following mode cannot be started.
After the user A selects to start the vehicle target object following mode on the mobile terminal, the mobile terminal sends prompt information to the target object following device. After receiving the prompt information indicating that the vehicle target object following mode is to be started, the target object following device carries out face recognition on the user A through the side-view camera of the vehicle. In the case that the face recognition is passed, the target object following device recognizes, through a CNN-based target object following model, the coordinate point of the center point of the bounding box of the user A in the current frame image under the image coordinate system, and converts that coordinate point into a coordinate point under the pixel coordinate system. Meanwhile, the target object following device extracts the ORB feature points of the user A in the image, calculates the coordinate points of the ORB feature points in the camera coordinate system through a pose estimation algorithm, and then converts them into coordinate points in the pixel coordinate system. Then, the target object following device determines the motion track point of the user A according to the pixel coordinate point of the center point and the pixel coordinate points of the ORB feature points, and obtains the motion track point set WP_xy = {(x1, y1), (x2, y2), (x3, y3), ..., (xn, yn)} of the user A. Based on the motion track points of the user A, the quasi-motion track points of the vehicle are calculated in real time according to the coordinate offset d, yielding the quasi-motion track point set WPC_xy = {(x1, y1), (x2, y2), (x3, y3), ..., (xn, yn)} of the vehicle.
Then, when the number of quasi-motion track points in the quasi-motion track point set of the vehicle is greater than 100, the target object following device converts each quasi-motion track point in the set from the pixel coordinate system into the image of the current frame based on the camera coordinate system, and connects the quasi-motion track points to obtain the quasi-motion track of the vehicle. A following path of the vehicle is then planned, so that the vehicle travels along the same motion track as the user A at a certain following speed to follow the user A. As shown in fig. 7a, the dotted line is the motion track of the user A, and the solid line is the track of the vehicle following the user A.
In the following process, when the target object following device detects that an obstacle exists in the path, the target object following device exits the vehicle target object following mode to stop following the user A, and sends prompt information to the mobile terminal of the user A to prompt the user A to stop following the vehicle.
In the following process, if the user A feels that it is unnecessary for the vehicle to follow at present, the user A stops the vehicle following through a selection on the mobile terminal; the mobile terminal sends prompt information to the target object following device, and the target object following device exits the vehicle target object following mode after receiving it.
Based on the target object following method provided by any of the above embodiments, the present application further provides an embodiment of a target object following apparatus, specifically shown in fig. 8.
Fig. 8 shows a schematic diagram of a target object following apparatus according to an embodiment of the present application. As shown in fig. 8, the target object following apparatus 800 may include:
An obtaining module 810, configured to acquire an image sequence of the target object during motion when the vehicle is in the vehicle target object following mode.
A first determining module 820, configured to determine a motion track point set of the target object according to the image sequence.
The second determining module 830 is configured to determine a quasi-motion track point set of the vehicle according to the motion track point set of the target object and the relative position information, where the relative position information is relative position information between the vehicle and the target object.
And the planning module 840 is used for planning the driving path of the vehicle according to the quasi-motion track point set of the vehicle.
A following module 850, configured to follow the target object according to the driving path.
According to the device in the embodiment of the application, under the condition of the vehicle target object following mode, after the motion track point set of the target object is determined from the image sequence of the target object in the motion process, the quasi-motion track point set of the vehicle is determined according to the motion track point set and the relative position information. The driving path of the vehicle planned according to the quasi-motion track point set is therefore a path that matches the motion track of the target object, so that the vehicle can accurately and automatically follow the target object along the driving path.
In some embodiments, in order to improve the acquisition efficiency of the motion track points, the first determining module 820 may include:
and the first determining submodule is used for determining the motion track point of the target object according to the image sequence and the target object following regression model.
According to the device in the embodiment of the application, the motion track points of the target object are determined from the image sequence of the target object in the motion process and the target object following regression model; since the motion track points are determined autonomously by the model, the efficiency of obtaining the motion track points is improved.
In some embodiments, to improve the efficiency of obtaining the motion trajectory points, the first determining sub-module may include:
the first extraction unit is used for extracting a first target object feature of a first frame image through a first feature extraction network and extracting a second target object feature of a second frame image through a second feature extraction network, wherein the first frame image is an image which is positioned in front of the second frame image in the image sequence.
And the first determining unit is used for inputting the first target object characteristic and the second target object characteristic into the full connection layer, and determining the relative motion trend of the target object through the full connection layer.
And the second determining unit is used for determining the coordinate point of the preset characteristic point of the boundary frame of the target object in the second frame image according to the relative motion trend.
And the third determining unit is used for determining the motion track point of the target object in the second frame image according to the coordinate point of the preset characteristic point of the boundary frame.
According to the device in the embodiment of the application, the processing of two adjacent frames of images is completed through the first feature extraction network, the second feature extraction network and the fully connected layer, so as to obtain the relative motion trend of the target object. The bounding box of the target object in the second frame image is determined according to the relative motion trend, which delimits the displacement range of the target object in the second frame image; the motion track point is then determined from the coordinate point of the preset feature point of the bounding box. The position of the target object after motion therefore does not need to be calculated through complex steps, and the efficiency of obtaining the motion track points is improved.
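The two-extractor-plus-fully-connected-layer structure described above can be illustrated with a deliberately tiny numpy stand-in. All weights, shapes, and the 4-dimensional "motion trend" output are hypothetical placeholders; a real system would use trained CNN backbones as the feature extraction networks.

```python
import numpy as np

# Hypothetical minimal stand-in for the regression model: two feature
# extractors process consecutive frames, their features are concatenated
# and passed through a fully connected layer that regresses the relative
# motion trend (here a 4-vector that could parameterise a bounding-box
# shift). Weights are untrained random placeholders.

rng = np.random.default_rng(0)

FRAME_SHAPE = (8, 8)
W1 = rng.normal(size=(64, 16))   # first feature extraction network
W2 = rng.normal(size=(64, 16))   # second feature extraction network
W_FC = rng.normal(size=(32, 4))  # fully connected regression layer

def extract_features(frame, weights):
    """Flatten the frame and apply one nonlinear 'feature' layer."""
    return np.tanh(frame.reshape(-1) @ weights)

def relative_motion_trend(frame_t, frame_t1):
    """Regress the relative motion trend from two consecutive frames."""
    f1 = extract_features(frame_t, W1)
    f2 = extract_features(frame_t1, W2)
    return np.concatenate([f1, f2]) @ W_FC

trend = relative_motion_trend(rng.normal(size=FRAME_SHAPE),
                              rng.normal(size=FRAME_SHAPE))
```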
In some embodiments, in order to improve the accuracy of the automatic following of the vehicle, the target object following device 800 may further include:
the extraction module is used for extracting a coordinate point of a rapid directional rotation ORB characteristic point of the target object in the second frame image;
the first conversion module is used for converting the coordinate points of the ORB characteristic points into first pixel coordinate points in a pixel coordinate system according to a first coordinate conversion relation between coordinate systems;
the second conversion module is used for converting the coordinate points of the preset characteristic points of the boundary frame into second pixel coordinate points under a pixel coordinate system according to a second coordinate conversion relation between coordinate systems;
and generating a module. And the motion track point of the target object in the second frame image is generated according to the first pixel coordinate point and the second pixel coordinate point.
According to the device in the embodiment of the application, the motion track point of the target object in the second frame image is generated from the second pixel coordinate point of the preset feature point of the bounding box and the first pixel coordinate point of the ORB feature point of the target object in the second frame image. That is, on the basis of the motion range of the target object delimited by the bounding box in the second frame image, the ORB feature points of the target object are further combined to obtain the motion track point, which improves the accuracy of calculating the motion track point of the target object and further improves the accuracy of vehicle following.
In some embodiments, to improve the accuracy of vehicle following, the planning module 840 may include:
And the planning submodule is used for planning the driving path of the vehicle according to the quasi-motion track point set of the vehicle when the number of track points in the quasi-motion track point set of the vehicle is greater than a preset value.
According to the device in the embodiment of the application, a certain number of track points are accumulated first, and only then is the driving path of the vehicle planned according to the quasi-motion track of the vehicle. That is, planning of the driving path begins only after the target object has moved for a period of time, so that the driving path fits the motion track of the user more closely, and the accuracy of vehicle following is improved.
In some embodiments, to improve the safety of the vehicle following automatically, the following module 850 may further include:
the detection submodule is used for stopping following the target object when the detection submodule detects that an obstacle is encountered in the running path in the process of following the target object, and sending first prompt information to a target terminal, wherein the target terminal and the vehicle are provided with terminals in communication connection, and the first prompt information is used for prompting the vehicle to stop following the target object.
According to the device in the embodiment of the application, following of the target object is stopped when an obstacle is detected, and prompt information indicating that following has stopped is sent to the target object, so that the target object can be reminded that the vehicle has encountered an obstacle and is no longer following. On the one hand, stopping following immediately when an obstacle is detected can prevent the obstacle from scratching the vehicle; on the other hand, when the target object or another object is in front of the vehicle, it can be detected as an obstacle and following stopped, preventing the vehicle from colliding with the target object or other objects, thereby improving the safety of the vehicle during automatic following.
In some embodiments, in order to improve the accuracy of the automatic following of the vehicle, the target object following device 800 may further include:
the first acquisition module is used for acquiring the image to be identified acquired by the camera.
And the identification module is used for identifying the target object according to the image to be identified.
The device in the embodiment of the application also identifies the target object before following it, so as to ensure that the specified object is followed, which avoids following the wrong object and improves the accuracy of vehicle following.
In some embodiments, to meet the user's requirements for vehicle following in different application scenarios, the target object following device 800 may further include:
and the receiving module is used for receiving the second prompt message sent by the target terminal.
And the starting module is used for starting a vehicle target object following mode according to the second prompt message.
The device in the embodiment of the application starts the vehicle target object following mode according to the second prompt message only after receiving the second prompt message sent by the target terminal. That is, when the user wants the vehicle to follow, the user can control the vehicle through the target terminal to start following, which meets the user's requirements for vehicle following in different application scenarios.
Based on the target object following method provided by any one of the embodiments, the application further provides a vehicle embodiment.
The vehicle includes a camera and a processor.
The camera is used for acquiring an image sequence of a target object in the motion process under the condition of a vehicle target object following mode;
a processor for performing the target object following method as described in any of the embodiments of the present application.
The vehicle has the functions of realizing the steps in the method embodiments and can achieve the corresponding technical effects, which are not described again for brevity. In addition, in combination with the target object following method in the foregoing embodiments, as shown in fig. 9, the present application may provide a target object following device, which may include a processor 910 and a memory 920 storing computer program instructions.
Specifically, the processor 910 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The processor 910 realizes any one of the target object following methods in the above embodiments by reading and executing computer program instructions stored in the memory 920.
In one example, the electronic device may also include a communication interface 930 and a bus 940. As shown in fig. 9, the processor 910, the memory 920, and the communication interface 930 are connected via a bus 940 to complete communication therebetween.
The communication interface 930 is mainly used to implement communication between modules, devices, units and/or devices in this embodiment.
The bus 940 includes hardware, software, or both to couple the components of the electronic device to one another. By way of example, and not limitation, a bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, another suitable bus, or a combination of two or more of these. Bus 940 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The target object following device, when executing the computer program instructions, implements the target object following method described in any of the above embodiments.
In addition, in combination with the target object following method described above, an embodiment of the present application may provide a computer storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the target object following method described in any of the above embodiments.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As described above, only the specific embodiments of the present application are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.
Claims (13)
1. A target object following method, applied to a vehicle, the method comprising:
under the condition of a vehicle target object following mode, acquiring an image sequence of a target object in a moving process;
determining a motion track point set of the target object according to the image sequence;
determining a quasi-motion track point set of a vehicle according to the motion track point set of the target object and relative position information, wherein the relative position information is relative position information between the vehicle and the target object;
planning a driving path of the vehicle according to the quasi-motion track point set of the vehicle;
and following the target object according to the driving path.
2. The method according to claim 1, wherein the determining a set of motion trajectory points of the target object according to the image sequence specifically includes:
and determining the motion track point of the target object according to the image sequence and the target object following regression model.
3. The method of claim 2, wherein the target object follow regression model comprises a first feature extraction network and a second feature extraction network in parallel, and a fully connected layer;
determining a motion track point of the target object according to the image sequence and the target object following regression model, specifically comprising:
extracting a first target object feature of a first frame image through a first feature extraction network, and extracting a second target object feature of a second frame image through a second feature extraction network, wherein the first frame image is an image in the image sequence before the second frame image;
inputting the first target object characteristic and the second target object characteristic into the full-connection layer, and determining the relative motion trend of the target object through the full-connection layer;
determining a coordinate point of a preset feature point of a boundary frame of the target object in the second frame image according to the relative motion trend;
and determining a motion track point of the target object in the second frame image according to the coordinate point of the preset characteristic point of the boundary frame.
4. The method of claim 3, further comprising: extracting a coordinate point of a rapid directional rotation ORB characteristic point of the target object in the second frame image;
converting the coordinate point of the ORB characteristic point into a first pixel coordinate point under a pixel coordinate system according to a first coordinate conversion relation between coordinate systems;
the determining a motion track point of the target object in the second frame image according to the coordinate point of the preset feature point of the boundary frame includes:
converting the coordinate points of the preset feature points of the boundary frame into second pixel coordinate points under a pixel coordinate system according to a second coordinate conversion relation between coordinate systems;
and generating a motion track point of the target object in the second frame image according to the first pixel coordinate point and the second pixel coordinate point.
5. The method according to any one of claims 1 to 4, wherein planning the driving path of the vehicle according to the quasi-motion track point set of the vehicle specifically comprises:
and when the number of the track points in the quasi-motion track point set of the vehicle is greater than a preset numerical value, planning the running path of the vehicle according to the quasi-motion track point set of the vehicle.
6. The method according to any one of claims 1-4, further comprising, after following the target object according to the travel path:
in the process of following the target object, when an obstacle is detected in the driving path, stopping following the target object, and sending first prompt information to a target terminal, wherein the target terminal and the vehicle are in communication connection, and the first prompt information is used for prompting that the vehicle stops following the target object.
7. The method of claim 1, further comprising, prior to acquiring the sequence of images of the target object during motion:
acquiring an image to be identified acquired by a camera;
and identifying the target object according to the image to be identified.
8. The method of claim 7, further comprising, before acquiring the image to be recognized captured by the camera:
receiving second prompt information sent by the target terminal;
and starting the vehicle target object following mode according to the second prompt information.
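Claims 7 and 8 together describe a prelude: the terminal's second prompt enables the following mode, the camera captures an image, and the target object is recognized in it. A minimal state-machine sketch, with all class and function names assumed (the recognizer is a stand-in, not the patented one):

```python
class FollowController:
    """Toy controller for the mode-activation and recognition prelude
    described in claims 7 and 8."""

    def __init__(self, recognizer):
        self.following_mode = False
        self.recognizer = recognizer  # e.g. a detector returning a target id or None

    def on_second_prompt(self):
        # Second prompt information from the target terminal enables the mode.
        self.following_mode = True

    def process_frame(self, image):
        # Only recognize the target object while the following mode is on.
        if not self.following_mode:
            return None
        return self.recognizer(image)

ctrl = FollowController(recognizer=lambda img: "target" if img == "frame-with-user" else None)
print(ctrl.process_frame("frame-with-user"))  # None: mode not yet enabled
ctrl.on_second_prompt()
print(ctrl.process_frame("frame-with-user"))  # "target"
```

Gating recognition on the mode flag mirrors the claim order: no image processing happens until the terminal has explicitly requested following.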
9. A target object following apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire an image sequence of a target object during motion when a vehicle target object following mode is enabled;
a first determining module, configured to determine a motion track point set of the target object according to the image sequence;
a second determining module, configured to determine a quasi-motion track point set of the vehicle according to the motion track point set of the target object and relative position information, wherein the relative position information is the relative position information between the vehicle and the target object;
a planning module, configured to plan the driving path of the vehicle according to the quasi-motion track point set of the vehicle;
and a following module, configured to follow the target object according to the driving path.
10. An electric vehicle, characterized by comprising:
a camera, configured to acquire an image sequence of a target object during motion when a vehicle target object following mode is enabled;
a processor, configured to perform the method of any one of claims 1-8.
11. A target object following device, characterized in that the device comprises: a processor and a memory storing computer program instructions;
wherein the processor, when executing the computer program instructions, implements the target object following method according to any one of claims 1-8.
12. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the target object following method according to any one of claims 1-8.
13. A computer program product, wherein instructions in the computer program product, when executed by a processor of an electronic device, cause the electronic device to perform the target object following method of any one of claims.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111668239.2A CN114353818B (en) | 2021-12-31 | 2021-12-31 | Target object following method, apparatus, device, medium and computer program product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114353818A true CN114353818A (en) | 2022-04-15 |
CN114353818B CN114353818B (en) | 2024-05-14 |
Family
ID=81105716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111668239.2A Active CN114353818B (en) | 2021-12-31 | 2021-12-31 | Target object following method, apparatus, device, medium and computer program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114353818B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024131647A1 (en) * | 2022-12-20 | 2024-06-27 | 江苏瑞布特智能科技有限公司 | Following control method and apparatus, device, and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160379074A1 (en) * | 2015-06-25 | 2016-12-29 | Appropolis Inc. | System and a method for tracking mobile objects using cameras and tag devices |
CN108470353A (en) * | 2018-03-01 | 2018-08-31 | 腾讯科技(深圳)有限公司 | A kind of method for tracking target, device and storage medium |
US20190096069A1 (en) * | 2016-02-26 | 2019-03-28 | SZ DJI Technology Co., Ltd. | Systems and methods for visual target tracking |
CN110570460A (en) * | 2019-09-06 | 2019-12-13 | 腾讯云计算(北京)有限责任公司 | Target tracking method and device, computer equipment and computer readable storage medium |
CN111597965A (en) * | 2020-05-13 | 2020-08-28 | 广州小鹏车联网科技有限公司 | Vehicle following method and device |
CN111612823A (en) * | 2020-05-21 | 2020-09-01 | 云南电网有限责任公司昭通供电局 | Robot autonomous tracking method based on vision |
CN112507859A (en) * | 2020-12-05 | 2021-03-16 | 西北工业大学 | A Visual Tracking Method for Mobile Robots |
CN112634368A (en) * | 2020-12-26 | 2021-04-09 | 西安科锐盛创新科技有限公司 | Method and device for generating space and OR graph model of scene target and electronic equipment |
CN113157003A (en) * | 2021-01-19 | 2021-07-23 | 恒大新能源汽车投资控股集团有限公司 | Vehicle following method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Toulminet et al. | Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis | |
Broggi et al. | Visual perception of obstacles and vehicles for platooning | |
Teoh et al. | Symmetry-based monocular vehicle detection system | |
US12103517B2 (en) | In-vehicle processing device and movement support system | |
WO2018170472A1 (en) | Joint 3d object detection and orientation estimation via multimodal fusion | |
CN108027877A (en) | System and method for the detection of non-barrier | |
CN110717445B (en) | Front vehicle distance tracking system and method for automatic driving | |
CN108877269A (en) | A kind of detection of intersection vehicle-state and V2X broadcasting method | |
EP3690802A1 (en) | Vehicle exterior recognition device | |
CN112540606A (en) | Obstacle avoidance method and device, scheduling server and storage medium | |
CN113486850A (en) | Traffic behavior recognition method and device, electronic equipment and storage medium | |
JP2008310440A (en) | Pedestrian detection device | |
CN112572471B (en) | Automatic driving method, device, electronic equipment and computer storage medium | |
CN113910224A (en) | Robot following method and device and electronic equipment | |
CN114353818B (en) | Target object following method, apparatus, device, medium and computer program product | |
CN111168685B (en) | Robot control method, robot, and readable storage medium | |
CN115790568A (en) | Map generation method based on semantic information and related equipment | |
JP2014062415A (en) | Trajectory detector and trajectory monitoring device | |
CN113808077A (en) | A target detection method, device, equipment and storage medium | |
CN113569812A (en) | Method, device and electronic device for identifying unknown obstacles | |
Kim et al. | Traffic Accident Detection Based on Ego Motion and Object Tracking | |
US20230316539A1 (en) | Feature detection device, feature detection method, and computer program for detecting feature | |
Pathirana et al. | Robust video/ultrasonic fusion-based estimation for automotive applications | |
Nashashibi et al. | Vehicle recognition and tracking using a generic multisensor and multialgorithm fusion approach | |
CN115342811A (en) | A path planning method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||