
CN114353818B - Target object following method, apparatus, device, medium and computer program product - Google Patents


Info

Publication number
CN114353818B
CN114353818B (application number CN202111668239.2A)
Authority
CN
China
Prior art keywords
target object
vehicle
following
image
frame image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111668239.2A
Other languages
Chinese (zh)
Other versions
CN114353818A (en)
Inventor
李洁辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Rox Intelligent Technology Co Ltd
Original Assignee
Shanghai Rox Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Rox Intelligent Technology Co Ltd filed Critical Shanghai Rox Intelligent Technology Co Ltd
Priority to CN202111668239.2A priority Critical patent/CN114353818B/en
Publication of CN114353818A publication Critical patent/CN114353818A/en
Application granted granted Critical
Publication of CN114353818B publication Critical patent/CN114353818B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a target object following method, apparatus, device, medium, and computer program product. The method comprises: acquiring an image sequence of a target object during its motion while in a vehicle target object following mode; determining a motion track point set of the target object according to the image sequence; determining a quasi-motion track point set of the vehicle according to the motion track point set of the target object and relative position information; planning a driving path of the vehicle according to the quasi-motion track point set of the vehicle; and following the target object according to the driving path. According to the embodiments of the application, the vehicle automatically follows the target object.

Description

Target object following method, apparatus, device, medium and computer program product
Technical Field
The present application relates to the field of target tracking technology, and in particular, to a target object following method, apparatus, device, medium, and computer program product.
Background
With economic and technological development, people's expectations for quality of life keep rising. A user may grow tired while driving long distances for travel or off-road, or may get out of the vehicle to take photos while driving on a relatively flat road; in such cases the user wants the vehicle to actively follow them, so that they do not need to return to the parking spot when they want to get back on the road.
Therefore, how to make a vehicle automatically follow a target object has become an urgent technical problem to be solved.
Disclosure of Invention
The target object following method, apparatus, device, medium, and computer program product provided by the embodiments of the application enable a vehicle to automatically follow a target object.
In a first aspect, an embodiment of the present application provides a target object following method applied to a vehicle, including:
under the condition of being in a vehicle target object following mode, acquiring an image sequence of a target object in a motion process;
determining a motion track point set of the target object according to the image sequence;
determining a quasi-motion track point set of the vehicle according to the motion track point set of the target object and relative position information, wherein the relative position information is the relative position information between the vehicle and the target object;
planning a driving path of the vehicle according to the quasi-motion track point set of the vehicle;
and following the target object according to the driving path.
In some embodiments, the determining the set of motion trajectory points of the target object according to the image sequence includes:
and determining the motion trail point of the target object according to the image sequence and the target object following regression model.
In some embodiments, the target object following regression model includes first and second feature extraction networks in parallel, and a fully connected layer;
the determining the motion trail point of the target object according to the image sequence and the target object following regression model specifically comprises the following steps:
extracting a first target object feature of a first frame image through a first feature extraction network, and extracting a second target object feature of a second frame image through the second feature extraction network, wherein the first frame image is an image positioned before the second frame image in the image sequence;
inputting the first target object characteristics and the second target object characteristics into the full-connection layer, and determining the relative movement trend of the target object through the full-connection layer;
determining coordinate points of preset feature points of a boundary frame of the target object in the second frame image according to the relative motion trend;
And determining the motion track point of the target object in the second frame image according to the coordinate point of the preset feature point of the boundary box.
In some embodiments, the method further includes: extracting coordinate points of Oriented FAST and Rotated BRIEF (ORB) feature points of the target object in the second frame image;
Converting the coordinate points of the ORB characteristic points into first pixel coordinate points in a pixel coordinate system according to a first coordinate conversion relation between coordinate systems;
the determining the motion trail point of the target object in the second frame image according to the coordinate point of the preset feature point of the boundary box comprises:
Converting coordinate points of preset feature points of the boundary frame into second pixel coordinate points in a pixel coordinate system according to a second coordinate conversion relation between coordinate systems;
And generating a motion track point of the target object in the second frame image according to the first pixel coordinate point and the second pixel coordinate point.
In some embodiments, the planning the driving path of the vehicle according to the quasi-motion track of the vehicle includes:
and when the number of the track points in the quasi-motion track point set of the vehicle is larger than a preset value, planning a running path of the vehicle according to the quasi-motion track point set of the vehicle.
In some embodiments, following the target object according to the travel path, further comprising:
And, when an obstacle is detected in the driving path while following the target object, stopping following the target object and sending first prompt information to a target terminal, wherein the target terminal has a communication connection with the vehicle, and the first prompt information is used to prompt that the vehicle has stopped following the target object.
In some embodiments, before acquiring the image sequence of the target object during the motion, further comprising:
acquiring an image to be identified acquired by a camera;
And identifying the target object according to the image to be identified.
In some embodiments, before acquiring the image to be identified acquired by the camera, further comprising:
Receiving second prompt information sent by a target terminal;
and starting a vehicle target object following mode according to the second prompt information.
In a second aspect, an embodiment of the present application provides a target object following apparatus, including:
The acquisition module is used for acquiring an image sequence of the target object in the motion process under the condition of being in a vehicle target object following mode;
The first determining module is used for determining a motion trail point set of the target object according to the image sequence;
the second determining module is used for determining a quasi-motion track point set of the vehicle according to the motion track point set of the target object and relative position information, wherein the relative position information is the relative position information between the vehicle and the target object;
The planning module is used for planning the running path of the vehicle according to the quasi-motion track point set of the vehicle;
And the following module is used for following the target object according to the driving path.
In a third aspect, an embodiment of the present application provides a vehicle including:
The camera is used for acquiring an image sequence of the target object in the motion process under the condition of being in a vehicle target object following mode;
a processor, configured to execute the target object following method according to any one of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a target object following apparatus, including: a processor and a memory storing computer program instructions;
The processor, when executing the computer program instructions, implements the target object following method described in any of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement a target object following method as described in any of the embodiments of the present application.
In a sixth aspect, embodiments of the present application provide a computer program product, instructions in which, when executed by a processor of an electronic device, cause the electronic device to perform a target object following method as described in any of the embodiments of the present application.
In the embodiment of the application, when in the vehicle target object following mode, after the motion track point set of the target object is determined according to the image sequence of the target object during its motion, the quasi-motion track point set of the vehicle is determined according to the motion track point set and the relative position information. The driving path of the vehicle planned according to the quasi-motion track point set is therefore a path that matches the motion track of the target object, so that the vehicle can accurately follow the target object along the driving path, realizing automatic following of the target object by the vehicle.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present application, the drawings that are needed to be used in the embodiments of the present application will be briefly described, and it is possible for a person skilled in the art to obtain other drawings according to these drawings without inventive effort.
FIG. 1 is a schematic diagram of coordinate system transformation according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a target object following method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of another target object following method according to an embodiment of the present application;
FIG. 4 is a flowchart of another method for following a target object according to an embodiment of the present application;
FIG. 5 is a flowchart of another method for following a target object according to an embodiment of the present application;
FIG. 6 is a flowchart of another method for following a target object according to an embodiment of the present application;
Fig. 7a is a schematic flow chart of a target object following method in an application scenario according to an embodiment of the present application;
fig. 7b is a schematic diagram of a vehicle following target object in an application scenario provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a target object following apparatus according to an embodiment of the present application;
fig. 9 is a schematic diagram of a target object following device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings and the detailed embodiments. It should be understood that the particular embodiments described herein are meant to be illustrative of the application only and not limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the application by showing examples of the application.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Before describing the technical solution provided by the embodiments of the present application, for the convenience of understanding the embodiments of the present application, the present application will be described with specific terms.
Image coordinate system: a planar rectangular coordinate system whose origin is the center of the rectangular image acquired by the camera, with coordinate axes extending in the same directions as the boundaries of the image.
Pixel coordinate system: a planar coordinate system whose origin is a corner point of the rectangular image acquired by the camera, with coordinate axes extending in the same directions as those of the image coordinate system.
Camera coordinate system: a three-dimensional Cartesian coordinate system taking a preset position point as its origin, wherein the preset position point represents the position of the vehicle.
As an example, the relationship between the image coordinate system and the pixel coordinate system may be as shown in fig. 1, where the x-axis and y-axis are the coordinate axes of the image coordinate system, the u-axis and v-axis are the coordinate axes of the pixel coordinate system, O1 is the origin of the image coordinate system, and O0 is the origin of the pixel coordinate system. It will be appreciated that the coordinate dimensions of the pixel coordinate system and the image coordinate system are generally not the same.
As described in the background art, a user sometimes wants the vehicle to automatically follow them, so that they do not have to return to the parking spot when they want to continue driving.
To solve the problems of the prior art, embodiments of the present application provide a target object following method, apparatus, device, medium, and computer program product applied to a vehicle.
The following first describes a target object following method provided by an embodiment of the present application.
Fig. 2 shows a flow chart of a target object following method according to an embodiment of the present application, where the method includes:
S110, under the condition that the vehicle target object follows a mode, acquiring an image sequence of the target object in the motion process.
S120, determining a motion trail point set of the target object according to the image sequence.
S130, determining a quasi-motion track point set of the vehicle according to the motion track point set of the target object and relative position information, wherein the relative position information is the relative position information between the vehicle and the target object.
And S140, planning a running path of the vehicle according to the quasi-motion track point set of the vehicle.
S150, following the target object according to the driving path.
In the embodiment of the application, when in the vehicle target object following mode, after the motion track point set of the target object is determined according to the image sequence of the target object during its motion, the quasi-motion track point set of the vehicle is determined according to the motion track point set and the relative position information. The driving path of the vehicle planned according to the quasi-motion track point set is thus a path that matches the motion track of the target object, so that the vehicle accurately follows the target object along the driving path, realizing automatic following of the target object by the vehicle.
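As a rough illustration only (not the patented implementation), the flow of S110-S150 can be sketched in a few lines of code; every function name here, the per-frame target position, and the simple offset and segment logic are assumptions made for the sketch:

```python
# Minimal sketch of the S110-S150 flow. All names and the simple
# offset/path logic are illustrative assumptions, not the patent's
# actual implementation.

def track_points_from_images(image_sequence):
    # S120: one (u, v) pixel point per frame; each "image" is assumed
    # to already carry the target's pixel position for the sketch.
    return [img["target_uv"] for img in image_sequence]

def quasi_track_points(track_points, offset_uv):
    # S130: shift the target's track by the vehicle-target offset.
    du, dv = offset_uv
    return [(u + du, v + dv) for (u, v) in track_points]

def plan_path(quasi_points):
    # S140: connect consecutive quasi-motion track points into segments.
    return list(zip(quasi_points[:-1], quasi_points[1:]))

def follow(image_sequence, offset_uv):
    pts = track_points_from_images(image_sequence)   # S110/S120
    quasi = quasi_track_points(pts, offset_uv)       # S130
    return plan_path(quasi)                          # S150 drives this path

seq = [{"target_uv": (100, 200)}, {"target_uv": (110, 205)}, {"target_uv": (125, 210)}]
path = follow(seq, offset_uv=(-50, 0))
print(path)  # two segments connecting the three offset points
```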
For convenience of description, the specific procedure of the target object following method will be described below with the target object following device as the execution subject.
In some embodiments, in S110, the target object may be a user of the current vehicle, and the vehicle tracks and follows the user while in the vehicle target object following mode, where the target object following device obtains a sequence of images of the user during movement from images captured by a camera mounted on the vehicle.
To improve the accuracy of automatic vehicle following, in some embodiments the method may further include S210-S220 before the image sequence of the target object during the motion process is acquired, as shown in the flowchart of another target object following method in fig. 3:
S210, acquiring an image to be identified acquired by a camera.
In some embodiments, in S210, the target object following device takes the current image acquired by a camera mounted on the vehicle as the image to be identified. The way the camera captures images is not limited by the embodiments of the present application: the camera may capture an image at a fixed time interval, for example every w milliseconds, or images may be extracted by processing video data captured by the camera in real time.
In some embodiments, the camera may be a side view camera mounted on the vehicle.
S220, identifying the target object according to the image to be identified.
In some embodiments, in S220, the target object following apparatus performs face recognition on the user through the face recognition model to determine the identity of the user, so as to ensure that the following vehicle follows the specific user.
As one example, the target object following device may identify the user in the image to be identified using a face recognition model trained with Libfacedetection, an open-source image face detection library based on convolutional neural networks (Convolutional Neural Networks, CNN).
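Libfacedetection handles face detection; confirming that a detected face belongs to the specific user additionally requires comparing face features. A minimal sketch of such an identity check, where the face embeddings are assumed to come from some CNN face model (the extractor itself is not shown, and the threshold is an illustrative value):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_target_user(face_embedding, enrolled_embedding, threshold=0.6):
    # The embeddings would come from a CNN face model; here they are
    # just plain vectors for illustration.
    return cosine_similarity(face_embedding, enrolled_embedding) >= threshold

enrolled = [0.1, 0.9, 0.2]     # the user's enrolled face embedding
same = [0.12, 0.88, 0.21]      # a detection of the same face
other = [0.9, -0.1, 0.3]       # a detection of a different face
print(is_target_user(same, enrolled))   # True
print(is_target_user(other, enrolled))  # False
```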
In the embodiment of the application, the target object is also identified before the target object is followed so as to ensure that the specific object is followed, avoid the following of the wrong object and improve the accuracy of vehicle following.
To meet the user's needs for vehicle following in different application scenarios, in some embodiments the method may further include S310-S320 before the image to be identified acquired by the camera is obtained, as shown in the flowchart of fig. 4:
s310: and receiving second prompt information sent by the target terminal.
In some embodiments, in S310, the target terminal is the user's mobile terminal. When the user needs the vehicle to follow and chooses to start the vehicle target object following mode through the mobile terminal, the mobile terminal sends second prompt information to the target object following device, the second prompt information including information instructing the target object following device to start the vehicle target object following mode, and the target object following device receives it. For example, a user who has been drinking and wants to avoid drunk driving may choose to start the vehicle target object following mode via the mobile phone, and a user who wants to stop to take photos during a trip may do the same.
S320: and starting a vehicle target object following mode according to the second prompt information.
In some embodiments, in S320, the target object following device turns on the vehicle target object following mode upon receiving the second prompt.
In the embodiment of the application, the vehicle target object following mode is started only after the second prompt information sent by the target terminal is received. That is, whenever the user wants the vehicle to follow, the user can control the vehicle through the target terminal to start following, thereby meeting the user's needs for vehicle following in different application scenarios.
In some embodiments, in S120, the image sequence includes a plurality of images, and the position of the user changes from each image to the next. The target object following device represents the user as a point and records the changes of the user's position on the same image to obtain the set of motion track points of the user during the user's movement.
In order to improve the efficiency of acquiring the motion trajectory point, in some embodiments, S120 may include:
and determining the motion trail point of the target object according to the image sequence and the target object following regression model.
In the embodiment, the target object following device inputs each frame of image in the image sequence into the target object following regression model, and the target object following regression model processes the image so as to output a motion trail point set of a user.
In the embodiment of the application, the motion trail point of the target object is determined by the image sequence and the target following regression model in the motion process of the target object, and the motion trail point is determined autonomously by the model, so that the acquisition efficiency of the motion trail point is improved.
In order to improve the efficiency of the acquisition of the motion track points, in some embodiments, the target object following regression model includes a first feature extraction network and a second feature extraction network in parallel, and a fully connected layer.
Determining the motion track point of the target object according to the image sequence and the target object following regression model may include:
extracting a first target object feature of a first frame image through a first feature extraction network, and extracting a second target object feature of a second frame image through the second feature extraction network, wherein the first frame image is an image positioned before the second frame image in the image sequence;
In some embodiments, the second frame image is the image of the current frame in the image sequence. The target object following device inputs the previous frame image to the first feature extraction network and the current frame image to the second feature extraction network; the two networks process their input images in parallel and, after extracting features of the same target object from their respective images, feed the extracted features to the fully connected layer.
In some embodiments, the target object features include feature information indicative of the target object and feature information of objects surrounding the target object.
Inputting the first target object characteristics and the second target object characteristics into the full-connection layer, and determining the relative movement trend of the target object through the full-connection layer;
in some embodiments, before inputting the first target object feature and the second target object feature to the fully connected layer, fusing the first target object feature and the second target object feature, and inputting the fused target object feature to the fully connected layer.
As an example, the target object following regression model may be a CNN-based target object following model, and the first feature extraction network and the second feature extraction network may be two CNN-based sub-following models of the CNN-based target object following model, wherein one sub-following model is responsible for extracting features for identifying a user in an image of a current frame, and the other sub-following model is responsible for extracting features for identifying a user in an image of a previous frame. After the features of the images of two adjacent frames are extracted, the extracted features are fused and output to a full-connection layer, and the full-connection layer predicts the motion trend of the user according to the fused features and then outputs the relative motion trend of the user on the image of the current frame relative to the image of the previous frame.
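The two-branch structure described above can be illustrated with a toy numpy sketch; the `extract_features` pooling merely stands in for the CNN branches, and the fully connected layer's weights are random rather than learned, so this is a structural illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image_crop):
    # Stand-in for one CNN branch: a mean-pooled flattening that maps
    # an 8x8 crop to a 4-dimensional feature vector.
    return image_crop.reshape(4, -1).mean(axis=1)

# "Fully connected layer" mapping fused features -> (du, dv) motion trend.
W = rng.normal(size=(2, 8))  # illustrative weights; a real model learns these
b = np.zeros(2)

def motion_trend(prev_crop, curr_crop):
    f1 = extract_features(prev_crop)   # first branch (previous frame)
    f2 = extract_features(curr_crop)   # second branch (current frame)
    fused = np.concatenate([f1, f2])   # feature fusion before the FC layer
    return W @ fused + b               # relative motion trend (du, dv)

prev_crop = rng.random((8, 8))
curr_crop = rng.random((8, 8))
trend = motion_trend(prev_crop, curr_crop)
print(trend.shape)  # (2,)
```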
Determining coordinate points of preset feature points of a boundary frame of the target object in the second frame image according to the relative motion trend;
In some embodiments, the preset feature point is a point in a preset selected bounding box, the target object following device defines a bounding box through the full-connection layer according to the predicted relative motion trend of the user in the image of the current frame so as to include the fluctuation range of the relative motion trend of the user, and outputs the coordinates of the preset feature point of the bounding box under the image coordinate system.
As one example, the preset feature point may be a center point of the bounding box.
And determining the motion track point of the target object in the second frame image according to the coordinate point of the preset feature point of the boundary box.
In the embodiment of the application, the first feature extraction network, the second feature extraction network, and the fully connected layer together process the two adjacent frame images to obtain the relative motion trend of the target object. The bounding box of the target object in the second frame image is then determined according to the relative motion trend, delimiting the displacement range of the target object in the second frame image, and the motion track point is determined from the coordinate point of the preset feature point of the bounding box. The position of the target object after the motion is thus obtained without complex computation steps, improving the efficiency of acquiring motion track points.
To improve the accuracy of automatic vehicle following, in some embodiments the method may further include S410-S440, as shown in the flowchart of another target object following method in fig. 5:
S410, extracting coordinate points of Oriented FAST and Rotated BRIEF (ORB) feature points of the target object in the second frame image.
S420, converting the coordinate points of the ORB feature points into first pixel coordinate points in the pixel coordinate system according to a first coordinate conversion relationship between coordinate systems.
S430, converting the coordinate points of the preset feature points of the bounding box into second pixel coordinate points in the pixel coordinate system according to a second coordinate conversion relationship between coordinate systems.
S440, generating a motion track point of the target object in the second frame image according to the first pixel coordinate point and the second pixel coordinate point.
In the embodiment of the application, the motion track point of the target object in the second frame image is generated from the second pixel coordinate point of the preset feature point of the bounding box and the first pixel coordinate points of the ORB feature points of the target object in the second frame image. That is, the motion track point is obtained by further combining the target object's ORB feature points with the motion range delimited by the bounding box in the second frame image, which improves the accuracy of the computed motion track points and thus the accuracy of vehicle following.
In some embodiments, in S410, the target object following device extracts the ORB feature point of the user according to the shading degree of the pixel in the second frame image, and calculates the position of the ORB feature point relative to the origin of the camera coordinate system by using the pose estimation algorithm, so as to obtain the coordinate point of the ORB feature point in the camera coordinate system. The pose estimation algorithm is the prior art and will not be described in detail herein.
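As a heavily simplified illustration of detecting feature points from pixel shading (not the actual ORB detector, which uses a 16-pixel circle test plus an orientation and a BRIEF descriptor), a pixel can be flagged when its neighbours are all much brighter or all much darker than it:

```python
import numpy as np

def simple_corner_points(gray, threshold=50):
    # Toy FAST-like test: a pixel is a candidate feature point if its
    # 4 axis-aligned neighbours are all much darker or all much
    # brighter than it. Real ORB is far more sophisticated.
    h, w = gray.shape
    points = []
    g = gray.astype(np.int32)  # avoid uint8 wrap-around in comparisons
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = g[y, x]
            nbrs = [g[y - 1, x], g[y + 1, x], g[y, x - 1], g[y, x + 1]]
            if all(n < c - threshold for n in nbrs) or all(n > c + threshold for n in nbrs):
                points.append((x, y))
    return points

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 200  # a single bright spot
print(simple_corner_points(img))  # [(2, 2)]
```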
In some embodiments, in S420, the first coordinate transformation relationship is a transformation relationship of transforming a three-dimensional coordinate system into a two-dimensional coordinate system, and the target object following device transforms the coordinate point of the ORB feature point under the camera coordinate system into the first pixel coordinate point under the pixel coordinate system through the transformation relationship of transforming the three-dimensional coordinate system into the two-dimensional coordinate system. The conversion relationship of converting the three-dimensional coordinate system into the two-dimensional coordinate system is the prior art, and will not be described herein.
In some embodiments, in S430, the second coordinate transformation relationship is a transformation relationship of transforming the image coordinate system into the pixel coordinate system, and the target object following device transforms the coordinate point of the preset feature point of the bounding box under the image coordinate system into the second pixel coordinate point under the pixel coordinate system through the transformation relationship of transforming the image coordinate system into the pixel coordinate system.
In some embodiments, referring to fig. 1, the conversion from the image coordinate system into the pixel coordinate system may be equation 1:

u = x/dx + u0, v = y/dy + v0 (equation 1)

where (x, y) is the coordinate of the point in the image coordinate system; the coordinate point of the preset feature point of the bounding box in the image coordinate system is converted into the second pixel coordinate point in the pixel coordinate system by equation 1.
Here, u represents the abscissa value in the pixel coordinate system; v represents the ordinate value in the pixel coordinate system; u0 and v0 denote, respectively, the horizontal and vertical offsets, in pixels, between the origin of the image coordinate system and the origin of the pixel coordinate system; dx and dy represent the physical length occupied by one pixel in the x-axis direction and the y-axis direction of the image coordinate system, respectively.
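For illustration, equation 1 can be sketched as follows; the pixel sizes dx, dy and the origin offsets u0, v0 are hypothetical example values, not values from this application:

```python
# Equation 1: image coordinate system -> pixel coordinate system.
# dx, dy are the physical size of one pixel; u0, v0 are the origin offsets.
# All numeric values below are hypothetical examples.

def image_to_pixel(x, y, dx, dy, u0, v0):
    return x / dx + u0, y / dy + v0

# e.g. a bounding-box feature point at (2.0, -1.5) in the image coordinate system
u, v = image_to_pixel(2.0, -1.5, dx=0.25, dy=0.25, u0=320, v0=240)
# u = 328.0, v = 234.0
```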
In some embodiments, in S440, the target object following apparatus generates a motion trajectory point of the user in the current image based on the first pixel coordinate point and the second pixel coordinate point, and adds the motion trajectory point to the motion trajectory point set.
In some embodiments, in S130, the relative position information is a preset coordinate offset in the pixel coordinate system. After obtaining the motion track point of the user in the current frame image, the target object following device offsets the coordinates of the motion track point in the pixel coordinate system by the preset coordinate offset; the offset motion track point is a quasi-motion track point of the vehicle, and is added to the quasi-motion track point set.
As one example, the preset coordinate offset may be a preset distance in a horizontal direction between the vehicle and the user.
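A minimal sketch of S130, assuming a purely horizontal preset offset in the pixel coordinate system (the offset value and sample points are hypothetical):

```python
# Deriving the vehicle's quasi-motion track points by shifting each motion
# track point of the target object with a preset offset in the pixel
# coordinate system. The offset value is a hypothetical example.

def quasi_points(track_points, offset):
    du, dv = offset
    return [(u + du, v + dv) for (u, v) in track_points]

wp_xy = [(100, 200), (110, 205), (120, 212)]   # target object's track points
wpc_xy = quasi_points(wp_xy, offset=(40, 0))   # horizontal-only offset d
```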
In some embodiments, in S140, the target object following device, in combination with the ORB feature points extracted from the current frame image, converts each quasi-motion track point of the vehicle's quasi-motion track point set from the pixel coordinate system into a two-dimensional local map based on the camera coordinate system, and connects the quasi-motion track points to obtain the quasi-motion trajectory of the vehicle; the travel path of the vehicle is then planned based on this quasi-motion trajectory. The elements specifically included in the travel path are determined according to the actual situation and are not limited in the embodiments of the present application; for example, the travel path may include, but is not limited to, information such as the following speed, travel direction, and travel trajectory of the vehicle. The two-dimensional local map may be the current frame image or a previous frame image.
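For illustration, converting a quasi-motion track point from the pixel coordinate system back into a camera-based two-dimensional local map can be sketched as a back-projection, assuming the depth z of the point is available (e.g. from the ORB features); the intrinsics are hypothetical placeholders:

```python
# Back-projection from the pixel coordinate system into a camera-based
# two-dimensional local map, assuming the depth z of the point is known.
# The intrinsics are hypothetical placeholders.

def pixel_to_local_map(u, v, z, fx, fy, cx, cy):
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y

x, y = pixel_to_local_map(520.0, 140.0, 2.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
# x = 0.5, y = -0.25
```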
In order to improve the accuracy of vehicle following, in some embodiments, planning the travel path of the vehicle according to the quasi-motion trajectory of the vehicle may include:
and when the number of the track points in the quasi-motion track point set of the vehicle is larger than a preset value, planning a running path of the vehicle according to the quasi-motion track point set of the vehicle.
In some embodiments, the preset value may be 100.
In the embodiment of the application, the travel path of the vehicle is planned according to the quasi-motion trajectory of the vehicle only after a certain number of track points have accumulated; that is, the travel path is planned after the target object has moved for a period of time, so that the travel path fits the motion trajectory of the user more closely, improving the following accuracy of the vehicle.
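The gating condition described above can be sketched as follows; plan_path here is a hypothetical stand-in for the actual path planner:

```python
# Plan the travel path only after the quasi-motion track point set has
# accumulated more than a preset number of points (100 in these embodiments).
# plan_path is a hypothetical stand-in for the actual planner.

PRESET = 100

def plan_path(points):
    # placeholder: connect consecutive points into a polyline trajectory
    return list(zip(points, points[1:]))

def maybe_plan(quasi_set):
    if len(quasi_set) > PRESET:
        return plan_path(quasi_set)
    return None  # keep accumulating track points
```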
In order to improve the safety of automatic following of the vehicle, in some implementations, as shown in the flowchart of fig. 6 of a target object following method provided by an embodiment of the present application, the method may further include S510 after following the target object according to the travel path:
S510: and stopping following the target object when detecting that an obstacle is encountered in the driving path in the process of following the target object, and sending first prompt information to a target terminal, wherein the target terminal and the vehicle are provided with communication connection terminals, and the first prompt information is used for prompting the vehicle to stop following the target object.
In some embodiments, in S510, the target terminal may be the mobile terminal of the user. The target object following device detects the road condition in real time while following the user; immediately after detecting an obstacle in the travel path that prevents the vehicle from advancing, it stops the vehicle following and sends prompt information to the mobile terminal of the user to indicate that the vehicle has currently stopped following. The specific manner in which the target object following device detects obstacles in real time during following is not limited: the surroundings of the vehicle may be scanned with physical wave signals such as Bluetooth, radar, infrared or ultrasonic waves, or detected by combining artificial intelligence with images acquired by the vehicle camera. It is understood that obstacles include, but are not limited to, living beings such as passers-by in front of the vehicle on the travel path, and objects such as walls and posts that hinder the advance of the vehicle.
In some embodiments, when the user no longer wants the vehicle to follow, the user may also choose on the mobile terminal to terminate the vehicle target object following mode, and the mobile terminal sends corresponding prompt information to the target object following device. The target object following device stops following the user after receiving this prompt information.
In the embodiment of the application, following of the target object is stopped when an obstacle is detected, and prompt information about stopping following is sent to the target object, so that the target object can be reminded that the vehicle has encountered an obstacle and has currently stopped following. On the one hand, stopping the vehicle immediately when an obstacle is detected prevents the vehicle from scraping against the obstacle; on the other hand, when the target object or another object is in front of the vehicle, it can be detected as an obstacle so that the vehicle stops following, preventing the vehicle from knocking over the target object or other objects and improving the safety of the vehicle during automatic following.
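The following loop with obstacle handling described in S510 can be sketched as follows; detect_obstacle and send_prompt are hypothetical stand-ins for the vehicle's sensing and communication interfaces:

```python
# Following loop with obstacle handling (S510): stop following as soon as an
# obstacle is detected on the travel path and notify the target terminal.
# detect_obstacle and send_prompt are hypothetical stand-ins.

def follow(path_steps, detect_obstacle, send_prompt):
    for step in path_steps:
        if detect_obstacle(step):
            send_prompt("vehicle stopped following: obstacle detected")
            return "stopped"
        # ... otherwise drive one step along the planned path ...
    return "completed"
```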
In one application scenario, as shown in fig. 7a and 7b, taking a side-view camera of a vehicle as an example, user A wants to stop the vehicle and walk while it is running, and needs the vehicle to follow him. The target object following device first detects whether the mobile terminal of user A and the vehicle are connected to each other; if not, the vehicle target object following mode cannot be started.
After user A chooses to start the vehicle target object following mode on the mobile terminal, the mobile terminal sends prompt information to the target object following device. After receiving the prompt information indicating that the vehicle target object following mode is to be started, the target object following device performs face recognition on user A through the side-view camera of the vehicle. When face recognition passes, the target object following device recognizes, through a CNN-based target object following model, the coordinate point of the center point of user A's bounding box in the current frame image in the image coordinate system, and converts it into a coordinate point in the pixel coordinate system. Meanwhile, the target object following device extracts the ORB feature points of user A in the image, calculates the coordinate points of the ORB feature points in the camera coordinate system through a pose estimation algorithm, and then converts them into coordinate points in the pixel coordinate system. The target object following device then determines the motion track points of user A from the pixel-coordinate point of the center point and the pixel-coordinate points of the ORB feature points, obtaining user A's motion track point set WP_xy = {(x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n)}. Based on the motion track points of user A, the quasi-motion track points of the vehicle are calculated in real time according to the coordinate offset d, obtaining the vehicle's quasi-motion track point set WPC_xy = {(x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_n, y_n)}.
When the number of quasi-motion track points in the vehicle's quasi-motion track point set exceeds 100, the target object following device converts each quasi-motion track point from the pixel coordinate system into the current frame image based on the camera coordinate system, and connects the quasi-motion track points to obtain the quasi-motion trajectory of the vehicle. The following path of the vehicle is then planned, so that the vehicle follows user A at a certain following speed along the same motion trajectory as user A. As shown in fig. 7a, the dashed line is the motion trajectory of user A, and the solid line is the trajectory of the vehicle following user A.
During following, when the target object following device detects an obstacle in the path, the vehicle exits the vehicle target object following mode and stops following user A, and prompt information is sent to the mobile terminal of user A to indicate that the vehicle has stopped following.
During following, if user A perceives that the vehicle no longer needs to follow, user A chooses through the mobile terminal to stop the vehicle following; the mobile terminal sends prompt information to the target object following device, and the target object following device thereby receives the instruction to exit the vehicle target object following mode.
Based on the target object following method provided by any of the above embodiments, the present application further provides an embodiment of a target object following apparatus; see fig. 8 for details.
Fig. 8 shows a schematic diagram of a target object following apparatus according to an embodiment of the application. As shown in fig. 8, the target object following apparatus 800 may include:
The acquiring module 810 is configured to acquire, in a case of being in the vehicle target object following mode, an image sequence of the target object during movement.
A first determining module 820, configured to determine a set of motion trajectory points of the target object according to the image sequence.
The second determining module 830 is configured to determine a quasi-motion trajectory point set of the vehicle according to the motion trajectory point set of the target object and relative position information, where the relative position information is relative position information between the vehicle and the target object.
The planning module 840 is configured to plan a driving path of the vehicle according to the set of quasi-motion track points of the vehicle.
A following module 850 for following the target object according to the travel path.
In the embodiment of the application, in the case of being in the vehicle target object following mode, after the motion track point set of the target object is determined according to the image sequence of the target object during movement, the quasi-motion track point set of the vehicle is determined according to the motion track point set and the relative position information. The travel path of the vehicle planned according to this quasi-motion track point set is therefore a path that matches the motion trajectory of the target object, so that the vehicle accurately and automatically follows the target object along the travel path.
In some embodiments, to improve the efficiency of acquiring the motion trajectory point, the first determining module 820 may include:
And the first determining submodule is used for determining the motion track point of the target object according to the image sequence and the target object following regression model.
According to the device provided by the embodiment of the application, during the movement of the target object, the motion track point of the target object is determined from the image sequence and the target object following regression model; since the motion track point is determined autonomously by the model, the acquisition efficiency of the motion track point is improved.
In some embodiments, to improve the efficiency of acquiring the motion trajectory point, the first determining sub-module may include:
The first extraction unit is used for extracting first target object features of a first frame image through a first feature extraction network and extracting second target object features of a second frame image through the second feature extraction network, wherein the first frame image is an image positioned before the second frame image in the image sequence.
And the first determining unit is used for inputting the first target object characteristic and the second target object characteristic into the full-connection layer, and determining the relative movement trend of the target object through the full-connection layer.
And the second determining unit is used for determining coordinate points of preset feature points of a boundary frame of the target object in the second frame image according to the relative motion trend.
And the third determining unit is used for determining the motion track point of the target object in the second frame image according to the coordinate point of the preset characteristic point of the boundary box.
According to the device, the first feature extraction network, the second feature extraction network and the fully connected layer process the two adjacent frame images to obtain the relative motion trend of the target object; the bounding box of the target object in the second frame image is determined according to the relative motion trend, which defines the displacement range of the target object in the second frame image and, further, the motion track point determined by the coordinate point of the preset feature point of the bounding box. The position of the target object after movement is thus obtained without complex calculation steps, improving the efficiency of obtaining the motion track point.
In some embodiments, in order to improve the accuracy of the automatic following of the vehicle, the target object following apparatus 800 may further include:
The extraction module is used for extracting coordinate points of the oriented FAST and rotated BRIEF (ORB) feature points of the target object in the second frame image;
the first conversion module is used for converting the coordinate points of the ORB characteristic points into first pixel coordinate points in a pixel coordinate system according to a first coordinate conversion relation among coordinate systems;
The second conversion module is used for converting coordinate points of preset feature points of the boundary frame into second pixel coordinate points in a pixel coordinate system according to a second coordinate conversion relation among coordinate systems;
The generating module is configured to generate a motion track point of the target object in the second frame image according to the first pixel coordinate point and the second pixel coordinate point.
According to the device in the embodiment of the application, the motion track point of the target object in the second frame image is generated from the second pixel coordinate point of the preset feature point of the bounding box and the first pixel coordinate point of the ORB feature point of the target object in the second frame image. That is, on the basis of the motion range of the target object defined by the bounding box in the second frame image, the ORB feature points of the target object are further combined to obtain the motion track point, improving the accuracy of calculating the motion track point of the target object and, in turn, the accuracy of vehicle following.
In some embodiments, to improve accuracy of vehicle following, the planning module 840 may include:
and the planning sub-module is used for planning the running path of the vehicle according to the quasi-motion track of the vehicle when the number of track points in the quasi-motion track point set of the vehicle is larger than a preset value.
According to the device provided by the embodiment of the application, the travel path of the vehicle is planned according to the quasi-motion trajectory of the vehicle only after a certain number of track points have accumulated; that is, the travel path is planned after the target object has moved for a period of time, so that the travel path fits the motion trajectory of the user more closely, improving the following accuracy of the vehicle.
In some embodiments, to improve the safety of the vehicle's automatic following, the following module 850 may further include:
And the detection sub-module is used for stopping following the target object and sending first prompt information to a target terminal when detecting that an obstacle is encountered in the driving path in the process of following the target object, wherein the target terminal is in communication connection with the vehicle, and the first prompt information is used for prompting the vehicle to stop following the target object.
According to the device provided by the embodiment of the application, following of the target object is stopped when an obstacle is detected, and prompt information about stopping following is sent to the target object, so that the target object can be reminded that the vehicle has encountered an obstacle and has currently stopped following. On the one hand, stopping the vehicle immediately when an obstacle is detected prevents the vehicle from scraping against the obstacle; on the other hand, when the target object or another object is in front of the vehicle, it can be detected as an obstacle so that the vehicle stops following, preventing the vehicle from knocking over the target object or other objects and improving the safety of the vehicle during automatic following.
In some embodiments, in order to improve the accuracy of the automatic following of the vehicle, the target object following apparatus 800 may further include:
the first acquisition module is used for acquiring the image to be identified acquired by the camera.
And the identification module is used for identifying the target object according to the image to be identified.
The device in the embodiment of the application also identifies the target object before following it, so as to ensure that a specific object is followed, avoid following an incorrect object, and improve the accuracy of vehicle following.
In some embodiments, to meet the user's demand for a vehicle to follow different scene applications, the target object following apparatus 800 may further include:
And the receiving module is used for receiving the second prompt information sent by the target terminal.
And the starting module is used for starting a vehicle target object following mode according to the second prompt information.
According to the device provided by the embodiment of the application, the vehicle target object following mode is started only after the second prompt information sent by the target terminal is received; that is, when the user wants the vehicle to follow, the user can control the vehicle through the target terminal to start following, meeting the user's needs for applying vehicle following in different scenarios.
Based on the target object following method provided by any one of the embodiments, the application further provides a vehicle embodiment.
The vehicle includes a camera and a processor.
The camera is used for acquiring an image sequence of the target object in the motion process under the condition of being in a vehicle target object following mode;
a processor, configured to execute the target object following method according to any one of the embodiments of the present application.
The vehicle has the functions of implementing the steps in the above method embodiments and can achieve the corresponding technical effects; for brevity, details are not repeated here. Furthermore, in combination with the target object following method in the above embodiments, as shown in fig. 9, an embodiment of the present application may provide a target object following device, which may include a processor 910 and a memory 920 storing computer program instructions.
In particular, the processor 910 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present application.
Memory 920 may include mass storage for data or instructions. By way of example, and not limitation, memory 920 may include a hard disk drive (HDD), floppy disk drive, flash memory, optical disk, magneto-optical disk, magnetic tape, or universal serial bus (USB) drive, or a combination of two or more of these. Memory 920 may include removable or non-removable (or fixed) media where appropriate. Memory 920 may be internal or external to the target object following device, where appropriate. In a particular embodiment, the memory 920 is a non-volatile solid state memory. In a particular embodiment, the memory 920 includes read-only memory (ROM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate.
The processor 910 implements any of the target object following methods of the above embodiments by reading and executing computer program instructions stored in the memory 920.
In one example, the electronic device may also include a communication interface 930 and a bus 940. As shown in fig. 9, the processor 910, the memory 920, and the communication interface 930 are connected and communicate with each other through a bus 940.
The communication interface 930 is mainly used to implement communication between modules, devices, units and/or devices in the embodiments of the present application.
Bus 940 includes hardware, software, or both that couple components of the electronic device to each other. By way of example, and not limitation, the buses may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or other suitable bus, or a combination of two or more of the above. Bus 940 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
The target object following apparatus implements the target object following method according to any of the embodiments described above when executing the computer program instructions.
In addition, in combination with the above target object following method, the embodiment of the present application may provide a computer storage medium having stored thereon computer program instructions that when executed by a processor implement the target object following method according to any of the above embodiments.
It should be understood that the application is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. The method processes of the present application are not limited to the specific steps described and shown, but various changes, modifications and additions, or the order between steps may be made by those skilled in the art after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. The present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present application, and they should be included in the scope of the present application.

Claims (12)

1. A target object following method, characterized by being applied to a vehicle, the method comprising:
under the condition of being in a vehicle target object following mode, acquiring an image sequence of a target object in a motion process;
determining a motion trail point set of the target object according to the image sequence;
Determining a quasi-motion track point set of a vehicle according to the motion track point set of the target object and relative position information, wherein the relative position information is the relative position information between the vehicle and the target object;
planning a running path of the vehicle according to the quasi-motion track point set of the vehicle;
Following the target object according to the travel path;
the determining the motion trail point set of the target object according to the image sequence comprises the following steps:
determining a relative motion trend of the target object on a second frame image relative to the first frame image according to the first frame image and the second frame image, wherein the first frame image is an image positioned before the second frame image in the image sequence;
determining coordinate points of preset feature points of a boundary frame of the target object in the second frame image according to the relative motion trend;
extracting coordinate points of oriented FAST and rotated BRIEF (ORB) feature points of the target object in the second frame image;
Converting the coordinate points of the ORB characteristic points into first pixel coordinate points in a pixel coordinate system according to a first coordinate conversion relation between coordinate systems;
Converting coordinate points of preset feature points of the boundary frame into second pixel coordinate points in a pixel coordinate system according to a second coordinate conversion relation between coordinate systems;
And generating a motion track point of the target object in the second frame image according to the first pixel coordinate point and the second pixel coordinate point.
2. The method according to claim 1, wherein the determining the set of motion trajectory points of the target object according to the image sequence specifically comprises:
and determining the motion trail point of the target object according to the image sequence and the target object following regression model.
3. The method of claim 2, wherein the target object following regression model comprises a first feature extraction network and a second feature extraction network in parallel, and a fully connected layer;
the determining, according to a first frame image and a second frame image, a relative motion trend of the target object in the second frame image relative to that in the first frame image specifically comprises:
extracting a first target object feature of the first frame image through the first feature extraction network, and extracting a second target object feature of the second frame image through the second feature extraction network, wherein the first frame image is an image located before the second frame image in the image sequence;
and inputting the first target object feature and the second target object feature into the fully connected layer, and determining the relative motion trend of the target object in the second frame image relative to the first frame image through the fully connected layer.
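The two-branch regression model of claim 3 can be sketched as below. The stand-in "feature extraction network" (a single weight matrix with a tanh), all layer sizes, and the 2-D (dx, dy) trend output are assumptions; the claim only fixes the topology of two parallel feature extractors feeding one fully connected layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_net(image, W):
    """Stand-in feature extraction network: flatten the crop and apply
    one weight matrix (a real model would use a CNN backbone)."""
    return np.tanh(W @ np.asarray(image, dtype=float).ravel())

# Hypothetical sizes: 8x8 grayscale crops, 16-D features, 2-D trend (dx, dy).
W1 = rng.standard_normal((16, 64)) * 0.1   # first feature extraction network
W2 = rng.standard_normal((16, 64)) * 0.1   # second feature extraction network
W_fc = rng.standard_normal((2, 32)) * 0.1  # fully connected layer

def motion_trend(frame1_crop, frame2_crop):
    f1 = feature_net(frame1_crop, W1)  # first target object feature
    f2 = feature_net(frame2_crop, W2)  # second target object feature
    # Concatenate both features and regress the relative motion trend.
    return W_fc @ np.concatenate([f1, f2])

trend = motion_trend(rng.random((8, 8)), rng.random((8, 8)))
```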
4. The method according to any one of claims 1-3, wherein the planning of the travel path of the vehicle according to the quasi-motion trajectory point set of the vehicle specifically comprises:
when the number of trajectory points in the quasi-motion trajectory point set of the vehicle is greater than a preset value, planning the travel path of the vehicle according to the quasi-motion trajectory point set of the vehicle.
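The count-gated planning of claim 4 amounts to the following sketch. The preset value of 5 and the moving-average smoothing of the resulting polyline are assumptions; the claim only specifies the gate on the number of accumulated points.

```python
def plan_travel_path(quasi_points, preset_value=5):
    """Plan only once enough quasi-motion trajectory points have
    accumulated, then emit a lightly smoothed polyline."""
    if len(quasi_points) <= preset_value:
        return None  # not enough points yet; keep collecting
    path = []
    for i in range(len(quasi_points)):
        # Average each point with its immediate neighbors.
        lo, hi = max(0, i - 1), min(len(quasi_points), i + 2)
        window = quasi_points[lo:hi]
        path.append((sum(p[0] for p in window) / len(window),
                     sum(p[1] for p in window) / len(window)))
    return path
```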
5. The method according to any one of claims 1-3, further comprising, after following the target object according to the travel path:
stopping following the target object when an obstacle is detected on the travel path during the following of the target object, and sending first prompt information to a target terminal, wherein a communication connection is established between the target terminal and the vehicle, and the first prompt information is used for prompting that the vehicle has stopped following the target object.
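One iteration of the stop-and-notify behavior of claim 5 might look like the sketch below. `FollowState` and the `send_prompt` callable are hypothetical names standing in for the vehicle's state and its communication channel to the target terminal, neither of which the claim specifies.

```python
import dataclasses

@dataclasses.dataclass
class FollowState:
    following: bool = True  # vehicle target object following mode active

def follow_step(state, obstacle_detected, send_prompt):
    """Stop following and send the first prompt information when an
    obstacle appears on the travel path; prompt only once per stop."""
    if state.following and obstacle_detected:
        state.following = False
        send_prompt("vehicle stopped following: obstacle on travel path")
    return state.following
```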
6. The method of claim 1, further comprising, prior to acquiring the sequence of images of the target object during motion:
acquiring an image to be identified acquired by a camera;
and identifying the target object according to the image to be identified.
7. The method of claim 6, further comprising, prior to acquiring the image to be identified acquired by the camera:
receiving second prompt information sent by a target terminal;
and starting a vehicle target object following mode according to the second prompt information.
8. A target object following apparatus, the apparatus comprising:
the acquisition module is used for acquiring an image sequence of the target object during motion when in a vehicle target object following mode;
the first determining module is used for determining a motion trajectory point set of the target object according to the image sequence;
the second determining module is used for determining a quasi-motion trajectory point set of the vehicle according to the motion trajectory point set of the target object and relative position information, wherein the relative position information is the relative position information between the vehicle and the target object;
the planning module is used for planning a travel path of the vehicle according to the quasi-motion trajectory point set of the vehicle;
the following module is used for following the target object according to the travel path;
the first determining module is specifically configured to:
determine a relative motion trend of the target object in a second frame image relative to a first frame image according to the first frame image and the second frame image, wherein the first frame image is an image located before the second frame image in the image sequence;
determine coordinate points of preset feature points of a bounding box of the target object in the second frame image according to the relative motion trend;
extract coordinate points of oriented FAST and rotated BRIEF (ORB) feature points of the target object in the second frame image;
convert the coordinate points of the ORB feature points into first pixel coordinate points in a pixel coordinate system according to a first coordinate conversion relation between coordinate systems;
convert the coordinate points of the preset feature points of the bounding box into second pixel coordinate points in the pixel coordinate system according to a second coordinate conversion relation between coordinate systems;
and generate a motion trajectory point of the target object in the second frame image according to the first pixel coordinate points and the second pixel coordinate points.
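The second determining module's step, deriving the vehicle's quasi-motion trajectory points from the target's trajectory and the relative position information, can be sketched as below. A constant planar offset is assumed; the claims leave the exact relation between the two point sets unspecified.

```python
def quasi_motion_points(target_points, relative_position):
    """Offset each target trajectory point by the vehicle-to-target
    relative position to obtain the vehicle's quasi-motion points."""
    dx, dy = relative_position
    return [(x - dx, y - dy) for x, y in target_points]
```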
9. An electric vehicle, characterized by comprising:
a camera, for acquiring an image sequence of the target object during motion when in a vehicle target object following mode;
a processor, for performing the method according to any one of claims 1-7.
10. A target object following apparatus, the apparatus comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the target object following method according to any one of claims 1-7.
11. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the target object following method according to any one of claims 1-7.
12. A computer program product, characterized in that instructions in the computer program product, when executed by a processor of an electronic device, cause the electronic device to perform the target object following method according to any one of claims 1-7.
CN202111668239.2A 2021-12-31 2021-12-31 Target object following method, apparatus, device, medium and computer program product Active CN114353818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111668239.2A CN114353818B (en) 2021-12-31 2021-12-31 Target object following method, apparatus, device, medium and computer program product

Publications (2)

Publication Number Publication Date
CN114353818A CN114353818A (en) 2022-04-15
CN114353818B true CN114353818B (en) 2024-05-14

Family

ID=81105716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111668239.2A Active CN114353818B (en) 2021-12-31 2021-12-31 Target object following method, apparatus, device, medium and computer program product

Country Status (1)

Country Link
CN (1) CN114353818B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115793660A (en) * 2022-12-20 2023-03-14 上海邦邦机器人有限公司 Following control method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470353A (en) * 2018-03-01 2018-08-31 腾讯科技(深圳)有限公司 A kind of method for tracking target, device and storage medium
CN110570460A (en) * 2019-09-06 2019-12-13 腾讯云计算(北京)有限责任公司 Target tracking method and device, computer equipment and computer readable storage medium
CN111597965A (en) * 2020-05-13 2020-08-28 广州小鹏车联网科技有限公司 Vehicle following method and device
CN111612823A (en) * 2020-05-21 2020-09-01 云南电网有限责任公司昭通供电局 Robot autonomous tracking method based on vision
CN112507859A (en) * 2020-12-05 2021-03-16 西北工业大学 A Visual Tracking Method for Mobile Robots
CN112634368A (en) * 2020-12-26 2021-04-09 西安科锐盛创新科技有限公司 Method and device for generating space and OR graph model of scene target and electronic equipment
CN113157003A (en) * 2021-01-19 2021-07-23 恒大新能源汽车投资控股集团有限公司 Vehicle following method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160379074A1 (en) * 2015-06-25 2016-12-29 Appropolis Inc. System and a method for tracking mobile objects using cameras and tag devices
CN108351654B (en) * 2016-02-26 2021-08-17 深圳市大疆创新科技有限公司 System and method for visual target tracking

Also Published As

Publication number Publication date
CN114353818A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN113554698B (en) Vehicle pose information generation method and device, electronic equipment and storage medium
Toulminet et al. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis
Broggi et al. Visual perception of obstacles and vehicles for platooning
EP2570993B1 (en) Egomotion estimation system and method
Suhr et al. Automatic parking space detection and tracking for underground and indoor environments
Bensrhair et al. A cooperative approach to vision-based vehicle detection
CN108877269B (en) A method for vehicle status detection and V2X broadcasting at intersections
CN110942038B (en) Traffic scene recognition method and device based on vision, medium and electronic equipment
KR20180056685A (en) System and method for non-obstacle area detection
JP2001242934A (en) Obstacle detection equipment, method therefor, and recording medium containing an obstacle detection program
CN113997931B (en) Overhead image generation device, overhead image generation system, and automatic parking device
KR101544021B1 (en) Apparatus and method for generating 3d map
CN110717445B (en) Front vehicle distance tracking system and method for automatic driving
CN110443178A (en) A kind of monitoring system and its method of vehicle violation parking
KR20150049529A (en) Apparatus and method for estimating the location of the vehicle
CN113793297A (en) Pose determination method and device, electronic equipment and readable storage medium
CN114419098A (en) Moving target trajectory prediction method and device based on visual transformation
CN113486850A (en) Traffic behavior recognition method and device, electronic equipment and storage medium
CN114353818B (en) Target object following method, apparatus, device, medium and computer program product
KR101456172B1 (en) Localization of a mobile robot device, method and mobile robot
CN115790568A (en) Map generation method based on semantic information and related equipment
CN113569812A (en) Unknown obstacle identification method and device and electronic equipment
CN111380535A (en) Navigation method and device based on visual label, mobile machine and readable medium
CN117523914A (en) Collision early warning method, device, equipment, readable storage medium and program product
CN115547031A (en) Track evidence obtaining method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant