
CN113591720A - Lane departure detection method, apparatus and computer storage medium - Google Patents


Info

Publication number
CN113591720A
CN113591720A (application CN202110880287.1A; granted publication CN113591720B)
Authority
CN
China
Prior art keywords
lane
information
vehicle
road
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110880287.1A
Other languages
Chinese (zh)
Other versions
CN113591720B (en)
Inventor
胡博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd filed Critical Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN202110880287.1A priority Critical patent/CN113591720B/en
Publication of CN113591720A publication Critical patent/CN113591720A/en
Application granted granted Critical
Publication of CN113591720B publication Critical patent/CN113591720B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract



The present application discloses a lane departure detection method, apparatus and computer storage medium. The method includes: acquiring at least two frames of images containing lane information of a road, the at least two frames being captured simultaneously by at least two image capturing devices arranged side by side in a preset direction of the vehicle; stitching the at least two frames laterally in the order of their shooting positions, and determining, based on the resulting road straight mosaic image, first lane information corresponding to the target lane in which the vehicle is located on the road; acquiring, according to the positioning information of the vehicle, second lane information corresponding to the positioning lane in which the vehicle is located on the road; and detecting, according to the first lane information and the second lane information, whether the target lane and the positioning lane are the same lane, so as to judge whether the vehicle has deviated from its lane based on the detection result. In this way, lane departure can be detected promptly and accurately, and the operation is convenient and reliable.


Description

Lane departure detection method, apparatus and computer storage medium
Technical Field
The present invention relates to the field of vehicle driving technologies, and in particular, to a lane departure detection method, apparatus, and computer storage medium.
Background
The traditional lane departure detection method performs lane-level matching between a high-precision map and the perception result of a single front-view frame shot by a single camera. Under normal conditions, such a single frame yields a perception result covering only about ten metres of the current lane, and the shooting angle prevents the information of adjacent lanes from being detected accurately. Since a multi-lane road usually contains several lanes with similar or even identical markings, an accurate detection result cannot be obtained with the traditional method once the positioning result of the vehicle is inaccurate. How to detect whether a vehicle has departed from its lane in a timely and accurate manner is therefore still under investigation.
Disclosure of Invention
In view of the above technical problems, the present application provides a lane departure detection method, apparatus, and computer storage medium, which can detect whether a vehicle has a lane departure in time and accurately, and are convenient and reliable to operate.
In order to solve the above technical problem, the present application provides a lane departure detection method, including the steps of:
acquiring at least two frames of images including lane information of a road; wherein the at least two frames of images are simultaneously captured by at least two image capturing devices arranged side by side in a preset direction of the vehicle;
transversely splicing the at least two frames of images according to the shooting position sequence, and determining first lane information corresponding to a target lane of the vehicle in the road based on the obtained road straight splicing images;
acquiring second lane information corresponding to a positioning lane of the vehicle in the road according to the positioning information of the vehicle;
and detecting whether the target lane and the positioning lane are the same lane according to the first lane information and the second lane information so as to judge whether the vehicle deviates from the lane based on the detection result.
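The four claimed steps can be sketched as a short orchestration function. This is a hypothetical illustration, not the patent's implementation: the stitching-and-perception stage and the map-positioning stage are passed in as callables, and the function name and lane-information dictionaries are assumptions for the example.

```python
from typing import Callable, Dict, Sequence

def detect_lane_departure(
    frames: Sequence,                      # images captured simultaneously, side by side
    perceive: Callable[[Sequence], Dict],  # steps 1-2: stitch the frames, perceive the target lane
    locate: Callable[[], Dict],            # step 3: positioning lane from the vehicle's localization
) -> bool:
    """Return True when a lane departure is detected.

    Sketch of the claimed method: the perceived target lane (first lane
    information) is compared with the map-positioned lane (second lane
    information); a mismatch indicates a lane departure.
    """
    if len(frames) < 2:
        raise ValueError("at least two simultaneously captured frames are required")
    first_lane_info = perceive(frames)   # first lane information (target lane)
    second_lane_info = locate()          # second lane information (positioning lane)
    # Step 4: the vehicle has deviated when the two lanes are not the same.
    return first_lane_info != second_lane_info
```

For example, `detect_lane_departure(["left_img", "right_img"], lambda f: {"lane_id": 2}, lambda: {"lane_id": 3})` reports a departure because the perceived and positioned lanes disagree.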
Optionally, there are three image capturing devices, whose shooting regions respectively correspond to a region directly in front of the vehicle, a left-front region and a right-front region, and the acquiring at least two frames of images including lane information of a road includes:
controlling the image capturing devices to simultaneously photograph the region directly in front of the vehicle, the left-front region and the right-front region, respectively, to obtain at least three frames of images including lane information of the road.
Optionally, the transversely stitching the at least two frames of images according to the shooting position sequence includes:
acquiring at least two frames of top-view conversion maps corresponding to the at least two frames of images;
removing the lateral offset information in the at least two frames of top-view conversion maps, taking the driving direction of the vehicle as the longitudinal direction;
and transversely stitching the at least two frames of top-view conversion maps according to the shooting position sequence to obtain a road straight mosaic image.
Optionally, the acquiring of the at least two top-view conversion maps corresponding to the at least two frames of images includes:
acquiring internal parameters, external parameters and distortion parameters of the image capturing devices; wherein the internal parameters include at least one of a focal length, an optical center and a distortion parameter, and/or the external parameters include at least one of a pitch angle, a yaw angle and a ground height;
and performing inverse perspective conversion on the at least two frames of images according to the internal parameters, external parameters and distortion parameters of the image capturing devices.
Optionally, the transversely stitching the at least two frames of top-view conversion maps according to the shooting position sequence to obtain a road straight mosaic image includes:
acquiring pose information of the image capturing devices;
and stitching the at least two frames of top-view conversion maps sequentially in a stagger-and-cover manner according to the pose information, and/or stitching the top-view conversion maps after cropping them according to the pose information, to obtain a road straight mosaic image.
Optionally, the determining, based on the obtained road straight mosaic image, first lane information corresponding to a target lane where the vehicle is located in the road includes:
performing visual perception on the road straight mosaic image to obtain a road feature point bitmap;
performing geometric recovery on the road feature point bitmap according to the pose information, the optical center and the pixel source of each frame of top-view conversion map, to obtain a road feature image;
and acquiring the first lane information corresponding to the target lane of the vehicle on the road according to the road feature image and the pose information of the vehicle.
Optionally, the obtaining, according to the positioning information of the vehicle, second lane information corresponding to a lane where the vehicle is located in the road includes:
according to the positioning information of the vehicle, local map information corresponding to the positioning information is obtained;
and determining second lane information corresponding to the positioning lane of the vehicle in the road according to the local map information.
Optionally, the detecting whether the target lane and the positioning lane are the same lane according to the first lane information and the second lane information, to determine whether the vehicle deviates from the lane based on the detection result, includes:
comparing whether the first lane information and the second lane information are consistent;
if they are consistent, judging that the target lane and the positioning lane are the same lane;
and if they are not consistent, judging that the target lane and the positioning lane are not the same lane.
Accordingly, the present application provides a lane departure detection apparatus that performs the above method, comprising: a processor and a memory storing a computer program, the steps of the lane departure detection method being implemented when the processor runs the computer program.
Accordingly, the present application provides a computer storage medium having a computer program stored therein, which when executed by a processor, implements the steps of the above-described lane departure detection method.
As described above, the lane departure detection method, apparatus and computer storage medium of the present application include: acquiring at least two frames of images including lane information of a road, the at least two frames being captured simultaneously by at least two image capturing devices arranged side by side in a preset direction of the vehicle; transversely stitching the at least two frames of images according to the shooting position sequence, and determining, based on the obtained road straight mosaic image, first lane information corresponding to the target lane of the vehicle in the road; acquiring, according to the positioning information of the vehicle, second lane information corresponding to the positioning lane of the vehicle in the road; and detecting, according to the first lane information and the second lane information, whether the target lane and the positioning lane are the same lane, so as to judge whether the vehicle deviates from the lane based on the detection result. Thus, by stitching multiple frames of images containing the lane information of the road, determining the information of the target lane of the vehicle from the obtained road straight mosaic image, and combining it with the positioning information of the vehicle, lane departure can be detected timely and accurately, and the operation is convenient and reliable.
Drawings
Fig. 1 is a schematic flow chart of a lane departure detection method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a lane departure detection method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of the present invention before image stitching;
FIG. 4 is a schematic diagram of image stitching according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating visual perception and geometric recovery in an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a process of detecting whether a vehicle deviates from a lane according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a lane departure detection apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of an element by the phrase "comprising a …" does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element. Further, similarly-named elements, features, or items in different embodiments of the disclosure may have the same meaning or may have different meanings; the particular meaning is to be determined by its interpretation in, or by the context of, the specific embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when …" or "upon …" or "in response to determining", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times and in different orders, and may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
It should be noted that step numbers such as S101 and S102 are used herein for the purpose of more clearly and briefly describing the corresponding contents, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S102 first and then S101 in specific implementations, but these steps should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for the convenience of description of the present application, and have no specific meaning in themselves. Thus, "module", "component" or "unit" may be used mixedly.
Referring to fig. 1, the lane departure detection method provided in the embodiment of the present application may be executed by the lane departure detection apparatus provided in the embodiment of the present application. The apparatus may be implemented in software and/or hardware, and may be applied to a cloud server and/or a terminal device; in this embodiment, the method is described as applied to a vehicle-mounted terminal by way of example. The lane departure detection method includes the following steps:
step S101: acquiring at least two frames of images including lane information of a road; wherein the at least two frames of images are simultaneously captured by at least two image capturing devices arranged side by side in a preset direction of the vehicle;
alternatively, the lane information includes, but is not limited to, lane line information, lane marking information, speed limit information, etc.; the image capturing devices may be cameras, onboard cameras, etc., and their number may be equal to or greater than two. The preset direction may be set according to actual requirements; for example, it may be the front direction of the vehicle or the rear direction of the vehicle. If there are two image capturing devices, their shooting areas respectively correspond to the left and right areas in front of the vehicle; that is, the images they capture should, after seamless lateral stitching, cover the area in front of the vehicle, and the two shooting areas may partially overlap. If there are three image capturing devices, their shooting areas respectively correspond to an area directly in front of the vehicle, a left-front area and a right-front area, and the acquiring at least two frames of images including lane information of a road includes: controlling the image capturing devices to simultaneously photograph the area directly in front of the vehicle, the left-front area and the right-front area, respectively, to obtain at least three frames of images including lane information of the road. It can be understood that, by photographing different areas in front of the vehicle with a plurality of image capturing devices, not only the information of the vehicle's current driving lane but also the information of the surrounding lanes can be obtained. Compared with photographing with a single image capturing device, more useful information is acquired, which further improves the accuracy of lane departure detection.
Step S102: transversely splicing the at least two frames of images according to the shooting position sequence, and determining first lane information corresponding to a target lane of the vehicle in the road based on the obtained road straight splicing images;
optionally, the transversely stitching the at least two frames of images according to the shooting position sequence includes: acquiring at least two frames of top-view conversion maps corresponding to the at least two frames of images; removing the lateral offset information in the at least two frames of top-view conversion maps, taking the driving direction of the vehicle as the longitudinal direction; and transversely stitching the at least two frames of top-view conversion maps according to the shooting position sequence to obtain a road straight mosaic image.
Optionally, the acquiring of the at least two top-view conversion maps corresponding to the at least two frames of images includes: acquiring internal parameters, external parameters and distortion parameters of the image capturing devices, wherein the internal parameters include at least one of a focal length, an optical center and a distortion parameter, and/or the external parameters include at least one of a pitch angle, a yaw angle and a ground height; and performing inverse perspective conversion on the at least two frames of images according to the internal parameters, external parameters and distortion parameters of the image capturing devices. It will be appreciated that, in order to capture information such as the lane lines of a road, the image capturing device is usually inclined relative to the road surface rather than facing vertically downwards (orthographic projection), so the image needs to be corrected into an orthographic form by a perspective transformation. The inverse perspective conversion may apply an IPM (Inverse Perspective Mapping) algorithm to the multi-frame images according to the internal parameters, external parameters and distortion parameters of the image capturing devices. In an embodiment, the internal parameters include at least one of a focal length, an optical center and a distortion parameter. In another embodiment, the external parameters include at least one of a pitch angle, a yaw angle and a ground height.
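As an illustration of the inverse perspective conversion, the following sketch projects a single pixel onto the ground plane under a simplified pinhole-camera model. It is not the patent's implementation: distortion is assumed to be pre-corrected, only the pitch angle and mounting height among the external parameters are used (yaw is ignored), and the function name and its arguments are hypothetical.

```python
import math

def pixel_to_ground(u, v, fx, fy, cx, cy, pitch, cam_height):
    """Project an image pixel onto the ground plane (inverse perspective mapping).

    fx, fy, cx, cy are the internal parameters (focal lengths and optical
    center); pitch and cam_height are the external parameters (downward tilt
    in radians, mounting height in metres). Returns (lateral X, forward Z)
    in metres on the road surface.
    """
    # Ray through the pixel in camera coordinates (x right, y down, z forward).
    dx = (u - cx) / fx
    dy = (v - cy) / fy
    dz = 1.0
    # Rotate the ray into the vehicle frame: the camera is pitched down by `pitch`.
    wy = dy * math.cos(pitch) + dz * math.sin(pitch)
    wz = -dy * math.sin(pitch) + dz * math.cos(pitch)
    if wy <= 0:
        raise ValueError("pixel is at or above the horizon; no ground intersection")
    # Intersect the ray with the ground plane `cam_height` below the camera.
    s = cam_height / wy
    return dx * s, wz * s
```

Applying this mapping to every pixel of a frame yields the orthographic top-view conversion map with metric scale, since each projected point carries real road coordinates.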
In this way, by performing top-view conversion on at least two frames of images taken at the same time by different image capturing devices, the top-view conversion maps of the orthographic projections corresponding to the at least two frames can be obtained; that is, based on the parameters of the image capturing devices, actual road data with specific dimensions can be obtained by measuring the feature points in the images.
Alternatively, since the lateral offset information of the road is not needed in the visual perception, the at least two top-view conversion maps may be processed laterally before the images are stitched; that is, taking the driving direction of the vehicle as the longitudinal direction, the lateral offset information of a curved road may be removed.
Alternatively, since different images are captured by different image capturing devices whose positions are fixed, the order of the capturing positions of the at least two frames of images is also fixed, and the at least two frames of top-view conversion maps can therefore be transversely stitched in the order of the capturing positions to obtain a road straight mosaic image. Optionally, the transversely stitching the at least two frames of top-view conversion maps according to the shooting position sequence to obtain a road straight mosaic image includes: acquiring pose information of the image capturing devices; and stitching the at least two frames of top-view conversion maps sequentially in a stagger-and-cover manner according to the pose information, and/or stitching the top-view conversion maps after cropping them according to the pose information, to obtain a road straight mosaic image. Here, the pose information includes a position and a posture: the position is three-dimensional information in space, and the posture is a three-dimensional rotation. After the pose information of each image capturing device is calculated, the corresponding images are copied to specific positions and stitched at specific angles. The stitched road straight mosaic image includes not only the driving lane information but also the surrounding lane information, so that a wider road image, contained in multiple frames over a larger physical scale, can be visually perceived.
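The stagger-and-cover splicing can be sketched as pasting each top-view map at a column offset derived from its camera pose. This is a hypothetical simplification: each map is a row-major pixel grid, rotation is assumed already compensated, and later maps overwrite earlier ones in the overlap region.

```python
def stitch_topviews(topviews, col_offsets, width):
    """Laterally stitch per-camera top-view maps into one road mosaic.

    `topviews` are row-major pixel grids of equal height; `col_offsets`
    gives the starting column of each camera's map in the mosaic (from the
    camera poses). Later maps cover earlier ones where they overlap.
    """
    rows = len(topviews[0])
    mosaic = [[None] * width for _ in range(rows)]
    for view, off in zip(topviews, col_offsets):
        for r, row in enumerate(view):
            for c, px in enumerate(row):
                if 0 <= off + c < width:
                    mosaic[r][off + c] = px
    return mosaic
```

For instance, stitching a left map `[["L", "L"]]` at offset 0 and a right map `[["R", "R"]]` at offset 1 into a width-3 mosaic yields the single row `["L", "R", "R"]`, the overlap column being covered by the later map.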
Optionally, the determining, based on the obtained road straight mosaic image, first lane information corresponding to a target lane where the vehicle is located in the road includes: performing visual perception on the road straight mosaic image to obtain a road feature point bitmap; performing geometric recovery on the road feature point bitmap according to the pose information, the optical center and the pixel source of each frame of top-view conversion map, to obtain a road feature image; and acquiring the first lane information corresponding to the target lane of the vehicle on the road according to the road feature image and the pose information of the vehicle. Here, the visual perception of the road straight mosaic image may be based on a preset visual perception model: the road straight mosaic image is taken as the input of the model, whose output is a road feature point bitmap; the model may be trained in advance on non-real-time images from the same vehicle or from different vehicles. Visual perception of the road straight mosaic image yields a road feature point bitmap including information such as lane lines, pedestrian crossings, speed limit signs and lane markings. Because the road straight mosaic image used for visual perception is obtained by transversely stitching at least two top-view conversion maps from which the lateral offset information of the road has been removed, the perceived road feature point bitmap lacks the lateral offset information. The road feature point bitmap therefore needs to be geometrically recovered: the pose information, the optical center and the pixel source of each frame are used to restore the lateral offset information of the road, yielding the road feature image.
It can be understood that, based on the road feature image, the number of lanes, the number of lane lines, the position and type of each lane line, and whether the lanes contain lane markings or speed limit markings can be known. Then, from the road feature image and the pose information of the vehicle, the positional relationship between the target lane of the vehicle and each lane (i.e., each lane line) in the road can be obtained, for example whether the target lane is the lane between the first and second lane lines. Specifically, if the number of lanes contained in the road feature image is smaller than the number of lanes in the road, the at least two frames of images captured only part of the lanes in the road; in this case, the first lane information corresponding to the target lane includes the number of surrounding lanes and/or the types of the surrounding lane lines, for example how many lanes lie on each side of the target lane, the types of the lane lines forming the target lane, and the number and types of the lane lines on its left and right sides. If the number of lanes contained in the road feature image equals the number of lanes in the road, the at least two frames of images captured all the lanes in the road; in this case, which lane of the road the vehicle is in can be determined directly from the road feature image and the pose information of the vehicle, that is, the first lane information corresponding to the target lane is the lane identifier of the target lane.
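The case distinction above can be sketched as a small helper. The function name, its arguments, and the dictionary keys are hypothetical: when the stitched view covers every lane of the road, the lane identifier itself is reported; otherwise only the counts of lanes to the left and right of the target lane are reliable.

```python
def first_lane_info(visible_lanes, road_lanes, target_index):
    """Derive first-lane information from the recovered road feature image.

    `visible_lanes` is the number of lanes visible in the feature image,
    `road_lanes` the total number of lanes of the road, and `target_index`
    the 1-based index of the vehicle's lane within the feature image.
    """
    if visible_lanes == road_lanes:
        # All lanes captured: the index within the image is the lane identifier.
        return {"lane_id": target_index}
    # Partial view: report only the surrounding-lane counts.
    return {"left": target_index - 1, "right": visible_lanes - target_index}
```

On a four-lane road, `first_lane_info(4, 4, 2)` yields `{"lane_id": 2}`, while a partial three-lane view gives `first_lane_info(3, 4, 2)` → `{"left": 1, "right": 1}`.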
Therefore, the lane information corresponding to the target lane of the vehicle in the road is determined from the road straight mosaic image; the operation is convenient and fast, and the identification accuracy is high.
Step S103: acquiring second lane information corresponding to a positioning lane of the vehicle in the road according to the positioning information of the vehicle;
optionally, the obtaining, according to the positioning information of the vehicle, of the positioning lane information of the vehicle on the road includes: obtaining local map information corresponding to the positioning information according to the positioning information of the vehicle; and determining, according to the local map information, the second lane information corresponding to the positioning lane of the vehicle in the road. It is understood that, after the positioning information of the vehicle (such as coordinates) is determined, it may be matched against existing map data to obtain the corresponding local map information, for example the map information within a radius of 50 or 100 metres around the vehicle; the second lane information of the positioning lane of the vehicle in the road is then determined from the local map information and the positioning information. It should be noted that the second lane information may include one or more of a lane identifier (i.e., which lane of the road it is), the number of surrounding lanes, and the types of the surrounding lane lines, for example how many lanes lie on the left and right sides of the positioning lane, the types of the lane lines forming the positioning lane, and the number and types of the lane lines on its left and right sides.
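This map-matching step can be sketched as a lookup of the vehicle's lateral position against the lane-line positions in the local map. It is an illustrative assumption, not the patent's method: `lane_line_xs` are the lateral coordinates of the mapped lane lines and `vehicle_x` the vehicle's lateral position in the same frame.

```python
def locate_lane(vehicle_x, lane_line_xs):
    """Determine the positioning lane (second lane information) from the local map.

    Counts how many lane lines lie to the left of the vehicle to obtain a
    1-based lane index, then derives the surrounding-lane counts.
    """
    lines = sorted(lane_line_xs)
    lane = sum(1 for x in lines if x < vehicle_x)  # 1-based, counted from the left
    if lane == 0 or lane == len(lines):
        raise ValueError("vehicle lies outside the mapped road")
    return {"lane_id": lane, "left": lane - 1, "right": len(lines) - 1 - lane}
```

With lane lines at 0, 3.5, 7 and 10.5 metres, a vehicle at 5 metres is placed in lane 2 of three, with one lane on each side.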
Step S104: and detecting whether the target lane and the positioning lane are the same lane according to the first lane information and the second lane information so as to judge whether the vehicle deviates from the lane based on the detection result.
Optionally, the lane information includes at least one of a lane identifier, a number of surrounding lanes, and types of surrounding lane lines, and the detecting whether the target lane and the positioning lane are the same lane according to the first lane information and the second lane information, to determine whether the vehicle deviates from the lane based on the detection result, includes: comparing whether the first lane information and the second lane information are consistent; if they are consistent, judging that the target lane and the positioning lane are the same lane; and if they are not consistent, judging that the target lane and the positioning lane are not the same lane. Specifically, if the lane information includes lane identifiers, consistency of the first and second lane information means that the first and second lane identifiers are the same. If the lane information includes the numbers of surrounding lanes, consistency means that the distribution and number of the first and second surrounding lanes are the same; for example, if the road is known to have four lanes in total, and it is determined that the target lane has two lanes on its left and one on its right while the positioning lane also has two lanes on its left and one on its right, the first and second lane information are judged to be consistent. If the lane information includes surrounding lane-line types, consistency means that the distribution and number of the first and second surrounding lane-line types are the same.
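The consistency check can be sketched as follows. The dictionary keys are assumptions carried over from the examples above: when both sides carry a lane identifier it is compared directly; otherwise the surrounding-lane counts (and line types, when present) must all match.

```python
def same_lane(first, second):
    """Compare perceived (first) and map-positioned (second) lane information.

    Returns True when the target lane and the positioning lane are judged
    to be the same lane, i.e. no lane departure is detected.
    """
    if "lane_id" in first and "lane_id" in second:
        return first["lane_id"] == second["lane_id"]
    # Partial-view case: compare whichever surrounding attributes are present.
    keys = ("left", "right", "line_types")
    return all(first.get(k) == second.get(k) for k in keys if k in first or k in second)
```

For example, `same_lane({"left": 1, "right": 2}, {"left": 2, "right": 1})` is False, so a departure (or a positioning error) would be flagged.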
It can be understood that when the target lane and the positioning lane are the same lane, the vehicle has not deviated from its lane and the positioning of the vehicle is accurate; when the target lane and the positioning lane are not the same lane, the vehicle has deviated from its lane, and the positioning of the vehicle is also indicated to be inaccurate.
In summary, the lane departure detection method provided by the above embodiment includes: acquiring at least two frames of images including lane information of a road, the at least two frames of images being simultaneously captured by at least two image capture devices arranged side by side in a preset direction of the vehicle; transversely stitching the at least two frames of images in shooting-position order, and determining, based on the obtained straight-stitched road image, the first lane information corresponding to the target lane of the vehicle in the road; acquiring, according to the positioning information of the vehicle, the second lane information corresponding to the positioning lane of the vehicle in the road; and detecting whether the target lane and the positioning lane are the same lane according to the first lane information and the second lane information, so as to judge whether the vehicle deviates from the lane based on the detection result. In this way, multiple frames of images including the lane information of the road are stitched, the information of the target lane of the vehicle is determined from the resulting straight-stitched road image, and lane departure is detected in combination with the positioning information of the vehicle; whether the vehicle deviates from its lane can therefore be detected promptly and accurately, and the operation is convenient and reliable.
Based on the same inventive concept as the foregoing embodiments, the method is described below by way of a specific example, in which the image capture device is taken to be a camera.
Referring to fig. 2, a specific flowchart of the lane departure detection method according to the embodiment of the present invention is shown, which includes the following steps:
step S201: acquiring images acquired by a plurality of cameras on the road surface of the same road;
referring to fig. 3, in this embodiment, taking an example that a left camera, a main camera and a right camera are used to respectively acquire one frame of image on the road surface of the same road, fig. 3 (a) is taken by the left camera, fig. 3 (b) is taken by the middle camera, and fig. 3 (c) is taken by the right camera.
Step S202: converting the images into top views, and then merging the top views to obtain a straight-stitched road image;
referring to fig. 4, (a) in fig. 4 is a top view image obtained by top-view converting (a) in fig. 3, (b) in fig. 4 is a top view image obtained by top-view converting (b) in fig. 3, and (c) in fig. 4 is a top view image obtained by top-view converting (c) in fig. 3. Then, the (a), (b) and (c) in fig. 4 are transversely spliced, i.e. merged, to obtain a road straight-spliced image, as shown in fig. 4 (d).
Step S203: visually perceiving the straight-stitched road image to obtain a road feature point bitmap;
here, the road straight figure may be input into a preset visual perception model to obtain a road feature point bitmap including information of a lane line and the like.
Step S204: geometrically restoring the road feature point bitmap to obtain a perception result of the lane lines around the vehicle;
here, the road feature point bitmap may be geometrically restored according to the pose information, the optical center, and the pixel source of each frame of the overhead image, so as to obtain the sensing result of the lane line around the vehicle.
Fig. 5 is a schematic diagram of visual perception and geometric restoration. For the curved road shown in image (a) of fig. 5, the multiple frames of images captured by the vehicle-mounted camera while travelling are first straight-stitched in sequence after the road-curvature information has been removed, yielding image (b) of fig. 5. Visual perception is then performed on image (b) of fig. 5 to obtain the road feature point bitmap, image (c) of fig. 5. Finally, the lateral bending information in the road feature point bitmap is geometrically restored according to the pose information, optical center and pixel source of each frame of image, yielding a feature image with the true bending angles, shown as image (d) in fig. 5.
Step S205: determining first lane information of the vehicle in the sensing result according to the sensing result of the lane lines around the vehicle and the pose information of the vehicle;
step S205: and detecting whether the vehicle deviates from a lane or not according to the first lane information and second lane information where the vehicle is located in the map data.
Here, the second lane information of the vehicle in the map data may be obtained from the positioning data of the vehicle: for example, first obtaining local high-precision map data for the position of the vehicle, and then obtaining the lane position corresponding to the positioning data by comparing the coordinates corresponding to the positioning data with the coordinates of the high-precision map data.
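The map-side lookup can be sketched as comparing the vehicle's located lateral coordinate with the lane-line positions from the local high-precision map. The boundary values and lane width below are invented for illustration only.

```python
# Hedged sketch: finding the positioning lane by comparing the located
# lateral coordinate with the lane boundaries taken from local
# high-precision map data. All coordinates below are invented.

def locate_lane(x, lane_boundaries):
    """lane_boundaries: ascending lateral coordinates of the lane lines;
    returns the 0-based index of the lane containing x, or None if x is
    outside the mapped road."""
    for i in range(len(lane_boundaries) - 1):
        if lane_boundaries[i] <= x < lane_boundaries[i + 1]:
            return i
    return None

# A four-lane road with lane lines every 3.5 m; the vehicle is located
# 5.0 m from the left road edge, i.e. in the second lane from the left.
boundaries = [0.0, 3.5, 7.0, 10.5, 14.0]
lane_index = locate_lane(5.0, boundaries)  # -> 1
```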
In the lane departure judging process, it may first be judged, from the real-time position information, whether the images of the lanes captured by the multiple cameras cover all lanes of the road. If so, the position of the lane in which the vehicle is located is analysed directly from the captured images to obtain the current specific lane information; if not, which lanes have been captured is analysed from the lanes in the captured images or from other information on the road, and the lane position of the vehicle is determined accordingly.
Referring to fig. 6, a specific flowchart for detecting whether the vehicle deviates from the lane according to the embodiment of the present invention includes the following steps:
step S301: judging whether the lane number characteristics corresponding to the first lane information and the second lane information are consistent, if not, executing a step S302, otherwise, executing a step S304;
For example, if it is known from the first lane information that there are two lanes on the left side and one lane on the right side of the vehicle, and it is known from the second lane information that there are two lanes on the left side and one lane on the right side of the vehicle, it may be determined that the lane number characteristics corresponding to the first lane information and the second lane information are consistent; otherwise, they are determined to be inconsistent.
Step S302: judging whether the lane line type characteristics corresponding to the first lane information and the second lane information are consistent; if not, executing step S303; otherwise, executing step S304;
It can be understood that when the first lane information indicates one lane on each side of the vehicle while the second lane information indicates two lanes on each side, the first lane information may cover only part of the road. In this case, the lane line type characteristics corresponding to the first lane information and the second lane information may be compared, for example whether the surrounding lane lines are of the same types, such as dashed or solid.
Step S303: determining that the vehicle deviates from a lane;
step S304: it is determined that the vehicle does not deviate from the lane.
In summary, in the lane departure detection method provided by the above embodiment, the actual lane position of the vehicle is determined by merging the images acquired by the multiple cameras and is compared with the lane position obtained from the vehicle's positioning, so as to judge whether the positioning is correct; the method is convenient and fast to operate and has high accuracy.
Based on the same inventive concept as the previous embodiment, an embodiment of the present invention provides a lane departure detection device. As shown in fig. 7, the lane departure detection device includes: a processor 310 and a memory 311 storing a computer program. The single processor 310 illustrated in fig. 7 is not intended to indicate that the number of processors 310 is one, but only to indicate the positional relationship of the processor 310 relative to other devices; in practical applications, the number of processors 310 may be one or more. The same applies to the memory 311 shown in fig. 7: it is only used to indicate the positional relationship of the memory 311 relative to other devices, and in practical applications the number of memories 311 may be one or more. The lane departure detection method applied to the above-described lane departure detection device is implemented when the processor 310 runs the computer program.
The lane departure detection device may further include at least one network interface 312. The various components of the lane departure detection device are coupled together by a bus system 313. It will be appreciated that the bus system 313 is used to enable communication among these components. In addition to the data bus, the bus system 313 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 313 in fig. 7.
The memory 311 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 311 described in the embodiments of the invention is intended to include, without being limited to, these and any other suitable types of memory.
The memory 311 in the embodiment of the present invention is used to store various types of data to support the operation of the lane departure detection apparatus. Examples of such data include: any computer program for operating on the lane departure detection apparatus, such as an operating system and application programs; contact data; telephone book data; a message; a picture; video, etc. The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application programs may include various application programs such as a Media Player (Media Player), a Browser (Browser), etc. for implementing various application services. Here, the program that implements the method of the embodiment of the present invention may be included in an application program.
Based on the same inventive concept as the foregoing embodiments, this embodiment further provides a computer storage medium in which a computer program is stored. The computer storage medium may be a memory such as a Ferroelectric Random Access Memory (FRAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); or it may be any device including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant. The computer program stored in the computer storage medium, when executed by a processor, implements the lane departure detection method applied to the above-described lane departure detection device. For the specific step flow realized when the computer program is executed by the processor, please refer to the description of the embodiment shown in fig. 1, which is not repeated here.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A lane departure detection method, characterized in that the method comprises: acquiring at least two frames of images including lane information of a road, wherein the at least two frames of images are simultaneously captured by at least two image capture devices arranged side by side in a preset direction of a vehicle; transversely stitching the at least two frames of images in shooting-position order, and determining, based on the obtained straight-stitched road image, first lane information corresponding to a target lane in which the vehicle is located on the road; acquiring, according to positioning information of the vehicle, second lane information corresponding to a positioning lane in which the vehicle is located on the road; and detecting, according to the first lane information and the second lane information, whether the target lane and the positioning lane are the same lane, so as to judge whether the vehicle deviates from the lane based on the detection result.
2. The lane departure detection method according to claim 1, characterized in that the image capture devices comprise three devices whose shooting areas correspond respectively to the area directly in front of, the area to the front-left of, and the area to the front-right of the vehicle, and the acquiring at least two frames of images including lane information of a road comprises: controlling the image capture devices to simultaneously capture images of the area directly in front of, the area to the front-left of, and the area to the front-right of the vehicle, respectively, to obtain at least three frames of images including lane information of the road.
3. The lane departure detection method according to claim 1 or 2, characterized in that the transversely stitching the at least two frames of images in shooting-position order comprises: obtaining at least two frames of top-view conversion images corresponding to the at least two frames of images; taking the travelling direction of the vehicle as the longitudinal direction, removing lateral offset information from the at least two frames of top-view conversion images; and transversely stitching the at least two frames of top-view conversion images in shooting-position order to obtain a straight-stitched road image.
4. The lane departure detection method according to claim 3, characterized in that the obtaining at least two frames of top-view conversion images corresponding to the at least two frames of images comprises: acquiring internal parameters, external parameters and distortion parameters of the image capture devices, wherein the internal parameters are selected from at least one of focal length, optical center and distortion parameters, and/or the external parameters are selected from at least one of pitch angle, yaw angle and ground height; and performing inverse perspective transformation on the at least two frames of images according to the internal parameters, external parameters and distortion parameters of the image capture devices.
5. The lane departure detection method according to claim 3, characterized in that the transversely stitching the at least two frames of top-view conversion images in shooting-position order to obtain a straight-stitched road image comprises: acquiring pose information of the image capture devices; and overlay-stitching the at least two frames of top-view conversion images in order with offsets according to the pose information, and/or cutting and then stitching the multiple frames of top-view conversion images according to the pose information, to obtain the straight-stitched road image.
6. The lane departure detection method according to claim 1, characterized in that the determining, based on the obtained straight-stitched road image, first lane information corresponding to a target lane in which the vehicle is located on the road comprises: obtaining a road feature point bitmap in response to performing visual perception on the straight-stitched road image; geometrically restoring the road feature point bitmap according to the pose information, optical center and pixel source of each frame of top-view conversion image, to obtain a road feature image; and acquiring, according to the road feature image and pose information of the vehicle, the first lane information corresponding to the target lane in which the vehicle is located on the road.
7. The lane departure detection method according to claim 1, characterized in that the acquiring, according to positioning information of the vehicle, second lane information corresponding to a positioning lane in which the vehicle is located on the road comprises: acquiring, according to the positioning information of the vehicle, local map information corresponding to the positioning information; and determining, according to the local map information, the second lane information corresponding to the positioning lane in which the vehicle is located on the road.
8. The method according to claim 1, characterized in that the lane information includes at least one of a lane identifier, a number of surrounding lanes and a type of surrounding lane lines, and the detecting, according to the first lane information and the second lane information, whether the target lane and the positioning lane are the same lane, so as to judge whether the vehicle deviates from the lane based on the detection result, comprises: comparing whether the first lane information and the second lane information are consistent; if they are consistent, judging that the target lane and the positioning lane are the same lane; and if they are not consistent, judging that the target lane and the positioning lane are not the same lane.
9. A lane departure detection device, characterized by comprising a processor and a memory storing a computer program, wherein the lane departure detection method according to any one of claims 1 to 8 is implemented when the processor runs the computer program.
10. A computer storage medium, characterized in that a computer program is stored therein, and when the computer program is executed by a processor, the lane departure detection method according to any one of claims 1 to 8 is implemented.
CN202110880287.1A 2021-08-02 2021-08-02 Lane departure detection method, device and computer storage medium Active CN113591720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110880287.1A CN113591720B (en) 2021-08-02 2021-08-02 Lane departure detection method, device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110880287.1A CN113591720B (en) 2021-08-02 2021-08-02 Lane departure detection method, device and computer storage medium

Publications (2)

Publication Number Publication Date
CN113591720A true CN113591720A (en) 2021-11-02
CN113591720B CN113591720B (en) 2025-01-10

Family

ID=78253683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110880287.1A Active CN113591720B (en) 2021-08-02 2021-08-02 Lane departure detection method, device and computer storage medium

Country Status (1)

Country Link
CN (1) CN113591720B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114694107A (en) * 2022-03-24 2022-07-01 商汤集团有限公司 Image processing method and device, electronic equipment and storage medium
CN117575878A (en) * 2023-11-16 2024-02-20 杭州众诚咨询监理有限公司 Intelligent management method and device for traffic facility asset data, electronic equipment and medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101992778A (en) * 2010-08-24 2011-03-30 上海科世达-华阳汽车电器有限公司 Lane deviation early warning and driving recording system and method
CN106347363A (en) * 2016-10-12 2017-01-25 深圳市元征科技股份有限公司 Lane keeping method and lane keeping device
CN106355951A (en) * 2016-09-22 2017-01-25 深圳市元征科技股份有限公司 Device and method for controlling vehicle traveling
CN107292214A (en) * 2016-03-31 2017-10-24 比亚迪股份有限公司 Deviation detection method, device and vehicle
CN107512264A (en) * 2017-07-25 2017-12-26 武汉依迅北斗空间技术有限公司 The keeping method and device of a kind of vehicle lane
CN108537197A (en) * 2018-04-18 2018-09-14 吉林大学 A kind of lane detection prior-warning device and method for early warning based on deep learning
CN109887032A (en) * 2019-02-22 2019-06-14 广州小鹏汽车科技有限公司 A kind of vehicle positioning method and system based on monocular vision SLAM
CN110329253A (en) * 2018-03-28 2019-10-15 比亚迪股份有限公司 Lane Departure Warning System, method and vehicle
US20190340732A1 (en) * 2016-12-29 2019-11-07 Huawei Technologies Co., Ltd. Picture Processing Method and Apparatus
CN111553319A (en) * 2020-05-14 2020-08-18 北京百度网讯科技有限公司 Method and device for acquiring information
CN112200917A (en) * 2020-09-30 2021-01-08 北京零境科技有限公司 High-precision augmented reality method and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114694107A (en) * 2022-03-24 2022-07-01 商汤集团有限公司 Image processing method and device, electronic equipment and storage medium
CN117575878A (en) * 2023-11-16 2024-02-20 杭州众诚咨询监理有限公司 Intelligent management method and device for traffic facility asset data, electronic equipment and medium
CN117575878B (en) * 2023-11-16 2024-04-26 杭州众诚咨询监理有限公司 Intelligent management method, device, electronic device and medium for transportation facility asset data

Also Published As

Publication number Publication date
CN113591720B (en) 2025-01-10

Similar Documents

Publication Publication Date Title
CN111160172A (en) Parking space detection method and device, computer equipment and storage medium
WO2020102944A1 (en) Point cloud processing method and device and storage medium
US9270891B2 (en) Estimation of panoramic camera orientation relative to a vehicle coordinate frame
CN110516665A (en) Identify the neural network model construction method and system of image superposition character area
WO2022237272A1 (en) Road image marking method and device for lane line recognition
CN111539484B (en) Method and device for training neural network
CN111178215A (en) Sensor data fusion processing method and device
CN113591720B (en) Lane departure detection method, device and computer storage medium
CN109741241A (en) Fisheye image processing method, device, device and storage medium
CN115235493B (en) A method and device for automatic driving positioning based on vector map
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
CN114842446A (en) Parking space detection method, device and computer storage medium
CN114897684A (en) Vehicle image splicing method and device, computer equipment and storage medium
CN114445794A (en) Parking space detection model training method, parking space detection method and device
CN114897683A (en) Method and device, system and computer equipment for acquiring vehicle side image
JP2022546880A (en) Object association method and device, system, electronic device, storage medium and computer program
CN114821497B (en) Method, device, equipment and storage medium for determining target position
CN115330695A (en) A parking information determination method, electronic device, storage medium and program product
CN110827340B (en) Map updating method, device and storage medium
JP2011170400A (en) Program, method, and apparatus for identifying facility
CN113284194A (en) Calibration method, device and equipment for multiple RS (remote sensing) equipment
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
CN118823082A (en) Laser point cloud and panoramic image registration method, device, equipment and program product
WO2025102493A1 (en) Diversion line region detection method and system
CN117184075A (en) Vehicle lane change detection method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant