
CN109635737B - Auxiliary vehicle navigation and positioning method based on road marking line visual recognition - Google Patents


Info

Publication number
CN109635737B
Authority
CN
China
Prior art keywords
image
road
vehicle
line
rectangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811523941.8A
Other languages
Chinese (zh)
Other versions
CN109635737A (en)
Inventor
张绍成
周润楠
郭均然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences
Priority to CN201811523941.8A
Publication of CN109635737A
Application granted
Publication of CN109635737B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3602Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The invention provides a method for assisting vehicle navigation and positioning based on visual recognition of road marking lines. The method comprises: acquiring a color image of the road in real time while the vehicle is traveling and converting it into a grayscale image; binarizing the grayscale image to obtain a binarized image; denoising the binarized image to obtain a road line image in which each road line corresponds to a rectangle; extracting the center line of each rectangle to obtain the road marking lines; and determining the lane in which the vehicle is located by combining the vehicle position provided by the on-board GNSS with the road information in the corresponding map database. Beneficial effects of the invention: the algorithm extracts road lines in real time from consumer-grade driving-recorder images; the model is simple, easy to implement and computationally efficient, making it suitable for real-time navigation and positioning data processing; and because the algorithm uses a dynamic threshold for image processing, road lines can be extracted well even under poor lighting conditions, ultimately enabling lane-level navigation and positioning.


Description

Auxiliary vehicle navigation positioning method based on road marking line visual identification
Technical Field
The invention relates to the field of vehicle navigation and positioning, and in particular to a method for assisting vehicle navigation and positioning based on visual recognition of road marking lines.
Background
Accurate position information during driving is one of the decisive factors for realizing driver assistance and for its further development toward automatic driving. During driving, however, a standalone GNSS positioning signal is easily disturbed by signals blocked or reflected by trees and high-rise buildings on both sides of the traffic lane; the positioning accuracy is limited to ten meters, so lane-level navigation and positioning is difficult to achieve with GNSS signals alone and cannot meet the requirements of automatic driving.
Current research on improving vehicle positioning and navigation accuracy includes real-time map matching to assist GPS navigation, GNSS/INS integrated navigation, the application of LED positioning technology in tunnel environments, vehicle navigation based on 3G technology, and the like. These methods, however, require the installation of additional equipment, are costly, and are difficult to popularize. With the spread of driving recorders, visual information has become important observation data for distinguishing lanes.
Disclosure of Invention
In view of the above, the present invention provides a method for assisting vehicle navigation and positioning based on road marking line visual identification, which uses monocular visual information to extract any point on a road line and a deflection angle of the road line, and can be used for assisting vehicle navigation and positioning.
The invention provides a method for assisting vehicle navigation and positioning based on road marking line visual identification, which comprises the following steps:
s1, acquiring a color image of a road in which a vehicle is running in real time, and processing the color image to obtain a gray image;
further, the specific method of step S1 is as follows:
s11, obtaining a color image of the road on which the vehicle is running by utilizing the camera;
s12, the color image is processed into a gray scale image, and the gray scale value of an arbitrary point P (x, y) in the pixel array of m rows and n columns of the image captured by the camera is represented as g (x, y).
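By way of illustration only, a minimal OpenCV sketch of this grayscale step is given below; the image file name is hypothetical, and the sketch simply loads one recorder frame and converts it to the grayscale array whose entries are the values g(x, y).

```python
import cv2

frame = cv2.imread("recorder_frame.png")          # hypothetical still frame from the driving recorder
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # single-channel image; gray[y, x] is g(x, y)
m, n = gray.shape                                 # m pixel rows, n pixel columns
```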
S2, carrying out binarization processing on the gray level image to obtain a binarized image;
further, the specific method of step S2 is as follows:
s21, using the interior and exterior orientation elements of the camera, solving for the projection point in the image of lines parallel to the principal vertical line direction, namely the principal vanishing point;
s22, determining the lower half of the image with the principal vanishing point obtained in step S21 as the reference;
s23, regarding any point P (x, y) in the lower half of the image determined in step S22, the gray scale value of the point is recorded as g (x, y), and when the gray scale value of the point satisfies the following condition:
Figure BDA0001902362260000021
in the formula
Figure BDA0001902362260000022
L is the projection width, i.e. a value from the set of widths of the road marking line as projected onto the photograph, and 0.025 times the number of image columns is taken as Lmax; ε is a small additional positive constant, preferably ε = 5; D is the contrast threshold determined by the lighting conditions,
Figure BDA0001902362260000023
the gray value of the point P(x, y) is set to 255; if the condition is not satisfied, its gray value is set to 0. The image formed in this way is the binarized image;
and S24, for the binarized image obtained in step S23, determining the portion of the image occupied by the front hood according to the relative position of the hood and the camera, and removing that portion of the image.
S3, denoising the binary image to obtain a road line image, wherein each road line corresponds to a rectangle;
further, the specific method of step S3 is as follows:
s31, detecting each image block of the binary image obtained in the step S2 and extracting the boundary of each image block;
s32, calculating the area of each image block from the extracted boundary, wherein image blocks smaller than the critical value are regarded as noise and filtered out, the critical value being Smin = 0.00015·m·n, where m is the number of pixel rows of the road color image and n is the number of pixel columns of the road color image;
s33, regarding the adjacent white pixel points as points on the same image block, wrapping the image block by a minimum rectangle, and processing by replacing the image block with the rectangle;
s34, regarding a straight line passing through the middle point of the rectangle and along the long side direction as a projection of the center line of the road, the algorithm identifies as the road line the segment replaced by the rectangle satisfying the following two conditions:
(1) the inclination angle of the rectangle's center line is between 10 and 70 degrees (or between −70 and −10 degrees), or the center line is vertical and located at the center of the image;
(2) the rectangular aspect ratio is greater than 2.
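By way of illustration, the sketch below implements steps S31 to S34 with OpenCV, assuming an OpenCV 4 environment. The area threshold Smin = 0.00015·m·n, the aspect-ratio test and the 10 to 70 degree angle test follow the description above; the function name, the 5-degree tolerance used for "vertical", and the width of the "center of the image" band are assumptions introduced here for illustration.

```python
import math

import cv2
import numpy as np

def extract_road_lines(binary, m, n):
    """Denoise a binarized image and keep one minimum-area rectangle per candidate road line.
    m, n: pixel rows and columns of the original image (used for the area threshold)."""
    s_min = 0.00015 * m * n                            # blobs smaller than S_min are treated as noise
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV >= 4 return signature
    rects = []
    for c in contours:
        if cv2.contourArea(c) < s_min:
            continue
        rect = cv2.minAreaRect(c)                      # ((cx, cy), (w, h), angle)
        (cx, _), (w, h), _ = rect
        long_side, short_side = max(w, h), max(min(w, h), 1e-6)
        if long_side / short_side <= 2:                # road lines are elongated (aspect ratio > 2)
            continue
        # Inclination of the long side w.r.t. the x-axis, taken from the box corners
        # to avoid OpenCV's version-dependent angle convention.
        box = cv2.boxPoints(rect)
        e1, e2 = box[1] - box[0], box[2] - box[1]
        d = e1 if np.linalg.norm(e1) >= np.linalg.norm(e2) else e2
        ang = math.degrees(math.atan2(float(d[1]), float(d[0])))
        if ang > 90:                                   # normalise to (-90, 90]
            ang -= 180
        elif ang <= -90:
            ang += 180
        near_vertical = abs(abs(ang) - 90) <= 5        # tolerance: an assumption
        near_center = abs(cx - n / 2) <= 0.1 * n       # "center of the image" band: an assumption
        if 10 <= abs(ang) <= 70 or (near_vertical and near_center):
            rects.append(rect)
    return rects
```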
S4, extracting the center line of the rectangle in the step S3 to obtain a road marking line;
further, the calculation formula of the rectangular center line in step S4 is:
y = tanα · (x - x0) + y0
where α is the angle between the long side of the rectangle and the x-axis, and (x0, y0) is the center point of the rectangle.
And S5, determining the lane where the vehicle is located according to the road marking line extracted in the step S4 by combining the vehicle position information provided by the vehicle-mounted GNSS and the road information in the corresponding map database.
Further, the specific method of step S5 is as follows:
s51, extracting, from a map database, the road information for the corresponding position according to the vehicle coordinate information provided by the vehicle-mounted GNSS, wherein the road information comprises the number of lanes of the road and the permitted driving direction of each lane;
and S52, comparing the road information in the step S51 according to the road marking line extracted in the step S4, determining the specific lane information of the vehicle, and assisting the navigation and positioning of the vehicle.
The technical scheme provided by the invention has the following beneficial effects: the method extracts road lines in real time from consumer-grade driving-recorder images; the model is simple, easy to implement and computationally efficient, and is therefore suitable for real-time navigation and positioning data processing on the vehicle. Because the algorithm uses a dynamic threshold for image processing, road lines can be extracted well even under poor lighting conditions, ultimately providing lane-level navigation and positioning for the moving vehicle.
Drawings
Fig. 1 is a schematic flow chart of a method for extracting a road marking line according to an embodiment of the present invention;
FIG. 2 is a schematic view of a processed gray scale of a photograph taken from a driver's perspective of a vehicle traveling on a highway according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the effect of determining and eliminating a portion of the hood in an image based on the relative positions of the hood and the camera, as provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of the use of a minimum rectangle in place of an image blob, provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a projected distribution of road routes provided by an embodiment of the present invention;
FIG. 6 is a comparison graph of a road line extraction graph and an original gray scale value image according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a relationship between mathematical expressions of a road route and a corresponding rectangle according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a process of extracting a road route at night according to an embodiment of the present invention;
FIG. 9 is an example diagram of equation parameters for a route taken during night driving according to an embodiment of the present invention;
FIG. 10 is an exemplary diagram of road line extraction while driving on an elevated road according to an embodiment of the present invention;
FIG. 11 is an exemplary diagram of road line extraction while driving on an ordinary road according to an embodiment of the present invention;
fig. 12 is an exemplary diagram of information for determining a lane where a vehicle is located when the vehicle travels at night according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention uses pictures captured by a consumer-grade driving recorder while the vehicle travels on a highway. The flow of the method is shown in figure 1 and comprises the following steps:
S1, acquiring a color image of the road in real time with the consumer-grade driving recorder while the vehicle is traveling, and processing it to obtain a grayscale image;
A grayscale image is used because it can be handled consistently under the different illumination conditions of day and night. For any point P(x, y) in the m-row, n-column pixel array of the image captured by the driving recorder, the gray value of the processed grayscale image is denoted g(x, y), as shown in fig. 2.
S2, carrying out binarization processing on the gray level image to obtain a binarized image;
and solving to obtain projection joint points of parallel lines parallel to the main longitudinal line direction in the image, namely main joint points according to the inner and outer orientation elements of the camera. Determining the lower half part of the image by taking the main joint as a reference;
the reason for this is that, for an image perpendicular to the ground plane, the principal point of convergence is the image principal point, and the projections of all the points on the ground should be below this point, so it can be seen that the projections of all the road lines on the ground are in the lower half of the image.
The lane where the vehicle is traveling is usually dark (small gray scale value) and the road line on the road surface is usually white or yellow (large gray scale value), so that any line satisfying the condition of darker sides and lighter middle may be a road line. The threshold condition for binarization is thus determined: the grayscale value of any point P (x, y) in the lower half of the image is denoted as g (x, y), and when the grayscale value of the point satisfies the following condition:
Figure BDA0001902362260000051
in the formula
Figure BDA0001902362260000052
L is the projection width, i.e. a value from the set of widths of the road marking line as projected onto the photograph; after testing, 0.025 times the number of image columns is taken as Lmax; ε is a small additional positive constant, for which the tested value 5 is preferred; D is the contrast threshold determined by the lighting conditions,
Figure BDA0001902362260000061
It should be noted that 5 is adopted as the minimum limit of D in order to ensure that regions with poor lighting conditions do not generate a large amount of noise;
The gray value of the point is set to 255; otherwise it is set to 0, thereby producing the binarized image.
It should be noted that, when individual images were processed, reflections were found on the front hood; because the reflection brightness is high and depends on the weather at the time, it is difficult to remove effectively by denoising. The position of the hood in the image is therefore determined from the relative position of the hood and the camera, and that region is skipped when the binarized image is processed; the effect is shown in fig. 3.
S3, denoising the binary image to obtain a road line image, wherein each road line corresponds to a rectangle;
for the binarized image obtained in step S2, detecting each block of image block and extracting its boundary;
usually, the noise exists in the form of small spots, and the road path is a long and regular pattern and is approximately rectangular. Calculating the area of the image blocks according to the extracted boundary, wherein the image blocks smaller than a critical value are regarded as noise and filtered, and the critical value SminM is 0.00015. m.n, m is the number of pixel rows corresponding to the road color image, and n is the number of columns of pixels corresponding to the road color image;
the road surface is shaded by trees or railings beside the road, and the road line is also shaded by the shading, so that black dots exist in white blocks of the binary image of the road line, and the direction interpretation of the road line is not facilitated. Therefore, it is necessary to extract the boundary nodes of the image blocks, regard the adjacent white pixel points as the points on the same image block, wrap the image block with a minimum rectangle, and use this rectangle to replace the image block for processing, as shown in fig. 4, the white image block is the binarized image block, the white rectangle is the minimum rectangle that can contain the whole image block, and the rectangle is used to replace the image block, so that a plurality of geometric parameters of the image block can be accurately and conveniently obtained;
taking a straight line passing through the middle point of the rectangle and along the long side direction as a projection of the center line of the road route, it can be known from the basic principle of center projection that the projection of the road route is not horizontal, and only the road line at the center of the image is vertical, as shown in fig. 5. Therefore, a line having an inclination angle between 10 and 70 degrees (or between 70 and 10 degrees) and a line perpendicular and at the center of the image are identified as the lane lines;
it should be noted that the length of the image block is limited, and the road line has a slender characteristic and can be distinguished from the noise such as puddles, road signs and the like, so the aspect ratio of the rectangle representing the road line should be larger than 2;
the image after the denoising process is the road line image extracted in this embodiment, and as shown in fig. 6, the comparison between the road extraction image and the original gray scale value image can find that all the road lines in the driver's sight line range have been accurately extracted.
S4, extracting the center line of the rectangle in the step S3 to obtain a road marking line;
because the precision demand of vehicle navigation location does not reach centimetre level, consequently can ignore the width of lane, regard it as the straight line that does not have the width with it, can directly calculate the rectangle central line as road line projection image expression:
y = tanα · (x - x0) + y0
where α is the angle between the long side of the rectangle and the x-axis and (x0, y0) is the center point of the rectangle; fig. 7 schematically shows the relationship between the mathematical expression of the road line and the corresponding rectangle.
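A small sketch of computing these parameters from the minimum-area rectangle of the previous step is shown below; the rectangle convention is that of OpenCV's minAreaRect, and the function name is introduced here for illustration.

```python
import math

import cv2
import numpy as np

def center_line(rect):
    """Return (x0, y0, alpha) such that the projected road line is
    y = tan(alpha) * (x - x0) + y0 in image coordinates."""
    (x0, y0), _, _ = rect
    box = cv2.boxPoints(rect)
    e1, e2 = box[1] - box[0], box[2] - box[1]
    d = e1 if np.linalg.norm(e1) >= np.linalg.norm(e2) else e2   # long-side direction
    alpha = math.atan2(float(d[1]), float(d[0]))                 # angle with the x-axis
    return float(x0), float(y0), alpha
```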
In this embodiment, videos recorded by driving recorders under different road and illumination conditions are used as test data. Because recorders of different brands record at different resolutions, the videos are unified to a resolution of 320 × 480 at 29.97 frames per second so that the data extracted from different videos are comparable during testing.
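A hypothetical per-frame driver tying the sketches above together might look as follows; the video file name, the calibration values and the width/height order of the unified resolution are assumptions, and the helper names are the ones introduced in the earlier sketches rather than names used by the patent.

```python
import cv2

cap = cv2.VideoCapture("recorder.mp4")                  # assumed input video
ok, frame = cap.read()
while ok:
    frame = cv2.resize(frame, (480, 320))               # unified resolution (320 x 480; order assumed)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    m, n = gray.shape
    v_vp = vanishing_row(fy=350.0, cy=m / 2, pitch_down_rad=0.05)   # placeholder calibration values
    binary = binarize_road(gray, v_vp)
    rects = extract_road_lines(binary, m, n)
    lines = [center_line(r) for r in rects]             # (x0, y0, alpha) per extracted road line
    # lines can now be compared with the map data (step S5)
    ok, frame = cap.read()
cap.release()
```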
In this embodiment, three common adverse conditions in daily driving are selected and the road marking line extraction test is performed by using the above steps:
(1) Fig. 8 shows the road line extraction process while driving at night. The current road has four lanes, two of them in the driving direction, so three road lines are present in the driving direction; the road condition is good and no large number of vehicles blocks the road lines. In the figure, the upper left is the grayscale image obtained from the processed driving-recorder frame, the lower left is the binarized image, the lower right is the road line extraction result, and the upper right compares the extracted road lines with the original image. It can be seen that the algorithm extracts all three road lines within the camera's field of view; the equation parameters of the extracted road lines are shown in fig. 9, the direction of each road line being represented by its center point and inclination angle;
(2) Fig. 10 shows road line extraction while driving on an elevated road. The current intersection is wide, and the ground marking lines are dark due to wear. It can be seen that the algorithm extracts the road lines, but the road lines of the far lanes cannot be identified; owing to the limitations of the binarization method, identification is difficult in areas where the markings are darker or more heavily worn;
(3) Fig. 11 shows road line extraction while driving on an ordinary road; the road lines are clear but there is interference from vehicles. It can be seen that the algorithm extracts the three road lines on the right side and that no vehicle is mistakenly identified as a road line despite the interference.
And S5, determining the lane where the vehicle is located according to the road marking line extracted in the step S4 by combining the vehicle position information provided by the vehicle-mounted GNSS and the road information in the corresponding map database.
Specifically, according to the vehicle coordinates provided by the on-board GNSS, the road information for the corresponding position is extracted from a map database; this information comprises the number of lanes of the road and the permitted driving direction of each lane. The road marking lines extracted in step S4 are then compared against this road information to determine the specific lane in which the vehicle is located, as illustrated for night driving in fig. 12, and the determined lane information is finally used to assist vehicle navigation and positioning.
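The patent does not spell out the comparison rule between the extracted lines and the map data, so the sketch below is only one plausible heuristic, stated as an assumption: with the camera mounted near the vehicle centerline, the number of extracted road lines whose rectangle center lies to the left of the image center approximates the lane index counted from the left, clamped to the lane count reported by the map database. The map-database call is likewise hypothetical.

```python
def lane_from_lines(rects, image_width, lane_count):
    """Rough lane index (1 = leftmost lane), assuming the camera sits near the
    vehicle centerline and every lane boundary to the vehicle's left was extracted."""
    left_lines = sum(1 for (cx, _), _, _ in rects if cx < image_width / 2)
    return min(max(left_lines, 1), lane_count)

# road = map_db.lookup(lat, lon)                  # hypothetical query keyed by the GNSS fix
# lane = lane_from_lines(rects, n, road.lane_count)
```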
Road line identification on the road surface was tested under different conditions in this embodiment. The method adapts to both daytime and nighttime illumination and is fairly resistant to interference from vehicles and similar factors, although road line extraction is disturbed when the road line is faint or when characters of the same color as the road line are painted on the ground.
This embodiment provides an algorithm that extracts road lines from the monocular video of a driving recorder while the vehicle is moving. The algorithm is fast, effective and capable of real-time computation, and the quantitative road marking information it extracts can be fused with GNSS observation signals for positioning in subsequent work.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (6)

1. A method for assisting vehicle navigation and positioning based on road marking line visual identification is characterized by comprising the following steps:
s1, acquiring a color image of a road in which a vehicle is running in real time, and processing the color image to obtain a gray image;
s2, carrying out binarization processing on the gray level image to obtain a binarized image; step S2 includes:
let g (x, y) be the gray scale value of any point P (x, y) in the gray scale image, the point P (x, y) is in the lower half of the image and the gray scale value g (x, y) satisfies the following condition:
Figure FDA0002747701740000011
in the formula (I), the compound is shown in the specification,
Figure FDA0002747701740000012
m is the number of pixel rows of the road color image; L is the projection width; Lmax represents 0.025 times the number of pixel columns of said road color image; ε is 5; D is the contrast threshold determined by the illumination conditions,
Figure FDA0002747701740000013
the image formed by the points P (x, y) is a binary image;
s3, denoising the binary image by screening the area and the shape of the image block to obtain a road line image, wherein each road line corresponds to a rectangle; the specific process is as follows:
s31, screening the image blocks by area, wherein the critical value of the image block area is Smin = 0.00015·m·n, and image blocks with an area smaller than the critical value Smin are regarded as noise and filtered out;
s32, extracting the boundary nodes of the image blocks, wrapping the image blocks by using a rectangle, and replacing the image blocks with the rectangle for processing;
s33, regarding a straight line passing through the middle point of the rectangle and along the long side direction as a projection of the center line of the road, the algorithm identifies as the road line the segment replaced by the rectangle satisfying the following two conditions:
the inclination angle of the rectangular central line is between 10 and 70 degrees or-70 and-10 degrees, or the rectangular central line is vertical and is positioned in the center of the image;
the rectangular aspect ratio is greater than 2;
s4, extracting the center line of the rectangle in the step S3 to obtain a road marking line;
and S5, determining the lane where the vehicle is located according to the road marking line extracted in the step S4 by combining the vehicle position information provided by the vehicle-mounted GNSS and the road information in the corresponding map database.
2. The method for assisting navigation and positioning of a vehicle based on visual identification of a road marking line as claimed in claim 1, wherein in step S1, a color image of the road on which the vehicle is traveling is captured by a camera.
3. The method for assisting navigation and positioning of a vehicle based on visual identification of a road marking line as claimed in claim 1, wherein in step S2, the projected joint of the parallel lines parallel to the main longitudinal line direction in the image is obtained by solving according to the internal and external orientation elements of the camera, and the lower half of the image is determined with reference to the projected joint.
4. The visual identification aided vehicle navigation and positioning method based on the road marking line as claimed in claim 1, wherein in step S2, the part of the front cover in the binary image is determined according to the relative position of the front cover and the camera, and the image of the front cover part is removed.
5. The method for assisting navigation and positioning of a vehicle based on visual identification of a road marking line as claimed in claim 1, wherein in step S4, the calculation formula of the rectangular center line is:
y = tanα · (x - x0) + y0
where α is the angle between the long side of the rectangle and the x-axis, and (x0, y0) is the center point of the rectangle.
6. The method for assisting navigation and positioning of a vehicle based on visual identification of a road marking line as claimed in claim 1, wherein the specific process of step S5 is as follows:
s51, extracting road information for the corresponding position from a map database according to the vehicle coordinate information provided by the vehicle-mounted GNSS, wherein the road information comprises the number of lanes of the road and the permitted driving direction of each lane;
and S52, comparing the road information in the step S51 according to the road marking line extracted in the step S4, determining the specific lane information of the vehicle, and assisting the navigation and positioning of the vehicle.
CN201811523941.8A 2018-12-12 2018-12-12 Auxiliary vehicle navigation and positioning method based on road marking line visual recognition Active CN109635737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811523941.8A CN109635737B (en) 2018-12-12 2018-12-12 Auxiliary vehicle navigation and positioning method based on road marking line visual recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811523941.8A CN109635737B (en) 2018-12-12 2018-12-12 Auxiliary vehicle navigation and positioning method based on road marking line visual recognition

Publications (2)

Publication Number Publication Date
CN109635737A CN109635737A (en) 2019-04-16
CN109635737B true CN109635737B (en) 2021-03-26

Family

ID=66073454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811523941.8A Active CN109635737B (en) 2018-12-12 2018-12-12 Auxiliary vehicle navigation and positioning method based on road marking line visual recognition

Country Status (1)

Country Link
CN (1) CN109635737B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523360B (en) * 2019-09-09 2023-06-13 毫末智行科技有限公司 Method, device and monocular camera for recognizing road markings
CN113819910B (en) * 2019-09-29 2024-10-11 百度在线网络技术(北京)有限公司 Viaduct area identification method and device in vehicle navigation
CN112825196B (en) * 2019-11-20 2023-04-25 阿里巴巴集团控股有限公司 Method and device for determining road sign, storage medium and editing platform
CN113032500B (en) * 2019-12-25 2023-10-17 沈阳美行科技股份有限公司 Vehicle positioning method, device, computer equipment and storage medium
CN112699765B (en) * 2020-12-25 2024-11-08 北京百度网讯科技有限公司 Method, device, electronic device and storage medium for evaluating visual positioning algorithm
US11951992B2 (en) 2021-01-05 2024-04-09 Guangzhou Automobile Group Co., Ltd. Vehicle positioning method and apparatus, storage medium, and electronic device
CN113453144B (en) * 2021-05-18 2023-06-30 广东电网有限责任公司广州供电局 Network resource allocation method, device, computer equipment and storage medium
CN114445570B (en) * 2021-12-28 2025-04-25 武汉中海庭数据技术有限公司 A method for quickly extracting strip-shaped local map elements from high-precision maps
CN115171430B (en) * 2022-08-15 2024-03-12 中通服建设有限公司 Intelligent public transport emergency processing system based on image recognition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009240683A (en) * 2008-03-31 2009-10-22 Railway Technical Res Inst Driver sleepiness detector
CN105678285A (en) * 2016-02-18 2016-06-15 北京大学深圳研究生院 Adaptive road aerial view transformation method and road lane detection method
CN107862290A (en) * 2017-11-10 2018-03-30 智车优行科技(北京)有限公司 Method for detecting lane lines and system
CN108205667A (en) * 2018-03-14 2018-06-26 海信集团有限公司 Method for detecting lane lines and device, lane detection terminal, storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100462047C (en) * 2007-03-21 2009-02-18 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN102314599A (en) * 2011-10-11 2012-01-11 东华大学 Identification and deviation-detection method for lane
US9429437B2 (en) * 2012-06-08 2016-08-30 Apple Inc. Determining location and direction of travel using map vector constraints
CN103389733A (en) * 2013-08-02 2013-11-13 重庆市科学技术研究院 Vehicle line walking method and system based on machine vision
CN104392212B (en) * 2014-11-14 2017-09-01 北京工业大学 A Vision-Based Road Information Detection and Front Vehicle Recognition Method
CN105674992A (en) * 2014-11-20 2016-06-15 高德软件有限公司 Navigation method and apparatus
CN106919915B (en) * 2017-02-22 2020-06-12 武汉极目智能技术有限公司 Map road marking and road quality acquisition device and method based on ADAS system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009240683A (en) * 2008-03-31 2009-10-22 Railway Technical Res Inst Driver sleepiness detector
CN105678285A (en) * 2016-02-18 2016-06-15 北京大学深圳研究生院 Adaptive road aerial view transformation method and road lane detection method
CN107862290A (en) * 2017-11-10 2018-03-30 智车优行科技(北京)有限公司 Method for detecting lane lines and system
CN108205667A (en) * 2018-03-14 2018-06-26 海信集团有限公司 Method for detecting lane lines and device, lane detection terminal, storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vision-based approach towards lane line detection and vehicle localization; Xinxin Du et al.; Machine Vision and Applications; 2015-11-19; vol. 27; pp. 175-191 *
Lane line detection and traffic sign recognition based on computer vision; Zhan Hailang; China Master's Theses Full-text Database, Information Science and Technology; 2015-12-15 (No. 12); pp. I138-838 *

Also Published As

Publication number Publication date
CN109635737A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
CN109635737B (en) Auxiliary vehicle navigation and positioning method based on road marking line visual recognition
US11854272B2 (en) Hazard detection from a camera in a scene with moving shadows
Dhiman et al. Pothole detection using computer vision and learning
CN111448478B (en) System and method for correcting high-definition maps based on obstacle detection
CN109034047B (en) Lane line detection method and device
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
CN105260699B (en) A kind of processing method and processing device of lane line data
US8750567B2 (en) Road structure detection and tracking
CN104246821B (en) Three-dimensional body detection device and three-dimensional body detection method
US9123242B2 (en) Pavement marker recognition device, pavement marker recognition method and pavement marker recognition program
US9569673B2 (en) Method and device for detecting a position of a vehicle on a lane
CN107590470B (en) Lane line detection method and device
CN103714538B (en) Road edge detection method and device and vehicle
CN109791598A (en) The image processing method of land mark and land mark detection system for identification
CN106203398A (en) A kind of detect the method for lane boundary, device and equipment
CN104902261A (en) Device and method for road surface identification in low-definition video streaming
Samadzadegan et al. Automatic lane detection in image sequences for vision-based navigation purposes
TW202022804A (en) Method and system for road image reconstruction and vehicle positioning
CN115082701B (en) Multi-water-line cross identification positioning method based on double cameras
Kaddah et al. Road marking features extraction using the VIAPIX® system
CN115984772A (en) Road ponding detection method and terminal based on video monitoring
CN113963324A (en) Road boundary identification method, device and electronic device
Tsai et al. Automatic roadway geometry measurement algorithm using video images
CN119360336A (en) Lane information detection method, system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant