
CN112793571A - A lane line recognition device and method based on FPGA system - Google Patents

A lane line recognition device and method based on FPGA system

Info

Publication number
CN112793571A
Authority
CN
China
Prior art keywords
lane line
video
driving state
fpga
fpga system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110121765.0A
Other languages
Chinese (zh)
Inventor
娄小平
张鑫
刘锋
张文玥
周玉婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Information Science and Technology University filed Critical Beijing Information Science and Technology University
Priority to CN202110121765.0A
Publication of CN112793571A
Legal status: Pending (current)


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/10Path keeping
    • B60W30/12Lane keeping
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0043Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2552/00Input parameters relating to infrastructure
    • B60W2552/53Road markings, e.g. lane marker or crosswalk

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract


The invention proposes a lane line recognition device and method based on an FPGA system. The device includes an FPGA system and an ARM system. The FPGA system receives a to-be-processed video containing lane lines, extracts lane line information from it, fits the lane line information to the real-time driving video to obtain an image of the vehicle's current driving state, and outputs that image to the ARM system. The ARM system derives the driving state of the vehicle from the current driving state image. By porting the neural network onto a fast, highly concurrent FPGA and exploiting the flexibility and convenience of an embedded system, the invention achieves a detection speed higher than a PC-side implementation while retaining the robustness of a deep-learning detection scheme; it can effectively detect lane line information under complex road conditions and improves detection efficiency.


Description

Lane line recognition device and method based on FPGA system
Technical Field
The invention relates to the technical field of image processing and computer vision, in particular to a lane line identification device and method based on an FPGA system.
Background
The automobile is currently one of the most widespread means of transport; it has changed people's lifestyles and improved travel efficiency. Automatic driving is the main research direction for bringing automobiles into the AI world, and lane line recognition is one of the key research topics underpinning automatic-driving technology. In existing lane line detection technology, traditional recognition schemes have poor robustness and cannot adapt to more complicated road scenes.
Disclosure of Invention
The invention aims to solve the technical problem of the prior art and provides a lane line identification device and method based on an FPGA system.
The technical scheme for solving the technical problems is as follows:
the invention provides a lane line recognition device based on an FPGA system, the device including an FPGA system and an ARM system:
the FPGA system is used for receiving a video to be processed containing a lane line, extracting lane line information according to the video to be processed, fitting the lane line information and a real-time driving video to obtain a current driving state image of a vehicle, and outputting the current driving state image to the ARM system;
and the ARM system is used for obtaining the driving state of the vehicle according to the current driving state image.
The invention has the following beneficial effects. The device includes an FPGA system and an ARM system: the FPGA system receives a to-be-processed video containing lane lines, extracts lane line information from it, fits the lane line information to the real-time driving video to obtain the vehicle's current driving state image, and outputs that image to the ARM system; the ARM system derives the vehicle's driving state from the current driving state image. By porting the neural network onto a fast, highly concurrent FPGA and exploiting the flexibility and convenience of an embedded system, the invention achieves a higher detection speed than a PC-side implementation while retaining the robustness of a deep-learning detection scheme; it can effectively detect lane line information under complex road conditions and improves detection efficiency.
On the basis of the technical scheme, the invention can be further improved as follows.
Furthermore, the FPGA system is also used for denoising the video to be processed by utilizing an atmospheric scattering model in combination with a multi-scale convolutional neural network.
Further, the FPGA system is specifically configured to select a region of interest in each frame of the video to be processed, enlarge the region of interest and feed it into a Darknet network to extract a lane line feature map, perform K-means clustering on the feature map to extract sharpened feature clusters, and obtain the lane line information from the feature clusters.
Further, the FPGA system is specifically configured to perform fitting prediction on the lane line information and the real-time driving video using a TINY-YOLO network model, and output the current driving state image.
Further, the ARM system is specifically configured to determine whether a vehicle is crossing a lane according to the current driving state image, and send an alarm message when a distance between the vehicle and the lane reaches a preset warning threshold.
Another technical solution of the present invention for solving the above technical problems is as follows:
a lane line identification method based on an FPGA system comprises the following steps:
the FPGA system receives a video to be processed containing a lane line, extracts lane line information according to the video to be processed, fits the lane line information and a real-time driving video to obtain a current driving state image of a vehicle, and outputs the current driving state image to an ARM system;
and the ARM system obtains the driving state of the vehicle according to the current driving state image.
The invention has the following beneficial effects. The method comprises: the FPGA system receives a to-be-processed video containing lane lines, extracts lane line information from it, fits the lane line information to the real-time driving video to obtain the vehicle's current driving state image, and outputs that image to the ARM system; the ARM system derives the vehicle's driving state from the current driving state image. By porting the neural network onto a fast, highly concurrent FPGA and exploiting the flexibility and convenience of an embedded system, the method achieves a higher detection speed than a PC-side implementation while retaining the robustness of a deep-learning detection scheme; it can effectively detect lane line information under complex road conditions and improves detection efficiency.
Further, the FPGA system utilizes an atmospheric scattering model in combination with a multi-scale convolutional neural network to perform denoising processing on the video to be processed.
Further, the FPGA system extracts lane line information according to the video to be processed, and specifically includes:
the FPGA system selects a region of interest in each frame of the video to be processed, enlarges the region of interest and feeds it into a Darknet network, extracts a lane line feature map, performs K-means clustering on the feature map to extract sharpened feature clusters, and obtains the lane line information from the feature clusters.
Further, the FPGA system fits the lane line information and the real-time driving video to obtain a current driving state image of the vehicle, and specifically includes:
and the FPGA system performs fitting prediction on the lane line information and the real-time driving video by using a TINY-YOLO network model, and outputs the current driving state image.
Further, the ARM system obtains the driving state of the vehicle according to the current driving state image, and specifically includes:
the ARM system is specifically used for judging whether a vehicle crosses a lane in driving according to the current driving state image, and sending out alarm information when the distance between the vehicle and the lane reaches a preset warning threshold value.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention or in the description of the prior art will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a system schematic diagram of a lane line recognition device based on an FPGA system according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a lane line identification method based on an FPGA system according to an embodiment of the present invention;
fig. 3 is a system schematic diagram of a lane line recognition device based on an FPGA system according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of lane line detection in a lane line recognition device based on an FPGA system according to an embodiment of the present invention;
fig. 5 is a schematic flow chart of image defogging of the lane line identification device based on the FPGA system according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
As shown in fig. 1, a system diagram of a lane line recognition device based on an FPGA system according to an embodiment of the present invention is shown, where the lane line recognition device based on the FPGA system includes an FPGA system and an ARM system:
the FPGA system is used for receiving a video to be processed containing a lane line, extracting lane line information according to the video to be processed, fitting the lane line information and a real-time driving video to obtain a current driving state image of a vehicle, and outputting the current driving state image to the ARM system;
and the ARM system is used for obtaining the driving state of the vehicle according to the current driving state image.
It should be understood that common embedded application-level chips include DSPs, ARM, PowerPC, MIPS and FPGAs, among which the FPGA offers good flexibility, abundant resources, reprogrammability and fast parallel execution. In past applications, an ARM processor often served as the main controller, with an FPGA attached to its peripheral parallel RAM (random access memory) bus to perform high-speed data acquisition or computation. Integrating a dedicated hard CPU core and an FPGA into one chip yields a brand-new heterogeneous platform called an All Programmable System on Chip (SoC). On the one hand, this platform makes the design of embedded systems more flexible, noticeably reduces size and power consumption, and significantly improves reliability and overall performance; on the other hand, it lets the FPGA enter the embedded-system application field, greatly expanding its range of use. Where lane line detection has strict real-time requirements, pure CPU software execution clearly cannot keep up, and part of the functionality must be implemented in hardware logic. For a custom IP core to be accessible to the CPU, a bus interface is necessary. ZYNQ adopts the AXI bus to achieve high-speed, low-latency data interaction between the PS (processing system) and the PL (programmable logic), ensuring reliable data transfer among the internal devices.
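The PS-side view of such an AXI-Lite mapped IP core can be sketched as address arithmetic over a register window. The base address, register offsets and bit layout below are hypothetical illustrations (a real ZYNQ design takes them from the Vivado address editor and the generated driver headers), but the alignment and packing logic is representative:

```python
# Sketch of PS-side addressing for a custom IP core behind an AXI-Lite window.
# AXI_IP_BASE and the register map are HYPOTHETICAL placeholders, not values
# from the patent; a real design reads them from the Vivado address editor.

AXI_IP_BASE = 0x43C00000   # a typical PL peripheral base on ZYNQ (assumed)

REG_CTRL   = 0x00  # bit 0 = start, bit 7 = auto-restart (HLS-style control)
REG_STATUS = 0x04  # bit 1 = done
REG_FRAME  = 0x10  # physical address of the frame buffer in DDR

def reg_addr(offset: int) -> int:
    """Absolute byte address of a 32-bit register in the IP's AXI-Lite window."""
    if offset % 4 != 0:
        raise ValueError("AXI-Lite registers are 32-bit aligned")
    return AXI_IP_BASE + offset

def ctrl_word(start: bool, auto_restart: bool) -> int:
    """Pack the control bits in the layout HLS-generated IPs commonly use."""
    return (int(start) << 0) | (int(auto_restart) << 7)
```

In a running system these addresses would be written through `/dev/mem` or a kernel driver; the sketch only shows the register bookkeeping the ARM side performs before starting the accelerator.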
Although lane line detection techniques for conventional and structured roads continue to advance, with lane lines detected by, for example, machine-learning-based image processing, various interference factors during driving degrade the normal detection effect and greatly reduce the reliability of the whole system. For example, lane line areas with shadows, partial occlusion, heavy fog, weak light at night, or broken road markings can strongly affect the result of the detection algorithm. In addition, obstacles on the lane line and vehicles traveling ahead also affect detection accuracy. These problems make lane line detection difficult, which is why it has long been a research hot spot in academia and industry. Meanwhile, using the detection result to recover the real road scene and realize lane departure warning is a popular direction in target recognition and detection, so this application aims to provide a real-time lane line detection system with high recognition efficiency and good feedback.
As shown in fig. 3, a system schematic diagram of a lane line recognition device based on an FPGA system according to an embodiment of the present invention, the device comprises a heterogeneous embedded processor composed of two large modules (subsystems): an ARM Cortex-A9 processor and programmable logic (FPGA). The embedded processor is connected to an OV7670 camera and an LCD7C-D liquid crystal display; the camera collects image and video information and uploads it to the FPGA system through a USB interface, and the FPGA system integrates a programmable logic (PL) module and a processing system (PS) module. After the video image data are acquired, the image is defogged by a fog-image denoising IP defined in the PL; the defogged image is fed into a lane line detection IP to obtain lane line feature information, which is in turn fed into a lane-line-fitting IP, and the fitted lane line information is then uploaded to the PS module through the AXI bus interface protocol. After the Linux operating system in the PS judges that the lane line in the image has been crossed, it issues a warning, and the fitted lane and the warning information are shown on the LCD.
Tests show the following advantages over the prior art. When the dual-core Cortex-A9 processor in the FPGA system alone processes 720P images, processor occupancy is 100% and the processing speed is only 7-8 frames per second, so stutter in the video output is very obvious. When the ARM processor and the FPGA handle the processing together, the ARM is offloaded: occupying less than 50% of the on-chip FPGA resources, the system processes 720P images at 24 frames per second, meeting the real-time requirement of lane line detection. In this embodiment, the FPGA replaces the processor for the repeated convolution multiply-add operations of the neural network, hardware-accelerating the three stages of fog-image defogging, lane line detection and lane line fitting, while the ARM handles complex control and drives the display interface.
All internal devices of the FPGA system are provided with AXI interfaces and can communicate at high speed and low latency through the AXI bus protocol; the ARM and the FPGA can therefore sustain high-speed data transmission, so internal data interaction does not become a limiting factor for detection speed.
The method and the device realize the detection early warning of the lane lines in the real-time road scene and the image optimization in the foggy day, and improve the detection speed and the robustness.
The lane line recognition device based on the FPGA system provided by this embodiment comprises an FPGA system and an ARM system: the FPGA system receives a to-be-processed video containing lane lines, extracts lane line information from it, fits the lane line information to the real-time driving video to obtain the vehicle's current driving state image, and outputs that image to the ARM system; the ARM system derives the vehicle's driving state from the current driving state image. By porting the neural network onto a fast, highly concurrent FPGA and exploiting the flexibility and convenience of an embedded system, the invention achieves a higher detection speed than a PC-side implementation while retaining the robustness of a deep-learning detection scheme; it can effectively detect lane line information under complex road conditions and improves detection efficiency.
Based on the above embodiment, further, the FPGA system is further configured to perform denoising processing on the video to be processed by using an atmospheric scattering model in combination with a multi-scale convolutional neural network.
Further, the FPGA system is specifically configured to select a region of interest in each frame of the video to be processed, enlarge the region of interest and feed it into a Darknet network to extract a lane line feature map, perform K-means clustering on the feature map to extract sharpened feature clusters, and obtain the lane line information from the feature clusters.
Further, the FPGA system is specifically configured to perform fitting prediction on the lane line information and the real-time driving video using a TINY-YOLO network model, and output the current driving state image.
Further, the ARM system is specifically configured to determine whether a vehicle is crossing a lane according to the current driving state image, and send an alarm message when a distance between the vehicle and the lane reaches a preset warning threshold.
It should be understood that, in the actual design, the device in this embodiment comprises the following modules: image signal acquisition, image defogging, lane line detection and recognition, and display output. An image acquisition unit captures video frames; each frame is defogged to obtain a clearly visible lane line image; with the clear image as data source, a neural network detects the lane lines to obtain lane line feature information; a clustering algorithm cluster-fits the feature information into clear, coherent lane lines; finally the fitted lane is shown on a display, assisting the driver and providing lane keeping for automatic driving. The heterogeneous embedded processor formed by the ARM core and the programmable logic FPGA accelerates the neural network on the FPGA while the ARM core handles control and system integration, achieving highly efficient lane line detection through this division of functions. The Linux system in the PS coordinates software-hardware data interaction and the normal operation of the functions, ensuring video image acquisition and reasonable output of the results. Image defogging and lane line detection require extensive training of the neural network models to obtain models with balanced performance in all aspects; the continually repeated operations are implemented in programmable logic and accelerated by the corresponding hardware, and each detection module is packaged as an IP core (intellectual property core) to realize the functions and protect the data. On the display-output side, the positions of the lane lines are recorded, their changes are predicted, and the real-time detection results can be observed.
The implementation and function of each unit functional module are further described.
For image signal acquisition, an OV7670 camera with a USB interface can be used as the acquisition device; the FPGA system is also provided with an SD card storing real driving-scene videos recorded under multiple conditions for offline testing of the model. The OV7670 supplies high-definition video containing lane scenes for real-time lane line detection.
As shown in fig. 5, the schematic flow chart of image defogging in the lane line recognition device based on the FPGA system according to the embodiment of the present invention, the image defogging module uses an atmospheric scattering model combined with a 16-layer multi-scale convolutional neural network to predict and normalize the function values, thereby denoising the blurred image. Meanwhile, the defogging network is accelerated by exploiting the parallelism of the FPGA, and the network model is reproduced and packaged on the programmable logic side. Data interaction between the processing system side and the programmable logic port allows the control side to call the image defogging IP core repeatedly and rapidly.
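The recovery step of the atmospheric scattering model can be written down directly. In the patent the transmission map comes from the 16-layer multi-scale CNN; in this sketch a precomputed transmission array stands in for that network, an assumption made purely to keep the example self-contained:

```python
import numpy as np

# Atmospheric scattering model: I(x) = J(x)*t(x) + A*(1 - t(x)),
# where I is the hazy image, J the clear scene radiance, t the
# transmission map and A the global airlight. Solving for J:
#   J(x) = (I(x) - A) / max(t(x), t_min) + A

def dehaze(hazy: np.ndarray, transmission: np.ndarray,
           airlight: float, t_min: float = 0.1) -> np.ndarray:
    """Invert the scattering model to recover scene radiance J."""
    t = np.maximum(transmission, t_min)   # clamp to avoid division blow-up
    return (hazy - airlight) / t + airlight

# Round trip: synthesize haze from a known clear image, then recover it.
clear = np.linspace(0.0, 1.0, 16).reshape(4, 4)
t = np.full_like(clear, 0.5)              # stand-in for the CNN's output
A = 0.8
hazy = clear * t + A * (1 - t)
recovered = dehaze(hazy, t, A)
```

Because the forward model and the inversion are exact, `recovered` matches `clear`; in practice the quality of the result hinges entirely on how well the CNN estimates `t`.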
As shown in fig. 4, a schematic flow chart of lane line detection in a lane line recognition device based on an FPGA system according to an embodiment of the present invention, lane line detection and recognition receives the defogged image video information and enlarges the region of interest of the clear image. After 4 layers of down-sampling in a 32-layer compressed Darknet, the features of the two lane lines are extracted; K-means clustering is performed on these features to extract sharpened feature clusters, and the feature-cluster information is tracked and predicted. Since the size of a lane line changes markedly in a driving scene, the sliding action of the window can be predicted, so the FPGA can provide hardware acceleration; that is, the programmable logic serves as the hardware-acceleration function and is packaged into the lane line detection IP.
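The K-means step above can be illustrated on candidate lane-pixel coordinates. Using k = 2 for the two lane lines follows the description; the seeding strategy and the toy data are assumptions for the sketch, not details from the patent:

```python
import numpy as np

def kmeans_two_lanes(points, iters=20):
    """Split candidate lane pixels (x, y) into two clusters with plain K-means."""
    pts = np.asarray(points, dtype=float)
    # seed the two centroids at the leftmost and rightmost points
    centers = pts[[pts[:, 0].argmin(), pts[:, 0].argmax()]].copy()
    for _ in range(iters):
        # distance of every point to each centroid, then nearest assignment
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = pts[labels == k].mean(axis=0)
    return labels, centers

# Toy frame: pixels scattered around a left lane near x=100 and a right near x=500
left = np.column_stack([100 + np.arange(10) % 3, np.arange(10) * 10])
right = np.column_stack([500 + np.arange(10) % 3, np.arange(10) * 10])
labels, centers = kmeans_two_lanes(np.vstack([left, right]))
```

Each resulting cluster collects the pixels of one lane line; the fitting stage then turns each cluster into a smooth curve.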
For display output, an LCD7C-D display driven mainly through an HDMI interface is used. A Tiny-YOLO network fits and predicts the lane line path and outputs two clear, high-brightness lane lines, which show the path of the currently visible road as well as the path of road invisible in fog; the lane lines are reconstructed and predicted when they are indistinct, incomplete, or obscured by foggy weather.
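Whatever network proposes the lane pixels, the fitted curve for each line can be represented as a low-order polynomial x = f(y), which also supports extrapolating the stretch of road that is invisible in fog. The quadratic model here is an illustrative assumption, not a detail taken from the patent:

```python
import numpy as np

def fit_lane(ys, xs, degree=2):
    """Fit x = f(y): parameterizing by row keeps near-vertical lanes a function."""
    return np.polyfit(ys, xs, degree)

def extend_lane(coeffs, ys):
    """Evaluate the fitted curve, e.g. on rows beyond the visible region."""
    return np.polyval(coeffs, ys)

# Fit points lying on x = 0.001*y^2 + 0.5*y + 80, then extrapolate further up.
ys = np.arange(0, 200, 10, dtype=float)
xs = 0.001 * ys**2 + 0.5 * ys + 80
coeffs = fit_lane(ys, xs)
predicted = extend_lane(coeffs, np.array([220.0, 240.0]))
```

Drawing `extend_lane` over every image row yields the continuous, high-brightness lane overlay the display module renders.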
Image defogging, lane line detection and recognition, and lane line fitting are each packaged into an IP core based on the AXI bus protocol using Vivado HLS. The three IPs have similar structures, mainly comprising an algorithm processing unit, an AXI bus protocol port unit, an IP control end and input/output ends. The image defogging IP reads and writes video data to the DDR memory through AXI VDMA, and the processing system accesses the defogging IP's registers through an AXI-Lite bus interface. AXI VDMA is a soft IP core provided by the heterogeneous embedded processor manufacturer; it converts a data stream in AXI Stream format into Memory Map format, or Memory Map data into AXI Stream, thereby communicating with the DDR3. Vivado HLS can be regarded as an IP packaging tool that encapsulates a function implemented in a high-level language such as C, C++, SystemC, or OpenCL. In the program interface and device drivers, IP core drivers for the OV7670 and the LCD are added; the heterogeneous embedded processor manufacturer provides a video pipeline driver implemented on the V4L2 kernel driver framework. Meanwhile, the enable parameter CONFIG_VIDEO_XILINX is set when configuring the Linux kernel. To facilitate user configuration and access to the system hardware, ioctl commands can be used to perform these functions. During program debugging, different video sources are selected for processing, and the lane line detection effect and CPU utilization are observed on the display.
This embodiment realizes real-time lane line detection in foggy weather, helping the driver filter and extract lane line information under low visibility. After the FPGA system is powered on to energize the camera and the LCD, with the camera's default image acquisition position assumed optimal, the extraction and fitting of the lane lines on the display are observed as the automobile moves. The vehicle's speed can be varied to observe the response speed and detection effect, and thus to judge the reliability and effectiveness of the detection.
Fig. 2 is a flow diagram of a lane line identification method based on an FPGA system according to an embodiment of the present invention; the method includes the following steps:
110. The FPGA system receives a video to be processed containing lane lines, extracts lane line information from the video, fits the lane line information to the real-time driving video to obtain a current driving state image of the vehicle, and outputs the current driving state image to the ARM system.
120. The ARM system obtains the driving state of the vehicle from the current driving state image.
Further, the FPGA system denoises the video to be processed by using an atmospheric scattering model combined with a multi-scale convolutional neural network.
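The patent names the atmospheric scattering model but does not spell out its equations. In the standard formulation a hazy pixel is modeled as I(x) = J(x)·t(x) + A·(1 − t(x)), where J is the haze-free radiance, A the atmospheric light, and t the transmission that the multi-scale CNN would estimate. A minimal per-pixel sketch of the recovery step, with placeholder values for t and A (not the patent's trained estimates), might look like:

```python
# Sketch of dehazing with the atmospheric scattering model:
#   I(x) = J(x) * t(x) + A * (1 - t(x))
# so the haze-free radiance is recovered as
#   J(x) = (I(x) - A) / max(t(x), t0) + A
# t(x) would come from the multi-scale CNN; values here are placeholders.

def dehaze_pixel(i, a, t, t0=0.1):
    """Recover scene radiance for one channel value i in [0, 1]."""
    t = max(t, t0)                  # clamp transmission to avoid blow-up
    j = (i - a) / t + a
    return min(max(j, 0.0), 1.0)    # keep result in the valid intensity range

def dehaze_image(pixels, a, transmission, t0=0.1):
    """Per-pixel recovery; `pixels` and `transmission` are flat lists."""
    return [dehaze_pixel(i, a, t, t0) for i, t in zip(pixels, transmission)]

if __name__ == "__main__":
    hazy = [0.8, 0.7, 0.9]      # hazy intensities
    t_map = [0.5, 0.25, 1.0]    # illustrative transmission estimates
    print(dehaze_image(hazy, a=0.9, transmission=t_map))
```

Clamping t at t0 avoids dividing by near-zero transmission in dense fog, which would otherwise amplify sensor noise.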
Further, the FPGA system extracts the lane line information from the video to be processed, which specifically includes:
The FPGA system selects a region of interest in each frame of the video to be processed, enlarges the region of interest and inputs it into a Darknet network to extract a lane line feature map, performs K-means clustering on the feature map to extract sharpened feature clusters, and obtains the lane line information from these clusters.
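The patent does not specify how the K-means step is configured. As a hedged illustration, plain Lloyd's iteration over candidate lane-pixel columns (rather than Darknet feature vectors) shows the grouping idea, e.g. separating left and right lane markings:

```python
# Plain Lloyd's K-means, sketched for grouping candidate lane-line pixel
# x-coordinates into k clusters (e.g. left and right lane markings).
# The patent applies K-means to Darknet feature maps; this standalone
# 1-D version only illustrates the clustering step itself.

def kmeans_1d(points, centers, iters=20):
    """Cluster 1-D points around `centers`; returns (centers, assignments)."""
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        assign = [min(range(len(centers)), key=lambda c: abs(p - centers[c]))
                  for p in points]
        # update step: move each center to the mean of its members
        for c in range(len(centers)):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, assign

if __name__ == "__main__":
    xs = [100, 104, 98, 510, 505, 512]   # candidate lane-pixel columns
    centers, assign = kmeans_1d(xs, centers=[0.0, 600.0])
    print(centers)   # converges near 100.67 and 509.0
```

On the real feature map the same two steps run over higher-dimensional feature vectors; the nearest-center assignment and mean update are unchanged.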
Further, the FPGA system fits the lane line information to the real-time driving video to obtain the current driving state image of the vehicle, which specifically includes:
The FPGA system performs fitting prediction on the lane line information and the real-time driving video using a TINY-YOLO network model, and outputs the current driving state image.
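TINY-YOLO itself produces detections and is not reproduced here; what follows is only an illustrative sketch of the final line-fitting step, assuming detected lane points are already available. The line is parameterized as x = m·y + b because lane lines in a forward-facing view are near-vertical, which keeps the least-squares fit well-conditioned:

```python
# Ordinary least-squares fit of a lane line x = m*y + b through detected
# points. This is an illustrative stand-in for the "fitted lane line"
# overlay step; the patent's actual fitting runs inside its network model.

def fit_lane(points):
    """Least-squares fit x = m*y + b; `points` is a list of (x, y)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    syy = sum(y * y for _, y in points)
    m = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - m * sy) / n
    return m, b

if __name__ == "__main__":
    pts = [(300, 0), (310, 10), (320, 20), (330, 30)]  # detected lane pixels
    m, b = fit_lane(pts)
    print(m, b)   # slope 1.0, intercept 300.0 for these collinear points
```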
Further, the ARM system obtains the driving state of the vehicle from the current driving state image, which specifically includes:
The ARM system judges, from the current driving state image, whether the vehicle crosses a lane while driving, and issues alarm information when the distance between the vehicle and the lane line reaches a preset warning threshold.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium.
Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments are implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the scope of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in each jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A lane line recognition device based on an FPGA system, characterized in that the device comprises an FPGA system and an ARM system, wherein:
the FPGA system is used for receiving a video to be processed containing a lane line, extracting lane line information according to the video to be processed, fitting the lane line information and a real-time driving video to obtain a current driving state image of a vehicle, and outputting the current driving state image to the ARM system;
and the ARM system is used for obtaining the driving state of the vehicle according to the current driving state image.
2. The FPGA system-based lane line recognition device of claim 1,
the FPGA system is further used for denoising the video to be processed by utilizing an atmospheric scattering model combined with a multi-scale convolutional neural network.
3. The FPGA system-based lane line recognition device of claim 1,
the FPGA system is specifically used for selecting a region of interest in each frame of the video to be processed, enlarging the region of interest and inputting it into a Darknet network to extract a lane line feature map, performing K-means clustering on the feature map to extract sharpened feature clusters, and obtaining the lane line information from these clusters.
4. The FPGA system-based lane line recognition device of claim 1,
the FPGA system is specifically used for performing fitting prediction on the lane line information and the real-time driving video by using a TINY-YOLO network model and outputting the current driving state image.
5. The FPGA system-based lane line recognition device of claim 1,
the ARM system is specifically used for judging, from the current driving state image, whether the vehicle crosses a lane while driving, and for issuing alarm information when the distance between the vehicle and the lane line reaches a preset warning threshold.
6. A lane line identification method based on an FPGA system is characterized by comprising the following steps:
the FPGA system receives a video to be processed containing a lane line, extracts lane line information according to the video to be processed, fits the lane line information and a real-time driving video to obtain a current driving state image of a vehicle, and outputs the current driving state image to an ARM system;
and the ARM system obtains the driving state of the vehicle according to the current driving state image.
7. The FPGA system-based lane line identification method of claim 6, further comprising:
and the FPGA system utilizes an atmospheric scattering model and a multi-scale convolution neural network to carry out denoising processing on the video to be processed.
8. The method for identifying lane lines based on the FPGA system according to claim 6, wherein the FPGA system extracts lane line information according to the video to be processed, and specifically comprises:
the FPGA system selects a region of interest in each frame of the video to be processed, enlarges the region of interest and inputs it into a Darknet network to extract a lane line feature map, performs K-means clustering on the feature map to extract sharpened feature clusters, and obtains the lane line information from these clusters.
9. The lane line recognition method based on the FPGA system of claim 6, wherein the FPGA system fits the lane line information and the real-time driving video to obtain a current driving state image of the vehicle, specifically comprising:
the FPGA system performs fitting prediction on the lane line information and the real-time driving video using a TINY-YOLO network model, and outputs the current driving state image.
10. The method for identifying lane lines based on the FPGA system of claim 6, wherein the ARM system obtains the driving status of the vehicle according to the current driving status image, specifically comprising:
the ARM system judges, from the current driving state image, whether the vehicle crosses a lane while driving, and issues alarm information when the distance between the vehicle and the lane line reaches a preset warning threshold.
CN202110121765.0A 2021-01-28 2021-01-28 A lane line recognition device and method based on FPGA system Pending CN112793571A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110121765.0A CN112793571A (en) 2021-01-28 2021-01-28 A lane line recognition device and method based on FPGA system


Publications (1)

Publication Number Publication Date
CN112793571A (en) 2021-05-14

Family

ID=75812683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110121765.0A Pending CN112793571A (en) 2021-01-28 2021-01-28 A lane line recognition device and method based on FPGA system

Country Status (1)

Country Link
CN (1) CN112793571A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102806913A (en) * 2011-05-31 2012-12-05 德尔福电子(苏州)有限公司 Novel lane line deviation detection method and device
CN103996053A (en) * 2014-06-05 2014-08-20 中交第一公路勘察设计研究院有限公司 Lane departure alarm method based on machine vision
CN108297867A (en) * 2018-02-11 2018-07-20 江苏金羿智芯科技有限公司 A kind of lane departure warning method and system based on artificial intelligence
CN109460742A (en) * 2018-11-20 2019-03-12 中山大学深圳研究院 A kind of deviation alarm method based on high resolution CMOS
WO2019200938A1 (en) * 2018-04-18 2019-10-24 福州大学 Early warning system for vehicles rolling on line
CN110516633A (en) * 2019-08-30 2019-11-29 的卢技术有限公司 A kind of method for detecting lane lines and system based on deep learning
CN110517521A (en) * 2019-08-06 2019-11-29 北京航空航天大学 A lane departure warning method based on road-vehicle fusion perception


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cui Wenliang et al., "Research on Highway Lane Line Detection Method Based on the YOLOv3 Algorithm", Acta Automatica Sinica *
Chen Qingjiang et al., "Image Defogging Algorithm Based on Convolutional Neural Network", Chinese Journal of Liquid Crystals and Displays *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210514)