
CN113819910A - Method and device for identifying overpass zone in vehicle navigation - Google Patents

Method and device for identifying overpass zone in vehicle navigation

Info

Publication number
CN113819910A
CN113819910A
Authority
CN
China
Prior art keywords
vehicle
viaduct
identification
image
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110961517.7A
Other languages
Chinese (zh)
Other versions
CN113819910B (en)
Inventor
李冰
刘毅
杨明生
周志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd filed Critical Baidu Online Network Technology Beijing Co Ltd
Priority to CN202110961517.7A priority Critical patent/CN113819910B/en
Publication of CN113819910A publication Critical patent/CN113819910A/en
Application granted granted Critical
Publication of CN113819910B publication Critical patent/CN113819910B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Navigation (AREA)

Abstract

The application discloses a method and a device for identifying a viaduct area in vehicle navigation, an electronic device, and a storage medium, and relates to the technical fields of intelligent transportation and the Internet of Vehicles. The specific implementation scheme is as follows: in response to the vehicle currently being at a fork of an elevated bridge area, acquiring inertial data measured by an inertial sensor on the vehicle while the vehicle travels a first distance; acquiring image data collected by an image acquisition device on the vehicle while the vehicle travels a second distance; classifying and identifying the image data based on a secondary recognition model to obtain an image recognition result of the image data; and identifying whether the vehicle is currently on the viaduct or under the viaduct according to the inertial data and the image recognition result. In this way, whether the vehicle is currently on the viaduct or under the viaduct can be identified accurately, the route is then planned automatically, manual switching between the elevated and ground-level routes is avoided, and navigation accuracy is improved.

Description

Method and device for identifying overpass zone in vehicle navigation
Technical Field
The present application relates to the technical fields of intelligent transportation and the Internet of Vehicles, and in particular to a method and a device for identifying a viaduct area in vehicle navigation, an electronic device, and a storage medium.
Background
With the progress of science and technology and the development of mobile Internet technology, more and more people choose to use a mobile navigation application (APP) while driving, which brings great convenience to drivers who are unfamiliar with a route. Such navigation software mainly uses a Global Positioning System (GPS) module (or a BeiDou module) to acquire longitude and latitude information for positioning, and typically plans a path according to the departure place and the destination.
In the related art, the driver drives along the planned path. When the vehicle is in an elevated bridge area, the viaduct is often complicated: the road on the bridge runs parallel to the road under the bridge, and whether the vehicle is on or under the viaduct is judged mainly from the GPS longitude and latitude and a sequence of consecutive GPS positions. However, because the bridge partially blocks the signal, the GPS may be inaccurate, and where the elevated road lies directly above the ground road the route is easily misjudged; the navigation route then has to be switched manually. In addition, when navigation is started within a bridge area, the starting point cannot be accurately located as being on or under the bridge, which results in a poor user experience.
Disclosure of Invention
The present application provides a method and a device for identifying a viaduct area in vehicle navigation, an electronic device, and a storage medium.
According to a first aspect of the present application, a method for identifying an overpass zone in vehicle navigation is provided, wherein an inertial sensor and an image acquisition device are arranged on the vehicle, and the method comprises:
acquiring inertial data measured by the inertial sensor during a first distance traveled by the vehicle in response to the vehicle being currently at a fork of an elevated bridge zone;
acquiring image data acquired by the image acquisition device in the process that the vehicle travels a second distance;
classifying and identifying the image data based on a secondary identification model to obtain an image identification result of the image data;
and identifying whether the vehicle is currently positioned on the viaduct or under the viaduct according to the inertial data and the image identification result.
According to a second aspect of the present application, there is provided a viaduct area identification device in vehicle navigation, wherein an inertial sensor and an image acquisition device are arranged on the vehicle, and the device includes:
the first acquisition module is used for acquiring, in response to the vehicle currently being at a fork of the elevated bridge area, inertial data measured by the inertial sensor in the process that the vehicle travels a first distance;
the second acquisition module is used for acquiring image data acquired by the image acquisition device in the process of traveling a second distance by the vehicle;
the first identification module is used for carrying out classification identification on the image data based on a secondary identification model so as to obtain an image identification result of the image data;
and the second identification module is used for identifying whether the vehicle is positioned on the viaduct or under the viaduct currently according to the inertial data and the image identification result.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for identifying an overpass zone in vehicle navigation according to the first aspect of the present application.
According to a fourth aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for identifying an overpass zone in vehicle navigation according to the first aspect of the present application.
According to a fifth aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method for identifying an overpass zone in vehicle navigation according to the first aspect of the present application.
According to the technical solution of the present application, the problems in the related art that partial blocking by the bridge makes the GPS inaccurate or that vertically stacked elevated roads lead to wrong route judgments, that the route has to be switched manually, and that when navigation is started in a bridge area the starting point cannot be accurately located on or under the bridge can be solved. Whether the vehicle is currently on the viaduct or under the viaduct can be identified accurately, the route can then be planned automatically, manual switching between the elevated and ground-level routes is avoided, navigation accuracy is improved, and user experience is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow chart of an overpass zone identification method in vehicle navigation according to one embodiment of the present application;
FIG. 2 is a flowchart illustrating another method for identifying overpass zones in vehicle navigation according to an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an overpass zone identification device in vehicle navigation according to one embodiment of the present application;
fig. 4 is a schematic structural diagram of an overpass zone identification device in vehicle navigation according to another embodiment of the present application;
fig. 5 is a block diagram of an electronic device for a method of viaduct area identification in vehicle navigation according to an embodiment of the present application.
Detailed Description
The following describes exemplary embodiments of the present application with reference to the accompanying drawings, including various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The present application provides a method and a device for identifying a viaduct area in vehicle navigation, an electronic device, and a non-transitory computer-readable storage medium storing computer instructions, which solve the technical problems in the related art that, because the bridge partially blocks the signal, the GPS is inaccurate or vertically stacked elevated roads lead to wrong route judgments, that switching has to be performed manually, and that when navigation is started in the viaduct area the starting point cannot be accurately located on or under the bridge. The viaduct area identification method, apparatus, electronic device, and non-transitory computer-readable storage medium storing computer instructions in vehicle navigation of the embodiments of the present application are described below with reference to the drawings.
Fig. 1 is a flowchart of an overpass zone identification method in vehicle navigation according to one embodiment of the present application. It should be noted that the method for identifying an overpass zone in vehicle navigation according to the embodiment of the present application may be applied to the apparatus for identifying an overpass zone in vehicle navigation according to the embodiment of the present application, and the apparatus may be configured on an electronic device. As an example, the electronic device may be a map engine side device having a navigation function. In addition, an inertial sensor and an image acquisition device are arranged on the vehicle. Optionally, the image acquisition device may be a vehicle-mounted camera for capturing images of the environment outside the vehicle.
As shown in fig. 1, the overpass identification method in vehicle navigation may include:
step 101, responding to a fork road of the vehicle in the elevated bridge area, acquiring inertial data measured by an inertial sensor in the process that the vehicle travels a first distance.
For example, assume that a map engine side device with a navigation function is installed on a vehicle. While the user navigates using the navigation function provided by the map engine side device, the GPS positioning system on the vehicle may be used to determine the current position of the vehicle, for example to obtain the longitude and latitude coordinates of the current position, and whether the vehicle is currently at a fork of an elevated bridge area may then be determined from these coordinates in combination with the map provided by the map engine side, as sketched below.
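A minimal sketch of this check, written in Python, is given below; the map-query helper and its return fields are assumptions, since the patent does not specify the map engine's interface.

def is_at_viaduct_fork(lat: float, lon: float, map_engine) -> bool:
    """Return True when the GPS fix falls on a fork of an elevated bridge area."""
    # query_road_feature() and its fields are assumed, not part of the patent.
    feature = map_engine.query_road_feature(lat, lon)
    return bool(feature.get("is_viaduct_area")) and bool(feature.get("is_fork"))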
Optionally, when it is determined that the vehicle is currently located at the intersection of the viaduct area, the inertial sensor provided on the vehicle may measure inertial data of the vehicle during the first distance traveled by the vehicle, and send the inertial data to the map engine-side device, so that the map engine-side device may obtain the inertial data measured by the inertial sensor during the first distance traveled by the vehicle.
Inertial data includes, but is not limited to, acceleration, tilt, shock, vibration, rotation, and multiple degree of freedom (DoF) motion, among others.
For example, when the vehicle is currently at a fork of an elevated bridge area, the vehicle may determine whether it is entering a ramp; if so, the current position of the vehicle is set as position 1. Then, as the vehicle travels, it may be determined whether the vehicle has left the ramp; if so, the position at that moment is set as position 2, and the distance between position 1 and position 2 is determined as the first distance. The inertial sensor measures the inertial data of the vehicle while the vehicle travels this first distance and uploads the measured inertial data to the map engine side device, so that the map engine side device obtains the inertial data measured by the inertial sensor while the vehicle travels the first distance. It should be noted that, when it cannot be determined whether the vehicle has left the ramp, the position N meters ahead of position 1 may be set as position 2, where N may be, for example, 600, and the distance between position 1 and position 2 is determined as the first distance; that is, when it cannot be determined whether the vehicle has left the ramp, 600 meters may be used as the first distance, so that the inertial data measured by the inertial sensor while the vehicle travels 600 meters is obtained. A sketch of this windowing logic is given below.
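The window over which the inertial data is accumulated could look roughly as follows; this is a sketch under stated assumptions, not the patented implementation. The odometer, IMU, and ramp-detection helpers are assumed interfaces; only the 600-meter fallback comes from the example above.

RAMP_FALLBACK_METERS = 600.0  # N in the example above, used when the ramp end cannot be detected

def collect_inertial_window(vehicle):
    """Accumulate inertial samples from the ramp entry (position 1) until the
    ramp is detected to have ended (position 2) or the fallback distance is reached."""
    samples = []
    start = vehicle.odometer()              # assumed interface, meters travelled
    while True:
        samples.append(vehicle.imu.read())  # acceleration, tilt, rotation, ...
        travelled = vehicle.odometer() - start
        if vehicle.ramp_has_ended() or travelled >= RAMP_FALLBACK_METERS:
            return samples, travelled       # travelled is the first distance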
In the embodiment of the present application, when the vehicle is not currently at a fork of the overpass area, overpass identification is not performed.
Step 102: acquire image data collected by the image acquisition device while the vehicle travels a second distance.
In the embodiment of the application, the image acquisition device on the vehicle can acquire the image of the external environment of the vehicle during the running process of the vehicle. After acquiring inertial data measured by the inertial sensor during the first distance traveled by the vehicle, image data acquired by the image acquisition device during the second distance traveled by the vehicle may be acquired.
For example, when it is determined that the vehicle is currently at a fork of the elevated bridge area and that the vehicle has travelled the first distance, the current position of the vehicle may be set as position 3. Then, as the vehicle travels, the position M meters ahead of position 3 may be set as position 4, where M may be, for example, 100. The distance between position 3 and position 4 is determined as the second distance, and the image data collected by the image acquisition device while the vehicle travels this second distance may be acquired.
Step 103: classify and identify the image data based on the secondary recognition model to obtain an image recognition result of the image data.
In one implementation, the secondary recognition model includes a first recognition model and a second recognition model. Optionally, the image data is classified and identified based on the first recognition model and the second recognition model to obtain the image recognition result of the image data. The first recognition model is used for coarse classification of the image data, and the second recognition model is used to further refine the uncertain classification results output by the first recognition model. The image recognition result of the image data is determined by combining the recognition result of the first recognition model with the recognition result of the second recognition model.
Step 104: identify whether the vehicle is currently on the viaduct or under the viaduct according to the inertial data and the image recognition result.
In the embodiment of the present application, the angle between the vehicle's current direction of motion and the horizontal plane of the earth's surface can be calculated from the inertial data, the image data collected by the image acquisition device while the vehicle travels the second distance is classified and identified to obtain the corresponding image recognition result, and whether the vehicle is currently on the viaduct or under the viaduct is then judged from this angle together with the image recognition result. The specific implementation is described in the subsequent embodiments.
To address the technical problem that, when the map engine side device is in the navigation starting state or in the navigation state, it cannot be accurately determined whether the vehicle is currently under or on the viaduct, optionally, in an embodiment of the present application, when the map engine side on the vehicle is in the navigation starting state or in the navigation state and it is determined that the current position of the vehicle belongs to a viaduct area, image data collected by the image acquisition device while the vehicle travels a third distance may be acquired, and the image data may then be classified and identified to determine whether the vehicle is currently on the viaduct or under the viaduct.
That is, when the map engine side on the vehicle is in the navigation starting state or in the navigation state and it is determined that the current position of the vehicle belongs to a viaduct area, the image acquisition device may collect image data while the vehicle travels the third distance and upload it to the map engine side device; the map engine side device then classifies and identifies this image data to determine whether the vehicle is currently on the viaduct or under the viaduct. In this way, the technical problem that it cannot be accurately determined whether the vehicle is currently under or on the viaduct when the map engine side device is in the navigation starting state or the navigation state is solved.
With the method for identifying a viaduct area in vehicle navigation of the embodiment of the present application, whether the vehicle is on the viaduct or under the viaduct can be identified accurately, so the route is planned automatically, manual switching between the elevated and ground-level routes is avoided, navigation accuracy is improved, and user experience is improved. The current position of the vehicle is determined, and whether the vehicle is currently at a fork of the viaduct area is judged from that position. When the vehicle is judged to be at such a fork, the inertial data measured by the inertial sensor while the vehicle travels the first distance is obtained, the image data collected by the image acquisition device while the vehicle travels the second distance is obtained, and whether the vehicle is currently on the viaduct or under the viaduct is then judged from these two sources. This solves the problems in the related art that partial blocking by the bridge makes the GPS inaccurate or that vertically stacked elevated roads lead to wrong route judgments and manual switching, and that when navigation is started in a bridge area the starting point cannot be accurately located on or under the bridge. Whether the vehicle is currently on the viaduct or under the viaduct can thus be identified accurately, the route can be planned automatically, manual switching between the elevated and ground-level routes is avoided, navigation accuracy is improved, and user experience is improved.
Fig. 2 is a flowchart of another overpass zone identification method in vehicle navigation according to an embodiment of the present disclosure. As shown in fig. 2, the overpass identification method in vehicle navigation may include:
Step 201: in response to the vehicle currently being at a fork of the elevated bridge area, acquire inertial data measured by the inertial sensor while the vehicle travels the first distance.
In an embodiment of the present application, the implementation of step 201 may refer to the implementation of step 101, which is not described herein again.
Step 202: calculate the angle between the vehicle's current direction of motion and the horizontal plane of the earth's surface from the inertial data.
In the embodiment of the present application, after the inertial data measured by the inertial sensor while the vehicle travels the first distance has been acquired, the non-gravitational acceleration along the vehicle's direction of motion and the vehicle's current acceleration can be determined from the inertial data, and the angle between the vehicle's current direction of motion and the horizontal plane can then be calculated from the non-gravitational acceleration, the vehicle's current acceleration, and the gravitational acceleration.
For example, after the inertial data measured by the inertial sensor while the vehicle travels the first distance has been acquired, when the acceleration-sensitive axis of the inertial sensor is aligned with the vehicle's direction of motion, that axis senses the non-gravitational acceleration along the direction of motion, which can be understood as the sum of the component of the gravitational acceleration along the direction of motion and the vehicle's own current acceleration. Data from an acceleration sensor on the vehicle may be acquired to determine the vehicle's current acceleration, and the angle between the vehicle's current direction of motion and the horizontal plane of the earth's surface can then be calculated from the non-gravitational acceleration, the vehicle's current acceleration, and the gravitational acceleration as

θ = arcsin((α_ccy - α_car) / G_0)

where θ is the angle between the vehicle's current direction of motion and the horizontal plane, α_ccy is the non-gravitational acceleration sensed along the direction of motion, α_car is the vehicle's current acceleration, and G_0 is the gravitational acceleration. When the calculated angle is an uphill angle, it can be determined that the vehicle is currently going up onto the viaduct; when the calculated angle is a downhill angle, it can be determined that the vehicle is currently going down under the viaduct.
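As a minimal numerical sketch, the angle can be computed as follows, assuming the arcsine relation above (the sensed reading equals the vehicle's own acceleration plus the gravity component along the slope); the sign convention and the helper name are assumptions, not part of the patent.

import math

G0 = 9.81  # gravitational acceleration, m/s^2

def slope_angle_deg(a_ccy: float, a_car: float) -> float:
    """Angle between the vehicle's direction of motion and the horizontal plane.

    a_ccy: non-gravitational acceleration sensed along the direction of motion
           (vehicle acceleration plus the gravity component along the slope)
    a_car: the vehicle's own current acceleration
    A positive result suggests an uphill (onto-the-viaduct) ramp and a negative
    result a downhill (under-the-viaduct) ramp, under the assumed sign convention.
    """
    ratio = max(-1.0, min(1.0, (a_ccy - a_car) / G0))  # clamp sensor noise into [-1, 1]
    return math.degrees(math.asin(ratio))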
Step 203: acquire image data collected by the image acquisition device while the vehicle travels the second distance.
For example, after the inertial data measured by the inertial sensor while the vehicle travels the first distance has been acquired, the position at which the ramp is determined to have ended may be set as position 3, and the position M meters ahead of position 3 may be set as position 4, where M may be, for example, 100; the distance between position 3 and position 4 is the second distance. The image acquisition device collects image data while the vehicle travels this second distance and uploads it to the map engine side device, so that the map engine side device obtains the image data collected by the image acquisition device while the vehicle travels the second distance.
Step 204: classify and identify the image data based on the secondary recognition model to obtain an image recognition result of the image data.
In the embodiment of the present application, after the map engine side device acquires the image data collected by the image acquisition device while the vehicle travels the second distance, the image data can be coarsely classified by the first recognition model, and the image data whose classification label output by the first recognition model is the unknown label can be finely identified by the second recognition model; the corresponding image recognition result is then obtained from the on-bridge and under-bridge classification labels output by the first recognition model together with the classification-label result output by the second recognition model.
For example, the first recognition model and the second recognition model are models obtained by training based on deep learning or traditional machine learning, where the traditional machine learning supports, for example, the Support Vector Machine (SVM) classifier and the Histogram of Oriented Gradients (HOG) feature.
That is to say, after the map engine side device acquires the image data collected by the image acquisition device while the vehicle travels the second distance, the first recognition model coarsely classifies the images into the on-bridge label, the under-bridge label, and the unknown label. The on-bridge and under-bridge labels are assigned to images with obvious features, together with a confidence, and the confidence of the on-bridge or under-bridge label must exceed a certain threshold; otherwise the image is classified under the unknown label, which covers images without obvious features. In order to judge the position more accurately, the image data whose classification label output by the first recognition model is the unknown label is then finely judged by the second recognition model, and the final image recognition result is obtained from the on-bridge and under-bridge labels output by the first recognition model together with the classification-label result output by the second recognition model, so that the image recognition result is more accurate. A sketch of this two-stage classification is given below.
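A sketch of this two-stage pipeline is given below; the confidence threshold value, the label strings, and the model interfaces are assumptions, since the text only requires that such a threshold exist.

CONFIDENCE_THRESHOLD = 0.8  # assumed value; the text only requires "a certain threshold"

def classify_frame(image, coarse_model, fine_model) -> str:
    """Coarse classification first; the fine model only refines uncertain frames."""
    label, confidence = coarse_model.predict(image)  # assumed to return (label, confidence)
    if label in ("on_bridge", "under_bridge") and confidence >= CONFIDENCE_THRESHOLD:
        return label
    # Frames without obvious features fall back to the second, finer model.
    label, _ = fine_model.predict(image)
    return label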
Step 205: judge whether the vehicle is currently on the viaduct or under the viaduct according to the angle between the vehicle's current direction of motion and the horizontal plane and the image recognition result.
In the embodiment of the present application, the angle between the vehicle's current direction of motion and the horizontal plane is calculated, the image data collected by the image acquisition device while the vehicle travels the second distance is classified and identified to obtain the corresponding image recognition result, and whether the vehicle is currently on the viaduct or under the viaduct is then judged from this angle together with the image recognition result. For example, if the angle between the vehicle's current direction of motion and the horizontal plane is an uphill angle and the image recognition result is on-bridge, it can be determined that the vehicle is currently on the viaduct.
In the embodiment of the present application, after obtaining the recognition result calculated from the inertial data and the recognition result of the image acquisition device, the map engine side device may apply a voting mechanism to the two results to determine whether the vehicle is currently on the viaduct or under the viaduct, so that the navigation route can be switched accordingly.
For example, when the recognition result calculated from the inertial data is on the viaduct and the recognition result of the image acquisition device is also on the viaduct, it can be determined that the vehicle is currently on the viaduct, and the navigation is switched to the on-viaduct route. When the recognition result calculated from the inertial data is under the viaduct and the recognition result of the image acquisition device is also under the viaduct, it can be determined that the vehicle is currently under the viaduct, and the navigation is switched to the under-viaduct route. When the recognition result calculated from the inertial data is on the viaduct but the recognition result of the image acquisition device is under the viaduct, the current state of the vehicle is determined to be unknown, and the navigation route is not switched. That is, when the recognition result of the inertial sensor and the recognition result of the image acquisition device are both on the viaduct, the voting mechanism determines that the vehicle is currently on the viaduct; when both results are under the viaduct, the voting mechanism determines that the vehicle is currently under the viaduct; and when either result is unknown or the two results disagree, the voting mechanism determines that the final recognition result is unknown. A sketch of this voting step is given below.
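A minimal sketch of this voting step is shown below; the label strings and the function name are assumptions.

def fuse_results(inertial_result: str, image_result: str) -> str:
    """Vote over the two recognition results: switch the route only when both agree."""
    if inertial_result == image_result and inertial_result in ("on_bridge", "under_bridge"):
        return inertial_result  # agreement: switch to the corresponding navigation route
    return "unknown"            # disagreement or an unknown result: keep the current route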
Optionally, when it is determined that the vehicle is currently located on the viaduct or located under the viaduct, the navigation route may be switched according to the current position of the vehicle, for example, when it is determined that the vehicle is currently located on the viaduct, the navigation route may be switched to the navigation route on the viaduct; and when the vehicle is determined to be currently positioned under the viaduct, switching to the navigation route under the viaduct.
With the viaduct area identification method in vehicle navigation of the embodiment of the present application, when the vehicle is judged to be currently at a fork of the viaduct area, the inertial data measured by the inertial sensor while the vehicle travels the first distance is obtained, and the angle between the vehicle's current direction of motion and the horizontal plane is calculated from it; the image data collected by the image acquisition device while the vehicle travels the second distance is obtained and is classified and identified to obtain the corresponding image recognition result; and whether the vehicle is currently on the viaduct or under the viaduct is then judged from the angle and the image recognition result. In this way, the method judges whether the vehicle is on or under the bridge from the combined recognition of the inertial sensor and the image acquisition device, can accurately identify whether the vehicle is currently on the viaduct or under the viaduct, plans the route automatically, avoids manual switching between the elevated and ground-level routes, improves navigation accuracy, and improves user experience.
Corresponding to the viaduct area identification methods provided in the foregoing embodiments, an embodiment of the present application further provides a viaduct area identification device in vehicle navigation. Since the device provided in the embodiment of the present application corresponds to the methods provided in the foregoing embodiments, the implementations of the viaduct area identification method in vehicle navigation are also applicable to the device provided in this embodiment and are not described in detail here. Fig. 3 is a schematic structural diagram of an overpass zone identification device in vehicle navigation according to an embodiment of the present application.
As shown in fig. 3, the overpass zone identification apparatus 300 in the vehicle navigation includes: a first obtaining module 301, a second obtaining module 302, a first identifying module 303 and a second identifying module 304.
The first obtaining module 301 is configured to obtain inertial data measured by the inertial sensor during a first distance traveled by the vehicle in response to the vehicle being currently at a fork of an elevated bridge area;
the second obtaining module 302 is configured to obtain image data collected by the image collecting device during a second distance traveled by the vehicle;
the first identification module 303 is configured to perform classification identification on the image data based on a secondary identification model to obtain an image identification result of the image data. As one example, the secondary recognition module includes a first recognition model and a second recognition model.
In one implementation, the first identification module 303 coarsely classifies the image data with the first recognition model and finely identifies, with the second recognition model, the image data whose classification label output by the first recognition model is the unknown label; the corresponding image recognition result is obtained from the on-bridge and under-bridge classification labels output by the first recognition model together with the classification-label result output by the second recognition model.
The second identification module 304 is configured to identify whether the vehicle is currently located on the viaduct or under the viaduct according to the inertial data and the image identification result.
In some embodiments, the second identification module 304 calculates an angle between the current movement direction of the vehicle and the horizontal direction of the earth surface according to the inertial data; and identifying whether the vehicle is currently positioned on the viaduct or under the viaduct according to the included angle between the current motion direction of the vehicle and the horizontal direction of the earth surface and the image identification result.
In one implementation, the second identification module 304 can calculate the included angle between the current motion direction of the vehicle and the horizontal direction of the earth surface according to the inertial data as follows: determining non-gravitational acceleration in the vehicle motion direction according to the inertial data, and determining the current acceleration of the vehicle; and calculating an included angle between the current motion direction of the vehicle and the horizontal direction of the earth surface according to the non-gravitational acceleration, the current acceleration of the vehicle and the gravitational acceleration.
In some embodiments, as shown in fig. 4, the overpass zone identification apparatus 300 in vehicle navigation may further include a third acquisition module 405 and a third identification module 406. The third acquisition module 405 is configured to acquire image data collected by the image acquisition device while the vehicle travels a third distance, in response to the map engine end on the vehicle being in a navigation initial state or in a yaw state and the current position of the vehicle belonging to an elevated bridge area. The third identification module 406 is configured to classify and identify the image data to determine whether the vehicle is currently on the viaduct or under the viaduct. Modules 401 to 404 in fig. 4 have the same functions and structures as modules 301 to 304 in fig. 3.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
According to the technical solution of the present application, the problems in the related art that partial blocking by the bridge makes the GPS inaccurate or that vertically stacked elevated roads lead to wrong route judgments, that the route has to be switched manually, and that when navigation is started in a bridge area the starting point cannot be accurately located on or under the bridge can be solved. Whether the vehicle is currently on the viaduct or under the viaduct can thus be identified accurately, the route can be planned automatically, manual switching between the elevated and ground-level routes is avoided, navigation accuracy is improved, and user experience is improved.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 5 is a block diagram of an electronic device for a method of identifying an overpass area in vehicle navigation according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 5, the electronic apparatus includes: one or more processors 501, a memory 502, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing a portion of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor 501 is taken as an example.
Memory 502 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method for viaduct area identification in vehicle navigation provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method for viaduct area identification in vehicle navigation provided by the present application.
The memory 502, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for identifying a overpass area in vehicle navigation in the embodiments of the present application. The processor 501 executes various functional applications of the server and data processing, namely, a method for identifying a overpass region in vehicle navigation in the above-described method embodiment, by executing the non-transitory software programs, instructions, and modules stored in the memory 502.
The memory 502 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created from the use of the electronic device for overpass zone identification in vehicle navigation, and the like. Further, the memory 502 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 502 optionally includes memory located remotely from the processor 501, and such remote memory may be connected over a network to the electronic device for overpass zone identification in vehicle navigation. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method for identifying the overpass zone in the vehicle navigation may further include: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503 and the output device 504 may be connected by a bus or other means, and fig. 5 illustrates the connection by a bus as an example.
The input device 503 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for overpass zone identification in vehicle navigation, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, or a joystick. The output device 504 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiment of the present application, the current position of the vehicle can be determined, and whether the vehicle is currently at a fork of the viaduct area is judged from that position. When the vehicle is judged to be currently at a fork of the viaduct area, the inertial data measured by the inertial sensor while the vehicle travels the first distance is obtained, the image data collected by the image acquisition device while the vehicle travels the second distance is obtained, and whether the vehicle is currently on the viaduct or under the viaduct is then judged from the inertial data and the image data. Thus, through the combined recognition of the inertial sensor and the image acquisition device, the position of the vehicle on or under the bridge is judged, whether the vehicle is currently on the viaduct or under the viaduct can be identified accurately, the route is then planned automatically, manual switching between the elevated and ground-level routes is avoided, navigation accuracy is improved, and user experience is improved.
It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order, and no limitation is imposed herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (13)

1. A method for identifying a viaduct area in vehicle navigation, wherein an inertial sensor and an image acquisition device are arranged on a vehicle, and the method comprises the following steps:
acquiring inertial data measured by the inertial sensor during a first distance traveled by the vehicle in response to the vehicle being currently at a fork of an elevated bridge zone;
acquiring image data acquired by the image acquisition device in the process that the vehicle travels a second distance;
classifying and identifying the image data based on a secondary identification model to obtain an image identification result of the image data;
and identifying whether the vehicle is currently positioned on the viaduct or under the viaduct according to the inertial data and the image identification result.
2. The method of claim 1, wherein said identifying whether the vehicle is currently located on or under a viaduct based on the inertial data and the image recognition results comprises:
calculating an included angle between the current motion direction of the vehicle and the horizontal direction of the earth surface according to the inertial data;
and identifying whether the vehicle is currently positioned on the viaduct or under the viaduct according to the included angle between the current motion direction of the vehicle and the horizontal direction of the earth surface and the image identification result.
3. The method of claim 2, wherein said calculating an angle between a current direction of motion of the vehicle and a horizontal direction of the earth's surface from the inertial data comprises:
determining non-gravitational acceleration in the vehicle motion direction according to the inertial data, and determining the current acceleration of the vehicle;
and calculating an included angle between the current motion direction of the vehicle and the horizontal direction of the earth surface according to the non-gravitational acceleration, the current acceleration of the vehicle and the gravitational acceleration.
4. The method of claim 1, wherein the secondary identification model comprises a first recognition model and a second recognition model; the classifying and identifying the image data based on the secondary identification model to obtain an image identification result of the image data comprises the following steps:
carrying out rough classification identification on the image data through the first identification model, and carrying out fine identification on the image data of which the classification label output by the first identification model is an unknown label through the second identification model;
and obtaining a corresponding image recognition result according to the on-bridge and under-bridge classification labels output by the first recognition model and the classification label result output by the second recognition model.
5. The method of any one of claims 1 to 4, further comprising:
in response to a map engine on the vehicle being in a navigation start state or a yaw state and the current position of the vehicle belonging to a viaduct area, acquiring image data collected by the image acquisition device while the vehicle travels a third distance; and
classifying and identifying the image data to determine whether the vehicle is currently located on the viaduct or under the viaduct.
6. A viaduct area identification apparatus in vehicle navigation, wherein an inertial sensor and an image acquisition device are arranged on a vehicle, the apparatus comprising:
a first acquisition module configured to, in response to the vehicle currently being at a fork of a viaduct area, acquire inertial data measured by the inertial sensor while the vehicle travels a first distance;
a second acquisition module configured to acquire image data collected by the image acquisition device while the vehicle travels a second distance;
a first identification module configured to classify and identify the image data based on a secondary identification model to obtain an image identification result of the image data; and
a second identification module configured to identify, according to the inertial data and the image identification result, whether the vehicle is currently located on the viaduct or under the viaduct.
7. The apparatus of claim 6, wherein the second identification module is specifically configured to:
calculate, according to the inertial data, an angle between the current direction of motion of the vehicle and the horizontal plane of the earth's surface; and
identify whether the vehicle is currently located on the viaduct or under the viaduct according to the angle between the current direction of motion of the vehicle and the horizontal plane of the earth's surface and the image identification result.
8. The apparatus of claim 7, wherein the second identification module is specifically configured to:
determine, according to the inertial data, a non-gravitational acceleration along the direction of motion of the vehicle, and determine a current acceleration of the vehicle; and
calculate the angle between the current direction of motion of the vehicle and the horizontal plane of the earth's surface according to the non-gravitational acceleration, the current acceleration of the vehicle, and the gravitational acceleration.
9. The apparatus of claim 6, wherein the secondary identification model comprises a first identification model and a second identification model, and the first identification module is specifically configured to:
perform coarse classification on the image data through the first identification model, and perform fine identification, through the second identification model, on image data for which the classification label output by the first identification model is an unknown label; and
obtain the corresponding image identification result according to the on-bridge and under-bridge classification labels output by the first identification model and the classification label result output by the second identification model.
10. The apparatus of any one of claims 6 to 9, further comprising:
a third acquisition module configured to, in response to a map engine on the vehicle being in a navigation start state or a yaw state and the current position of the vehicle belonging to a viaduct area, acquire image data collected by the image acquisition device while the vehicle travels a third distance; and
a third identification module configured to classify and identify the image data to determine whether the vehicle is currently located on the viaduct or under the viaduct.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
12. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 5.
13. A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
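The angle computation recited in claims 3 and 8 can be illustrated with a minimal Python sketch. It assumes one plausible physical reading that the claims do not spell out: along the direction of travel, the accelerometer's specific-force reading combines the vehicle's own (non-gravitational) acceleration with a gravity component proportional to the road slope, so the slope angle follows from an arcsine. The function name, the sign convention, and the assumption that the forward specific force and the vehicle's own acceleration are already available are illustrative choices, not taken from the patent.

```python
import math

G = 9.80665  # standard gravitational acceleration, m/s^2


def slope_angle(forward_specific_force: float, vehicle_acceleration: float) -> float:
    """Estimate the angle (radians) between the direction of travel and the
    horizontal plane.

    Assumed model (illustrative only): along the forward axis the accelerometer
    measures f = a + g * sin(theta), where a is the vehicle's own
    (non-gravitational) acceleration, so sin(theta) = (f - a) / g.
    """
    s = (forward_specific_force - vehicle_acceleration) / G
    s = max(-1.0, min(1.0, s))  # guard against sensor noise pushing |s| past 1
    return math.asin(s)


# Example: a forward specific force of 2.0 m/s^2 while the vehicle itself
# accelerates at 1.5 m/s^2 corresponds to a gentle climb of about 2.9 degrees.
print(math.degrees(slope_angle(2.0, 1.5)))
```

In practice the two inputs would come from the inertial sensor and from differentiated wheel-speed or GNSS velocity, and the angle would be smoothed over the first distance rather than taken from a single sample.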
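The two-stage classification of claims 4 and 9 can likewise be sketched. The label set (on-bridge, under-bridge, unknown), the classifier interfaces, and the stub models below are assumptions made purely for illustration; the claims only state that images the first model labels as unknown are passed to the second model for fine identification.

```python
from enum import Enum
from typing import Callable, Iterable, List


class Label(Enum):
    ON_BRIDGE = "on_bridge"
    UNDER_BRIDGE = "under_bridge"
    UNKNOWN = "unknown"


# Hypothetical classifier interfaces: each takes one image (treated here as an
# opaque object) and returns a Label. Real models would wrap e.g. a CNN.
CoarseModel = Callable[[object], Label]
FineModel = Callable[[object], Label]


def classify_images(images: Iterable[object],
                    coarse: CoarseModel,
                    fine: FineModel) -> List[Label]:
    """Two-stage identification: keep the coarse model's on/under-bridge
    labels, and re-classify only the images it marks as unknown."""
    results = []
    for img in images:
        label = coarse(img)
        if label is Label.UNKNOWN:
            label = fine(img)  # fine identification for ambiguous images
        results.append(label)
    return results


# Toy usage with stand-in models.
coarse_stub = lambda img: Label.UNKNOWN if img == "hard" else Label.ON_BRIDGE
fine_stub = lambda img: Label.UNDER_BRIDGE
print(classify_images(["easy", "hard"], coarse_stub, fine_stub))
```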
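Finally, a sketch of how the slope angle and the image result might be fused as in claims 2 and 7. The specific rule below, in which a sustained upward pitch over the first distance votes for an on-bridge result and the image label is trusted when the two cues disagree, is an illustrative assumption, as is the pitch threshold; the claims only require that the decision be made from both signals.

```python
import statistics
from typing import Sequence

PITCH_THRESHOLD_DEG = 2.0  # illustrative threshold, not taken from the patent


def on_or_under_viaduct(pitch_angles_deg: Sequence[float],
                        image_label: str) -> str:
    """Fuse pitch angles sampled over the first distance with the image
    identification result ('on_bridge' or 'under_bridge')."""
    mean_pitch = statistics.fmean(pitch_angles_deg)
    # A sustained climb at the fork suggests the vehicle is taking the ramp
    # onto the viaduct; otherwise it is assumed to stay at grade.
    inertial_vote = "on_bridge" if mean_pitch >= PITCH_THRESHOLD_DEG else "under_bridge"
    if inertial_vote == image_label:
        return inertial_vote
    # On disagreement this sketch simply defers to the image result; a real
    # system might instead weight the two cues or wait for more data.
    return image_label


# Example: a ~3-degree average climb and an 'on_bridge' image label.
print(on_or_under_viaduct([2.5, 3.0, 3.4], "on_bridge"))
```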
CN202110961517.7A 2019-09-29 2019-09-29 Viaduct area identification method and device in vehicle navigation Active CN113819910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110961517.7A CN113819910B (en) 2019-09-29 2019-09-29 Viaduct area identification method and device in vehicle navigation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110961517.7A CN113819910B (en) 2019-09-29 2019-09-29 Viaduct area identification method and device in vehicle navigation
CN201910939195.9A CN110617826B (en) 2019-09-29 2019-09-29 Method, device, equipment and storage medium for identifying overpass zone in vehicle navigation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910939195.9A Division CN110617826B (en) 2019-09-29 2019-09-29 Method, device, equipment and storage medium for identifying overpass zone in vehicle navigation

Publications (2)

Publication Number Publication Date
CN113819910A true CN113819910A (en) 2021-12-21
CN113819910B CN113819910B (en) 2024-10-11

Family

ID=68924941

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910939195.9A Active CN110617826B (en) 2019-09-29 2019-09-29 Method, device, equipment and storage medium for identifying overpass zone in vehicle navigation
CN202110961517.7A Active CN113819910B (en) 2019-09-29 2019-09-29 Viaduct area identification method and device in vehicle navigation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910939195.9A Active CN110617826B (en) 2019-09-29 2019-09-29 Method, device, equipment and storage medium for identifying overpass zone in vehicle navigation

Country Status (1)

Country Link
CN (2) CN110617826B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116358574A (en) * 2021-12-23 2023-06-30 北京嘀嘀无限科技发展有限公司 Method and system for judging whether vehicle is on overhead
CN116704466A (en) * 2023-06-29 2023-09-05 北京汇通天下物联科技有限公司 Vehicle position recognition method, device, computer equipment and storage medium
CN118936450A (en) * 2024-07-25 2024-11-12 成都睿沿芯创科技有限公司 Aircraft positioning method, program product, electronic device and storage medium

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310675A (en) * 2020-02-20 2020-06-19 上海赛可出行科技服务有限公司 Overhead identification auxiliary positioning method based on convolutional neural network
CN111504334B (en) * 2020-04-13 2022-01-11 腾讯科技(深圳)有限公司 Road updating method and device of electronic map, computer equipment and storage medium
CN113758490B (en) * 2020-06-01 2024-06-14 南宁富联富桂精密工业有限公司 Method for judging entering ramp and navigation system
CN112197780B (en) * 2020-09-15 2022-11-01 汉海信息技术(上海)有限公司 Path planning method and device and electronic equipment
CN114519935B (en) * 2020-11-20 2023-06-06 华为技术有限公司 Road recognition method and device
CN113792589B (en) * 2021-08-06 2022-09-09 荣耀终端有限公司 A kind of overhead identification method and device
CN114199343B (en) * 2021-11-19 2025-06-20 领翌技术(横琴)有限公司 Method and device for measuring liquid level in container
CN114581888A (en) * 2022-03-10 2022-06-03 合众新能源汽车有限公司 Vehicle driving position identification method, system, device and computer readable medium
CN115046558B (en) * 2022-04-28 2025-06-24 东风汽车有限公司东风日产乘用车公司 Overhead navigation method, electronic device and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101290230B (en) * 2008-04-14 2011-03-30 深圳市凯立德软件技术股份有限公司 A navigation method for an intersection and a navigation system using the navigation method
US20140362082A1 (en) * 2011-05-03 2014-12-11 Google Inc. Automated Overpass Extraction from Aerial Imagery
CN105547309B (en) * 2015-12-02 2018-11-02 百度在线网络技术(北京)有限公司 A kind of recognition methods of overpass road and device
US9558467B1 (en) * 2016-03-02 2017-01-31 Software Ag Systems and/or methods for grid-based multi-level digitization of enterprise models
CN106989743A (en) * 2017-03-31 2017-07-28 上海电机学院 A kind of energy automatic sensing passes in and out the apparatus for vehicle navigation of overpass information
US10248754B2 (en) * 2017-05-23 2019-04-02 Globalfoundries Inc. Multi-stage pattern recognition in circuit designs
KR102035592B1 (en) * 2017-12-27 2019-10-23 소프트온넷(주) A supporting system and method that assist partial inspections of suspicious objects in cctv video streams by using multi-level object recognition technology to reduce workload of human-eye based inspectors
CN108334831A (en) * 2018-01-26 2018-07-27 中南大学 A kind of monitoring image processing method, monitoring terminal and system
CN109635737B (en) * 2018-12-12 2021-03-26 中国地质大学(武汉) Auxiliary vehicle navigation and positioning method based on road marking line visual recognition
CN109919177B (en) * 2019-01-23 2022-03-29 西北工业大学 Feature selection method based on hierarchical deep network
CN110031010A (en) * 2019-04-09 2019-07-19 百度在线网络技术(北京)有限公司 Vehicle guiding route method for drafting, device and equipment
CN110221328A (en) * 2019-07-23 2019-09-10 广州小鹏汽车科技有限公司 A kind of Combinated navigation method and device

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR970049966A (en) * 1995-12-14 1997-07-29 김태구 Device for detecting position of vehicle and method of detecting duplex road in the device
CN101241188A (en) * 2007-02-06 2008-08-13 通用汽车环球科技运作公司 Collision avoidance system and method of detecting overpass locations using data fusion
CN102829791A (en) * 2011-06-14 2012-12-19 上海博泰悦臻电子设备制造有限公司 Vehicle-mounted terminal based navigation unit and navigation path correction method
CN107077782A (en) * 2014-08-06 2017-08-18 李宗志 Adaptive and/or autonomous traffic control systems and methods
CN107179088A (en) * 2017-04-14 2017-09-19 深圳市国科微半导体股份有限公司 A kind of automobile navigation method and its device based on overhead road surface
CN108873040A (en) * 2017-05-16 2018-11-23 通用汽车环球科技运作有限责任公司 Method and apparatus for detecting road layer position
CN107883969A (en) * 2017-10-31 2018-04-06 平潭诚信智创科技有限公司 A kind of viaduct intelligent navigation method based on RFID
US10140553B1 (en) * 2018-03-08 2018-11-27 Capital One Services, Llc Machine learning artificial intelligence system for identifying vehicles
CN109737971A (en) * 2019-03-18 2019-05-10 爱驰汽车有限公司 Vehicle-mounted assisting navigation positioning system, method, equipment and storage medium
CN109883438A (en) * 2019-03-21 2019-06-14 斑马网络技术有限公司 Vehicle navigation method, device, medium and electronic device
CN110164164A (en) * 2019-04-03 2019-08-23 浙江工业大学之江学院 The method for identifying complicated road precision using camera shooting function enhancing Mobile Telephone Gps software
CN110175533A (en) * 2019-05-07 2019-08-27 平安科技(深圳)有限公司 Overpass traffic condition method of real-time, device, terminal and storage medium
CN110060493A (en) * 2019-05-16 2019-07-26 维智汽车电子(天津)有限公司 Lane location method, apparatus and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG LIYUAN; WANG NA: "Research on license plate character recognition technology based on two-level combined classification", Information & Computer (Theoretical Edition), no. 11, pages 62-64 *

Also Published As

Publication number Publication date
CN110617826B (en) 2021-10-01
CN110617826A (en) 2019-12-27
CN113819910B (en) 2024-10-11

Similar Documents

Publication Publication Date Title
CN110617826B (en) Method, device, equipment and storage medium for identifying overpass zone in vehicle navigation
CN111959495B (en) Vehicle control method and device and vehicle
US11887376B2 (en) Method and apparatus of estimating road condition, and method and apparatus of establishing road condition estimation model
CN110556012B (en) Lane positioning method and vehicle positioning system
CN111649739B (en) Positioning method and device, self-driving vehicle, electronic device and storage medium
US20200042800A1 (en) Image acquiring system, terminal, image acquiring method, and image acquiring program
CN112581763A (en) Method, device, equipment and storage medium for detecting road event
CN109429518A (en) Automatic Pilot traffic forecast based on map image
CN110377025A (en) Sensor aggregation framework for automatic driving vehicle
CN108732589A (en) The training data of Object identifying is used for using 3D LIDAR and positioning automatic collection
CN111985662B (en) Network vehicle-restraining method, device, electronic equipment and storage medium
CN110389580A (en) Method for planning the drift correction in the path of automatic driving vehicle
JP2020521191A (en) Driving method along lanes without maps and positioning for autonomous driving of autonomous vehicles on highways
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN111311906B (en) Intersection distance detection method and device, electronic equipment and storage medium
CN114080537A (en) Collecting user contribution data relating to a navigable network
CN110660219A (en) Parking lot parking prediction method and device
US20170343372A1 (en) Navigation system with vision augmentation mechanism and method of operation thereof
CN110809706A (en) Providing street level images related to ride services in a navigation application
CN111442775B (en) Road identification method and device, electronic equipment and readable storage medium
US12233914B2 (en) Automatic driving-based riding method, apparatus and device, and storage medium
CN114689074B (en) Information processing method and navigation method
CN113124887A (en) Route information processing method, device, equipment and storage medium
US20210364306A1 (en) Map selection device, storage medium storing computer program for map selection and map selection method
WO2008047449A1 (en) Image display device, image display method, image display program, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant