
CN115140055B - A driving navigation system based on scene perception and dynamic multi-source fusion - Google Patents


Info

Publication number
CN115140055B
Authority
CN
China
Prior art keywords
vehicle
navigation
demand
scene
real
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202211031301.1A
Other languages
Chinese (zh)
Other versions
CN115140055A (en)
Inventor
张凯元
张凯斐
Current Assignee (listed assignees may be inaccurate)
Shaanxi Junkai Technology Group Co ltd
Original Assignee
Shaanxi Junkai Technology Group Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Shaanxi Junkai Technology Group Co ltd
Priority to CN202211031301.1A
Publication of CN115140055A
Application granted
Publication of CN115140055B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/182 Selecting between different operative modes, e.g. comfort and performance modes
    • B60W30/0956 Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W30/18163 Lane change; overtaking manoeuvres
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W2050/146 Display means

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a driving navigation system based on scene perception and dynamic multi-source fusion. Scenes inside and outside the vehicle are acquired, a satellite navigation map is converted into a 3D digital visual map, and fused navigation is finally realized. Scene perception acquires the in-vehicle and out-of-vehicle scenes, judges the driver's state, records bad driving habits, identifies abnormal events in the vehicle, and observes the passengers to determine whether they are asleep or in another abnormal state, so that the driver can be reminded or the passengers' needs attended to. The digital map marks the out-of-vehicle abnormal data obtained by scene perception, retains all prior-art navigation and traffic-light marking functions, and integrates navigation so that routes are adjusted dynamically according to the in-vehicle and out-of-vehicle scenes and the driver's needs.

Description

Driving navigation system based on scene perception and dynamic multi-source fusion
Technical Field
The invention relates to the technical field of traffic navigation, in particular to a driving navigation system based on scene perception and dynamic multi-source fusion.
Background
At present, navigation is generally realized through map data, a rendering engine, a route-search algorithm, map tile loading, inertial navigation, deviation correction and filtering, a TTS voice engine, intersection reminders and real-time road-condition data.
In the prior art, road congestion and road navigation can be displayed: congestion is estimated from the traffic flow on a road section, and since every vehicle carries GPS, each user effectively reports a position, which amounts to an automatic congestion-judging mechanism. Accidents and maintenance sections, however, are mainly learned from government notifications and drivers' real-time uploads. For an accident or road maintenance on a distant section the map can give notice, but for an accident or abnormal event that occurs suddenly only tens or hundreds of meters ahead, the vehicle is still navigated along the original route. The vehicle obviously needs to change lanes, yet whether to do so depends entirely on the driver's subjective awareness; in an unfamiliar city the driver cannot know whether, after changing lanes, the detour is longer or whether other accidents lie ahead. Because the navigation system does not know an accident exists, it cannot change the route automatically.
In addition, if the driver is fatigued (drowsy) or has bad driving habits (such as looking at a mobile phone or smoking), a route with fewer vehicles would be preferable. In the prior art, navigation always offers the shortest-distance, most time-saving routes that bring the driver to the target point fastest; for drivers with bad habits, other route options should be offered that emphasize fewer vehicles and as few driving accidents as possible.
Disclosure of Invention
The invention provides a driving navigation system based on scene perception and dynamic multi-source fusion, to solve the problems that prior-art navigation systems cannot navigate accurately and cannot realize in-vehicle supervision.
A driving navigation system based on scene perception and dynamic multi-source fusion comprises:
the in-vehicle scene perception module is used for acquiring real-time in-vehicle scene data by means of at least one in-vehicle scene sensing device;
the out-of-vehicle scene perception module is used for acquiring real-time out-of-vehicle scene data by means of at least one laser radar and out-of-vehicle scene sensing equipment;
The satellite map module is used for acquiring satellite positioning signals, generating an initial navigation map and generating a vehicle guide line on the initial navigation map;
The digital map module is used for fusing the real-time in-vehicle scene data and the real-time out-of-vehicle scene data to the initial navigation map to generate a 3D digital visual map;
the dynamic decision module is used for marking in-vehicle and out-of-vehicle demands through the 3D digital visual map and determining a corresponding navigation mode according to the demand marks, wherein
The navigation modes comprise an obstacle avoidance navigation mode required outside the vehicle, an automatic driving navigation mode required in the vehicle and a conventional navigation mode;
and the fusion navigation module is used for generating a global visual navigation line according to the navigation mode.
Further, the in-vehicle scene perception module comprises:
a first perception device configuration unit for configuring in-vehicle scene perception devices in advance in the vehicle, wherein,
The in-car scene sensing device comprises a temperature sensor, an illuminance sensor, an odor sensor and a multi-angle video monitor;
The multi-angle video monitor is respectively arranged on the roof, the bottom and the door, and is a self-adaptive angle adjusting video monitor;
The in-vehicle environment information acquisition unit is used for docking in-vehicle scene sensing equipment and acquiring in-vehicle real-time environment information through a vehicle terminal,
The real-time environment information comprises temperature, illuminance and peculiar smell;
an in-car personnel information acquisition unit for acquiring in-car personnel information through in-car monitoring equipment, wherein,
The personnel information in the vehicle comprises passenger number information and passenger distribution information;
an in-vehicle personnel age group judging unit for judging the age groups of the passengers and the driver in the vehicle and outputting age group information, wherein,
The age group comprises infants, teenagers, young, middle-aged and elderly people;
The driver gesture recognition unit is used for capturing the action gesture of the driver and acquiring real-time action information of the driver;
The passenger gesture recognition unit is used for capturing the action gesture of the passenger and acquiring real-time action information of the passenger;
and the scene data aggregation unit is used for aggregating the real-time environment information, the age group information, the real-time action information of the driver and the real-time action information of the passenger to generate in-vehicle scene data.
Further, the out-of-vehicle scene perception module comprises:
a second perception device configuration unit for configuring the external scene perception device on the vehicle side in advance, wherein,
The external scene sensing equipment comprises an infrared ranging camera device and high-precision image acquisition equipment;
the infrared ranging camera device and the high-precision image acquisition equipment are respectively arranged on the vehicle head body and the vehicle tail;
the laser radar unit is used for acquiring road information through a laser radar and carrying out real-time road simulation on different lanes to generate simulated lanes,
The real-time road simulation comprises real-time simulation of different elements on a lane;
the external sensing unit is used for acquiring external sensing data in real time through external scene sensing equipment,
The vehicle exterior sensing data comprises a vehicle exterior high-precision image, real-time distances of different elements and traffic signs;
the perception correction unit is used for correcting the perception data of the simulated lane with the out-of-vehicle sensing data to generate a high-precision simulated scene;
and the vehicle exterior scene acquisition unit is used for generating real-time vehicle exterior scene data through the high-precision simulation scene.
Further, the satellite map module comprises:
The track grabbing unit is used for carrying out real-time positioning on the vehicle according to the satellite positioning signals and generating a running track of the vehicle;
The initial navigation map unit is used for acquiring a destination set by a user and generating an initial planning line;
The driving line unit is used for marking the vehicle density on the initial planning line and taking the driving track as the driving line on the initial planning line;
A guiding line unit for judging whether the real-time driving track changes the initial planning line according to the driving line, generating a second planning line after the initial planning line changes, and generating a vehicle guiding line in the second planning line,
The number of the second planning lines is not less than 2;
Each line of the second planned line has a separate planning attribute, wherein,
The planning attributes at least comprise shortest time, shortest mileage, minimum traffic flow and minimum accident occurrence times.
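The attribute-based choice among the second planned lines can be sketched as follows; the data structures, field names and sample routes are hypothetical, since the patent does not specify them.

```python
# Illustrative sketch: one way the guiding-line unit could pick among the
# "second planned lines" by a single planning attribute. All field names
# and candidate data are hypothetical assumptions.

def select_planned_line(candidates, attribute):
    """Return the candidate route that minimizes the given planning attribute."""
    if len(candidates) < 2:
        # the text requires no fewer than 2 second planned lines
        raise ValueError("at least 2 candidate lines are required")
    return min(candidates, key=lambda route: route[attribute])

# Each route carries the four attributes the text names: shortest time,
# shortest mileage, minimum traffic flow, minimum accident count.
routes = [
    {"name": "A", "time_min": 32, "mileage_km": 18.4, "traffic_flow": 420, "accidents": 3},
    {"name": "B", "time_min": 41, "mileage_km": 15.9, "traffic_flow": 260, "accidents": 1},
]
```

A route minimizing traffic flow or accident count could then be preferred for a fatigued driver, as the background section suggests, while a hurried driver gets the shortest-time line.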
Further, the digital map module includes:
the in-vehicle scene analysis unit is used for analyzing the in-vehicle scene data, determining the gesture characteristics of different persons in the vehicle, judging the real-time in-vehicle demand according to the gesture characteristics, and generating a first demand text;
the out-of-vehicle scene analysis unit is used for analyzing the out-of-vehicle scene data, generating element characteristics of different lanes outside the vehicle, generating lane data and element distribution data, judging whether an avoidance demand exists according to the element characteristics, and generating a second demand text;
The first fusion unit is used for carrying out 3D conversion on the initial navigation map according to the lane data and the element distribution data to generate a 3D map;
And the second fusion unit is used for determining real-time demand information according to the first demand text and the second demand text, generating a labeling frame in the 3D map, filling the real-time demand information and generating the 3D digital visual map.
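As an illustration of the second fusion unit's labeling step, the sketch below merges the first and second demand texts into a single labeling-frame payload for the 3D map; the payload structure and all names are assumptions, not the patent's actual format.

```python
def build_annotation_frame(first_demand_text, second_demand_text):
    """Merge the in-vehicle (first) and out-of-vehicle (second) demand
    texts into one labeling frame attachable to the 3D digital visual map.
    Empty texts are skipped so the frame holds only real-time demands."""
    demands = []
    if first_demand_text:
        demands.append({"source": "in-vehicle", "text": first_demand_text})
    if second_demand_text:
        demands.append({"source": "out-of-vehicle", "text": second_demand_text})
    return {"type": "annotation_frame", "demands": demands}

frame = build_annotation_frame("passenger asleep: lower cabin light",
                               "obstacle ahead: avoid left lane")
```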
Further, the dynamic decision module comprises:
the demand marking unit is used for marking in-vehicle and out-of-vehicle demands according to the 3D digital visual map, wherein,
The vehicle interior and exterior demand marks comprise obstacle avoidance marks, passenger demand marks and driver demand marks;
The mark analysis unit is used for judging the type of the demand and the object of the demand according to the inside and outside demand mark of the vehicle and setting navigation behavior, wherein,
The demand types include driver demand, passenger demand and out-of-vehicle obstacle avoidance demand;
The navigation behavior comprises a speed control behavior, a stability adjustment behavior, a driver reminding behavior, a path regulation behavior, an obstacle reminding behavior, a lane switching behavior and a vehicle automatic braking behavior;
the navigation mode decision unit is used for constructing a navigation mode according to the navigation behavior, the demand type and the demand object, and setting navigation control parameters corresponding to the navigation mode, wherein
constructing the navigation mode comprises:
When the demand type is obstacle avoidance, generating an obstacle avoidance navigation mode of the demand outside the vehicle,
When the required objects are passengers and drivers, generating an automatic driving navigation mode of the in-vehicle requirement;
when there is neither a demand type nor a demand object, a conventional navigation mode is generated.
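The three-way mode decision above can be expressed as a small dispatch function; the string constants are hypothetical names for the modes and marks described in the text.

```python
def decide_navigation_mode(demand_type, demand_object):
    """Map the marked demand to one of the three navigation modes:
    out-of-vehicle obstacle avoidance, in-vehicle autonomous driving,
    or conventional navigation when nothing is marked."""
    if demand_type == "obstacle_avoidance":
        return "out_of_vehicle_obstacle_avoidance_mode"
    if demand_object in ("driver", "passenger"):
        return "in_vehicle_autonomous_mode"
    if demand_type is None and demand_object is None:
        return "conventional_mode"
    raise ValueError("unrecognized demand mark")
```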
Further, the generating the obstacle avoidance navigation mode of the external requirement of the vehicle comprises the following steps:
determining real-time demand information according to the inside and outside demand marks of the vehicle;
Determining corresponding lanes and vehicle distances according to the real-time demand information;
Generating a lane switching instruction and a speed adjusting instruction according to the lanes and the 3D digital visual map;
Determining a switched lane according to the lane switching instruction;
and carrying out real-time speed regulation according to the speed regulation instruction, determining the time and distance for switching lanes according to the real-time speed regulation, generating corresponding off-vehicle control parameters, and generating an obstacle avoidance navigation mode required by the outside of the vehicle through the off-vehicle control parameters.
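A minimal sketch of the speed-regulation step above, assuming constant deceleration and metric units; the kinematics and the default deceleration value are illustrative assumptions, not the patent's control law.

```python
def plan_lane_switch(current_speed_mps, target_speed_mps,
                     distance_to_obstacle_m, decel_mps2=2.0):
    """Estimate the time and distance available for the lane switch after
    slowing from the current to the target speed.

    Assumes constant deceleration; the slow-down distance uses
    average-speed kinematics: d = (v0 + v1) / 2 * t.
    """
    slow_time = max(0.0, (current_speed_mps - target_speed_mps) / decel_mps2)
    slow_dist = 0.5 * (current_speed_mps + target_speed_mps) * slow_time
    remaining_m = distance_to_obstacle_m - slow_dist
    switch_time_s = remaining_m / target_speed_mps if target_speed_mps > 0 else 0.0
    return {"slow_time_s": slow_time,
            "switch_distance_m": remaining_m,
            "switch_time_s": switch_time_s}
```

The returned values would feed the "off-vehicle control parameters" the text mentions; a real controller would also bound `remaining_m` below by a safety margin.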
Further, the automatic driving navigation mode for generating the in-vehicle demand comprises the following steps:
determining a demand object according to the inside and outside demand marks of the vehicle, wherein,
The demand object is a driver or a passenger;
Determining corresponding demand information according to the demand object;
Setting an automatic driving navigation mode required in the vehicle according to the requirement information, wherein,
When the demand object is a driver, starting an automatic driving mode, and generating a corresponding voice instruction according to the demand information;
And when the demand object is a passenger, starting a semi-automatic driving mode and sending vehicle regulation information to a driver.
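The two in-vehicle branches (a driver demand starts the automatic driving mode plus a voice instruction; a passenger demand starts the semi-automatic driving mode and notifies the driver) can be sketched as a dispatch; all identifiers are hypothetical.

```python
def handle_in_vehicle_demand(demand_object, demand_info):
    """Dispatch an in-vehicle demand per the steps above: driver demands
    trigger full automatic driving with a voice instruction; passenger
    demands trigger semi-automatic driving and a notice to the driver."""
    if demand_object == "driver":
        return {"mode": "automatic", "action": ("voice_instruction", demand_info)}
    if demand_object == "passenger":
        return {"mode": "semi_automatic", "action": ("notify_driver", demand_info)}
    raise ValueError("demand object must be 'driver' or 'passenger'")
```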
Further, the generating the regular navigation pattern includes:
destination display is carried out on the 3D digital visual map;
determining a congestion road section and a low-flow road section in a planned route according to the destination display;
And according to the crowded road section and the low-traffic road section, carrying out dynamic route guidance on the 3D digital visual map, and taking the dynamic route guidance as a conventional navigation mode.
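A sketch of how the conventional mode might classify planned-route segments into congested and low-flow sections for dynamic guidance; the flow/capacity ratio threshold and field names are illustrative assumptions.

```python
def dynamic_route_guidance(segments, congestion_ratio=0.8):
    """Flag each planned-route segment as congested or low-flow and mark
    congested segments for rerouting. A segment counts as congested when
    its flow exceeds `congestion_ratio` of its capacity (assumed rule)."""
    guidance = []
    for seg in segments:
        congested = seg["flow"] > congestion_ratio * seg["capacity"]
        guidance.append({**seg,
                         "status": "congested" if congested else "low_flow",
                         "reroute": congested})
    return guidance
```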
Further, the fusion navigation module comprises:
The navigation selection unit is used for screening a corresponding navigation control scheme according to the navigation mode;
The global navigation unit is used for displaying global navigation line information on display equipment in the vehicle in real time according to the navigation control scheme;
and the visual display unit is used for simulating the inside and outside of the vehicle at the position of the user according to the global navigation circuit information and visually displaying the simulated scene.
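The navigation selection unit's screening step can be pictured as a lookup from the decided mode to a bundle of the navigation behaviors listed earlier; the grouping shown here is an assumption, not the patent's actual mapping.

```python
# Hypothetical mode-to-scheme table; behavior names follow the mark
# analysis unit above, but which behaviors belong to which mode is assumed.
CONTROL_SCHEMES = {
    "out_of_vehicle_obstacle_avoidance_mode":
        ["speed_control", "lane_switching", "obstacle_reminding", "automatic_braking"],
    "in_vehicle_autonomous_mode":
        ["driver_reminding", "stability_adjustment", "path_regulation"],
    "conventional_mode":
        ["dynamic_route_guidance"],
}

def select_control_scheme(navigation_mode):
    """Screen the control scheme corresponding to the decided mode."""
    return CONTROL_SCHEMES[navigation_mode]
```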
The method has the beneficial effects that motion-gesture data of the passengers and the driver can be collected, so that during navigation the in-vehicle environment can be adjusted and voice reminders given according to those gestures, and the vehicle speed can be adjusted in an assisted manner.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a system composition diagram of a driving navigation system based on scene perception and dynamic multisource fusion in an embodiment of the invention;
FIG. 2 is a flow chart illustrating a configuration of an obstacle avoidance navigation mode for vehicle exterior demand in an embodiment of the present invention;
FIG. 3 is a flow chart illustrating a configuration of an automatic driving navigation mode for in-vehicle demand according to an embodiment of the present invention;
fig. 4 is a flow chart illustrating a configuration of a conventional navigation mode in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
As shown in fig. 1, the present invention is a driving navigation system based on scene perception and dynamic multisource fusion, comprising:
the in-vehicle scene perception module is used for acquiring real-time in-vehicle scene data by means of at least one in-vehicle scene sensing device;
the out-of-vehicle scene perception module is used for acquiring real-time out-of-vehicle scene data by means of at least one laser radar and out-of-vehicle scene sensing equipment;
The satellite map module is used for acquiring satellite positioning signals, generating an initial navigation map and generating a vehicle guide line on the initial navigation map;
The digital map module is used for fusing the real-time in-vehicle scene data and the real-time out-of-vehicle scene data to the initial navigation map to generate a 3D digital visual map;
the dynamic decision module is used for marking in-vehicle and out-of-vehicle demands through the 3D digital visual map and determining a corresponding navigation mode according to the demand marks, wherein
The navigation modes comprise an obstacle avoidance navigation mode required outside the vehicle, an automatic driving navigation mode required in the vehicle and a conventional navigation mode;
and the fusion navigation module is used for generating a global visual navigation line according to the navigation mode.
The principle of the technical scheme is as follows: as shown in Fig. 1, the invention consists of six modules. Scene perception acquires the scenes inside and outside the vehicle, judges the driver's state, records the driver's bad habits, identifies abnormal events in the vehicle, and observes the passengers to judge whether they are asleep or in another abnormal state, so that the driver can be reminded or the passengers' needs attended to. The digital map marks the out-of-vehicle abnormal data obtained by scene perception, retains all prior-art navigation and traffic-light marking functions, and integrates navigation so that routes follow the in-vehicle and out-of-vehicle scenes and the driver's needs dynamically. When the driver shows bad habits or driving fatigue, roads with fewer vehicles are recommended preferentially and the navigation route is adjusted dynamically in real time. When an abnormality occurs outside the vehicle, the route is adjusted according to its cause; face recognition is also performed for driver authentication.
Compared with the prior art, the invention can identify user demands, whether subjective demands expressed through actions or unconscious demands arising from fatigue or a stuffy cabin, and can control the vehicle automatically according to those demands, thereby integrating intelligent navigation with intelligent driving and realizing highly intelligent driving and control functions.
The technical scheme has the advantages that 3D visual navigation can be realized; both the unconscious and the active demands of persons in the vehicle can be met; speed regulation and on-demand functions can be activated in real time; visual navigation information is displayed in real time; and finally intelligent navigation combining intelligent driving and intelligent routing is achieved.
Further, the in-vehicle scene perception module comprises:
a first perception device configuration unit for configuring in-vehicle scene perception devices in advance in a vehicle, wherein,
The in-car scene sensing device comprises a temperature sensor, an illuminance sensor, an odor sensor and a multi-angle video monitor;
The multi-angle video monitor is respectively arranged on the roof, the bottom and the door, and is a self-adaptive angle adjusting video monitor;
The in-vehicle environment information acquisition unit is used for docking in-vehicle scene sensing equipment and acquiring in-vehicle real-time environment information through a vehicle terminal,
The real-time environment information comprises temperature, illuminance and peculiar smell;
an in-car personnel information acquisition unit for acquiring in-car personnel information through in-car monitoring equipment, wherein,
The personnel information in the vehicle comprises passenger number information and passenger distribution information;
an in-vehicle personnel age group judging unit for judging the age groups of the passengers and the driver in the vehicle and outputting age group information, wherein,
The age group comprises infants, teenagers, young, middle-aged and elderly people;
The driver gesture recognition unit is used for capturing the action gesture of the driver and acquiring real-time action information of the driver;
The passenger gesture recognition unit is used for capturing the action gesture of the passenger and acquiring real-time action information of the passenger;
and the scene data aggregation unit is used for aggregating the real-time environment information, the age group information, the real-time action information of the driver and the real-time action information of the passenger to generate in-vehicle scene data.
The principle of the technical scheme is as follows:
When the in-vehicle scene is collected, the action gestures of the occupants are analyzed mainly through the various sensors and video, judging whether each person is in a normal sitting position, asleep, or in some other state. The invention senses human posture and the in-vehicle environment, so that, for example, the air conditioner is turned on automatically when the temperature is too high. The age bracket of each passenger can also be identified, which aids gesture recognition, since passengers of different ages move with different amplitudes and in different ways. The in-vehicle environment information also includes illuminance and odor: if the illuminance is high and an occupant needs to sleep, the illuminance is adjusted automatically, for example by switching the glass to a dark mode; if the in-vehicle illuminance is too low, it can be increased. For odor, the in-vehicle ventilation equipment is controlled to ventilate automatically.
The beneficial effects are that motion-gesture data of the passengers and the driver can be collected, so that during navigation the in-vehicle environment can be adjusted and voice reminders given according to those gestures, and the vehicle speed can be adjusted in an assisted manner. Most important is the in-vehicle environment collection: the in-vehicle scene is built, a clearer and more realistic simulated scene is obtained, more accurate scene data result, and the real-time demands of the persons in the vehicle can be judged from those data.
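The environment adjustments described above (air conditioning on high temperature, dark-mode glass for a sleeping occupant, extra light when too dim, ventilation on odor) can be sketched as a rule table; every threshold below is an illustrative assumption.

```python
def adjust_cabin(temperature_c, illuminance_lux, odor_level, occupant_sleeping):
    """Return the in-vehicle environment adjustments for one sensor reading.

    Thresholds (28 C, 300/50 lux, 0.5 normalized odor) are assumptions
    for illustration, not values from the patent.
    """
    actions = []
    if temperature_c > 28.0:
        actions.append("turn_on_air_conditioner")
    if occupant_sleeping and illuminance_lux > 300.0:
        actions.append("switch_glass_to_dark_mode")
    elif illuminance_lux < 50.0:
        actions.append("increase_illuminance")
    if odor_level > 0.5:  # normalized odor-sensor reading in [0, 1]
        actions.append("ventilate")
    return actions
```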
Further, the out-of-vehicle scene perception module comprises:
a second perception device configuration unit for configuring the external scene perception device on the vehicle side in advance, wherein,
The external scene sensing equipment comprises an infrared ranging camera device and high-precision image acquisition equipment;
the infrared ranging camera device and the high-precision image acquisition equipment are respectively arranged on the vehicle head body and the vehicle tail;
the laser radar unit is used for acquiring road information through a laser radar and carrying out real-time road simulation on different lanes to generate simulated lanes,
The real-time road simulation comprises real-time simulation of different elements on a lane;
the external sensing unit is used for acquiring external sensing data in real time through external scene sensing equipment,
The vehicle exterior sensing data comprises a vehicle exterior high-precision image, real-time distances of different elements and traffic signs;
the perception correction unit is used for correcting the perception data of the simulated lane with the out-of-vehicle sensing data to generate a high-precision simulated scene;
and the vehicle exterior scene acquisition unit is used for generating real-time vehicle exterior scene data through the high-precision simulation scene.
The principle of the technical scheme is as follows:
The greatest problem in the prior art is that one cannot know whether the laser radar measures accurately and obtains an accurate result, that is, whether the distances of the different vehicles in the perceived scene area are accurate. In the prior art, besides the laser radar, a camera device is generally used to assist scene perception; this is the basis of automatic driving technology, but automatic driving is seldom combined with the navigation system, and the invention improves on this point. The laser radar here constructs a lane-level three-dimensional scene map, realizes road simulation and generates a simulated lane carrying the different elements that may exist on the lane, mainly vehicles, pedestrians and other obstacles. The laser radar alone, however, cannot be relied on for accuracy: it only senses distance, while the vehicle-side sensing data include data for correcting it, so the camera device corrects both the distances and the authenticity of the different elements. That is, pedestrians and vehicles are displayed more clearly on the simulated lane, and abnormal phenomena on the lane, such as a pit, a road collapse needing maintenance, or a traffic accident, are displayed as well.
Based on this principle, the intelligent navigation system combining a driver-assistance function must serve the various requirements that arise during driving, including the driving requirements of the driver and the riding-comfort and experience requirements of the passengers. Therefore, in the aspect of scene perception, the external real-time scene is constructed from multiple sources, so that it can be rendered more realistically.
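The distance correction described above can be sketched in code. The following is a minimal illustrative sketch, not the patent's implementation: the `Element` structure, the `correct_distance` helper, the tolerance value and the blending weights are all hypothetical, standing in for the perception correction unit that reconciles laser-radar distances with camera-based estimates.

```python
from dataclasses import dataclass

@dataclass
class Element:
    """An element perceived on the simulated lane (vehicle, pedestrian, obstacle)."""
    kind: str
    lidar_distance_m: float   # distance measured by the laser radar
    camera_distance_m: float  # distance estimated by the infrared ranging camera

def correct_distance(elem: Element, tolerance: float = 0.15) -> float:
    """Correct the lidar distance with the camera estimate.

    If the two sensors agree within `tolerance` (relative error), keep the
    lidar value; otherwise fall back to a weighted blend that leans on the
    camera, which here plays the role of the correcting sensor.
    """
    rel_err = abs(elem.lidar_distance_m - elem.camera_distance_m) / max(elem.camera_distance_m, 1e-6)
    if rel_err <= tolerance:
        return elem.lidar_distance_m
    # disagreement: blend, weighting the correcting sensor more heavily
    return 0.3 * elem.lidar_distance_m + 0.7 * elem.camera_distance_m

car = Element("vehicle", lidar_distance_m=42.0, camera_distance_m=41.5)
ped = Element("pedestrian", lidar_distance_m=12.0, camera_distance_m=18.0)
print(correct_distance(car))  # sensors agree -> lidar kept: 42.0
print(correct_distance(ped))  # sensors disagree -> blended value
```

The check-then-blend shape is only one possible policy; a real perception correction unit would also use the camera to confirm the authenticity of each element, not just its distance.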
Further, the satellite map module comprises:
The track grabbing unit is used for carrying out real-time positioning on the vehicle according to the satellite positioning signals and generating a running track of the vehicle;
The initial navigation map unit is used for acquiring a destination set by a user and generating an initial planning line;
The driving line unit is used for marking the vehicle density on the initial planning line and taking the driving track as the driving line on the initial planning line;
A guiding line unit for judging whether the real-time driving track changes the initial planning line according to the driving line, generating a second planning line after the initial planning line changes, and generating a vehicle guiding line in the second planning line,
The number of the second planning lines is not less than 2;
Each line of the second planned line has a separate planning attribute, wherein,
The planning attributes at least comprise shortest time, shortest mileage, minimum traffic flow and minimum accident occurrence times.
The principle of this technical scheme is that the 3D digital visual map of the invention is converted from the initial navigation map obtained via satellite. That map carries the initial route obtained from initial positioning according to the destination set by the user; the initial navigation map marks the vehicle density on the travel route, and planned routes, each with its own vehicle guide line, are generated from the continuously updated travel of the driver. On the 3D digital visual map the user can see not only the navigation route but also the real-time demand information inside and outside the vehicle and the real-time control adjustment information; the invention is applicable to intelligent vehicles with multi-screen split-screen display.
The beneficial effect of this technical scheme is that the running track of the vehicle is determined through the satellite positioning signal and a first planned line, i.e. the initial travel line, is planned, which is the same navigation technique as in the prior art. The difference lies in the line guidance: according to the running track the system judges whether the originally planned line needs to be replaced, and generates a guide line. Of course, the prior art also re-plans the route after the initial travel track changes. The invention differs in that vehicle-density marking is performed during line planning, several candidate lines are generated, and each line carries its own planning attribute; the planning attributes are user-configurable and can, for example, be set to "most restaurants" or "epidemic-free line".
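The per-line planning attributes can be illustrated with a short sketch. Everything here is hypothetical (the route metrics, the attribute names, the `select_route` helper); it only shows how a set of second planned lines, each associated with one planning attribute, reduces to a simple minimisation over the corresponding metric.

```python
# Hypothetical candidate routes produced as "second planned lines"; each route
# carries the metrics behind the planning attributes named in the patent.
routes = [
    {"name": "R1", "time_min": 35, "mileage_km": 22.0, "traffic_flow": 480, "accidents": 3},
    {"name": "R2", "time_min": 41, "mileage_km": 18.5, "traffic_flow": 610, "accidents": 1},
    {"name": "R3", "time_min": 38, "mileage_km": 25.0, "traffic_flow": 300, "accidents": 2},
]

# Each planning attribute maps to the metric it minimises.
ATTRIBUTES = {
    "shortest_time": "time_min",
    "shortest_mileage": "mileage_km",
    "minimum_traffic_flow": "traffic_flow",
    "minimum_accidents": "accidents",
}

def select_route(candidates, attribute):
    """Pick the candidate route that best satisfies the chosen planning attribute."""
    key = ATTRIBUTES[attribute]
    return min(candidates, key=lambda r: r[key])

print(select_route(routes, "shortest_time")["name"])         # R1
print(select_route(routes, "minimum_traffic_flow")["name"])  # R3
```

A user-defined attribute such as "most restaurants" would simply add another metric column and (for maximisation) flip `min` to `max`.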
Further, the digital map module includes:
The in-vehicle scene analysis unit is used for analyzing the in-vehicle scene data, determining the gesture characteristics of the different persons in the vehicle, judging the real-time in-vehicle demands according to the gesture characteristics, and generating a first demand text;
The out-of-vehicle scene analysis unit is used for analyzing the out-of-vehicle scene data, generating element characteristics of the different lanes outside the vehicle, generating lane data and element distribution data, judging whether an avoidance demand exists according to the element characteristics, and generating a second demand text;
The first fusion unit is used for carrying out 3D conversion on the initial navigation map according to the lane data and the element distribution data to generate a 3D map;
And the second fusion unit is used for determining real-time demand information according to the first demand text and the second demand text, generating a labeling frame in the 3D map, filling the real-time demand information and generating the 3D digital visual map.
The principle of this technical scheme is that the visual data for generating the 3D digital visual map mainly reflect the demands that can be judged in real time from the in-vehicle and out-of-vehicle scene data; the corresponding demand text is generated and displayed in the labeling frame on the 3D digital visual map. In-vehicle analysis and out-of-vehicle analysis are performed separately. Through in-vehicle analysis the state of a passenger can be judged: for example, if a passenger is asleep, the speed is reduced and the stability of the vehicle is maintained; if the driver is fatigued, the navigation system of the vehicle issues a fatigue reminder. Outside the vehicle, the demands are to avoid different obstacles and to switch lanes. All these demands are displayed on the 3D digital visual map.
The technical scheme has the beneficial effects that the 3D digital map generated by the invention can mark different requirements of users, including the requirements in the vehicle and the requirements outside the vehicle. Through these demands, different vehicle intelligent navigation functions are realized.
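As an illustrative sketch of how the first and second demand texts might be gathered into labeling frames for the 3D digital visual map (the frame structure and the function name are assumptions, not the patent's format):

```python
def build_annotation_frames(first_demand_text, second_demand_text):
    """Gather the in-vehicle (first) and out-of-vehicle (second) demand texts
    into labeling frames to be rendered on the 3D digital visual map; an
    empty text produces no frame."""
    frames = []
    if first_demand_text:
        frames.append({"source": "in_vehicle", "text": first_demand_text})
    if second_demand_text:
        frames.append({"source": "out_of_vehicle", "text": second_demand_text})
    return frames

# Example demands taken from the description: a sleeping passenger and an
# obstacle that requires avoidance.
frames = build_annotation_frames(
    "passenger asleep: reduce speed and keep the vehicle stable",
    "obstacle ahead in lane 2: avoidance required",
)
for frame in frames:
    print(f"[{frame['source']}] {frame['text']}")
```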
Further, the dynamic decision module comprises:
The demand marking unit is used for marking the inside and outside of the vehicle according to the 3D digital visual map, wherein,
The vehicle interior and exterior demand marks comprise obstacle avoidance marks, passenger demand marks and driver demand marks;
The mark analysis unit is used for judging the type of the demand and the object of the demand according to the inside and outside demand mark of the vehicle and setting navigation behavior, wherein,
The demand types include driver demand, passenger demand and out-of-vehicle obstacle avoidance demand;
The navigation behavior comprises a speed control behavior, a stability adjustment behavior, a driver reminding behavior, a path regulation behavior, an obstacle reminding behavior, a lane switching behavior and a vehicle automatic braking behavior;
The navigation mode decision unit is used for constructing a navigation mode according to the navigation behavior, the demand type and the demand object, and for setting navigation control parameters corresponding to the navigation mode, wherein
constructing the navigation mode comprises:
when the demand type is the out-of-vehicle obstacle avoidance demand, generating the obstacle avoidance navigation mode for out-of-vehicle demands;
when the demand object is a passenger or a driver, generating the automatic driving navigation mode for in-vehicle demands;
when there is neither a demand type nor a demand object, generating the conventional navigation mode.
The principle of this technical scheme is that the dynamic decision module makes the navigation-mode decision among three modes: the obstacle avoidance navigation mode for out-of-vehicle demands, the automatic driving navigation mode for in-vehicle demands, and the conventional navigation mode. In all three modes, after the demand marks are obtained and analyzed, the corresponding navigation behaviors are determined, namely the speed control, stability adjustment, driver reminding, path regulation, obstacle reminding, lane switching and automatic braking behaviors; through these behaviors the navigation decision is realized and the corresponding functions executed by the navigation are configured.
The technical scheme has the advantages that after the navigation map is converted into the 3D digital visual map, the 3D digital visual map can be marked according to the requirements inside and outside the vehicle, the requirements are embodied in the form of a requirement frame, setting parameters corresponding to the vehicle are set for the requirements, and corresponding navigation control is carried out through the set parameters.
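The three-way mode decision above can be written as a small rule function. This is a minimal sketch under assumed names (`decide_navigation_mode` and the string labels are illustrative), mirroring the three cases: an out-of-vehicle obstacle avoidance demand wins, then an in-vehicle demand from a passenger or driver, otherwise the conventional mode.

```python
def decide_navigation_mode(demand_type, demand_object):
    """Rule-based navigation-mode decision over the demand marks.

    demand_type:   e.g. "obstacle_avoidance", or None when absent
    demand_object: "passenger", "driver", or None when absent
    """
    if demand_type == "obstacle_avoidance":
        return "obstacle_avoidance_navigation"   # out-of-vehicle demand
    if demand_object in ("passenger", "driver"):
        return "automatic_driving_navigation"    # in-vehicle demand
    return "conventional_navigation"             # no demand marked

print(decide_navigation_mode("obstacle_avoidance", None))  # obstacle_avoidance_navigation
print(decide_navigation_mode(None, "passenger"))           # automatic_driving_navigation
print(decide_navigation_mode(None, None))                  # conventional_navigation
```

Placing the obstacle avoidance branch first encodes the natural priority: a safety-critical out-of-vehicle demand overrides comfort-oriented in-vehicle demands.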
Further, as shown in fig. 2, the generating the obstacle avoidance navigation mode of the external requirement of the vehicle includes:
Determining corresponding lanes and vehicle distances according to the real-time demand information;
Generating a lane switching instruction and a speed adjusting instruction according to the lanes and the 3D digital visual map;
Determining a switched lane according to the lane switching instruction;
and carrying out real-time speed regulation according to the speed regulation instruction, determining the time and distance for switching lanes according to the real-time speed regulation, generating corresponding off-vehicle control parameters, and generating an obstacle avoidance navigation mode required by the outside of the vehicle through the off-vehicle control parameters.
The principle of this technical scheme is that when obstacle avoidance navigation is performed according to an out-of-vehicle demand, the position of the vehicle is determined from the demand information, the lane and the distance of the obstacle are derived from the 3D digital visual map, the corresponding lane-switching and speed-adjusting instructions are generated, and the vehicle is controlled to avoid the obstacle, with the lane change both automatically announced and automatically controlled.
The beneficial effect of this technical scheme is that in the dynamic obstacle avoidance navigation mode the driver does not need to actively control the avoidance; the system's control behavior takes the lead, regulating the lane and speed of the vehicle so that the obstacle is avoided and accidents are prevented.
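The time and distance for switching lanes can be derived from the real-time speed regulation with elementary kinematics. The sketch below assumes uniform deceleration and a fixed lane-change duration; the function name, parameters and default values are illustrative, not the patent's actual off-vehicle control parameters.

```python
def lane_change_plan(current_speed_mps, target_speed_mps, obstacle_distance_m,
                     decel_mps2=2.0, lane_change_time_s=3.0):
    """Compute the off-vehicle control parameters for an avoidance manoeuvre:
    first slow from the current to the target speed, then perform a lane
    change of `lane_change_time_s` seconds, and check whether both steps fit
    inside the distance to the obstacle."""
    # time and distance spent decelerating (uniform deceleration assumed)
    t_brake = max(0.0, (current_speed_mps - target_speed_mps) / decel_mps2)
    d_brake = (current_speed_mps + target_speed_mps) / 2.0 * t_brake
    # distance covered while changing lanes at the regulated speed
    d_change = target_speed_mps * lane_change_time_s
    return {
        "t_brake_s": t_brake,
        "d_total_m": d_brake + d_change,
        "feasible": d_brake + d_change < obstacle_distance_m,
    }

# e.g. 72 km/h -> 50.4 km/h with an obstacle 120 m ahead
plan = lane_change_plan(20.0, 14.0, 120.0)
print(plan)  # {'t_brake_s': 3.0, 'd_total_m': 93.0, 'feasible': True}
```

If the manoeuvre is infeasible, a real controller would pick a stronger deceleration or trigger the automatic braking behavior instead.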
Further, as shown in fig. 3, the automatic driving navigation mode for generating the in-vehicle demand includes:
determining a demand object according to the inside and outside demand marks of the vehicle, wherein,
The demand object is a driver or a passenger;
Determining corresponding demand information according to the demand object;
Setting an automatic driving navigation mode required in the vehicle according to the requirement information, wherein,
When the demand object is a driver, starting an automatic driving mode, and generating a corresponding voice instruction according to the demand information;
And when the demand object is a passenger, starting a semi-automatic driving mode and sending vehicle regulation information to a driver.
The principle of this technical scheme is that in the automatic driving navigation mode for in-vehicle demands, the corresponding demand object is determined according to the in-vehicle demand marks, i.e. whether it is the passenger or the driver who has a vehicle-regulation demand; in this mode, when a demand exists, the driver can be reminded by voice, and the speed of the vehicle is regulated through the automatic and semi-automatic driving modes.
The beneficial effect of this technical scheme is that corresponding automatic navigation driving modes are set for the demand information of drivers and passengers; through these driving modes the vehicle is controlled to run semi-automatically, and corresponding voice instructions can be generated for announcing the functions the vehicle is activating.
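The two cases of this mode can be sketched as a small dispatcher. The names and message formats are assumptions for illustration: a driver demand starts the fully automatic mode with a voice instruction, while a passenger demand starts the semi-automatic mode and sends regulation information to the driver.

```python
def set_in_vehicle_mode(demand_object, demand_info):
    """Select the driving sub-mode for an in-vehicle demand.

    demand_object: "driver" or "passenger"
    demand_info:   the demand text extracted from the in-vehicle scene data
    """
    if demand_object == "driver":
        # driver demand -> automatic driving plus a spoken announcement
        return {"mode": "automatic", "voice": f"Autopilot engaged: {demand_info}"}
    if demand_object == "passenger":
        # passenger demand -> semi-automatic driving, driver stays informed
        return {"mode": "semi_automatic",
                "driver_message": f"Passenger request: {demand_info}"}
    raise ValueError("demand object must be 'driver' or 'passenger'")

print(set_in_vehicle_mode("driver", "fatigue detected, reduce speed"))
print(set_in_vehicle_mode("passenger", "keep the ride stable"))
```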
Further, as shown in fig. 4, the generating the conventional navigation mode includes:
destination display is carried out on the 3D digital visual map according to the set destination;
determining a congestion road section and a low-flow road section in a planned route according to the destination display;
And according to the crowded road section and the low-traffic road section, carrying out dynamic route guidance on the 3D digital visual map, and taking the dynamic route guidance as a conventional navigation mode.
The principle of this technical scheme is that, for the conventional navigation mode, a conventional navigation line is generated according to the destination, and conventional guided navigation is performed through the marking of congested road sections and low-traffic road sections.
The beneficial effect of this technical scheme is that, although the conventional navigation mode resembles navigation in the prior art, it is a dynamic navigation mode: it can display the congested and low-traffic road sections in the planned route, so the route can be dynamically adjusted in advance.
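The marking of congested and low-traffic sections can be sketched as a simple threshold classifier. The thresholds and the flow unit (vehicles per hour) are illustrative assumptions, not values from the patent.

```python
def classify_sections(sections, congested_flow=800, low_flow=200):
    """Mark each road section of the planned route as congested, low-traffic,
    or normal from its measured vehicle flow (vehicles/hour); the marks drive
    the dynamic route guidance on the 3D digital visual map."""
    marks = {}
    for name, flow in sections.items():
        if flow >= congested_flow:
            marks[name] = "congested"
        elif flow <= low_flow:
            marks[name] = "low_traffic"
        else:
            marks[name] = "normal"
    return marks

route_sections = {"A->B": 950, "B->C": 150, "C->D": 500}
print(classify_sections(route_sections))
# {'A->B': 'congested', 'B->C': 'low_traffic', 'C->D': 'normal'}
```

A dynamic guide would then steer the route through the low-traffic sections and away from the congested ones before the vehicle reaches them.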
Further, the fusion navigation module comprises:
The navigation selection unit is used for screening a corresponding navigation control scheme according to the navigation mode;
The global navigation unit is used for displaying global navigation line information on display equipment in the vehicle in real time according to the navigation control scheme;
and the visual display unit is used for simulating the inside and outside of the vehicle at the position of the user according to the global navigation circuit information and visually displaying the simulated scene.
The principle of this technical scheme is that, in the fusion navigation stage, the parameters that need to be adjusted on the vehicle and the corresponding navigation control scheme are determined according to the navigation mode; the global navigation line information is displayed on the in-vehicle display screen, and the simulated scenes inside and outside the vehicle are displayed visually alongside the line information.
The beneficial effect of this technical scheme is that, after fusion navigation, the simulated scene can be displayed visually, so that during navigation any danger or risk information appearing inside or outside the vehicle is shown in the vehicle's simulated scene without the driver having to discover it himself, a function not available on the existing market.
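Screening a control scheme per navigation mode and bundling it with the display configuration can be sketched as a lookup. The scheme table and key names are hypothetical; they only show how the fusion navigation module might tie the chosen mode to concrete control behaviors and the visual display.

```python
# Hypothetical mapping from navigation mode to the navigation behaviors it
# activates, drawn from the behaviors listed in the description.
CONTROL_SCHEMES = {
    "obstacle_avoidance_navigation": ["lane_switching", "speed_control", "obstacle_reminder"],
    "automatic_driving_navigation": ["speed_control", "stability_adjustment", "driver_reminder"],
    "conventional_navigation": ["path_regulation"],
}

def fuse_navigation(mode):
    """Screen the control scheme for the active mode and bundle it with the
    global route display and the simulated-scene visualisation."""
    return {
        "control_scheme": CONTROL_SCHEMES[mode],
        "display": ["global_route",
                    "simulated_scene_in_vehicle",
                    "simulated_scene_out_vehicle"],
    }

print(fuse_navigation("obstacle_avoidance_navigation")["control_scheme"])
print(fuse_navigation("conventional_navigation"))
```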
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (7)

1. A driving navigation system based on scene perception and dynamic multisource fusion, comprising:
The in-vehicle scene perception module is used for acquiring real-time in-vehicle scene data by means of at least one in-vehicle scene sensing device;
the out-of-vehicle scene perception module is used for acquiring real-time out-of-vehicle scene data by means of at least one laser radar and out-of-vehicle scene sensing equipment;
The satellite map module is used for acquiring satellite positioning signals, generating an initial navigation map and generating a vehicle guide line on the initial navigation map;
The digital map module is used for fusing the real-time in-vehicle scene data and the real-time out-of-vehicle scene data to the initial navigation map to generate a 3D digital visual map;
the dynamic decision module is used for marking the inside and outside of the vehicle through the 3D digital visual map and determining a corresponding navigation mode according to the requirement mark,
The navigation modes comprise an obstacle avoidance navigation mode required outside the vehicle, an automatic driving navigation mode required in the vehicle and a conventional navigation mode;
the fusion navigation module is used for generating a global visual navigation line according to the navigation mode;
the in-vehicle scene perception module comprises:
a first perception device configuration unit for configuring in-vehicle scene perception devices in advance in a vehicle, wherein,
The in-car scene sensing device comprises a temperature sensor, an illuminance sensor, an odor sensor and a multi-angle video monitor;
The multi-angle video monitor is respectively arranged on the roof, the bottom and the door, and is a self-adaptive angle adjusting video monitor;
The in-vehicle environment information acquisition unit is used for docking in-vehicle scene sensing equipment and acquiring in-vehicle real-time environment information through a vehicle terminal,
The real-time environment information comprises temperature, illuminance and peculiar smell;
the in-car personnel information acquisition unit is used for acquiring in-car personnel information through in-car monitoring equipment, wherein the in-car personnel information comprises passenger number information and passenger distribution information;
an in-vehicle personnel age group judging unit for judging the age groups of the passengers and the driver in the vehicle and outputting age group information, wherein,
The age group comprises infants, teenagers, young, middle-aged and elderly people;
The driver gesture recognition unit is used for capturing the action gesture of the driver and acquiring real-time action information of the driver; the passenger gesture recognition unit is used for capturing the action gestures of the passengers and acquiring real-time action information of the passengers;
The scene data aggregation unit is used for aggregating the real-time environment information, the age group information, the real-time action information of the driver and the real-time action information of the passenger to generate in-vehicle scene data;
the outside scene perception module comprises:
The second perception equipment configuration unit is used for configuring an external scene perception equipment on the vehicle side in advance, wherein the external scene perception equipment comprises an infrared ranging camera device and high-precision image acquisition equipment;
the infrared ranging camera device and the high-precision image acquisition equipment are respectively arranged on the vehicle head body and the vehicle tail;
the laser radar unit is used for acquiring road information through a laser radar and carrying out real-time road simulation on different lanes to generate simulated lanes,
The real-time road simulation comprises real-time simulation of different elements on a lane;
The vehicle exterior sensing unit is used for acquiring vehicle exterior sensing data in real time through vehicle exterior scene sensing equipment, wherein the vehicle exterior sensing data comprises a vehicle exterior high-precision image, real-time distances of different elements and traffic signs;
the perception correction unit is used for carrying out perception data correction on the simulated lane through the vehicle exterior sensing data to generate a high-precision simulated scene;
the vehicle exterior scene acquisition unit is used for generating real-time vehicle exterior scene data through the high-precision simulation scene;
the digital map module includes:
The in-vehicle scene analysis unit is used for analyzing the in-vehicle scene data, determining the gesture characteristics of the different persons in the vehicle, judging the real-time in-vehicle demands according to the gesture characteristics, and generating a first demand text;
The out-of-vehicle scene analysis unit is used for analyzing the out-of-vehicle scene data, generating element characteristics of the different lanes outside the vehicle, generating lane data and element distribution data, judging whether an avoidance demand exists through the element characteristics, and generating a second demand text;
the first fusion unit is used for carrying out 3D conversion on the initial navigation map according to the lane data and the element distribution data to generate a 3D map;
And the second fusion unit is used for determining real-time demand information according to the first demand text and the second demand text, generating a labeling frame in the 3D map, filling the real-time demand information and generating the 3D digital visual map.
2. The vehicle navigation system based on scene perception and dynamic multisource fusion of claim 1, wherein the satellite map module comprises:
The track grabbing unit is used for carrying out real-time positioning on the vehicle according to the satellite positioning signals and generating a running track of the vehicle;
The initial navigation map unit is used for acquiring a destination set by a user and generating an initial planning line;
the driving line unit is used for marking the vehicle density on the initial planning line and taking the driving track as a driving line on the initial planning line;
A guiding line unit for judging whether the real-time driving track changes the initial planning line according to the driving line, generating a second planning line after the initial planning line changes, and generating a vehicle guiding line in the second planning line,
The number of the second planning lines is not less than 2;
Each line of the second planned line has a separate planning attribute, wherein,
The planning attributes at least comprise shortest time, shortest mileage, minimum traffic flow and minimum accident occurrence times.
3. The vehicle navigation system based on scene perception and dynamic multisource fusion according to claim 1, wherein the dynamic decision module comprises:
The demand marking unit is used for marking the inside and outside of the vehicle according to the 3D digital visual map, wherein the inside and outside demand marking comprises an obstacle avoidance marking, a passenger demand marking and a driver demand marking;
The mark analysis unit is used for judging the type of the demand and the object of the demand according to the inside and outside demand mark of the vehicle and setting navigation behavior, wherein,
The demand types include driver demand, passenger demand and out-of-vehicle obstacle avoidance demand;
The navigation behavior comprises a speed control behavior, a stability adjustment behavior, a driver reminding behavior, a path regulation behavior, an obstacle reminding behavior, a lane switching behavior and a vehicle automatic braking behavior;
The navigation mode decision unit is used for constructing a navigation mode according to the navigation behavior, the demand type and the demand object, and
Setting navigation control parameters corresponding to the navigation mode, wherein the constructing the navigation mode comprises:
When the demand type is obstacle avoidance, generating an obstacle avoidance navigation mode of the demand outside the vehicle,
when the demand object is a passenger or a driver, an automatic driving navigation mode of the in-vehicle demand is generated, and when there is neither a demand type nor a demand object, a conventional navigation mode is generated.
4. A driving navigation system based on scene perception and dynamic multisource fusion as claimed in claim 3, wherein generating the obstacle avoidance navigation pattern of the out-of-vehicle demand comprises:
determining real-time demand information according to the inside and outside demand marks of the vehicle;
Determining corresponding lanes and vehicle distances according to the real-time demand information;
generating a lane switching instruction and a speed adjusting instruction according to the lanes and the 3D digital visual map;
and carrying out real-time speed regulation according to the speed regulation instruction, determining the time and distance for switching lanes according to the real-time speed regulation, generating corresponding off-vehicle control parameters, and generating an obstacle avoidance navigation mode required by the outside of the vehicle through the off-vehicle control parameters.
5. A driving navigation system based on scene perception and dynamic multisource fusion as claimed in claim 3, wherein said generating an automatic driving navigation pattern of in-vehicle demand comprises:
Determining a demand object according to the inside and outside demand marks of the vehicle, wherein the demand object is a driver or a passenger;
Determining corresponding demand information according to the demand object;
Setting an automatic driving navigation mode required in the vehicle according to the requirement information, wherein,
When the demand object is a driver, starting an automatic driving mode, and generating a corresponding voice instruction according to the demand information;
And when the demand object is a passenger, starting a semi-automatic driving mode and sending vehicle regulation information to a driver.
6. A driving navigation system based on scene perception and dynamic multisource fusion as claimed in claim 3, wherein said generating a regular navigation pattern comprises:
according to the destination display, determining a crowded road section and a low-flow road section in a planned route;
And according to the crowded road section and the low-traffic road section, carrying out dynamic route guidance on the 3D digital visual map, and taking the dynamic route guidance as a conventional navigation mode.
7. The driving navigation system based on scene perception and dynamic multisource fusion according to claim 1, wherein the fusion navigation module comprises:
The navigation selection unit is used for screening a corresponding navigation control scheme according to the navigation mode;
The global navigation unit is used for displaying global navigation line information on display equipment in the vehicle in real time according to the navigation control scheme;
and the visual display unit is used for simulating the inside and outside of the vehicle at the position of the user according to the global navigation circuit information and visually displaying the simulated scene.
CN202211031301.1A 2022-08-26 2022-08-26 A driving navigation system based on scene perception and dynamic multi-source fusion Active CN115140055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211031301.1A CN115140055B (en) 2022-08-26 2022-08-26 A driving navigation system based on scene perception and dynamic multi-source fusion


Publications (2)

Publication Number Publication Date
CN115140055A CN115140055A (en) 2022-10-04
CN115140055B true CN115140055B (en) 2025-02-14

Family

ID=83415405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211031301.1A Active CN115140055B (en) 2022-08-26 2022-08-26 A driving navigation system based on scene perception and dynamic multi-source fusion

Country Status (1)

Country Link
CN (1) CN115140055B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974942A (en) * 2022-10-25 2024-05-03 腾讯科技(深圳)有限公司 Vehicle driving state display method and device, electronic equipment and storage medium
CN115979293A (en) * 2023-01-16 2023-04-18 阿里巴巴(中国)有限公司 Navigation method, device, equipment and program product

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106364488A (en) * 2015-07-20 2017-02-01 Lg电子株式会社 Autonomous vehicle
CN110861635A (en) * 2019-11-15 2020-03-06 安徽省阜阳市好希望工贸有限公司 Reminding method and device for safety seat
CN111645691A (en) * 2020-04-29 2020-09-11 云南安之骅科技有限责任公司 Driving behavior evaluation system based on comprehensive environment perception
CN113237490A (en) * 2021-02-08 2021-08-10 上海博泰悦臻网络技术服务有限公司 AR navigation method, system, electronic device and storage medium

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP2002049998A (en) * 2000-04-24 2002-02-15 Matsushita Electric Ind Co Ltd Driving support device
KR20180112949A (en) * 2017-04-05 2018-10-15 현대자동차주식회사 Autonomous Travelling Control Ststem And Control Metheod Using It
US11620419B2 (en) * 2018-01-24 2023-04-04 Toyota Research Institute, Inc. Systems and methods for identifying human-based perception techniques
CN110462543B (en) * 2018-03-08 2022-09-30 百度时代网络技术(北京)有限公司 Simulation-based method for evaluating perception requirements of autonomous vehicles
US11086317B2 (en) * 2018-03-30 2021-08-10 Intel Corporation Emotional adaptive driving policies for automated driving vehicles
KR102508511B1 (en) * 2018-05-24 2023-03-09 한국자동차연구원 System for automatic driving
CN113734197A (en) * 2021-09-03 2021-12-03 合肥学院 Unmanned intelligent control scheme based on data fusion


Also Published As

Publication number Publication date
CN115140055A (en) 2022-10-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Country or region after: China
Address after: 710000 floor 12, block C, Eurasia international, No. 666, west section of Eurasia Avenue, Chanba Ecological District, Xi'an City, Shaanxi Province
Applicant after: Shaanxi Junkai Technology Group Co.,Ltd.
Address before: 710000 floor 12, block C, Eurasia international, No. 666, west section of Eurasia Avenue, Chanba Ecological District, Xi'an City, Shaanxi Province
Applicant before: Shaanxi Junkai Electronic Technology Co.,Ltd.
Country or region before: China
GR01 Patent grant