
CN109087485B - Driving reminding method and device, intelligent glasses and storage medium - Google Patents


Info

Publication number
CN109087485B
CN109087485B (application CN201811001189.0A; also published as CN109087485A)
Authority
CN
China
Prior art keywords
driving
current vehicle
reminding
image data
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811001189.0A
Other languages
Chinese (zh)
Other versions
CN109087485A (en)
Inventor
魏苏龙
林肇堃
麦绮兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811001189.0A
Publication of CN109087485A
Application granted
Publication of CN109087485B

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18 - Status alarms
    • G08B21/24 - Reminder alarms, e.g. anti-loss alarms
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G02B27/0172 - Head mounted characterised by optical features
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0138 - Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/014 - Head-up displays characterised by optical features comprising information/image processing systems
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G02B2027/0178 - Eyeglass type

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Emergency Alarm Devices (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the application disclose a driving reminding method and device, smart glasses, and a storage medium. The method includes: when a driving reminding instruction is detected, acquiring image data collected by a camera arranged on the smart glasses; recognizing the image data and, if the recognition result meets a preset condition, determining the driving parameters of the current vehicle; and, if the driving parameters meet a driving reminding condition, triggering a reminding event to remind the user. This improves driving safety and provides reasonable assistance while the user drives.

Description

Driving reminding method and device, intelligent glasses and storage medium
Technical Field
Embodiments of the application relate to the field of wearable devices, and in particular to a driving reminding method and device, smart glasses, and a storage medium.
Background
With the development of computing devices and the advancement of Internet technology, users interact with smart devices more and more frequently: watching films and series on a smartphone, watching television programs on a smart TV, or checking messages and vital-sign parameters on a smartwatch.
In the prior art, the assistance a user receives during daily driving is limited to the functions built into the vehicle itself, and this driver-assistance capability is deficient, so improvement is needed.
Disclosure of Invention
Embodiments of the application provide a driving reminding method and device, smart glasses, and a storage medium, which improve driving safety and reasonably assist the user's driving.
In a first aspect, an embodiment of the present application provides a driving reminding method, including:
when a driving reminding instruction is detected, acquiring image data collected by a camera, the camera being arranged on the smart glasses;
identifying the image data, and determining the driving parameters of the current vehicle if the identification result meets the preset condition;
and if the driving parameters meet the driving reminding conditions, triggering a reminding event to remind the user.
In a second aspect, an embodiment of the present application further provides a driving reminding device, including:
an image data acquisition module, configured to acquire image data collected by a camera when a driving reminding instruction is detected, the camera being arranged on the smart glasses;
the driving parameter determining module is used for identifying the image data, and determining the driving parameters of the current vehicle if the identification result meets a preset condition;
and the reminding module is used for triggering a reminding event to remind the user if the driving parameters meet the driving reminding conditions.
In a third aspect, an embodiment of the present application further provides a pair of smart glasses, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the driving reminding method according to the embodiments of the application.
In a fourth aspect, the present application further provides a storage medium containing instructions executable by smart glasses, where the instructions, when executed by a processor of the smart glasses, perform the driving reminding method according to the present application.
According to this scheme, when a driving reminding instruction is detected, image data collected by a camera arranged on the smart glasses is acquired; the image data is recognized and, if the recognition result meets a preset condition, the driving parameters of the current vehicle are determined; and, if the driving parameters meet a driving reminding condition, a reminding event is triggered to remind the user. Driving safety is thereby improved and the user's driving is reasonably assisted.
Drawings
Other features, objects, and advantages of the invention will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the drawings:
fig. 1 is a flowchart of a driving reminding method provided in an embodiment of the present application;
fig. 2 is a flowchart of another driving reminding method provided in an embodiment of the present application;
fig. 3 is a flowchart of another driving reminding method provided in an embodiment of the present application;
fig. 4 is a flowchart of another driving reminding method provided in an embodiment of the present application;
fig. 5 is a block diagram of a driving reminding device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of smart glasses provided in an embodiment of the present application;
fig. 7 is a schematic physical diagram of smart glasses provided in an embodiment of the present application.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are for purposes of illustration and not limitation. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of a driving reminding method provided in an embodiment of the present application. The method is applicable to reminding a driver during driving and may be executed by the smart glasses provided in the embodiments of the present application; the driving reminding device of the smart glasses may be implemented in software and/or hardware. As shown in fig. 1, the specific scheme provided by this embodiment is as follows:
and S101, when a driving reminding instruction is detected, acquiring image data acquired by a camera, wherein the camera is arranged on the intelligent glasses.
The driving reminding instruction is an instruction for starting image collection and the assisted-driving reminding function; when it is detected, the camera is started and the image data it collects is acquired. In one embodiment, the instruction can be generated by a manual touch from the user, such as tapping a touch panel integrated at the temple of the smart glasses, or triggered after a specific action is detected, such as a qualifying head movement (for example, nodding or shaking the head twice in succession) detected by the acceleration sensor and gyroscope integrated in the smart glasses. The camera collects the image data and is optionally arranged in the frame of the smart glasses, so that when the user wears the glasses it collects image data within the user's field of view.
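The head-gesture trigger described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the acceleration threshold, the one-second window, and the function name are all assumptions.

```python
# Sketch of the gesture trigger: the driving reminding instruction fires
# when two qualifying head nods occur in quick succession. The threshold
# and window are illustrative values, not taken from the patent.
NOD_THRESHOLD = 12.0   # m/s^2 peak on the vertical axis (assumed)
WINDOW_S = 1.0         # both nods must fall within this window (assumed)

def detect_reminder_instruction(samples):
    """samples: list of (timestamp_s, vertical_accel) tuples from the
    glasses' accelerometer."""
    nod_times = [t for t, a in samples if a > NOD_THRESHOLD]
    # Two nods close together generate the driving reminding instruction.
    for t1, t2 in zip(nod_times, nod_times[1:]):
        if t2 - t1 <= WINDOW_S:
            return True
    return False
```

In a real device this check would run continuously over a sliding sensor buffer rather than a finished list.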
Step S102: recognize the image data and, if the recognition result meets a preset condition, determine the driving parameters of the current vehicle.
In one embodiment, image recognition is performed on the currently collected image to obtain the target object contained in the image data; the recognition result meeting the preset condition may be that the target object is an obstacle (for example, a pedestrian or a railing is recognized in front of the vehicle), in which case the driving parameters of the current vehicle are determined. The driving parameter may be the driving speed and/or acceleration of the current vehicle. In one embodiment, the image data collected by the camera comprises multiple consecutive images; feature recognition is performed on the target object in these images, and if an obstacle is recognized and its distance to the current vehicle is decreasing, the driving parameters of the current vehicle are determined. The change in distance from the current vehicle to the obstacle is judged from the consecutively collected images; if the distance is gradually decreasing, the driving parameters of the vehicle are determined accordingly. In one embodiment, if the recognition result meets the preset condition, namely an obstacle (a pedestrian, a railing, a vehicle, or the like) is recognized in the image data to the front-left or front-right of the vehicle, the driving parameter of the current vehicle is determined accordingly, where the driving parameter may be the current steering angle of the vehicle.
In one embodiment, sensing data collected by a sensor integrated in the smart glasses is acquired and used as the driving parameters of the current vehicle, for example the acceleration of the current vehicle determined from the data of the integrated acceleration sensor and gyroscope. The driving parameters can also be determined from data received in real time from the vehicle system, such as the driving speed sent by the on-board navigation system or the speed measured by the vehicle itself. In another embodiment, determining the driving parameters of the current vehicle includes judging whether the image data contains the vehicle's steering wheel and, if so, determining the driving parameters according to the rotation angle of the steering wheel. When the user wears the smart glasses and the camera collects images in front of the user, if a steering wheel is recognized in the collected image data, its rotation angle is determined through image recognition, for example 30 degrees to the left or 40 degrees to the right, and the determined angle is taken as a driving parameter of the vehicle.
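The choice between the two parameter sources described above (vehicle-reported data versus glasses-side sensing) can be sketched as below. The function name, the preference order, and the naive integration of accelerometer samples are illustrative assumptions.

```python
def current_speed(vehicle_speed=None, accel_samples=None, dt=0.1, v0=0.0):
    """Pick a driving speed (m/s): prefer the value reported by the
    vehicle system; otherwise estimate it by integrating accelerometer
    samples (assumed m/s^2) from an initial speed v0.

    This naive integration drifts quickly in practice and only
    illustrates the fallback described in the text."""
    if vehicle_speed is not None:
        return vehicle_speed
    v = v0
    for a in accel_samples or []:
        v += a * dt  # v(t+dt) = v(t) + a*dt
    return v
```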
Step S103: if the driving parameters meet a driving reminding condition, trigger a reminding event to remind the user.
The driving reminding condition is a condition under which the current driver needs to be reasonably reminded, and the reminding event is the specific associated event used to remind the driver. In one embodiment, the driving parameters include the current driving speed of the vehicle and the rotation angle of the steering wheel; if the driving speed is greater than a first preset speed value (e.g., 70 km/h) and the rotation angle of the steering wheel is greater than a preset angle value (e.g., 10 degrees), a driving reminding event is triggered to warn the user of a driving hazard and ask them to drive safely, for example "vehicle speed too fast, please change direction slowly". In another embodiment, a distance value between an obstacle and the current vehicle is determined from consecutive images collected by the camera; if the distance is smaller than a preset distance value (e.g., 10 meters) and the current driving speed is greater than a second preset speed (e.g., 50 km/h), a driving reminding event is triggered, for example "obstacle ahead, please slow down".
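The two reminding conditions in this paragraph can be expressed as a simple check. The threshold values come from the examples in the text; the function name and message strings are illustrative.

```python
def should_remind(speed_kmh, steer_deg=None, obstacle_m=None):
    """Return a reminder message if a driving reminding condition is
    met, else None. Thresholds follow the examples in the text:
    70 km/h + 10 degrees, and 10 m + 50 km/h."""
    # Condition 1: fast driving combined with a large steering angle.
    if steer_deg is not None and speed_kmh > 70 and abs(steer_deg) > 10:
        return "vehicle speed too fast, please change direction slowly"
    # Condition 2: a close obstacle while still moving fast.
    if obstacle_m is not None and obstacle_m < 10 and speed_kmh > 50:
        return "obstacle ahead, please slow down"
    return None
```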
As described above, while the user drives the vehicle, the smart glasses can assist the user's driving. Compared with relying on a vehicle's own driver-assistance functions, the cost is significantly reduced: assisted driving is available whenever the user wears the smart glasses, and driving safety is improved.
Fig. 2 is a flowchart of another driving reminding method provided in an embodiment of the present application. Optionally, the image data collected by the camera includes multiple consecutive images; recognizing the image data and, if the recognition result meets a preset condition, determining the driving parameters of the current vehicle includes: performing feature recognition on the target object in the consecutive images and, if an obstacle is recognized and its distance to the current vehicle is decreasing, determining the driving parameters of the current vehicle. As shown in fig. 2, the technical scheme is as follows:
step S201, when a driving reminding instruction is detected, image data collected by a camera is obtained, and the camera is arranged on the intelligent glasses.
Step S202, carrying out feature recognition on the target objects in the continuous multiple images, and if an obstacle is recognized and the distance between the obstacle and the current vehicle is reduced, determining the running parameters of the current vehicle.
In one embodiment, after the obstacle is recognized in the image data, if the area it occupies in the consecutive images is growing, it can be determined that the distance between the obstacle and the current vehicle is decreasing, and a driving parameter of the current vehicle, such as the driving speed, is determined.
In another embodiment, the projection mapping relationship of feature points across multiple consecutive images collected in the same direction can be used to calculate the current distance from the obstacle to the vehicle, or the position of the obstacle relative to the vehicle can be obtained from the parallax of the consecutive images in the horizontal direction. Optionally, to obtain the distance more accurately from the image data, the scheme may also exploit the different degrees of blur in the images collected by the camera: the image data is processed with a wavelet transform, the edge width of the obstacle in the blurred image is detected, and the actual distance from the obstacle to the camera is calculated using cubic spline interpolation. After the distance from the obstacle to the camera is obtained, the distance from the camera (i.e., the position of the user's head) to the front end of the vehicle (e.g., 2 meters) is subtracted, giving the actual distance from the obstacle to the vehicle.
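Two pieces of the above can be sketched simply: the area-growth heuristic from step S202 and the camera-to-bumper correction. The function names and the 2-meter default are assumptions, and the wavelet/spline pipeline itself is not reproduced here.

```python
def obstacle_approaching(areas):
    """areas: pixel areas of the obstacle's bounding region in
    consecutive frames. A strictly growing area is read as a
    shrinking distance to the obstacle."""
    return all(a2 > a1 for a1, a2 in zip(areas, areas[1:]))

def distance_to_vehicle(distance_to_camera_m, camera_to_bumper_m=2.0):
    """Convert an obstacle-to-camera distance into an obstacle-to-vehicle
    distance by subtracting the head-to-front-end offset mentioned in
    the text (2 m is the text's example, used here as a default)."""
    return distance_to_camera_m - camera_to_bumper_m
```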
Step S203: if the driving parameters meet a driving reminding condition, trigger a reminding event to remind the user.
Thus, after an accurate distance from the obstacle to the vehicle is obtained from the image data collected by the camera of the smart glasses, a driving reminder is given when the distance keeps decreasing or reaches a warning distance (e.g., 5 meters), assisting the user in driving safely and improving the driver's safety.
Fig. 3 is a flowchart of another driving reminding method provided in an embodiment of the present application. Optionally, determining the driving parameters of the current vehicle includes: judging whether the image data contains the vehicle's steering wheel and, if so, determining the driving parameters of the current vehicle according to the rotation angle of the steering wheel. After the driving parameters of the current vehicle are determined, the method further includes: generating a driving reminding condition according to the image data. As shown in fig. 3, the technical scheme is as follows:
step S301, when a driving reminding instruction is detected, image data collected by a camera is obtained, and the camera is arranged on the intelligent glasses.
Step S302, the image data is identified, if the identification result meets a preset condition, whether a vehicle steering wheel is included in the image data is judged, and if yes, the driving parameters of the current vehicle are determined according to the rotation angle of the steering wheel.
In one embodiment, the preset condition may be that an obstacle is recognized in the image data to the front-left and/or front-right of the vehicle. It is then judged whether a steering wheel appears in the image data and, if so, the rotation angle of the steering wheel is taken as the steering angle of the current vehicle. Specifically, when determining the rotation angle from the steering wheel image, the horizontal line of the vehicle's instrument panel may be used as a reference line, and the rotation of the marking at the center of the steering wheel relative to this reference line is taken as the rotation angle of the steering wheel.
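The reference-line geometry can be sketched in image coordinates as follows. This is an illustrative construction, not the patent's algorithm, and it assumes the wheel hub and its center marking have already been located in the image.

```python
import math

def steering_angle_deg(hub_xy, marker_xy):
    """Rotation of the steering wheel estimated from the angle between
    the hub-to-marker vector and the vertical (perpendicular to the
    instrument-panel reference line). Image coordinates: y grows
    downward. Positive result = clockwise (right turn) by this
    sketch's convention."""
    dx = marker_xy[0] - hub_xy[0]
    dy = hub_xy[1] - marker_xy[1]  # flip y so "up" is positive
    # atan2(dx, dy) is 0 when the marker sits straight above the hub.
    return math.degrees(math.atan2(dx, dy))
```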
Step S303: generate a driving reminding condition according to the image data.
In one embodiment, the driving reminding condition is generated in real time from the image data collected by the camera and may change. For example, if an obstacle is recognized in the image data to the front-left of the vehicle, the corresponding driving reminding condition may be that the steering angle of the vehicle indicates a left turn (e.g., a steering angle between 10 and 90 degrees). Optionally, the driving reminding condition may be generated from a stored mapping table: the table records corresponding driving reminding conditions for different obstacle positions (position here may include direction, distance to the vehicle, and the like), and when the position of an obstacle in the image data is recognized, the corresponding condition is generated from the table. Optionally, the driving reminding condition may also be generated by simulating the driving trajectory: after the position of the obstacle relative to the vehicle is recognized, it is judged from the current driving parameters whether the vehicle will collide with the obstacle, i.e., the driving parameters are deemed to meet the driving reminding condition if the trajectory determined from the current driving parameters (for example, the current steering direction) would collide with the obstacle.
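The stored mapping table described above might look like the following sketch. The obstacle-position keys and the right-turn range are assumptions for illustration; the left-turn range of 10 to 90 degrees comes from the example in the text.

```python
# Hypothetical mapping table: obstacle position -> reminding condition,
# expressed here as the steering-angle range (degrees, positive = left)
# that would steer the vehicle toward the obstacle.
CONDITION_TABLE = {
    "front-left":  (10, 90),    # left turn toward the obstacle (from the text)
    "front-right": (-90, -10),  # mirrored range, assumed
}

def reminder_condition(obstacle_position):
    """Look up the driving reminding condition for an obstacle position;
    None means no condition is defined for that position."""
    return CONDITION_TABLE.get(obstacle_position)

def meets_condition(steer_deg, obstacle_position):
    """True if the current steering angle falls inside the condition's
    angle range, i.e. the driving reminding condition is met."""
    rng = reminder_condition(obstacle_position)
    if rng is None:
        return False
    lo, hi = rng
    return lo <= steer_deg <= hi
```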
Step S304: if the driving parameters meet the driving reminding condition, trigger a reminding event to remind the user.
Thus, the driving reminding condition can be determined dynamically from the image data collected while the vehicle is driving; if the determined driving parameters meet the condition, a driving reminder is given accordingly, which makes the assisted-driving function more flexible and further improves driving safety.
Fig. 4 is a flowchart of another driving reminding method provided in an embodiment of the present application. Optionally, triggering a reminding event to remind the user includes: determining whether the display screen of the smart glasses is lit; if so, displaying the driving reminding information on the display screen, the display screen being integrated in the frame of the smart glasses; if not, playing the driving reminding information through a bone conduction speaker integrated on a temple of the smart glasses. As shown in fig. 4, the technical scheme is as follows:
step S401, when a driving reminding instruction is detected, image data collected by a camera is obtained, and the camera is arranged on the intelligent glasses.
And S402, identifying the image data, and determining the driving parameters of the current vehicle if the identification result meets the preset condition.
Step S403, if the driving parameter meets the driving reminding condition, determining whether the display screen of the smart glasses is lighted, if so, executing step S404, and if not, executing step S405.
The display screen is integrated in the frame of the smart glasses and can display text, pictures, and video. If the screen is detected to be in a lit state, its display function is active, and step S404 can be executed directly to show the driving reminding information on the display screen.
Step S404: display the driving reminding information on the display screen.
Wherein, the driving reminding information can be a red eye-catching mark to remind the user of paying attention.
Step S405: play the driving reminding information through a bone conduction speaker integrated on a temple of the smart glasses.
For example, the driving reminder may be "please pay attention to the obstacle ahead". Note that while the user is driving, the driving reminding information can be played preferentially through the bone conduction speaker and simultaneously shown on the display screen, prompting the user through multiple channels.
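The routing between the display screen and the bone conduction speaker can be sketched as below. The callback parameters are hypothetical stand-ins for the glasses' display and audio APIs.

```python
def deliver_reminder(message, screen_lit, show_on_screen, play_on_speaker):
    """Route the reminder per steps S403-S405: show it on the display
    when the screen is lit, otherwise fall back to the bone conduction
    speaker. The two callbacks stand in for device APIs."""
    if screen_lit:
        show_on_screen(message)
    else:
        play_on_speaker(message)
```

A multi-channel variant could call both callbacks unconditionally, matching the simultaneous prompting mentioned in the text.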
Thus, during driving, the user can be reminded through the interaction functions of the smart glasses themselves, reducing unsafe driving caused by personal factors and effectively lowering the traffic accident rate.
Fig. 5 is a block diagram of a driving reminding device according to an embodiment of the present application. The device executes the driving reminding method of the above embodiments and has the corresponding functional modules and beneficial effects. As shown in fig. 5, the device specifically includes: an image data acquisition module 101, a driving parameter determining module 102, and a reminding module 103, wherein
the image data acquisition module 101 is configured to acquire image data acquired by a camera when a driving reminding instruction is detected, and the camera is arranged on the smart glasses.
The driving reminding instruction is an instruction for starting image collection and the assisted-driving reminding function; when it is detected, the camera is started and the image data it collects is acquired. In one embodiment, the instruction can be generated by a manual touch from the user, such as tapping a touch panel integrated at the temple of the smart glasses, or triggered after a specific action is detected, such as a qualifying head movement (for example, nodding or shaking the head twice in succession) detected by the acceleration sensor and gyroscope integrated in the smart glasses. The camera collects the image data and is optionally arranged in the frame of the smart glasses, so that when the user wears the glasses it collects image data within the user's field of view.
And a driving parameter determining module 102, configured to identify the image data, and determine a driving parameter of the current vehicle if the identification result meets a preset condition.
In one embodiment, image recognition is performed on the currently collected image to obtain the target object contained in the image data; the recognition result meeting the preset condition may be that the target object is an obstacle (for example, a pedestrian or a railing is recognized in front of the vehicle), in which case the driving parameters of the current vehicle are determined. The driving parameter may be the driving speed and/or acceleration of the current vehicle. In one embodiment, the image data collected by the camera comprises multiple consecutive images; feature recognition is performed on the target object in these images, and if an obstacle is recognized and its distance to the current vehicle is decreasing, the driving parameters of the current vehicle are determined. The change in distance from the current vehicle to the obstacle is judged from the consecutively collected images; if the distance is gradually decreasing, the driving parameters of the vehicle are determined accordingly. In one embodiment, if the recognition result meets the preset condition, namely an obstacle (a pedestrian, a railing, a vehicle, or the like) is recognized in the image data to the front-left or front-right of the vehicle, the driving parameter of the current vehicle is determined accordingly, where the driving parameter may be the current steering angle of the vehicle.
In one embodiment, sensing data collected by a sensor integrated in the smart glasses is acquired and used as the driving parameters of the current vehicle, for example the acceleration of the current vehicle determined from the data of the integrated acceleration sensor and gyroscope. The driving parameters can also be determined from data received in real time from the vehicle system, such as the driving speed sent by the on-board navigation system or the speed measured by the vehicle itself. In another embodiment, determining the driving parameters of the current vehicle includes judging whether the image data contains the vehicle's steering wheel and, if so, determining the driving parameters according to the rotation angle of the steering wheel. When the user wears the smart glasses and the camera collects images in front of the user, if a steering wheel is recognized in the collected image data, its rotation angle is determined through image recognition, for example 30 degrees to the left or 40 degrees to the right, and the determined angle is taken as a driving parameter of the vehicle.
And the reminding module 103 is used for triggering a reminding event to remind the user if the driving parameters meet the driving reminding conditions.
The driving reminding condition is a condition under which the current driver needs to be reasonably reminded, and the reminding event is the specific associated event used to remind the driver. In one embodiment, the driving parameters include the current driving speed of the vehicle and the rotation angle of the steering wheel; if the driving speed is greater than a first preset speed value (e.g., 70 km/h) and the rotation angle of the steering wheel is greater than a preset angle value (e.g., 10 degrees), a driving reminding event is triggered to warn the user of a driving hazard and ask them to drive safely, for example "vehicle speed too fast, please change direction slowly". In another embodiment, a distance value between an obstacle and the current vehicle is determined from consecutive images collected by the camera; if the distance is smaller than a preset distance value (e.g., 10 meters) and the current driving speed is greater than a second preset speed (e.g., 50 km/h), a driving reminding event is triggered, for example "obstacle ahead, please slow down".
As can be seen from the above, while the user drives the vehicle, the smart glasses can assist the user's driving. Compared with using a vehicle's own built-in assistance functions, the cost is significantly reduced, since driving assistance is available simply by wearing the smart glasses, and driving safety is improved.
In a possible embodiment, the image data collected by the camera includes a plurality of consecutive images collected by the camera, and the driving parameter determining module 102 is specifically configured to:
perform feature recognition on target objects in the plurality of consecutive images, and determine the driving parameters of the current vehicle if an obstacle is recognized and the distance between the obstacle and the current vehicle is decreasing.
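A minimal sketch of the "distance becomes smaller" test, assuming a per-frame distance estimate for the tracked obstacle has already been produced from the consecutive images (the estimation itself is outside this sketch):

```python
def obstacle_approaching(distance_estimates, min_frames=3):
    """True if an obstacle's estimated distance shrinks over recent frames.

    distance_estimates holds one per-frame distance (in metres) for the
    same tracked object, oldest first; a strictly decreasing tail of at
    least min_frames samples is treated as "obstacle getting closer".
    """
    if len(distance_estimates) < min_frames:
        return False
    tail = distance_estimates[-min_frames:]
    return all(a > b for a, b in zip(tail, tail[1:]))
```

Requiring several consecutive frames filters out single-frame estimation noise before the driving parameters are updated.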
In one possible embodiment, the driving parameter determination module 102 is specifically configured to:
acquire sensing data collected by a sensor integrated in the smart glasses, and determine the sensing data as the driving parameters of the current vehicle; or
determine the running parameters of the current vehicle according to data sent by the current vehicle's system and received in real time.
In one possible embodiment, the driving parameter determination module 102 is specifically configured to:
judge whether the image data contains a vehicle steering wheel, and if so, determine the driving parameters of the current vehicle according to the rotation angle of the steering wheel.
In a possible embodiment, the device further includes a reminding condition generating module 104, configured to:
generate a driving reminding condition according to the image data after the driving parameters of the current vehicle are determined.
In a possible embodiment, the reminding module 103 is specifically configured to:
determine whether a display screen of the smart glasses is lit, and if the display screen is in a lit state, display driving reminding information through the display screen, where the display screen is integrated in a frame of the smart glasses.
In one possible embodiment, if the display screen is in a non-illuminated state, a driving reminder message is played through a bone conduction speaker, wherein the bone conduction speaker is integrated on a temple of the smart glasses.
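The display-or-speaker routing in the two embodiments above can be sketched as a small dispatch function. The two output callbacks stand in for the actual display and bone-conduction interfaces, which are not specified here:

```python
def deliver_reminder(message, display_lit, show_on_display, play_via_bone_conduction):
    """Route a driving reminder: display if the screen is lit, else speaker.

    show_on_display and play_via_bone_conduction are callables standing in
    for the real output channels; returns which channel was used.
    """
    if display_lit:
        show_on_display(message)
        return "display"
    play_via_bone_conduction(message)
    return "speaker"
```

This keeps the reminding logic independent of the output hardware: the same event is shown visually when the screen is on and spoken through the bone conduction speaker otherwise.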
In this embodiment, on the basis of the foregoing embodiments, a pair of smart glasses is provided, fig. 6 is a schematic structural diagram of a pair of smart glasses provided in an embodiment of the present application, and fig. 7 is a schematic physical diagram of a pair of smart glasses provided in an embodiment of the present application, and as shown in fig. 6 and 7, the pair of smart glasses includes: memory 201, a processor (CPU) 202, a display Unit 203, a touch panel 204, a heart rate detection module 205, a distance sensor 206, a camera 207, a bone conduction speaker 208, a microphone 209, a breathing light 210, which communicate via one or more communication buses or signal lines 211.
It should be understood that the smart glasses illustrated are merely one example of smart glasses, and that smart glasses may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The wearable device for driving reminding provided by this embodiment is described in detail below, taking smart glasses as an example.
The memory 201 is accessible by the CPU 202; the memory 201 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid state storage devices.
The display component 203 can be used for displaying image data and a control interface of an operating system, the display component 203 is embedded in a frame of the intelligent glasses, an internal transmission line 211 is arranged inside the frame, and the internal transmission line 211 is connected with the display component 203.
And a touch panel 204, the touch panel 204 being disposed at an outer side of at least one smart glasses temple for acquiring touch data, the touch panel 204 being connected to the CPU202 through an internal transmission line 211. The touch panel 204 can detect finger sliding and clicking operations of the user, and accordingly transmit the detected data to the processor 202 for processing to generate corresponding control instructions, which may be, for example, a left shift instruction, a right shift instruction, an up shift instruction, a down shift instruction, and the like. Illustratively, the display part 203 may display the virtual image data transmitted by the processor 202, and the virtual image data may be correspondingly changed according to the user operation detected by the touch panel 204, specifically, the virtual image data may be switched to a previous or next virtual image frame when a left shift instruction or a right shift instruction is detected; when the display section 203 displays video play information, the left shift instruction may be to perform playback of the play content, and the right shift instruction may be to perform fast forward of the play content; when the editable text content is displayed on the display part 203, the left shift instruction, the right shift instruction, the upward shift instruction, and the downward shift instruction may be displacement operations on a cursor, that is, the position of the cursor may be moved according to a touch operation of a user on the touch pad; when the content displayed by the display part 203 is a game moving picture, the left shift instruction, the right shift instruction, the upward shift instruction and the downward shift instruction can be used for controlling an object in a game, for example, in an airplane game, the flying direction of an airplane can be controlled by the left shift instruction, the right shift instruction, the upward shift instruction and the downward shift 
instruction respectively; when the display part 203 can display video pictures of different channels, the left shift instruction, the right shift instruction, the up shift instruction, and the down shift instruction can perform switching of different channels, wherein the up shift instruction and the down shift instruction can be switching to a preset channel (such as a common channel used by a user); when the display section 203 displays a still picture, the left shift instruction, the right shift instruction, the up shift instruction, and the down shift instruction may perform switching between different pictures, where the left shift instruction may be switching to a previous picture, the right shift instruction may be switching to a next picture, the up shift instruction may be switching to a previous set, and the down shift instruction may be switching to a next set. The touch panel 204 can also be used to control display switches of the display section 203, for example, when the touch area of the touch panel 204 is pressed for a long time, the display section 203 is powered on to display an image interface, when the touch area of the touch panel 204 is pressed for a long time again, the display section 203 is powered off, and when the display section 203 is powered on, the brightness or resolution of an image displayed in the display section 203 can be adjusted by performing a slide-up and slide-down operation on the touch panel 204.
The heart rate detection module 205 is used to measure the user's heart rate data, the heart rate being the number of heartbeats per minute; the heart rate detection module 205 is arranged on the inner side of a temple. Specifically, the heart rate detection module 205 may obtain human electrocardiographic data by means of dry electrodes using electric-pulse measurement, and determine the heart rate according to the amplitude peaks in the electrocardiographic data; the heart rate detection module 205 may also measure the heart rate photoelectrically with a light emitter and a light receiver, in which case the heart rate detection module 205 is arranged at the bottom of the temple, near the earlobe of the human ear. After collecting heart rate data, the heart rate detection module 205 sends the data to the processor 202 for processing to obtain the wearer's current heart rate value. In one embodiment, after determining the user's heart rate value, the processor 202 may display this value on the display part 203 in real time; optionally, when the processor 202 determines that the heart rate value is low (e.g., below 50) or high (e.g., above 100), it may correspondingly trigger an alarm integrated in the smart glasses and simultaneously send the heart rate value and/or the generated alarm information to a server through a communication module.
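The low/high heart-rate decision described above can be sketched as follows; the thresholds 50 and 100 are the example values from this paragraph, and the function name is illustrative:

```python
def heart_rate_status(bpm, low=50, high=100):
    """Classify a heart-rate reading against the alarm thresholds."""
    if bpm < low:
        return "low"    # trigger the integrated alarm, report to server
    if bpm > high:
        return "high"   # trigger the integrated alarm, report to server
    return "normal"     # just show the value on the display
```

A "low" or "high" result would trigger the integrated alarm and the upload to the server, while "normal" only updates the real-time display.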
The distance sensor 206 may be arranged on the frame and is used to sense the distance from the human face to the frame; the distance sensor 206 may be implemented using the infrared sensing principle. Specifically, the distance sensor 206 transmits the acquired distance data to the processor 202, and the processor 202 controls the brightness of the display section 203 according to the distance data. Illustratively, the processor 202 controls the display section 203 to be in an on state when the distance sensor 206 detects a distance of less than 5 cm, and controls the display section 203 to be in an off state when no approaching object is detected.
The breathing lamp 210 may be arranged at the edge of the frame; when the display part 203 turns off its screen, the breathing lamp 210 may be lit with a gradually brightening and dimming effect under the control of the processor 202.
The camera 207 may be a front camera module disposed at the upper frame of the frame for collecting image data in front of the user, a rear camera module for collecting eyeball information of the user, or a combination thereof. Specifically, when the camera 207 collects a front image, the collected image is sent to the processor 202 for recognition and processing, and a corresponding trigger event is triggered according to a recognition result. Illustratively, when a user wears the wearable device at home, by identifying the collected front image, if a furniture item is identified, correspondingly inquiring whether a corresponding control event exists, if so, correspondingly displaying a control interface corresponding to the control event in the display part 203, and the user can control the corresponding furniture item through the touch panel 204, wherein the furniture item and the smart glasses are in network connection through bluetooth or wireless ad hoc network; when a user wears the wearable device outdoors, a target recognition mode can be correspondingly started, the target recognition mode can be used for recognizing specific people, the camera 207 sends collected images to the processor 202 for face recognition processing, if preset faces are recognized, voice broadcasting can be correspondingly conducted through a loudspeaker integrated with the intelligent glasses, the target recognition mode can also be used for recognizing different plants, for example, the processor 202 records current images collected by the camera 207 according to touch operation of the touch panel 204 and sends the current images to the server through the communication module for recognition, the server recognizes the plants in the collected images and feeds back related plant names to the intelligent glasses, and feedback data are displayed in the display part 203. 
The camera 207 may also be configured to capture an image of an eye of a user, such as an eyeball, and generate different control instructions by recognizing rotation of the eyeball, for example, the eyeball rotates upward to generate an upward movement control instruction, the eyeball rotates downward to generate a downward movement control instruction, the eyeball rotates leftward to generate a left movement control instruction, and the eyeball rotates rightward to generate a right movement control instruction, where the display unit 203 may display, as appropriate, virtual image data transmitted by the processor 202, where the virtual image data may be changed according to a control instruction generated by a change in movement of the eyeball of the user detected by the camera 207, specifically, a frame switching may be performed, and when a left movement control instruction or a right movement control instruction is detected, a previous or next virtual image frame may be correspondingly switched; when the display part 203 displays video playing information, the left control instruction can be to play back the played content, and the right control instruction can be to fast forward the played content; when the editable text content is displayed on the display part 203, the left movement control instruction, the right movement control instruction, the upward movement control instruction and the downward movement control instruction may be displacement operations of a cursor, that is, the position of the cursor may be moved according to a touch operation of a user on the touch pad; when the content displayed by the display part 203 is a game animation picture, the left movement control command, the right movement control command, the upward movement control command and the downward movement control command can control an object in a game, for example, in an airplane game, the flying direction of an airplane can be controlled by the left movement control command, 
the right movement control command, the upward movement control command and the downward movement control command respectively; when the display part 203 can display video pictures of different channels, the left shift control instruction, the right shift control instruction, the upward shift control instruction and the downward shift control instruction can switch different channels, wherein the upward shift control instruction and the downward shift control instruction can be switching to a preset channel (such as a common channel used by a user); when the display section 203 displays a still picture, the left shift control instruction, the right shift control instruction, the up shift control instruction, and the down shift control instruction may switch between different pictures, where the left shift control instruction may be to a previous picture, the right shift control instruction may be to a next picture, the up shift control instruction may be to a previous picture set, and the down shift control instruction may be to a next picture set.
The bone conduction speaker 208 is arranged on the inner wall side of at least one temple and converts the received audio signal transmitted from the processor 202 into a vibration signal. The bone conduction speaker 208 transmits sound to the inner ear through the skull: it converts the electrical audio signal into vibrations, which are transmitted through the skull to the cochlea and then sensed by the auditory nerve. Using the bone conduction speaker 208 as the sound-producing device reduces the thickness and weight of the hardware structure, produces no electromagnetic radiation, and has the advantages of noise resistance, water resistance, and leaving the ears free.
A microphone 209 may be disposed on the lower frame of the frame for capturing external (user, ambient) sounds and transmitting them to the processor 202 for processing. Illustratively, the microphone 209 collects the sound emitted by the user and performs voiceprint recognition by the processor 202, and if the sound is recognized as a voiceprint for authenticating the user, the subsequent voice control can be correspondingly received, specifically, the user can emit voice, the microphone 209 sends the collected voice to the processor 202 for recognition so as to generate a corresponding control instruction according to the recognition result, such as "power on", "power off", "display brightness increase", "display brightness decrease", and the processor 202 subsequently executes a corresponding control process according to the generated control instruction.
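The voiceprint-gated voice control described above can be sketched as a simple lookup. The command strings are the examples from this paragraph, and the instruction names are illustrative:

```python
# Illustrative mapping from recognised utterances to control instructions.
COMMANDS = {
    "power on": "POWER_ON",
    "power off": "POWER_OFF",
    "display brightness increase": "BRIGHTNESS_UP",
    "display brightness decrease": "BRIGHTNESS_DOWN",
}

def voice_to_instruction(transcript, voiceprint_ok):
    """Map a recognised utterance to a control instruction.

    Commands are only accepted after voiceprint authentication of the
    user has succeeded; unknown utterances yield None.
    """
    if not voiceprint_ok:
        return None
    return COMMANDS.get(transcript.strip().lower())
```

The processor would then execute the control process corresponding to the returned instruction, and ignore speech from unauthenticated voices.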
The driving reminding device for the intelligent glasses and the intelligent glasses provided in the above embodiments can execute the driving reminding method for the intelligent glasses provided in any embodiment of the present invention, and have corresponding functional modules and beneficial effects for executing the method. For the technical details that are not described in detail in the above embodiments, reference may be made to a driving reminding method of smart glasses provided in any embodiment of the present invention.
Embodiments of the present application also provide a storage medium containing instructions executable by smart glasses, which, when executed by a processor of the smart glasses, perform a driving reminding method, the method including:
when a driving reminding instruction is detected, obtaining image data collected by a camera, the camera being arranged on the smart glasses;
identifying the image data, and determining driving parameters of the current vehicle if the identification result meets a preset condition;
and if the driving parameters meet a driving reminding condition, triggering a reminding event to remind the user.
In one possible embodiment, the image data collected by the camera includes a plurality of consecutive images collected by the camera, and the identifying the image data and determining the driving parameters of the current vehicle if the identification result meets a preset condition includes:
performing feature recognition on target objects in the plurality of consecutive images, and determining the driving parameters of the current vehicle if an obstacle is recognized and the distance between the obstacle and the current vehicle is decreasing.
In one possible embodiment, the determining driving parameters of the current vehicle includes:
acquiring sensing data collected by a sensor integrated in the smart glasses, and determining the sensing data as the driving parameters of the current vehicle; or
determining the running parameters of the current vehicle according to data sent by the current vehicle's system and received in real time.
In one possible embodiment, the determining driving parameters of the current vehicle includes:
judging whether the image data contains a vehicle steering wheel, and if so, determining the driving parameters of the current vehicle according to the rotation angle of the steering wheel.
In one possible embodiment, after the determining of the driving parameters of the current vehicle, the method further includes:
generating a driving reminding condition according to the image data.
In one possible embodiment, the triggering a reminding event to remind the user includes:
determining whether a display screen of the smart glasses is lit, and if the display screen is in a lit state, displaying driving reminding information through the display screen, where the display screen is integrated in a frame of the smart glasses.
In one possible embodiment, if the display screen is in a non-illuminated state, a driving reminder message is played through a bone conduction speaker, wherein the bone conduction speaker is integrated on a temple of the smart glasses.
Storage medium — any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., a hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium provided in the embodiments of the present application and containing the computer-executable instructions is not limited to the operation of the driving reminding method described above, and may also perform the relevant operations in the driving reminding method provided in any embodiment of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (8)

1. A driving reminding method, characterized by comprising:
when a driving reminding instruction is detected, obtaining image data collected by a camera, the camera being arranged on smart glasses;
identifying the image data, and determining driving parameters of the current vehicle if the identification result meets a preset condition;
if the driving parameters meet a driving reminding condition, triggering a reminding event to remind a user;
wherein the smart glasses are integrated with a processor, and the smart glasses identify the image data collected by the camera through the processor and control the triggering of the reminding event through the processor;
the determining driving parameters of the current vehicle comprises:
judging whether the image data contains a vehicle steering wheel, and if so, determining the driving parameters of the current vehicle according to a rotation angle of the steering wheel, wherein the driving parameters further comprise a steering angle of the vehicle, and the rotation angle of the steering wheel is determined as the steering angle of the current vehicle;
when the driving parameters comprise the current driving speed of the vehicle and the rotation angle of the steering wheel, if the driving speed is greater than a first preset speed value and the rotation angle of the steering wheel is greater than a preset angle value, the reminding event is triggered to remind the user;
the rotation angle of the steering wheel is determined, taking the horizontal line of the instrument panel of the current vehicle as a reference line, according to the rotation of a mark at the center of the steering wheel relative to the reference line; or the rotation angle of the steering wheel is determined from an image of the steering wheel collected by the smart glasses through a preset trained machine learning model;
the image data collected by the camera comprises a plurality of consecutive images collected by the camera, and the identifying the image data and determining the driving parameters of the current vehicle if the identification result meets a preset condition comprises:
performing feature recognition on target objects in the plurality of consecutive images, and determining the driving parameters of the current vehicle if an obstacle is recognized and the distance between the obstacle and the current vehicle is decreasing.
2. The method of claim 1, wherein the determining driving parameters of the current vehicle comprises:
acquiring sensing data collected by a sensor integrated in the smart glasses, and determining the sensing data as the driving parameters of the current vehicle; or
determining the running parameters of the current vehicle according to data sent by the current vehicle's system and received in real time.
3. The method according to any one of claims 1-2, further comprising, after said determining a driving parameter of the current vehicle:
and generating a driving reminding condition according to the image data.
4. The method of claim 3, wherein the triggering a reminding event to remind a user comprises:
determining whether a display screen of the smart glasses is lit, and if the display screen is in a lit state, displaying driving reminding information through the display screen, wherein the display screen is integrated in a frame of the smart glasses.
5. The method of claim 4, wherein if the display screen is in a non-illuminated state, playing a driving reminder message through a bone conduction speaker, wherein the bone conduction speaker is integrated on a temple of the smart glasses.
6. A driving reminding device, characterized by comprising:
an image data acquisition module, configured to obtain image data collected by a camera when a driving reminding instruction is detected, the camera being arranged on smart glasses;
a driving parameter determining module, configured to identify the image data, and determine driving parameters of the current vehicle if the identification result meets a preset condition;
a reminding module, configured to trigger a reminding event to remind a user if the driving parameters meet a driving reminding condition;
wherein the smart glasses are integrated with a processor, and the smart glasses identify the image data collected by the camera through the processor and control the triggering of the reminding event through the processor;
the determining driving parameters of the current vehicle comprises:
judging whether the image data contains a vehicle steering wheel, and if so, determining the driving parameters of the current vehicle according to a rotation angle of the steering wheel, wherein the driving parameters further comprise a steering angle of the vehicle, and the rotation angle of the steering wheel is determined as the steering angle of the current vehicle;
when the driving parameters comprise the current driving speed of the vehicle and the rotation angle of the steering wheel, if the driving speed is greater than a first preset speed value and the rotation angle of the steering wheel is greater than a preset angle value, the reminding event is triggered to remind the user;
the rotation angle of the steering wheel is determined, taking the horizontal line of the instrument panel of the current vehicle as a reference line, according to the rotation of a mark at the center of the steering wheel relative to the reference line; or the rotation angle of the steering wheel is determined from an image of the steering wheel collected by the smart glasses through a preset trained machine learning model;
the image data collected by the camera comprises a plurality of consecutive images collected by the camera, and the identifying the image data and determining the driving parameters of the current vehicle if the identification result meets a preset condition comprises:
performing feature recognition on target objects in the plurality of consecutive images, and determining the driving parameters of the current vehicle if an obstacle is recognized and the distance between the obstacle and the current vehicle is decreasing.
7. Smart glasses, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the driving reminding method according to any one of claims 1-5 when executing the computer program.
8. A storage medium containing smart-glasses-executable instructions, which when executed by a smart-glasses processor, are configured to perform the driving reminder method of any of claims 1-5.
CN201811001189.0A 2018-08-30 2018-08-30 Driving reminding method and device, intelligent glasses and storage medium Expired - Fee Related CN109087485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811001189.0A CN109087485B (en) 2018-08-30 2018-08-30 Driving reminding method and device, intelligent glasses and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811001189.0A CN109087485B (en) 2018-08-30 2018-08-30 Driving reminding method and device, intelligent glasses and storage medium

Publications (2)

Publication Number Publication Date
CN109087485A CN109087485A (en) 2018-12-25
CN109087485B true CN109087485B (en) 2021-06-08

Family

ID=64795233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811001189.0A Expired - Fee Related CN109087485B (en) 2018-08-30 2018-08-30 Driving reminding method and device, intelligent glasses and storage medium

Country Status (1)

Country Link
CN (1) CN109087485B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6696558B1 (en) * 2018-12-26 2020-05-20 株式会社Jvcケンウッド Vehicle recording control device, vehicle recording device, vehicle recording control method, and program
CN111488055A (en) * 2019-01-28 2020-08-04 富顶精密组件(深圳)有限公司 Automobile-used augmented reality glasses auxiliary device
CN110705483B (en) * 2019-10-08 2022-11-18 Oppo广东移动通信有限公司 Driving reminding method, device, terminal and storage medium
CN111340880B (en) * 2020-02-17 2023-08-04 北京百度网讯科技有限公司 Method and apparatus for generating predictive model
CN112347897A (en) * 2020-11-03 2021-02-09 Tcl通讯(宁波)有限公司 Working method and system of intelligent glasses
CN113247015A (en) * 2021-06-30 2021-08-13 厦门元馨智能科技有限公司 Vehicle driving auxiliary system based on somatosensory operation integrated glasses and method thereof
CN114385005B (en) * 2021-12-24 2024-04-26 领悦数字信息技术有限公司 Personalized virtual test driving device, method and storage medium
CN114312306B (en) * 2022-01-04 2024-03-19 一汽解放汽车有限公司 Control method of driving glasses, computer device and storage medium
CN114973667B (en) * 2022-05-18 2023-09-22 北京邮电大学 Communication perception calculation integrated road infrastructure system and processing method thereof
CN115761687A (en) * 2022-07-04 2023-03-07 惠州市德赛西威汽车电子股份有限公司 Obstacle recognition method, obstacle recognition device, electronic device and storage medium
CN115995142B (en) * 2022-10-12 2025-03-11 广州市德赛西威智慧交通技术有限公司 Driving training reminding method based on wearable device and wearable device

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1371079A (en) * 2001-02-09 2002-09-25 松下电器产业株式会社 Image synthesizer
CN104641405A (en) * 2012-07-30 2015-05-20 市光工业株式会社 Warning device for vehicle and outside mirror device for vehicle
CN105539293A (en) * 2016-02-03 2016-05-04 北京中科慧眼科技有限公司 Lane-departure early warning method and device and automobile driving assistance system
CN106133807A (en) * 2014-03-27 2016-11-16 日本精机株式会社 Car alarm apparatus
CN106494309A (en) * 2016-10-11 2017-03-15 广州视源电子科技股份有限公司 Vehicle vision blind area picture display method and device and vehicle-mounted virtual system
US9718405B1 (en) * 2015-03-23 2017-08-01 Rosco, Inc. Collision avoidance and/or pedestrian detection system
JP2017194926A (en) * 2016-04-22 2017-10-26 株式会社デンソー Vehicle control apparatus and vehicle control method
CN107444400A (en) * 2016-05-31 2017-12-08 福特全球技术公司 Vehicle intelligent collision
CN107585099A (en) * 2016-07-08 2018-01-16 福特全球技术公司 Pedestrian detection during vehicle backing
JP6311628B2 (en) * 2015-03-10 2018-04-18 トヨタ自動車株式会社 Collision avoidance control device
KR20180064894A (en) * 2016-12-06 2018-06-15 주식회사 다산네트웍스 Head Mounted Display apparatus and system for a car
JP2018106334A (en) * 2016-12-26 2018-07-05 トヨタ自動車株式会社 Warning device for vehicle
JP2018129732A (en) * 2017-02-09 2018-08-16 三菱自動車工業株式会社 Video display device for vehicle

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10356307A1 (en) * 2003-11-28 2005-06-23 Robert Bosch Gmbh Method and device for warning the driver of a motor vehicle
CN2724032Y (en) * 2004-04-07 2005-09-07 王巍 Multifunction glasses
JP5031801B2 (en) * 2009-07-28 2012-09-26 日立オートモティブシステムズ株式会社 In-vehicle image display device
KR20150139229A (en) * 2014-06-03 2015-12-11 현대모비스 주식회사 Driver drowsiness warning apparatus utilizing an HMD, and method thereof
CN105216792A (en) * 2014-06-12 2016-01-06 株式会社日立制作所 Method and apparatus for recognizing and tracking obstacle targets in the surrounding environment
CN104269070B (en) * 2014-08-20 2017-05-17 东风汽车公司 Active vehicle safety pre-warning method and safety pre-warning system applying the same
US9440649B2 (en) * 2014-10-29 2016-09-13 Robert Bosch Gmbh Impact mitigation by intelligent vehicle positioning
CN104485008A (en) * 2014-12-04 2015-04-01 上海交通大学 Head-mounted driving assistance system for color-blind drivers
JP6358123B2 (en) * 2015-02-16 2018-07-18 株式会社デンソー Driving assistance device
CN105128857B (en) * 2015-09-02 2017-11-14 郑州宇通客车股份有限公司 Automobile autonomous driving control method and automobile autonomous driving system
US10071748B2 (en) * 2015-09-17 2018-09-11 Sony Corporation System and method for providing driving assistance to safely overtake a vehicle
US20170287217A1 (en) * 2016-03-30 2017-10-05 Kahyun Kim Preceding traffic alert system and method
CN105835820B (en) * 2016-04-28 2018-04-13 姜锡华 Onboard sensor method and vehicle collision avoidance system applying the method
CN108116405A (en) * 2016-11-30 2018-06-05 长城汽车股份有限公司 Control method, system and the vehicle of vehicle
DE102016226047A1 (en) * 2016-12-22 2018-06-28 Robert Bosch Gmbh Method and device in a motor vehicle for pedestrian protection
CN107139918A (en) * 2017-03-29 2017-09-08 深圳市元征科技股份有限公司 Vehicle collision reminding method and vehicle
CN106874900A (en) * 2017-04-26 2017-06-20 桂林电子科技大学 Driver fatigue detection method and detection device based on steering wheel images
CN107933306B (en) * 2017-12-11 2019-08-09 广州德科投资咨询有限公司 Driving safety early-warning method based on intelligent glasses, and intelligent glasses
CN108260880A (en) * 2017-12-12 2018-07-10 深圳道尔法科技有限公司 Head-mounted speed sensor
CN109059929B (en) * 2018-08-30 2021-02-26 Oppo广东移动通信有限公司 Navigation method, navigation device, wearable device and storage medium


Also Published As

Publication number Publication date
CN109087485A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN109087485B (en) Driving reminding method and device, intelligent glasses and storage medium
US10832031B2 (en) Command processing using multimodal signal analysis
CN106471419B (en) Display of management information
KR101659027B1 (en) Mobile terminal and apparatus for controlling a vehicle
JP4633043B2 (en) Image processing device
JP6246829B2 (en) Resource management for head mounted display
CN109059929B (en) Navigation method, navigation device, wearable device and storage medium
CN109145847B (en) Identification method and device, wearable device and storage medium
KR20140033009A (en) An optical device for the visually impaired
CN109241900B (en) Wearable device control method and device, storage medium and wearable device
WO2019244670A1 (en) Information processing device, information processing method, and program
CN109259724B (en) Eye monitoring method and device, storage medium and wearable device
KR20150037251A (en) Wearable computing device and user interface method
US20220036758A1 (en) System and method for providing vehicle function guidance and virtual test-driving experience based on augmented reality content
CN109061903B (en) Data display method and device, intelligent glasses and storage medium
US20200380733A1 (en) Information processing device, information processing method, and program
CN109224432B (en) Entertainment application control method and device, storage medium and wearable device
US20210081047A1 (en) Head-Mounted Display With Haptic Output
CN109257490B (en) Audio processing method and device, wearable device and storage medium
CN109255314B (en) Information prompting method and device, intelligent glasses and storage medium
CN105653020A (en) Time traveling method and apparatus and glasses or helmet using same
CN109358744A (en) Information sharing method and device, storage medium and wearable device
CN116552556A (en) Lane changing early warning method, device, equipment and storage medium
CN109144265A (en) Display changeover method, device, wearable device and storage medium
US20200265252A1 (en) Information processing apparatus and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210608