CN107329478A - Life detection car, wearable device and virtual reality detection system - Google Patents
- Publication number
- CN107329478A (application number CN201710725905.9A)
- Authority
- CN
- China
- Prior art keywords
- life
- module
- detection car
- information
- life detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62D—MOTOR VEHICLES; TRAILERS
- B62D63/00—Motor vehicles or trailers not otherwise provided for
- B62D63/02—Motor vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/30—Interpretation of pictures by triangulation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01H—MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
- G01H1/00—Measuring characteristics of vibrations in solids by using direct conduction to the detector
- G01H1/12—Measuring characteristics of vibrations in solids by using direct conduction to the detector of longitudinal or not specified vibrations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01H—MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
- G01H17/00—Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J5/00—Radiation pyrometry, e.g. infrared or optical thermometry
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J5/00—Radiation pyrometry, e.g. infrared or optical thermometry
- G01J5/0022—Radiation pyrometry, e.g. infrared or optical thermometry for sensing the radiation of moving bodies
- G01J5/0025—Living bodies
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0255—Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic singals
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0257—Control of position or course in two dimensions specially adapted to land vehicles using a radar
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
- G05D1/0278—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J5/00—Radiation pyrometry, e.g. infrared or optical thermometry
- G01J2005/0077—Imaging
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B2027/0178—Eyeglass type
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Automation & Control Theory (AREA)
- Aviation & Aerospace Engineering (AREA)
- Signal Processing (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Optics & Photonics (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Combustion & Propulsion (AREA)
- Chemical & Material Sciences (AREA)
- Acoustics & Sound (AREA)
- Traffic Control Systems (AREA)
- Alarm Systems (AREA)
Abstract
A life detection car, a wearable device and a virtual reality detection system are disclosed. The life detection car includes a vehicle main body, a life-detection instrument, a sensor module, at least four cameras and a processor. The vehicle main body is provided with a running assembly that drives the life detection car, and the processor is connected to the running assembly, the life-detection instrument, the sensor module and all of the cameras. Using the life signs detected by the life-detection instrument, the distances between the life detection car and surrounding objects detected by the sensor module, and the surrounding-environment images captured by the cameras, the processor drives the life detection car automatically toward the detected life signs and locates trapped persons and their precise positions. Rescue personnel can stay away from the hazardous environment, receive the information sent back by the life detection car through the wearable device, understand the site conditions as if present in person, make accurate decisions about the next actions, and reduce unnecessary casualties.
Description
Technical field
The present invention relates to the technical field of disaster rescue equipment, and in particular to a life detection car, a wearable device, and a virtual reality detection system.
Background art
Natural disasters cause huge economic losses and casualties. Common natural disasters include geological disasters such as earthquakes, landslides and debris flows; meteorological disasters such as floods, droughts, cold waves and typhoons; and marine disasters. When a disaster occurs, human life comes first, and making the most of the golden 72 hours of rescue to save as many lives as possible is the top priority. Existing rescue methods generally require rescuers to enter the disaster site and search for signs of life manually. However, because the site environment is dangerous and secondary disasters may occur at any time, the personal safety of the rescuers is under enormous threat.
Summary of the invention
To protect the safety of rescue personnel, the present application provides a life detection car, a wearable device, and a virtual reality detection system. The life detection car can enter a hazardous environment, automatically detect signs of life, and find the positions of trapped persons, so that rescuers can understand the scene as if present in person while staying away from the hazardous environment, thereby reducing unnecessary casualties while carrying out timely and accurate rescue and saving as many lives as possible.
According to a first aspect, an embodiment provides a life detection car, including:
a vehicle main body provided with a running assembly that drives the life detection car, wherein the running assembly includes at least wheels;
a life-detection instrument arranged on the vehicle main body and configured to detect signs of life and output corresponding data, the data including life-sign characteristic information and its azimuth information;
a sensor module arranged on the vehicle main body and including at least a radar and an ultrasonic sensor, configured to detect the distances between the life detection car and surrounding objects;
at least four cameras arranged on the vehicle main body and configured to capture images of the surrounding environment; and
a processor arranged on the vehicle main body and connected to the running assembly, the life-detection instrument, the sensor module and the at least four cameras, configured to receive the data output by the life-detection instrument and, according to the azimuth information, control the running assembly so as to drive the life detection car toward the direction of the detected life signs; to perform localization and map building according to the distance information detected by the sensor module and the image data captured by the at least four cameras, plan a travel route for the life detection car, and control the life detection car to travel automatically; and further to identify images of people from the image data.
According to a second aspect, an embodiment provides a wearable device, including:
a data receiving module, configured to receive the trapped persons and their position information identified by the life detection car of the first aspect; and
a VR display module, connected to the data receiving module and configured to reconstruct the trapped persons and their position information in a virtual reality environment and play the corresponding video.
According to a third aspect, an embodiment provides a virtual reality detection system, including the life detection car of the first aspect and the wearable device of the second aspect.
According to the above embodiments, using the life signs detected by the life-detection instrument, the distances between the life detection car and surrounding objects detected by the sensor module, and the surrounding-environment images captured by the cameras, the processor drives the life detection car automatically toward the detected life signs and locates trapped persons and their precise positions, so that rescuers can understand the site conditions as if present in person while staying away from the hazardous environment, make accurate decisions about the next actions, and reduce unnecessary casualties.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of a life detection car according to an embodiment;
Fig. 2 is a block diagram of a life detection car according to an embodiment;
Fig. 3 is a block diagram of the cameras, the processor and the data output module of a life detection car according to an embodiment;
Fig. 4 is a block diagram of a wearable device according to an embodiment;
Fig. 5 is a schematic diagram of a virtual reality detection system provided by the present application.
Detailed description of the embodiments
The present invention is described in further detail below through embodiments with reference to the accompanying drawings. Similar components in different embodiments use related, similar reference numerals. In the following embodiments, many details are described so that the present application can be better understood. However, those skilled in the art will readily recognize that some of these features may be omitted in different cases or may be replaced by other elements, materials or methods. In some cases, certain operations related to the present application are not shown or described in the specification, in order to prevent the core of the present application from being obscured by excessive description; for those skilled in the art, a detailed description of these related operations is not necessary, since they can fully understand them from the description in the specification and the general technical knowledge in the field.
In addition, the features, operations or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Meanwhile, the steps or actions in the described methods may also be reordered or adjusted in a manner obvious to those skilled in the art. Therefore, the various orders in the specification and the drawings are only intended to describe a particular embodiment clearly and do not imply a necessary order, unless it is otherwise stated that a certain order must be followed.
Embodiment one:
Referring to Fig. 1 and Fig. 2, a life detection car provided by the present application includes a vehicle main body 1, a life-detection instrument 70, a sensor module 50, at least four cameras 20 and a processor 10.
The vehicle main body 1 is provided with a running assembly 60 that drives the life detection car. The running assembly 60 includes at least wheels 61; in some embodiments, the wheels 61 are universal wheels that can turn flexibly through 360 degrees.
The life-detection instrument 70 is arranged on the vehicle main body 1 and is configured to detect signs of life and output corresponding data, the data including life-sign characteristic information and its azimuth information, wherein the life-sign characteristic information includes at least a respiratory characteristic or an infrared radiation characteristic. Any object above absolute zero produces infrared radiation, and the human body is itself a natural source of infrared radiation; however, the infrared characteristics of the human body differ from those of the surrounding environment, and an infrared life-detection instrument exploits this difference to separate the target being searched for from the background by means of imaging. The life-detection instrument 70 can also determine whether life is present from the faint audio and vibration waves produced underground by trapped persons, such as moaning, calling, crawling or knocking.
The sensor module 50 is arranged on the vehicle main body 1 and includes at least a radar and an ultrasonic sensor, configured to detect the distances between the life detection car and surrounding objects. Unlike passive infrared obstacle avoidance, the present application uses ultrasonic obstacle avoidance and radar obstacle avoidance. The ultrasonic sensor can monitor long ranges in real time and search an open path for the life detection car: while the life detection car is still some distance away from an obstacle, the ultrasonic sensor already detects the relevant information and the life detection car is controlled to move away accordingly. Radar obstacle avoidance, being a laser-based ranging technique, satisfies both the precision and the speed requirements, and a laser radar measures distance better in the dark, which suits disaster rescue environments, since rescue often has to be carried out under poor lighting conditions.
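The avoidance behaviour itself is not specified in the patent. The following is a minimal sketch of a stop/slow/steer rule driven by one ultrasonic reading and a lidar scan; the thresholds, units and interfaces are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AvoidanceCommand:
    speed: float     # forward speed in m/s
    steering: float  # positive = steer left, negative = steer right (rad)

def avoid(ultrasonic_m: float, lidar_ranges_m: list,
          stop_dist: float = 0.5, slow_dist: float = 1.5) -> AvoidanceCommand:
    """Choose a speed/steering command; lidar_ranges_m spans the front, left to right."""
    nearest = min(min(lidar_ranges_m), ultrasonic_m)
    if nearest < stop_dist:                 # too close: stop
        return AvoidanceCommand(0.0, 0.0)

    if nearest < slow_dist:                 # slow down and steer toward free space
        mid = len(lidar_ranges_m) // 2
        steer_left = sum(lidar_ranges_m[:mid]) > sum(lidar_ranges_m[mid:])
        return AvoidanceCommand(0.3, 0.3 if steer_left else -0.3)

    return AvoidanceCommand(1.0, 0.0)       # path clear: drive straight
```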
The at least four cameras 20 are arranged on the vehicle main body 1 and are configured to capture images of the surrounding environment. To leave no blind spots, the at least four cameras 20 are fisheye cameras whose captured images can be stitched into 360-degree panoramic images, so that the situation in every corner around the life detection car is known and no trapped person is missed during rescue.
The processor 10 is arranged on the vehicle main body and is connected to the running assembly 60, the life-detection instrument 70, the sensor module 50 and the at least four cameras 20. It receives the data output by the life-detection instrument 70 and, according to the azimuth information, controls the running assembly 60 so as to drive the life detection car toward the direction of the detected life signs. It also performs localization and map building according to the distance information detected by the sensor module 50 and the image data captured by the at least four cameras 20, plans a travel route for the life detection car, and controls the life detection car to travel automatically. It is further configured to identify images of people from the image data captured by the at least four cameras 20.
It can thus be seen that, using the life signs detected by the life-detection instrument, the distances between the life detection car and surrounding objects detected by the sensor module, and the surrounding-environment images captured by the cameras, the processor drives the life detection car automatically toward the detected life signs and locates trapped persons and their precise positions, so that rescuers can understand the site conditions as if present in person while staying away from the hazardous environment, make accurate decisions about the next actions, and reduce unnecessary casualties.
In some embodiments, the vehicle main body 1 includes a front portion and a rear portion, each provided with two cameras. The image data captured by one front camera and one rear camera can be stitched into one 360-degree panoramic image by an image stitching algorithm, so the image data captured by the at least four cameras 20 of the present application can produce at least two 360-degree panoramic images after stitching. In a specific implementation, the main optical axes of the two cameras arranged at the front and at the rear, respectively, are horizontal.
Specifically, with reference to Fig. 2, the processor 10 includes an object recognition unit 17 and an autonomous driving unit 18. The object recognition unit 17 identifies trapped persons and their position information from the image data captured by the cameras 20. The autonomous driving unit 18 includes a SLAM module; according to the distances between the life detection car and surrounding objects detected by the sensor module 50 and the images captured by the at least four cameras 20, it performs localization and map building, plans a travel route for the life detection car, and controls the running assembly 60 to perform the corresponding actions so that the life detection car travels automatically. In some embodiments, the processor 10 further includes an obstacle avoidance unit 19, which sends control signals to the running assembly 60 for timely avoidance actions according to the distances between the life detection car and surrounding objects detected by the sensor module 50.
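Purely as an illustration of how these units could interact in one control cycle (every module name and interface below is an assumption, not the patented design):

```python
def control_cycle(life_detector, sensor_module, cameras, slam, planner, drive):
    """One illustrative processor cycle: sense, localize/map, plan, act."""
    life_present, azimuth = life_detector.read()      # life-detection instrument 70
    ranges = sensor_module.ranges()                   # radar + ultrasonic distances
    images = cameras.capture()                        # at least four fisheye frames

    pose, grid_map = slam.update(ranges, images)      # localization and map building

    if life_present:
        goal = planner.project_goal(pose, azimuth)    # head toward the life sign
    else:
        goal = planner.next_exploration_goal(grid_map, pose)

    path = planner.plan(grid_map, pose, goal)         # travel route
    drive.follow(path, ranges)                        # running assembly + avoidance
```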
As shown in Fig. 2 the life detection car of the application also includes:Data outputting module 40, data outputting module 40 and place
Reason device 10 is connected, the person in distress recognized for being wirelessly transferred and its positional information.
With reference to Fig. 3, in some embodiments the life detection car of the present application further includes a GPS positioning module 30, which is connected to the processor 10 and configured to obtain the current geographical position of the life detection car in real time. The processor 10 further includes an image processing module 12, a deep learning module 13 and a depth-of-field detection module 14.
The image processing module 12 is configured to obtain the image data captured by all of the cameras 20 and to perform image stitching using an image stitching algorithm, generating at least two 360-degree panoramic images. In one embodiment, the image processing module 12 further includes a vision module, which performs anti-distortion and denoising pre-processing before the image processing module 12 stitches the images captured by all of the cameras 20.
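The patent does not specify the anti-distortion, denoising or stitching algorithms. As an illustration only, this pre-processing and stitching step could be sketched with OpenCV as follows; the fisheye calibration parameters K and D are assumed to come from an offline calibration, and the high-level Stitcher is used simply for brevity.

```python
import cv2

def preprocess_and_stitch(frames, K, D):
    """Undistort fisheye frames, denoise them, and stitch them into one panorama.

    frames: list of BGR images from the fisheye cameras
    K, D:   fisheye intrinsic matrix and distortion coefficients (assumed known)
    """
    cleaned = []
    for img in frames:
        undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)  # anti-distortion
        denoised = cv2.fastNlMeansDenoisingColored(undistorted, None, 5, 5, 7, 21)
        cleaned.append(denoised)

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(cleaned)
    if status != cv2.Stitcher_OK:
        raise RuntimeError("stitching failed with status {}".format(status))
    return panorama
```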
The deep learning module 13 is connected to the image processing module 12 and uses a deep learning algorithm to identify images of trapped persons and of their surroundings from the at least two 360-degree panoramic images. The concept of deep learning originates from research on artificial neural networks: it builds neural networks that simulate the analysis and learning of the human brain and interprets data by imitating the brain's mechanisms. Deep learning forms more abstract high-level attribute classes or features by combining low-level features, so as to discover distributed feature representations of the image data and thereby identify the images of trapped persons and of their surroundings.
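The patent does not name a particular network. As one assumed possibility, an off-the-shelf pretrained person detector (here torchvision's Faster R-CNN) could play the role of the deep learning module:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # COCO-pretrained detector

def detect_people(panorama_bgr, score_threshold=0.7):
    """Return [x1, y1, x2, y2] boxes of people found in a panoramic image."""
    image = to_tensor(panorama_bgr[:, :, ::-1].copy())     # BGR -> RGB, HWC -> CHW
    with torch.no_grad():
        output = model([image])[0]
    boxes = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if label.item() == 1 and score.item() >= score_threshold:  # COCO class 1 = person
            boxes.append(box.tolist())
    return boxes
```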
The 360-degree panoramic images generated by the image processing module 12 are 2D images; the depth information of each identified trapped person cannot be read from them directly, so the specific spatial position of each object cannot be inferred from a single image. The depth-of-field detection module 14 is connected to the image processing module 12 and the deep learning module 13; by comparing the at least two 360-degree panoramic images and applying the triangulation rule, it obtains the depth information of the trapped persons identified by the deep learning module 13. From this depth information, the position of a trapped person relative to the life detection car can be calculated and, combined with the geographical position of the life detection car obtained by the GPS positioning module 30, the accurate geographical position of the trapped person is obtained. For example, if the current position of the life detection car is (xa, ya, za) and the relative offset of the trapped person derived from the depth information is (xb, yb, zb), then the accurate position of the trapped person is (xa + xb, ya + yb, za + zb).
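As a worked illustration of the triangulation rule on two panoramas captured a known baseline apart (the focal length, baseline and coordinates below are assumed example values, and the car position is taken to be already expressed in a local metric frame):

```python
def depth_from_disparity(x_left_px, x_right_px, focal_px, baseline_m):
    """Classic triangulation: depth = focal_length * baseline / disparity."""
    disparity = float(x_left_px - x_right_px)
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of both views")
    return focal_px * baseline_m / disparity

def trapped_person_position(car_position, relative_offset):
    """(xa, ya, za) + (xb, yb, zb) -> accurate position of the trapped person."""
    return tuple(c + r for c, r in zip(car_position, relative_offset))

# Assumed example: focal length 600 px, 0.5 m baseline, the person appears at
# x = 320 px in one panorama and x = 290 px in the other -> depth = 10.0 m.
depth = depth_from_disparity(320, 290, focal_px=600, baseline_m=0.5)
position = trapped_person_position((100.0, 200.0, 5.0), (10.0, 2.0, -1.0))  # (110.0, 202.0, 4.0)
```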
In one embodiment, the processor 10 further includes a data integration module 16, which is connected to the deep learning module 13, the depth-of-field detection module 14 and the GPS positioning module 30. According to the trapped persons and their surroundings identified by the deep learning module 13, the depth information calculated by the depth-of-field detection module 14 and the current geographical position obtained by the GPS positioning module 30, it generates a 360-degree panoramic video stream using a video generation algorithm; this panoramic video stream includes the sign information and the geographical positions of the trapped persons. In a specific embodiment, the 360-degree panoramic video stream data is wirelessly transmitted by the data output module 40. The processor 10 also includes a map building module, so that the sign information and the geographical positions of the trapped persons are presented in the video in the form of a map.
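For illustration, overlaying the detected persons' markers and positions on each panoramic frame before it is streamed might look like the following sketch; the detection data structure and drawing style are assumptions.

```python
import cv2

def annotate_frame(panorama, detections):
    """Draw a box and a position label for each detected trapped person.

    detections: list of dicts with keys "box" ([x1, y1, x2, y2]) and
                "geo" ((x, y, z) geographical position) -- an assumed structure.
    """
    frame = panorama.copy()
    for det in detections:
        x1, y1, x2, y2 = map(int, det["box"])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
        label = "person @ ({:.1f}, {:.1f}, {:.1f})".format(*det["geo"])
        cv2.putText(frame, label, (x1, max(0, y1 - 8)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    return frame
```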
It can thus be seen that, by superimposing the current geographical position, the trapped persons' sign information and depth information, and the trapped persons' surroundings on the 360-degree panoramic video, the present application can help rescuers understand the scene as if present in person while staying away from the hazardous environment, make more reasonable and efficient rescue deployments, precisely locate the positions of the trapped persons, avoid unnecessary casualties, and protect the personal safety of the rescuers.
With reference to Fig. 4, the present application correspondingly also provides a wearable device, including a data receiving module 24 and a VR display module 20, and in some embodiments also a regular display module 21, described in detail below.
The data receiving module 24 is configured to receive the trapped persons and their position information identified by the above life detection car. In some embodiments, what is received is the 360-degree panoramic video stream wirelessly transmitted by the above data output module 40, which includes the sign information and the geographical positions of the trapped persons.
The VR display module 20 is configured to reconstruct the trapped persons' information and their positions in a virtual reality environment and to play the video. In some embodiments, it reconstructs in the virtual reality environment the 360-degree panoramic video stream that includes the sign information and the geographical positions of the trapped persons, and plays the video.
The regular display module 21 is connected to the data receiving module 24 and is configured to play the 360-degree panoramic video stream on a regular display, so that rescuers can observe the positions of the trapped persons.
By wearing the wearable device (for example, VR glasses), rescuers can mark the positions of the trapped persons in advance and obtain accurate rescue information, avoiding rashly entering the hazardous environment, running into unforeseeable dangers and causing unnecessary casualties.
With reference to Fig. 5, based on the above life detection car and wearable device, the present invention further provides a virtual reality detection system, which includes the above life detection car and the above wearable device.
In summary, the virtual reality detection system can receive, through the wearable device, the 360-degree panoramic video stream wirelessly transmitted by the life detection car, broadcast the condition of the trapped persons in the hazardous environment live to a location outside it, allow the site environment to be understood as if present in person, and build a real-time map of the rescue site, so that rescuers have a clearer understanding of the rescue site, can precisely decide on the next actions, and avoid unnecessary casualties.
Embodiment two:
Based on the life detection car of Embodiment one, the present invention further provides a target positioning method for a life detection car (an illustrative pipeline sketch follows the list of steps). The method includes:
(1) capturing images of the surrounding environment with the at least four cameras, the captured images being stitchable into at least two 360-degree panoramic images;
(2) first performing anti-distortion and denoising on the images of the surrounding environment, then performing image stitching, and applying deep learning to the at least two stitched 360-degree panoramic images to identify the preset anti-terrorism target therein;
(3) comparing the at least two 360-degree panoramic images and, using the triangulation rule, obtaining the depth information of the anti-terrorism target identified by deep learning, so as to obtain the position of the anti-terrorism target relative to the life detection car;
(4) obtaining the current geographical position of the life detection car and, combined with the position of the anti-terrorism target relative to the life detection car, obtaining the accurate geographical position of the anti-terrorism target;
(5) building and presenting a corresponding map according to the sign information and the geographical position of the anti-terrorism target, and playing it in the form of video in a virtual reality environment through the wearable device.
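Tying steps (1)-(5) together, an illustrative end-to-end pipeline could look like the sketch below; it reuses the hypothetical helpers sketched earlier, and every interface (camera access, matching, projection, GPS) is an assumption rather than the patented implementation.

```python
def locate_targets(cameras, K, D, gps_module):
    """Illustrative pipeline for steps (1)-(5); all helpers are hypothetical."""
    # (1)-(2): capture, undistort/denoise, stitch two panoramas, detect targets
    pano_a = preprocess_and_stitch(cameras.capture_pair("front_back"), K, D)
    pano_b = preprocess_and_stitch(cameras.capture_pair("left_right"), K, D)
    detections = detect_people(pano_a)

    car_position = gps_module.current_position()              # (4) car geolocation
    results = []
    for box in detections:
        # (3): depth by triangulation between the two panoramas
        x_in_other = match_in_other_panorama(box, pano_b)      # hypothetical matcher
        depth = depth_from_disparity(box[0], x_in_other, focal_px=600, baseline_m=0.5)
        offset = offset_from_depth(box, depth)                 # hypothetical projection
        # (4): absolute geographical position of the target
        results.append(trapped_person_position(car_position, offset))

    # (5): map building and VR playback happen on the wearable-device side
    return results
```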
Those skilled in the art will understand that all or part of the functions of the methods in the above embodiments may be implemented by hardware or by a computer program. When all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include a read-only memory, a random access memory, a magnetic disk, an optical disc, a hard disk and the like, and the program is executed by a computer to realize the above functions. For example, the program may be stored in the memory of a device, and when the program in the memory is executed by a processor, all or part of the above functions can be realized. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash disk or a removable hard disk, and downloaded or copied into the memory of a local device, or used to update the system of the local device; when the program in the memory is executed by a processor, all or part of the functions in the above embodiments can be realized.
The present invention has been described above with reference to specific examples, which are only intended to help in understanding the present invention and are not intended to limit it. For those skilled in the art, several simple deductions, variations or substitutions may also be made according to the idea of the present invention.
Claims (10)
1. A life detection car, characterized by comprising:
a vehicle main body provided with a running assembly that drives the life detection car, wherein the running assembly includes at least wheels;
a life-detection instrument arranged on the vehicle main body and configured to detect signs of life and output corresponding data, the data including life-sign characteristic information and its azimuth information;
a sensor module arranged on the vehicle main body and including at least a radar and an ultrasonic sensor, configured to detect the distances between the life detection car and surrounding objects;
at least four cameras arranged on the vehicle main body and configured to capture images of the surrounding environment; and
a processor arranged on the vehicle main body and connected to the running assembly, the life-detection instrument, the sensor module and the at least four cameras, configured to receive the data output by the life-detection instrument and, according to the azimuth information, control the running assembly so as to drive the life detection car toward the direction of the detected life signs; to perform localization and map building according to the distance information detected by the sensor module and the image data captured by the at least four cameras, plan a travel route for the life detection car, and control the life detection car to travel automatically; and further to identify images of people from the image data.
2. The life detection car according to claim 1, characterized by further comprising: a data output module connected to the processor and configured to wirelessly transmit the identified trapped persons and their position information.
3. The life detection car according to claim 1, characterized by further comprising: a GPS positioning module connected to the processor and configured to obtain the current geographical position of the life detection car in real time.
4. The life detection car according to claim 1, characterized in that the processor is configured to, according to the distances between the life detection car and surrounding objects detected by the sensor module 50, send control signals to control the running assembly 60 to perform corresponding obstacle avoidance actions.
5. The life detection car according to claim 1, characterized in that the vehicle main body includes a front portion and a rear portion, each provided with two cameras, wherein the image data captured by one front camera and one rear camera can produce one 360-degree panoramic image after image stitching, and the image data captured by the at least four cameras can produce at least two 360-degree panoramic images after image stitching.
6. The life detection car according to claim 4, characterized in that the processor further comprises:
an image processing module configured to obtain the image data captured by the at least four cameras and perform image stitching to generate at least two 360-degree panoramic images;
a deep learning module connected to the image processing module and configured to identify the images of trapped persons and of their surroundings from the at least two 360-degree panoramic images; and
a depth-of-field detection module connected to the image processing module and the deep learning module and configured to compare the at least two 360-degree panoramic images and, using the triangulation rule, obtain the depth information of the trapped persons identified by the deep learning module.
7. The life detection car according to claim 5, characterized in that the image processing module further comprises a vision module, which is configured to perform anti-distortion and denoising pre-processing before the image processing module stitches the images captured by the at least four cameras.
8. The life detection car according to claim 6 or 7, characterized in that the processor further comprises: a data integration module connected to the deep learning module, the depth-of-field detection module and the GPS positioning module, and configured to generate a 360-degree panoramic video stream according to the trapped persons identified by the deep learning module, the depth information calculated by the depth-of-field detection module and the current geographical position obtained by the GPS positioning module, the 360-degree panoramic video stream including the sign information and the geographical positions of the trapped persons.
9. A wearable device, characterized by comprising:
a data receiving module configured to receive the trapped persons and their position information identified by the life detection car according to claim 1; and
a VR display module connected to the data receiving module and configured to reconstruct the trapped persons' information and their positions in a virtual reality environment and play the corresponding video.
10. A virtual reality detection system, characterized by comprising the life detection car according to claim 1 and the wearable device according to claim 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710725905.9A CN107329478A (en) | 2017-08-22 | 2017-08-22 | A kind of life detection car, wearable device and virtual reality detection system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107329478A true CN107329478A (en) | 2017-11-07 |
Family
ID=60228157
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710725905.9A Withdrawn CN107329478A (en) | 2017-08-22 | 2017-08-22 | A kind of life detection car, wearable device and virtual reality detection system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107329478A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108114387A (en) * | 2017-11-28 | 2018-06-05 | 歌尔科技有限公司 | A kind of method that robot is sued and laboured in earthquake and earthquake is sued and laboured |
CN109862112A (en) * | 2019-03-11 | 2019-06-07 | 上海救要救信息科技有限公司 | A kind of rescue mode and equipment |
US10390003B1 (en) | 2016-08-29 | 2019-08-20 | Perceptln Shenzhen Limited | Visual-inertial positional awareness for autonomous and non-autonomous device |
US10437252B1 (en) | 2017-09-08 | 2019-10-08 | Perceptln Shenzhen Limited | High-precision multi-layer visual and semantic map for autonomous driving |
CN111174765A (en) * | 2020-02-24 | 2020-05-19 | 北京航天飞行控制中心 | Planet vehicle target detection control method and device based on visual guidance |
US10794710B1 (en) | 2017-09-08 | 2020-10-06 | Perceptin Shenzhen Limited | High-precision multi-layer visual and semantic map by autonomous units |
US11328158B2 (en) | 2016-08-29 | 2022-05-10 | Trifo, Inc. | Visual-inertial positional awareness for autonomous and non-autonomous tracking |
US11774983B1 (en) | 2019-01-02 | 2023-10-03 | Trifo, Inc. | Autonomous platform guidance systems with unknown environment mapping |
US11842500B2 (en) | 2016-08-29 | 2023-12-12 | Trifo, Inc. | Fault-tolerance to provide robust tracking for autonomous and non-autonomous positional awareness |
US11953910B2 (en) | 2016-08-29 | 2024-04-09 | Trifo, Inc. | Autonomous platform guidance systems with task planning and obstacle avoidance |
US12158344B2 (en) | 2016-08-29 | 2024-12-03 | Trifo, Inc. | Mapping in autonomous and non-autonomous platforms |
US12181888B2 (en) | 2017-06-14 | 2024-12-31 | Trifo, Inc. | Monocular modes for autonomous platform guidance systems with auxiliary sensors |
-
2017
- 2017-08-22 CN CN201710725905.9A patent/CN107329478A/en not_active Withdrawn
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102211624A (en) * | 2011-04-27 | 2011-10-12 | 中国地质大学(武汉) | Searching and rescuing positioning system robot |
CN205468610U (en) * | 2016-01-19 | 2016-08-17 | 杨凯名 | Intelligent detection rescue car |
CN106737544A (en) * | 2016-11-22 | 2017-05-31 | 电子科技大学 | Searching machine people based on various biosensors and 3D cameras |
CN106909158A (en) * | 2017-04-21 | 2017-06-30 | 上海海事大学 | A kind of all-around mobile environment VR image detection cars |
CN107050709A (en) * | 2017-04-28 | 2017-08-18 | 苏州亮磊知识产权运营有限公司 | It is a kind of for the automatic machinery people of fire scene rescue and its control method |
CN207367052U (en) * | 2017-08-22 | 2018-05-15 | 深圳普思英察科技有限公司 | A kind of life detection car, wearable device and virtual reality detection system |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11328158B2 (en) | 2016-08-29 | 2022-05-10 | Trifo, Inc. | Visual-inertial positional awareness for autonomous and non-autonomous tracking |
US10390003B1 (en) | 2016-08-29 | 2019-08-20 | Perceptln Shenzhen Limited | Visual-inertial positional awareness for autonomous and non-autonomous device |
US11953910B2 (en) | 2016-08-29 | 2024-04-09 | Trifo, Inc. | Autonomous platform guidance systems with task planning and obstacle avoidance |
US12158344B2 (en) | 2016-08-29 | 2024-12-03 | Trifo, Inc. | Mapping in autonomous and non-autonomous platforms |
US11900536B2 (en) | 2016-08-29 | 2024-02-13 | Trifo, Inc. | Visual-inertial positional awareness for autonomous and non-autonomous tracking |
US11842500B2 (en) | 2016-08-29 | 2023-12-12 | Trifo, Inc. | Fault-tolerance to provide robust tracking for autonomous and non-autonomous positional awareness |
US12181888B2 (en) | 2017-06-14 | 2024-12-31 | Trifo, Inc. | Monocular modes for autonomous platform guidance systems with auxiliary sensors |
US10437252B1 (en) | 2017-09-08 | 2019-10-08 | Perceptln Shenzhen Limited | High-precision multi-layer visual and semantic map for autonomous driving |
US10794710B1 (en) | 2017-09-08 | 2020-10-06 | Perceptin Shenzhen Limited | High-precision multi-layer visual and semantic map by autonomous units |
CN108114387A (en) * | 2017-11-28 | 2018-06-05 | 歌尔科技有限公司 | A kind of method that robot is sued and laboured in earthquake and earthquake is sued and laboured |
CN108114387B (en) * | 2017-11-28 | 2021-05-14 | 歌尔科技有限公司 | Earthquake rescue robot and earthquake rescue method |
US12105518B1 (en) | 2019-01-02 | 2024-10-01 | Trifo, Inc. | Autonomous platform guidance systems with unknown environment mapping |
US11774983B1 (en) | 2019-01-02 | 2023-10-03 | Trifo, Inc. | Autonomous platform guidance systems with unknown environment mapping |
CN109862112B (en) * | 2019-03-11 | 2021-10-26 | 上海救要救信息科技有限公司 | Rescue method and device |
CN109862112A (en) * | 2019-03-11 | 2019-06-07 | 上海救要救信息科技有限公司 | A kind of rescue mode and equipment |
CN111174765B (en) * | 2020-02-24 | 2021-08-13 | 北京航天飞行控制中心 | Target detection control method and device for planetary vehicle based on vision guidance |
CN111174765A (en) * | 2020-02-24 | 2020-05-19 | 北京航天飞行控制中心 | Planet vehicle target detection control method and device based on visual guidance |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN207367052U (en) | A kind of life detection car, wearable device and virtual reality detection system | |
CN107329478A (en) | A kind of life detection car, wearable device and virtual reality detection system | |
CN207164772U (en) | A kind of anti-terrorism SEEK BUS, wearable device and virtual reality detection system | |
US12097955B2 (en) | Drone device security system for protecting a package | |
EP3967972B1 (en) | Positioning method, device, and computer-readable storage medium | |
CN107273881A (en) | A kind of anti-terrorism SEEK BUS, wearable device and virtual reality detection system | |
USRE42690E1 (en) | Abnormality detection and surveillance system | |
CN102447911B (en) | Image acquisition unit, its method and associated control element | |
CN111460865B (en) | Driving support method, driving support system, computing device, and storage medium | |
Bai et al. | A cloud and vision-based navigation system used for blind people | |
CN109840586A (en) | To the real-time detection and correction based on deep learning of problematic sensor in autonomous machine | |
Li et al. | An improved traffic lights recognition algorithm for autonomous driving in complex scenarios | |
RU2755603C2 (en) | System and method for detecting and countering unmanned aerial vehicles | |
RU2746090C2 (en) | System and method of protection against unmanned aerial vehicles in airspace settlement | |
CN108549862A (en) | Abnormal scene detection method and device | |
US20230419843A1 (en) | Unmanned aerial vehicle dispatching method, server, base station, system, and readable storage medium | |
US20220157178A1 (en) | Disaster and emergency surveillance using a distributed fleet of autonomous robots | |
JP2019028807A (en) | Mobile platform, information output method, program, and recording medium | |
Siewert et al. | Drone net architecture for UAS traffic management multi-modal sensor networking experiments | |
Hanna et al. | Using unmanned aerial vehicles (UAVs) in locating wandering patients with dementia | |
US10025798B2 (en) | Location-based image retrieval | |
JP2000163671A (en) | Crisis management system | |
Manjari et al. | CREATION: Computational constRained travEl aid for objecT detection in outdoor eNvironment | |
EP3798907A1 (en) | System and method for detecting unmanned aerial vehicles | |
JP2022108823A (en) | Search assistance system and rescue assistance program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20171107 |