
CN101701828B - Blind autonomous navigation method based on stereoscopic vision and information fusion - Google Patents


Info

Publication number
CN101701828B
CN101701828B (application CN200910234985A)
Authority
CN
China
Prior art keywords
image
output
obtains
information fusion
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200910234985A
Other languages
Chinese (zh)
Other versions
CN101701828A (en)
Inventor
Zhou Dongxiang
Liu Yunhui
Liu Shun
Yang Yanguang
Fan Caizhi
Cai Xuanping
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHANGZHOU HYPER MEDIUM AND SENSING TECHNOLOGY INSTITUTE Co Ltd
National University of Defense Technology
Original Assignee
CHANGZHOU HYPER MEDIUM AND SENSING TECHNOLOGY INSTITUTE Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHANGZHOU HYPER MEDIUM AND SENSING TECHNOLOGY INSTITUTE Co Ltd filed Critical CHANGZHOU HYPER MEDIUM AND SENSING TECHNOLOGY INSTITUTE Co Ltd
Priority to CN200910234985A priority Critical patent/CN101701828B/en
Publication of CN101701828A publication Critical patent/CN101701828A/en
Application granted granted Critical
Publication of CN101701828B publication Critical patent/CN101701828B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Processing Or Creating Images (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)
  • Traffic Control Systems (AREA)

Abstract


The invention relates to the technical field of navigation systems, and in particular to an autonomous navigation method for blind people based on stereo vision and information fusion. The left and right images are fed to image preprocessing, whose output is connected to image segmentation; segmentation is connected to feature extraction, feature extraction to stereo matching, and stereo matching to disparity-map computation. From the disparity map both a depth image and target detection and identification are obtained, the depth image also feeding target detection and identification. Target detection and identification feed the calculation of target size, distance and direction, which in turn feeds multi-sensor information fusion; ultrasonic ranging, electronic-compass heading and GPS positioning also feed the fusion stage, whose output is spoken to the user as the position, size, distance and direction of obstacles. The algorithm has great practical value for improving the autonomy and independent walking ability of blind people.

Description

Blind autonomous navigation method based on stereoscopic vision and information fusion
Technical field
The present invention relates to a navigation system, and in particular to a blind autonomous navigation method based on stereoscopic vision and information fusion.
Background technology
China has more than ten million visually impaired people. When travelling they face complex and changing traffic conditions; the growing number of motor vehicles and the many obstacles on the road all threaten blind pedestrians. A blind person's travel falls into two categories, walking and longer journeys by vehicle; for walking, a blind person needs a navigation aid or a guide dog.
Stereoscopic vision (stereo vision) is one of the core technologies of computer vision; it can be used to recover the depth of objects in the environment from images. Binocular vision is currently a widely studied three-dimensional perception approach, with advantages such as compact structure, low power consumption, rich information content and relatively high recognition accuracy. However, guide aids designed around stereo vision are sensitive to ambient lighting and occlusion; the noise produced by stereo matching makes the main target hard to separate from the background, which increases the obstacle false-detection rate, and the recovered target depth often has large errors. Ultrasonic sensors offer high ranging precision but are of limited use in complex environments. Most existing guide aids rely mainly on one of these two sensors, supplemented by others for obstacle detection, and lack an autonomous navigation algorithm that effectively combines the complementary strengths of different sensors.
Summary of the invention
The technical problem addressed by the present invention is that existing blind navigation devices lack an autonomous navigation algorithm that exploits the complementary strengths of different sensors. The invention therefore provides a blind autonomous navigation method based on stereoscopic vision and information fusion.
To overcome the defects of the background art, the technical solution adopted by the present invention is as follows. The invention comprises a low-level driver and data-communication module, an embedded hardware platform, an I/O interface driver module and a Linux real-time operating system. It focuses on stereoscopic vision and multi-sensor information fusion and proceeds in the following steps: the left and right images are fed to image preprocessing; the preprocessing output is fed to image segmentation; segmentation is connected to feature extraction; feature extraction feeds stereo matching; stereo matching yields a disparity map; from the disparity map a depth image is obtained and target detection and identification is performed, the depth image also feeding target detection and identification; target detection and identification feed the calculation of target size, distance and direction parameters; these parameters feed multi-sensor information fusion; the fusion output drives voice output of the position, size, distance and direction of obstacles; and ultrasonic ranging, electronic-compass direction finding and GPS positioning also feed the multi-sensor information fusion.
According to another embodiment of the invention, the image preprocessing comprises image filtering, image enhancement, image calibration and colour correction.
According to another embodiment of the invention, the image segmentation uses fixed-threshold segmentation in the HSV colour space together with methods such as edge detection and region-template matching to separate the road surface from the image; noise blobs and other suspect targets are rejected by shape filtering, and connected-component analysis (also called Blob analysis) is then performed with a chain-code tracking algorithm.
According to another embodiment of the invention, the feature extraction extracts features common to the left and right images.
According to another embodiment of the invention, the stereo matching finds correspondences between the two images based on features, regions and correlation, and uses the SIFT algorithm (invariant to brightness, translation, rotation and scale) to register the images captured by the left and right cameras.
According to another embodiment of the invention, the disparity map is smoothed and classified to obtain separate disparity maps for the road surface and for obstacles; the latter is used to obtain the depth information of obstacles.
According to another embodiment of the invention, the depth image is obtained from the disparity and the camera geometry, yielding the three-dimensional information of the target.
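The disparity-to-depth step follows the standard rectified-stereo relation Z = f·B/d, with X and Y from the pinhole model. The focal length, baseline and principal point below are illustrative assumptions, not values from the patent:

```python
def point_from_disparity(u, v, d, f, baseline, cx, cy):
    """Triangulate a 3-D point from pixel (u, v) with disparity d
    (f, d in pixels; baseline in metres) for a rectified stereo pair."""
    Z = f * baseline / d          # depth along the optical axis
    X = (u - cx) * Z / f          # lateral offset
    Y = (v - cy) * Z / f          # vertical offset
    return X, Y, Z

# Illustrative numbers: f = 700 px, baseline B = 0.12 m,
# principal point (320, 240).
X, Y, Z = point_from_disparity(u=390, v=240, d=14.0,
                               f=700.0, baseline=0.12,
                               cx=320.0, cy=240.0)
# Z = 700 * 0.12 / 14 = 6.0 m; X = 70 * 6 / 700 = 0.6 m; Y = 0.
```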
According to another embodiment of the invention, the target detection and identification exploits the fact that points on the same obstacle share the same disparity value, so obstacles are identified from the disparity map.
According to another embodiment of the invention, the target size, distance and direction parameters are calculated after the target position coordinates are obtained: the corresponding blob is found, and the distance, size and direction of the obstacle are then estimated.
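One way the distance, size and direction parameters can be estimated from a blob's bounding box and mean disparity, a sketch under the same assumed rectified-pinhole camera parameters (the numbers are illustrative, not from the patent):

```python
import math

def obstacle_params(bbox, disparity, f, baseline, cx):
    """Estimate distance, physical width and bearing of an obstacle
    from its image bounding box and mean disparity."""
    x, y, w, h = bbox
    Z = f * baseline / disparity          # distance along the optical axis
    width_m = w * Z / f                   # back-project the pixel width
    u_center = x + w / 2.0
    # Bearing relative to the camera axis; positive = right of centre.
    # A blob centred f pixels off-axis would give a 45 degree bearing.
    bearing = math.degrees(math.atan2(u_center - cx, f))
    return Z, width_m, bearing

# Illustrative: a 70 px wide, on-axis blob with 14 px disparity,
# f = 700 px, baseline 0.12 m, principal point cx = 320.
Z, width_m, bearing = obstacle_params(bbox=(285, 200, 70, 120),
                                      disparity=14.0, f=700.0,
                                      baseline=0.12, cx=320.0)
```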
According to another embodiment of the invention, the multi-sensor information fusion combines the relative position, direction, size and distance of obstacles obtained from stereoscopic vision with the range information from the ultrasonic sensor, the heading information from the electronic compass and the absolute position from GPS, further improving the ranging precision and the localisation and orientation capability of the guide system.
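The patent does not specify a fusion rule; one common minimal choice is inverse-variance weighting, in which each sensor's range estimate is weighted by its assumed precision. The sensor noise figures below are illustrative assumptions:

```python
def fuse_ranges(measurements):
    """Inverse-variance weighted fusion of range estimates
    (value, sigma) from different sensors, e.g. stereo vision and
    ultrasonic: more precise sensors get proportionally more weight."""
    num = sum(z / s ** 2 for z, s in measurements)
    den = sum(1.0 / s ** 2 for _, s in measurements)
    fused = num / den
    fused_sigma = (1.0 / den) ** 0.5   # fused estimate is more precise
    return fused, fused_sigma

# Illustrative: stereo says 6.0 m (sigma 0.5 m), ultrasonic says
# 5.8 m (sigma 0.1 m); the fused estimate stays close to the more
# precise ultrasonic reading.
fused, sigma = fuse_ranges([(6.0, 0.5), (5.8, 0.1)])
```

This matches the complementary-strengths idea in the text: the precise but short-range ultrasonic reading dominates the fused distance, while stereo vision still contributes position, size and direction.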
The beneficial effects of the invention are as follows: the algorithm improves the performance of the guide system and has great practical value for improving the autonomy and independent walking ability of blind people. It can effectively improve their quality of life and work, widen their circle of communication, expand their world and help them adapt to the information society; the application prospects are clear and the economic and social benefits considerable.
Description of drawings
The present invention is further described below with reference to the accompanying drawings and embodiments.
Fig. 1 is a schematic diagram of the algorithm flow of the present invention;
Fig. 2 is a schematic diagram of the system software of the present invention.
Embodiment
The present invention is now described in further detail with reference to the accompanying drawings and a preferred embodiment. The drawings are simplified schematics that illustrate only the basic structure of the invention, so they show only the parts relevant to it.
Fig. 2 is a schematic diagram of the system software of the present invention. The software is built on an embedded hardware platform and a Linux real-time operating system and is divided into multi-sensor data acquisition, data communication and storage, information processing and fusion, and information display and user interface. The navigation algorithm focuses on stereoscopic vision and multi-sensor information fusion; its flow is shown in Fig. 1: the left and right images are fed to image preprocessing; the preprocessing output is fed to image segmentation; segmentation is connected to feature extraction; feature extraction feeds stereo matching; stereo matching yields a disparity map; from the disparity map a depth image is obtained and target detection and identification is performed, the depth image also feeding target detection and identification; target detection and identification feed the calculation of target size, distance and direction parameters; these parameters feed multi-sensor information fusion; the fusion output drives voice output of the position, size, distance and direction of obstacles; and ultrasonic ranging, electronic-compass direction finding and GPS positioning also feed the multi-sensor information fusion.
The present invention mainly comprises the following steps. Image preprocessing comprises image filtering, image enhancement, image calibration and colour correction. Image segmentation uses fixed-threshold segmentation in the HSV colour space together with methods such as edge detection and region-template matching to separate the road surface from the image; noise blobs and other suspect targets are rejected by shape filtering, and connected-component (Blob) analysis is then performed with a chain-code tracking algorithm. Feature extraction extracts features common to the left and right images. Stereo matching finds correspondences between the two images based on features, regions and correlation; the SIFT algorithm (invariant to brightness, translation, rotation and scale) can be used to register the images captured by the left and right cameras. The disparity map is smoothed and classified to obtain separate disparity maps for the road surface and for obstacles; the latter yields the depth information of obstacles. The depth image is obtained from the disparity and the camera geometry, giving the three-dimensional information of the target. Target detection and identification exploits the fact that points on the same obstacle share the same disparity value, so obstacles can be identified from the disparity map. Parameter calculation: after the target position coordinates are obtained, the corresponding blob is found and the distance, size and direction of the obstacle are estimated. Multi-sensor information fusion combines the relative position, direction, size and distance of obstacles obtained from stereoscopic vision with the range information from the ultrasonic sensor, the heading information from the electronic compass and the absolute position from GPS, further improving the ranging precision and the localisation and orientation capability of the guide system. Information output: the position, size, shape, distance and direction of obstacles are obtained and communicated to the user by voice or vibration.
The algorithm of the present invention improves the performance of the guide system and has great practical value for improving the autonomy and independent walking ability of blind people. It can effectively improve their quality of life and work, widen their circle of communication, expand their world and help them adapt to the information society; the application prospects are clear and the economic and social benefits considerable.
Guided by the ideal embodiment described above, those skilled in the art can make various changes and modifications without departing from the technical idea of the invention. The technical scope of the invention is not limited to the content of the specification and must be determined according to the scope of the claims.

Claims (1)

1. A blind autonomous navigation method based on stereoscopic vision and information fusion, comprising a low-level driver and data-communication module, an embedded hardware platform, an I/O interface driver module and a Linux real-time operating system, characterised in that the method focuses on stereoscopic vision and multi-sensor information fusion and proceeds in the following steps: the left and right images are fed to image preprocessing; the preprocessing output is fed to image segmentation; segmentation is connected to feature extraction; feature extraction feeds stereo matching; stereo matching yields a disparity map; from the disparity map a depth image is obtained and target detection and identification is performed, the depth image also feeding target detection and identification; target detection and identification feed the calculation of target size, distance and direction parameters; these parameters feed multi-sensor information fusion; the fusion output drives voice output of the position, size, distance and direction of obstacles; and ultrasonic ranging, electronic-compass direction finding and GPS positioning also feed the multi-sensor information fusion; wherein the image preprocessing comprises image filtering, image enhancement, image calibration and colour correction; the image segmentation uses fixed-threshold segmentation in the HSV colour space together with edge detection and region-template matching to separate the road surface from the image, rejects noise blobs and other suspect targets by shape filtering, and then performs connected-component (Blob) analysis with a chain-code tracking algorithm; the feature extraction extracts features common to the left and right images; the stereo matching finds correspondences between the two images based on features, regions and correlation, and uses the SIFT algorithm to register the images captured by the left and right cameras; the disparity map is smoothed and classified to obtain separate disparity maps for the road surface and for obstacles, the latter yielding the depth information of obstacles; the depth image is obtained from the disparity and the camera geometry, giving the three-dimensional information of the target; the target detection and identification exploits the fact that points on the same obstacle share the same disparity value, identifying obstacles from the disparity map; the target size, distance and direction parameters are calculated after the target position coordinates are obtained, by finding the corresponding blob and then estimating the distance, size and direction of the obstacle; and the multi-sensor information fusion combines the relative position, direction, size and distance of obstacles obtained from stereoscopic vision with the range information from the ultrasonic sensor, the heading information from the electronic compass and the absolute position from GPS, further improving the ranging precision and the localisation and orientation capability of the guide system.
CN200910234985A 2009-11-23 2009-11-23 Blind autonomous navigation method based on stereoscopic vision and information fusion Expired - Fee Related CN101701828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910234985A CN101701828B (en) 2009-11-23 2009-11-23 Blind autonomous navigation method based on stereoscopic vision and information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910234985A CN101701828B (en) 2009-11-23 2009-11-23 Blind autonomous navigation method based on stereoscopic vision and information fusion

Publications (2)

Publication Number Publication Date
CN101701828A CN101701828A (en) 2010-05-05
CN101701828B true CN101701828B (en) 2012-10-03

Family

ID=42156748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910234985A Expired - Fee Related CN101701828B (en) 2009-11-23 2009-11-23 Blind autonomous navigation method based on stereoscopic vision and information fusion

Country Status (1)

Country Link
CN (1) CN101701828B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201219752A (en) * 2010-11-08 2012-05-16 Ind Tech Res Inst Automatic navigation method and automatic navigation system
US9098908B2 (en) 2011-10-21 2015-08-04 Microsoft Technology Licensing, Llc Generating a depth map
CN103284866A (en) * 2012-02-24 2013-09-11 鸿富锦精密工业(深圳)有限公司 Walking auxiliary system and walking auxiliary method
DE102012209316A1 (en) * 2012-06-01 2013-12-05 Robert Bosch Gmbh Method and device for processing sensor data of a stereo sensor system
CN103971356B (en) * 2013-02-04 2017-09-08 腾讯科技(深圳)有限公司 Street view image Target Segmentation method and device based on parallax information
CN103750986A (en) * 2014-01-24 2014-04-30 成都万先自动化科技有限责任公司 Blind-guide suit
US10028102B2 (en) * 2014-12-26 2018-07-17 Here Global B.V. Localization of a device using multilateration
JP6899369B2 (en) * 2015-08-03 2021-07-07 トムトム グローバル コンテント ベスローテン フエンノートシャップ Methods and systems for generating and using localization reference data
CN105342816A (en) * 2015-11-24 2016-02-24 上海斐讯数据通信技术有限公司 Real-time obstacle avoidance system for guiding blind
CN106909141A (en) * 2015-12-23 2017-06-30 北京机电工程研究所 Obstacle detection positioner and obstacle avoidance system
CN105698763A (en) * 2016-01-22 2016-06-22 吉林大学 Device and method for detecting barriers through stereoscopic vision
CN105997448B (en) * 2016-04-30 2019-04-26 中国海洋大学 Frequency Domain Projection Ultrasonic Echo Location Guide for the Blind
CN106197452A (en) * 2016-07-21 2016-12-07 触景无限科技(北京)有限公司 A kind of visual pattern processing equipment and system
CN106197429A (en) * 2016-07-21 2016-12-07 触景无限科技(北京)有限公司 A kind of Multi-information acquisition location equipment and system
CN106289290A (en) * 2016-07-21 2017-01-04 触景无限科技(北京)有限公司 A kind of path guiding system and method
CN106289232B (en) * 2016-07-24 2019-06-21 广东大仓机器人科技有限公司 A robot obstacle avoidance method based on depth sensor
CN106408613A (en) * 2016-09-18 2017-02-15 合肥视尔信息科技有限公司 Stereoscopic vision building method suitable for virtual lawsuit advisor
JP7216672B2 (en) * 2017-07-12 2023-02-01 ジェンテックス コーポレイション Visual, Depth, and Microvibration Data Extraction Using an Integrated Imager
CN107610152B (en) * 2017-08-31 2020-02-28 杭州视氪科技有限公司 Passage detection method for avoiding water surface and obstacles
CN107909009B (en) * 2017-10-27 2021-09-17 北京中科慧眼科技有限公司 Obstacle detection method and device based on road surface learning
CN108012325B (en) * 2017-10-30 2021-01-19 上海神添实业有限公司 Navigation positioning method based on UWB and binocular vision
CN107802468B (en) * 2017-11-14 2020-01-10 石化盈科信息技术有限责任公司 Blind guiding method and blind guiding system
CN108627816A (en) * 2018-02-28 2018-10-09 沈阳上博智像科技有限公司 Image distance measuring method, device, storage medium and electronic equipment
CN109931946A (en) * 2019-04-10 2019-06-25 福州大学 Visual ranging navigation method for blind people based on Android smartphone
CN110194085B (en) * 2019-05-12 2019-11-19 猫头鹰安防科技有限公司 The electron assistant platform of view-based access control model detection
CN110346004B (en) * 2019-08-16 2020-08-21 杭州山科智能科技股份有限公司 Flow measurement data fusion method of dual-channel ultrasonic time difference method
CN110470307A (en) * 2019-08-28 2019-11-19 中国科学院长春光学精密机械与物理研究所 A kind of visually impaired patient navigation system and method
CN113108780A (en) * 2021-03-30 2021-07-13 沈奥 Unmanned ship autonomous navigation method based on visual inertial navigation SLAM algorithm
CN113420704A (en) * 2021-06-18 2021-09-21 北京盈迪曼德科技有限公司 Object identification method and device based on visual sensor and robot
CN116549218B (en) * 2023-05-12 2024-12-03 江西恒必达实业有限公司 Intelligent blind guiding glasses based on obstacle monitoring and reminding

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5107467A (en) * 1990-04-13 1992-04-21 Jorson Enterprises, Inc. Echo location system for vision-impaired persons
US6198395B1 (en) * 1998-02-09 2001-03-06 Gary E. Sussman Sensor for sight impaired individuals
CN2500297Y (en) * 2001-09-13 2002-07-17 潘爱武 Ultrasonic echo navigator for blind person
CN101368828A (en) * 2008-10-15 2009-02-18 同济大学 Navigation method and system for the blind based on computer vision
CN101483806A (en) * 2009-02-24 2009-07-15 南京师范大学 Outdoor blind guidance service system and method oriented to blind disturbance people

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5107467A (en) * 1990-04-13 1992-04-21 Jorson Enterprises, Inc. Echo location system for vision-impaired persons
US6198395B1 (en) * 1998-02-09 2001-03-06 Gary E. Sussman Sensor for sight impaired individuals
CN2500297Y (en) * 2001-09-13 2002-07-17 潘爱武 Ultrasonic echo navigator for blind person
CN101368828A (en) * 2008-10-15 2009-02-18 同济大学 Navigation method and system for the blind based on computer vision
CN101483806A (en) * 2009-02-24 2009-07-15 南京师范大学 Outdoor blind guidance service system and method oriented to blind disturbance people

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Chenhao et al., "Blob-based vehicle recognition and tracking algorithm", Proceedings of the 12th National Conference on Signal Processing (CCSP-2005), 2005, full text. *
Huang Sui et al., "Chain-code tracking algorithm based on edge matrix", Journal of Shihezi University (Natural Science), 2005, vol. 23, no. 5, full text. *

Also Published As

Publication number Publication date
CN101701828A (en) 2010-05-05

Similar Documents

Publication Publication Date Title
CN101701828B (en) Blind autonomous navigation method based on stereoscopic vision and information fusion
CN112101128B (en) A perception planning method for unmanned formula racing car based on multi-sensor information fusion
CN108196535B (en) Automatic driving system based on reinforcement learning and multi-sensor fusion
CN105946853B (en) The system and method for long range automatic parking based on Multi-sensor Fusion
Schreiber et al. Laneloc: Lane marking based localization using highly accurate maps
EP3647734A1 (en) Automatic generation of dimensionally reduced maps and spatiotemporal localization for navigation of a vehicle
WO2018133851A1 (en) Point cloud data processing method and apparatus, and computer storage medium
EP2574958B1 (en) Road-terrain detection method and system for driver assistance systems
CN113903011B (en) Semantic map construction and positioning method suitable for indoor parking lot
CN103760569B (en) A kind of drivable region detection method based on laser radar
CN111837014A (en) System and method for anonymizing navigation information
CN107389084B (en) Driving path planning method and storage medium
CN107392103A (en) The detection method and device of road surface lane line, electronic equipment
CN110390831A (en) route determination device
CN107657825A (en) Park method and device
CN114119896B (en) Driving path planning method
US20220266824A1 (en) Road information generation apparatus
CN116147613A (en) Method and system for generating and locating environmental models using cross sensor feature point references
US12025714B2 (en) Method and apparatus for determining location by correcting global navigation satellite system based location and electronic device thereof
CN116543043A (en) Method and processor device for locating a motor vehicle in an environment during driving, and correspondingly equipped motor vehicle
CN115112130A (en) Vehicle position estimation device
CN118111442A (en) Navigation system based on virtual three-dimensional live-action indoor and outdoor and vehicle searching guidance
US20230314162A1 (en) Map generation apparatus
JP4957021B2 (en) VEHICLE MAP DATA CREATION DEVICE AND VEHICLE MAP DATA UPDATE DEVICE
CN114944073A (en) Map generation device and vehicle control device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: CHANGZHOU INSTITUTE OF SUPERMEDIA AND SENSING TECH

Free format text: FORMER OWNER: CHANGZHOU DAQI INFORMATION TECHNOLOGY CO., LTD.

Effective date: 20111213

C41 Transfer of patent application or patent right or utility model
C53 Correction of patent for invention or patent application
CB03 Change of inventor or designer information

Inventor after: Zhou Dongxiang

Inventor after: Liu Yunhui

Inventor after: Liu Shun

Inventor after: Yang Yanguang

Inventor after: Fan Caizhi

Inventor after: Cai Xuanping

Inventor before: Liu Yunhui

Inventor before: Yang Yanguang

Inventor before: Liu Shun

Inventor before: Fan Caizhi

Inventor before: Zhou Dongxiang

Inventor before: Cai Xuanping

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: LIU YUNHUI YANG YANGUANG LIU SHUN FAN CAIZHI ZHOU DONGXIANG CAI XUANPING TO: ZHOU DONGXIANG LIU YUNHUI LIU SHUN YANG YANGUANG FAN CAIZHI CAI XUANPING

Free format text: CORRECT: ADDRESS; FROM: 213011 CHANGZHOU, JIANGSU PROVINCE TO: 213000 CHANGZHOU, JIANGSU PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20111213

Address after: 213000 Jiangsu Province, the Clock Tower District, ERON Road, No. 213 High-tech Venture Center, room 1008

Applicant after: Changzhou Hyper Medium and Sensing Technology Institute Co., Ltd.

Address before: 213011 1008, hi tech Innovation Service Center, 213 ERON Road, bell tower area, Jiangsu, Changzhou

Applicant before: Changzhou Daqi Information Technology Co., Ltd.

C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: DEFENSIVE SCIENTIFIC AND TECHNOLOGICAL UNIV., PLA

Effective date: 20130205

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20130205

Address after: 213000 Jiangsu Province, the Clock Tower District, ERON Road, No. 213 High-tech Venture Center, room 1008

Patentee after: Changzhou Hyper Medium and Sensing Technology Institute Co., Ltd.

Patentee after: National University of Defense Technology of People's Liberation Army of China

Address before: 213000 Jiangsu Province, the Clock Tower District, ERON Road, No. 213 High-tech Venture Center, room 1008

Patentee before: Changzhou Hyper Medium and Sensing Technology Institute Co., Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121003

Termination date: 20181123