CN114943809A - Map model generation method and device and storage medium - Google Patents
- Publication number
- CN114943809A CN114943809A CN202210610804.8A CN202210610804A CN114943809A CN 114943809 A CN114943809 A CN 114943809A CN 202210610804 A CN202210610804 A CN 202210610804A CN 114943809 A CN114943809 A CN 114943809A
- Authority
- CN
- China
- Prior art keywords
- images
- rescued
- positions
- map model
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A10/00—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
- Y02A10/40—Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present application provides a map model generation method, device, and storage medium, relating to the field of communications and used to improve the accuracy of a map model (e.g., a three-dimensional model). The method includes: acquiring a plurality of first images and a plurality of first positions, where the plurality of first images include a first object to be rescued and a scene object, each first position is the position at which a first image was collected, and the plurality of first positions correspond to the plurality of first images; generating a first map model according to the plurality of first images and the plurality of first positions, where the first map model reflects the disaster scene and includes the scene object; acquiring position information of the first object to be rescued; and processing the first map model using the position information of the first object to be rescued to generate a second map model, where the second map model includes the position information of the first object to be rescued, the first object to be rescued, and the scene object.
Description
Technical Field
The present application relates to the field of communications, and in particular, to a map model generation method, device, and storage medium.
Background
In recent years, with the development of drone technology, drones can be applied in a variety of scenarios. For example, after disasters such as earthquakes, floods, and mudslides, rescuers can learn about the situation on site through video collected by drones.
At present, when a drone is at a disaster site, it must first collect video and transmit the collected video back to a terminal. Rescuers can then learn about the situation at the disaster site from the video played on the terminal and make a rescue plan. However, in this approach the drone only collects video of the disaster scene; the amount of information collected is small, which seriously hampers the rescuers' assessment of the disaster site and delays the rescue.
Summary of the Invention
The present application provides a map model generation method, device, and storage medium for improving the accuracy of a map model (e.g., a three-dimensional model).
To achieve the above object, the present application adopts the following technical solutions.
According to a first aspect of the present application, a map model generation method is provided. The method includes the following.
A map model generation device (referred to simply as the "generating device") acquires a plurality of first images and a plurality of first positions, where the plurality of first images include a first object to be rescued and a scene object, each first position is the position at which a first image was collected, and the plurality of first positions correspond to the plurality of first images. The generating device generates a first map model according to the plurality of first images and the plurality of first positions, where the first map model reflects the disaster scene and includes the scene object. The generating device acquires position information of the first object to be rescued. The generating device processes the first map model using the position information of the first object to be rescued to generate a second map model, where the second map model includes the position information of the first object to be rescued, the first object to be rescued, and the scene object.
Optionally, the map model generation method further includes: the generating device acquires a plurality of second images, where the plurality of second images are all of the collected images. Acquiring the plurality of first images includes: the generating device determines the plurality of first images from the plurality of second images according to the first position corresponding to a target image and the plurality of first positions, where the target image is any one of the second images, and the distance between the first position corresponding to each first image and the first position corresponding to the target image is smaller than a preset distance threshold.
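The neighbor-selection step above can be sketched as a simple distance filter. This is an illustrative sketch only; the function name, data layout, and 2D planar positions are assumptions, not details from the patent:

```python
import math

def select_first_images(second_images, target_index, max_distance):
    """From all collected images (the "second images"), keep those whose
    capture positions lie within max_distance of the target image's
    capture position."""
    tx, ty = second_images[target_index]["position"]
    selected = []
    for i, img in enumerate(second_images):
        if i == target_index:
            continue
        x, y = img["position"]
        if math.hypot(x - tx, y - ty) < max_distance:
            selected.append(img)
    return selected

# Example: three images captured along a line, threshold of 15 m
images = [
    {"name": "a.jpg", "position": (0.0, 0.0)},
    {"name": "b.jpg", "position": (10.0, 0.0)},
    {"name": "c.jpg", "position": (40.0, 0.0)},
]
near = select_first_images(images, target_index=0, max_distance=15.0)
# only b.jpg lies within 15 m of a.jpg
```

In practice the capture positions would be 3D (e.g., GNSS coordinates reported by the drone), but the thresholding logic is the same.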
Optionally, the map model generation method further includes: the generating device generates a plurality of second positions according to the plurality of first positions and the plurality of first images, where the reprojection error of a first image with respect to its second position is smaller than the reprojection error of that image with respect to its first position, and the plurality of second positions correspond to the plurality of first images. The generating device generates the first map model according to the plurality of second positions and the plurality of first images.
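The reprojection error that the refined "second positions" are meant to reduce can be computed as below. This is a minimal sketch of a pinhole-camera reprojection check, not the patent's optimization itself; the camera parameters and point values are hypothetical:

```python
import math

def project(point, fx, fy, cx, cy):
    """Project a 3D point (in camera coordinates) through a pinhole camera
    with focal lengths fx, fy and principal point (cx, cy)."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_error(points_3d, observed_2d, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Mean pixel distance between observed keypoints and the projections
    of their 3D points; a pose-refinement step would search for the camera
    pose that minimizes this value."""
    total = 0.0
    for p, (u, v) in zip(points_3d, observed_2d):
        pu, pv = project(p, fx, fy, cx, cy)
        total += math.hypot(pu - u, pv - v)
    return total / len(points_3d)

# Toy example: observations exactly match the projections, so the error is 0
pts = [(0.0, 0.0, 2.0), (2.0, 0.0, 2.0)]
obs = [(0.0, 0.0), (1.0, 0.0)]
err = reprojection_error(pts, obs)
```

A full pipeline would minimize this error over camera poses (and often the 3D points too), as in bundle adjustment; the first positions recorded at capture time then serve as the initial guess.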
Optionally, the map model generation method further includes: the generating device processes the plurality of second positions and the plurality of first images using a target preset algorithm to determine a dense point cloud, where the target preset algorithm determines the coordinate sets corresponding to the pixels of the plurality of first images, and the dense point cloud is determined from those coordinate sets. The generating device processes the dense point cloud to generate a triangulation model. The generating device renders the triangulation model using the plurality of first images to generate the first map model.
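As a rough illustration of the surface step, a 2.5D grid of dense-cloud points can be connected into a triangle mesh by splitting each grid cell into two triangles. This is a stand-in for the Delaunay-style triangulation a real reconstruction pipeline would use, and all names here are illustrative:

```python
def grid_to_triangles(rows, cols):
    """Index a rows x cols grid of points (stored row-major) as a triangle
    mesh: each grid cell contributes two triangles, given as index triples
    into the point list."""
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            a = r * cols + c       # top-left corner of the cell
            b = a + 1              # top-right
            d = a + cols           # bottom-left
            e = d + 1              # bottom-right
            triangles.append((a, b, d))
            triangles.append((b, e, d))
    return triangles

tris = grid_to_triangles(2, 2)   # a single cell yields two triangles
```

Rendering then amounts to texturing each triangle with pixels from the first images that observe it, which is why the first images are needed again at this stage.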
Optionally, the plurality of first images include a second object to be rescued, and the second object to be rescued includes the first object to be rescued. The map model generation method further includes: the generating device performs deduplication processing on the second object to be rescued to obtain the first object to be rescued. The generating device determines the position information of the first object to be rescued according to the plurality of first positions and a plurality of target position relationships, where each target position relationship is the relationship between a first position and the position of the first object to be rescued.
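One simple way to realize the deduplication step above is to greedily merge detections of the same object seen from several capture positions, averaging their estimated ground positions. This is only a sketch under assumed 2D coordinates and a hypothetical merge radius; the patent does not specify the clustering rule:

```python
import math

def deduplicate_detections(detections, merge_distance):
    """Merge detections that are closer than merge_distance, treating them
    as repeated sightings of one object; each object's position is the mean
    of its merged detections."""
    objects = []   # each entry: [sum_x, sum_y, count]
    for x, y in detections:
        for obj in objects:
            ox, oy = obj[0] / obj[2], obj[1] / obj[2]
            if math.hypot(x - ox, y - oy) < merge_distance:
                obj[0] += x
                obj[1] += y
                obj[2] += 1
                break
        else:
            objects.append([x, y, 1])
    return [(sx / n, sy / n) for sx, sy, n in objects]

# Two sightings of one person from different images, plus a distant person
dets = [(10.0, 10.0), (10.4, 9.8), (50.0, 50.0)]
unique = deduplicate_detections(dets, merge_distance=2.0)
```

Here the per-detection positions would themselves come from the target position relationships, i.e., from combining each capture position with the object's offset in that image.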
According to a second aspect of the present application, a map model generation device is provided. The device includes an acquisition module and a processing module.
The acquisition module is configured to acquire a plurality of first images and a plurality of first positions, where the plurality of first images include a first object to be rescued and a scene object, each first position is the position at which a first image was collected, and the plurality of first positions correspond to the plurality of first images. The processing module is configured to generate a first map model according to the plurality of first images and the plurality of first positions, where the first map model reflects the disaster scene and includes the scene object. The acquisition module is further configured to acquire position information of the first object to be rescued. The processing module is specifically configured to process the first map model using the position information of the first object to be rescued to generate a second map model, where the second map model includes the position information of the first object to be rescued, the first object to be rescued, and the scene object.
Optionally, the acquisition module is further configured to acquire a plurality of second images, where the plurality of second images are all of the collected images. The processing module is further configured to determine the plurality of first images from the plurality of second images according to the first position corresponding to a target image and the plurality of first positions, where the target image is any one of the second images, and the distance between the first position corresponding to each first image and the first position corresponding to the target image is smaller than a preset distance threshold.
Optionally, the processing module is further configured to generate a plurality of second positions according to the plurality of first positions and the plurality of first images, where the reprojection error of a first image with respect to its second position is smaller than the reprojection error of that image with respect to its first position, and the plurality of second positions correspond to the plurality of first images. The processing module is further configured to generate the first map model according to the plurality of second positions and the plurality of first images.
Optionally, the processing module is further configured to process the plurality of second positions and the plurality of first images using a target preset algorithm to determine a dense point cloud, where the target preset algorithm determines the coordinate sets corresponding to the pixels of the plurality of first images, and the dense point cloud is determined from those coordinate sets. The processing module is further configured to process the dense point cloud to generate a triangulation model, and to render the triangulation model using the plurality of first images to generate the first map model.
Optionally, the plurality of first images include a second object to be rescued, and the second object to be rescued includes the first object to be rescued. The processing module is further configured to perform deduplication processing on the second object to be rescued to obtain the first object to be rescued, and to determine the position information of the first object to be rescued according to the plurality of first positions and a plurality of target position relationships, where each target position relationship is the relationship between a first position and the position of the first object to be rescued.
According to a third aspect of the present application, a map model generation device is provided. The device includes a processor and a memory coupled to each other. The memory stores one or more programs, including computer-executable instructions; when the device runs, the processor executes the computer-executable instructions stored in the memory to implement the map model generation method described in the first aspect or any possible implementation of the first aspect.
According to a fourth aspect of the present application, a computer-readable storage medium is provided. The computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the map model generation method described in the first aspect or any possible implementation of the first aspect.
According to a fifth aspect of the present application, a computer program product is provided, including a computer program that, when executed by a processor, causes a computer to implement the map model generation method described in the first aspect or any possible implementation of the first aspect.
For the technical problems solved and the technical effects achieved by the above map model generation device, computer device, computer storage medium, or computer program product, reference may be made to the technical problems and effects of the first aspect; details are not repeated here.
The technical solutions provided by the present application bring at least the following beneficial effects. The generating device can acquire a plurality of first images and a plurality of first positions, where the plurality of first images include a first object to be rescued and a scene object, each first position is the position at which a first image was collected, and the plurality of first positions correspond to the plurality of first images. The generating device can then generate a first map model according to the plurality of first images and the plurality of first positions, where the first map model reflects the disaster scene and includes the scene object. Furthermore, the generating device can acquire the position information of the first object to be rescued and process the first map model using that position information to generate a second map model, which includes the position information of the first object to be rescued, the first object to be rescued, and the scene object. That is, the generating device can mark the position information of the first object to be rescued and the objects in the disaster scene (such as the first object to be rescued and the scene object) on the second map model.
In this way, the amount of information in the map model is increased, providing a valuable reference for rescuers when allocating rescue forces, determining key disaster-relief areas, selecting safe rescue routes, and siting post-disaster reconstruction. In addition, compared with a video of the disaster scene, the second map model in this technical solution involves a smaller amount of data. This reduces the amount of data the generating device transmits and the time needed to transmit information to the terminal, thereby speeding up the rescue and improving rescue efficiency.
Description of the Drawings
FIG. 1 is a schematic diagram of a communication system according to an exemplary embodiment;
FIG. 2 is a flowchart of a map model generation method according to an exemplary embodiment;
FIG. 3 is a flowchart of another map model generation method according to an exemplary embodiment;
FIG. 4 is a schematic diagram of an example of determining position information according to an exemplary embodiment;
FIG. 5 is a schematic diagram of an example of marking position information according to an exemplary embodiment;
FIG. 6 is a flowchart of another map model generation method according to an exemplary embodiment;
FIG. 7 is a flowchart of another map model generation method according to an exemplary embodiment;
FIG. 8 is a schematic diagram of an example of generating a map model according to an exemplary embodiment;
FIG. 9 is a flowchart of another map model generation method according to an exemplary embodiment;
FIG. 10 is a flowchart of another map model generation method according to an exemplary embodiment;
FIG. 11 is a flowchart of another map model generation method according to an exemplary embodiment;
FIG. 12 is a structural block diagram of a map model generation device according to an exemplary embodiment;
FIG. 13 is a schematic structural diagram of a map model generation device according to an exemplary embodiment;
FIG. 14 is a conceptual partial view of a computer program product according to an exemplary embodiment.
Detailed Description
To enable those of ordinary skill in the art to better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.
Furthermore, the terms "comprising" and "having", and any variations thereof, mentioned in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or modules is not limited to the listed steps or modules, but optionally also includes other unlisted steps or modules, or other steps or modules inherent to the process, method, product, or device.
In addition, in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the present application should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present concepts in a concrete manner.
First, the application scenarios of the embodiments of the present application are introduced.
The map model generation method of the embodiments of the present application is applied to scenarios in which a terminal (such as a drone) collects information about a disaster scene. In the related art, when a drone collects information about a disaster scene, it must first capture the scene and transmit the resulting video back to a terminal. Rescuers can then learn about the disaster scene from the video played on the terminal and make a rescue plan. However, in the current technical solution the drone only collects images of the disaster scene. Without locating the objects in the disaster scene, the rescuers' assessment of the scene is seriously hampered and the rescue is delayed.
For example, when a drone detects that it has entered a disaster-stricken village, it can capture the village and transmit the collected video back to a terminal. Rescuers can then learn about the village's situation from the video and make a rescue plan. However, if person A (a person to be rescued) and scene object B appear in the video played on the terminal, the rescuers cannot determine the specific positions of person A and scene object B from the video content alone.
In summary, in the current technical solution the drone only collects images of the disaster site; the amount of information collected is small, which seriously hampers the rescuers' assessment of the disaster scene and delays the rescue.
To solve the above problems, an embodiment of the present application provides a map model generation method. A drone can acquire images of the disaster scene together with the collection positions (i.e., the positions at which the drone captured the images). The drone can then generate a map model of the disaster scene from the images and collection positions, and determine the position information of the objects to be rescued in those images. Having generated the map model and determined this position information, the drone can mark the positions of the objects to be rescued on the map model. Rescuers can then assess the disaster scene from the annotated map model the drone transmits back to the terminal and make a rescue plan, improving the accuracy of their assessment. Moreover, compared with video transmission, the annotated map model the drone transmits back to the terminal involves less data and is transmitted faster, which can speed up the rescue.
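The end-to-end flow described above can be summarized in a small skeleton. Every function and name here is a hypothetical placeholder for illustration, not an interface defined by the patent:

```python
def build_annotated_map(images, positions, detector, reconstructor):
    """Pipeline skeleton: reconstruct the scene from the images and their
    capture positions, detect objects to be rescued, and bundle both into
    one annotated result for transmission to the terminal."""
    map_model = reconstructor(images, positions)                 # first map model
    detections = [d for img in images for d in detector(img)]    # objects found
    return {"model": map_model, "objects": detections}           # annotated model

# Tiny demonstration with toy stand-ins for the detector and reconstructor
result = build_annotated_map(
    images=["img0", "img1"],
    positions=[(0, 0), (1, 0)],
    detector=lambda img: [img + "_person"],
    reconstructor=lambda imgs, pos: {"n_images": len(imgs)},
)
```

The payload returned here (mesh plus a short list of object positions) is what makes the transmitted data so much smaller than raw video.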
The implementation environment of the embodiments of the present application is introduced below.
FIG. 1 is a schematic diagram of a communication system provided by an embodiment of the present application. As shown in FIG. 1, the communication system may include a terminal 01 and a terminal 02. The terminal 01 can obtain collected information (such as images and collection positions, i.e., the positions at which terminal 01 captures images), and the terminals 01 and 02 can communicate wirelessly.
For example, terminal 01 may communicate with terminal 02 through satellite communication. As other examples, terminal 01 may communicate with terminal 02 through spread-spectrum microwave communication, through a digital radio link, or through Bluetooth.
Terminal 01 can process the collected information, for example, generating a map model from the images and collection positions. Terminal 01 can send the collected information to terminal 02, and terminal 02 can display it.
In some embodiments, terminal 01 can store and process data locally.
In other embodiments, terminal 01 may be connected to a server. Terminal 01 can send the collected information to the server, which stores and processes it. The server can then send the processed information back to terminal 01.
A terminal (such as the terminal 01 or the terminal 02) may be any device with transceiver capability that can capture images and determine its position, such as a mobile phone, tablet computer, desktop computer, laptop, handheld computer, notebook computer, ultra-mobile personal computer (UMPC), netbook, or unmanned aerial vehicle; this application places no particular restriction on the specific form of the terminal. The terminal may interact with a user in one or more ways, such as via a keyboard, touchpad, touchscreen, remote control, voice interaction, or handwriting device.
Having introduced the application scenarios and implementation environment of the embodiments of the present application, the map model generation method provided by the embodiments is described in detail below in combination with the above implementation environment.
In the following, the technical solutions provided by the embodiments of the present application are elaborated by taking the case where the terminal 01 is an unmanned aerial vehicle (drone) as an example.
Fig. 2 is a flowchart of a map model generation method according to an exemplary embodiment. As shown in FIG. 2, the method may include S201-S205.
S201: The drone acquires a plurality of first images.
The plurality of first images include a first object to be rescued and a scene object.
It should be noted that the embodiments of the present application do not limit the first object to be rescued. For example, the first object to be rescued may be a person with red hair squatting on the ground; a person with white hair standing against a wall; an animal (e.g., a cat, a dog, or a pig); or an airplane, a ship, a car, or the like. Likewise, the embodiments do not limit the scene object. For example, the scene object may be a gray, broken bridge; a white, intact domed building; or a green, snapped tree.
In one possible implementation, the drone can capture a target video, which is a video of the disaster scene, and then extract the frames of that video. The plurality of first images may be all or some of the frames of the target video.
For example, suppose the drone captures a video A consisting of frame A, frame B, and frame C. The drone can take frame A, frame B, and frame C as first images.
In another possible implementation, the drone may capture still images through its camera to obtain first images.
For example, the drone captures an image A and takes image A as a first image.
Understandably, the drone can acquire the first images in different ways. This widens the channels through which the drone can obtain first images, improving operability and flexibility.
In some embodiments, before acquiring the first images, the drone may acquire a plurality of second images, which are all of the images captured. The drone then determines the plurality of first images from the plurality of second images according to the first position corresponding to a target image and the plurality of first positions: the distance between the first position of each selected first image and the first position of the target image is smaller than a preset distance threshold. The target image is any one of the second images.
It should be noted that the plurality of first positions are described in the following embodiments and are not repeated here. Moreover, the embodiments do not limit the preset distance threshold. For example, the threshold may be a height difference of 1 meter and a horizontal distance of 2 meters; a height difference of 0.5 meters and a horizontal distance of 1.5 meters; or a height difference of 2 meters and a horizontal distance of 3 meters.
For example, suppose the drone selects a target image A and sets the preset distance threshold to a height difference of 0.5 meters and a horizontal distance of 1.2 meters. If the height difference between image A and target image A is 0.7 meters and their horizontal distance is 1.3 meters, their separation exceeds the threshold, so image A is determined not to be a first image. If the height difference between image B and target image A is 0.5 meters and their horizontal distance is 1.2 meters, their separation equals the threshold, so image B is determined not to be a first image. If the height difference between image C and target image A is 0.4 meters and their horizontal distance is 1.1 meters, their separation is below the threshold, so image C is determined to be a first image.
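The selection rule above can be sketched in a few lines. The function name and the (x, y, h) metric tuple layout below are illustrative, not from the patent; the strict "smaller than" comparison mirrors the example, in which a candidate exactly at the threshold is rejected:

```python
import math

def is_first_image(candidate, target, max_height_diff=0.5, max_horizontal=1.2):
    """Decide whether a candidate second image qualifies as a first image.
    Positions are (x, y, h) tuples in a local metric frame (meters).
    Both the height difference and the horizontal distance must be
    strictly smaller than their thresholds."""
    dh = abs(candidate[2] - target[2])
    horizontal = math.hypot(candidate[0] - target[0], candidate[1] - target[1])
    return dh < max_height_diff and horizontal < max_horizontal

# The three cases from the example above (only offsets matter):
target = (0.0, 0.0, 0.0)
print(is_first_image((1.3, 0.0, 0.7), target))  # exceeds both -> False
print(is_first_image((1.2, 0.0, 0.5), target))  # equals both  -> False
print(is_first_image((1.1, 0.0, 0.4), target))  # below both   -> True
```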
S202: The drone acquires a plurality of first positions.
In the embodiments of the present application, a first position is the position at which the drone captured a first image, and the plurality of first positions correspond to the plurality of first images.
For example, the plurality of first images include image A, image B, and image C, and the plurality of first positions include position A, position B, and position C, where position A is where the drone captured image A, position B is where it captured image B, and position C is where it captured image C.
In one possible implementation, the drone may obtain a first position by localizing itself at the moment the corresponding first image is captured.
It should be noted that the embodiments do not limit the positioning method. For example, the drone may determine the first position corresponding to a first image through the Global Positioning System (GPS), through the BeiDou Navigation Satellite System (BDS), or through the Global Navigation Satellite System GLONASS.
In one possible design, a first position may include at least one of the following: the longitude and latitude of the drone, the altitude of the drone, and the heading angle of the drone.
For example, when capturing images A and B, the drone determines via BDS that image A was captured at 36° N, 110° E, at an altitude of 900 meters, with a heading angle of 72°, and that image B was captured at 47° N, 99° E, at an altitude of 850 meters. That is, the first position corresponding to image A is 36° N, 110° E, altitude 900 m, heading 72°, and the first position corresponding to image B is 47° N, 99° E, altitude 850 m.
It should be noted that the embodiments do not limit the resolution of the first position. For example, the latitude and longitude may be given to the nearest degree (e.g., 35° N, 130° E, altitude 720 m) or to the nearest minute (e.g., 31°22′ N, 121°43′ E, altitude 900 m), and the altitude may be given to the nearest millimeter (e.g., 36° N, 110° E, altitude 900.005 m).
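As a small worked example of these units, a value given to the nearest minute can be converted to decimal degrees as follows (the helper is illustrative, not part of the patent):

```python
def dms_to_decimal(degrees, minutes=0.0, seconds=0.0):
    """Convert a latitude or longitude given in degrees, minutes and
    seconds (as in 31°22' N) to decimal degrees (31.3667°)."""
    return degrees + minutes / 60.0 + seconds / 3600.0

print(round(dms_to_decimal(31, 22), 4))   # 31°22'  -> 31.3667
print(round(dms_to_decimal(121, 43), 4))  # 121°43' -> 121.7167
```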
S203: The drone generates a first map model according to the plurality of first images and the plurality of first positions.
The first map model is used to reflect the disaster scene.
In one possible design, the first map model includes the scene objects.
For example, the first map model includes scene object A, a black, broken bridge, and scene object B, a red, intact spired building.
In some embodiments, as shown in FIG. 3, S203 of the map model generation method may include S301-S302.
S301: The drone generates a plurality of second positions according to the plurality of first positions and the plurality of first images.
In the embodiments of the present application, the reprojection error of a first image referenced to its second position is smaller than the reprojection error of that first image referenced to its first position. The plurality of second positions correspond to the plurality of first images.
It should be noted that the reprojection error is the difference between the projection of a real three-dimensional point onto the image plane (i.e., the observed pixel) and its reprojection (the computed virtual pixel). In other words, a map model generated from the images referenced to the second positions deviates less from the actual scene than one generated from the images referenced to the first positions; that is, the map model generated using the second positions is more accurate.
For example, the plurality of first images include image A, image B, and image C; the plurality of first positions include first position A, first position B, and first position C; and the plurality of second positions include second position A, second position B, and second position C, which correspond to image A, image B, and image C, respectively. The reprojection error of image A at second position A is smaller than at first position A; the reprojection error of image B at second position B is smaller than at first position B; and the reprojection error of image C at second position C is smaller than at first position C.
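The reprojection error described here can be computed with a standard pinhole projection. The sketch below assumes simple intrinsics (focal lengths fx, fy and principal point cx, cy, all illustrative values) and is a minimal illustration rather than the patent's algorithm:

```python
import math

def project(point_3d, fx, fy, cx, cy):
    """Pinhole projection of a 3-D point given in camera coordinates
    (X right, Y down, Z forward) onto the image plane."""
    x, y, z = point_3d
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_error(observed_px, point_3d, fx, fy, cx, cy):
    """Euclidean distance between the observed pixel and the pixel
    obtained by reprojecting the reconstructed 3-D point."""
    u, v = project(point_3d, fx, fy, cx, cy)
    return math.hypot(observed_px[0] - u, observed_px[1] - v)

# A reconstructed point whose reprojection lands 5 px from its observation:
err = reprojection_error((365.0, 240.0), (0.2, 0.0, 2.0), 500, 500, 320, 240)
print(err)  # reprojects to (370.0, 240.0) -> error 5.0
```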
In one possible implementation, the drone may process the plurality of first positions and the plurality of first images through an aerial triangulation optimization algorithm to generate the plurality of second positions. The aerial triangulation optimization algorithm is used to determine the second position corresponding to each first image.
It should be noted that the first position and the second position corresponding to a given first image may be the same position or two different positions.
For example, suppose the first position corresponding to first image A is 27.2° N, 113.43° E, altitude 740 m, and the first position corresponding to first image B is 31° N, 105° E, altitude 330 m. Through the aerial triangulation optimization algorithm, the drone determines that the second position corresponding to first image A is 27°2′ N, 113°43′ E, altitude 740 m, and that the second position corresponding to first image B is 31° N, 105° E, altitude 330 m.
S302: The drone generates the first map model according to the plurality of second positions and the plurality of first images.
In one possible implementation, the drone may generate a dense point cloud from the plurality of first positions and the plurality of first images; the dense point cloud is described in the following embodiments and not repeated here. The drone can then process the dense point cloud using the plurality of second positions to generate the first map model.
In another possible implementation, the drone may generate a triangulated mesh model from the plurality of second positions and the plurality of first images, and construct the first map model from the mesh model.
Understandably, the drone can generate the plurality of second positions from the plurality of first positions and the plurality of first images, where the reprojection error of each first image at its second position is smaller than at its first position and the second positions correspond to the first images. In other words, a dense point cloud generated from the second positions and their corresponding first images is more accurate than one generated from the first positions and their corresponding first images. The drone then generates the first map model from the generated second positions and the first images. Because the reprojection error at the second positions is smaller than at the first positions, the accuracy of the first map model is improved.
S204: The drone obtains the position information of the first object to be rescued.
In one possible implementation, the drone may determine the position information of the first object to be rescued according to the plurality of first positions and a plurality of target position relationships, where a target position relationship is the relationship between a first position and the position of the first object to be rescued.
In one possible design, the target position relationship may include at least one of the following: the bearing from the first position to the position of the first object to be rescued, the horizontal distance between them, and the height difference between them.
For example, the drone may determine that the first object to be rescued A is located 30° south of east from first position A at a horizontal distance of 50 meters; or 45° south of west from first position A at a horizontal distance of 40 meters and 15 meters higher; or 60° north of east from first position A at a horizontal distance of 60 meters and 10 meters lower.
Optionally, the drone may determine the target position relationship in a preset manner. For example, it may use an ultrasonic sensor to determine that first position A is located 30° south of east from the first object to be rescued A at a horizontal distance of 50 meters; a photoelectric sensor to determine that first position A is located 45° south of west at a horizontal distance of 40 meters; or a laser rangefinder to determine that first position A is located 60° north of east at a horizontal distance of 60 meters.
The following describes how the drone determines the position information of the first object to be rescued from the plurality of first positions and the plurality of target position relationships.
For example, as shown in FIG. 4, three coordinate systems are involved: an image coordinate system with origin O1, a camera coordinate system with origin O2, and a world coordinate system with origin O3. The image coordinate system is parallel to the camera coordinate system, with O1O2 perpendicular to both; the origin of the world coordinate system is the projection of the camera onto the horizontal plane, and its Y axis points straight ahead of the camera. The height H denotes the camera height (i.e., the drone altitude), the angle α denotes the camera pitch angle, the measuring point Q denotes the first object to be rescued, and the projection point P denotes the projection of Q onto the Y axis of the world coordinate system. The pixel coordinate Q1(u, v) denotes the position of Q in the image coordinate system, and the pixel coordinate P1(u0, v) denotes the position of P in the image coordinate system. The point O3 denotes the projection of the drone in the world coordinate system, O1(u0, v0) denotes the position of the lens center in the image coordinate system, and M denotes the position of the image center (the origin O1) on the Y axis of the world coordinate system. In the image coordinate system, dx denotes the physical length of a unit pixel along the u axis and dy the physical length of a unit pixel along the v axis, both in millimeters per pixel (mm/pix); the focal length f denotes the distance between O1 and O2, in millimeters (mm).
In the figure, the relationship between O3M and the pitch angle α can be expressed as H = O3M · tanα. The distance O3P from the measuring point Q to the point O3 along the Y-axis direction can then be calculated; the derivation can refer to Formula 1, Formula 2, and Formula 3:
γ = arctan((v0 − v) · dy / f)    (Formula 1)
β = α − γ    (Formula 2)
O3P = H / tanβ    (Formula 3)
Using the O3P obtained above and the similar triangles ΔPO2Q ∼ ΔP1O2Q1 (i.e., PQ / P1Q1 = O2P / O2P1), the distance PQ from the measuring point Q to the point O3 along the X-axis direction can be calculated; the derivation can refer to Formula 4, Formula 5, and Formula 6:
P1Q1 = (u − u0) · dx    (Formula 4)
O2P = H / sinβ    (Formula 5)
PQ = P1Q1 · O2P / O2P1, where O2P1 = √(f² + ((v0 − v) · dy)²)    (Formula 6)
The distance from Q to the origin of the world coordinate system is then O3Q = √(O3P² + PQ²), i.e., the ground distance between the first object to be rescued and the position of the drone. Afterwards, the drone can convert the latitude and longitude of the first position of the first image into projected coordinates, calculate the coordinates of the first object to be rescued in the projected coordinate system from the known heading angle of the drone at that first position and the computed distance between the drone and the first object to be rescued in the first image, and convert those coordinates back into latitude and longitude to obtain the position information of the first object to be rescued. The derivation can refer to Formula 7 through Formula 11, of which Formula 11 recovers the longitude from the projected coordinates:
L = l + L0    (Formula 11)
where l is the longitude difference recovered from the projection and L0 is the longitude of the central meridian of the projection zone.
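The ranging geometry above can be sketched in code. The sign conventions and parameter handling below are one consistent reading of the figure (pitch measured downward from the horizontal, v increasing downward), so treat this as an illustrative sketch rather than the patent's implementation:

```python
import math

def ground_offsets(u, v, u0, v0, dx, dy, f, alpha, H):
    """Monocular ground ranging: a camera at height H (m) with downward
    pitch alpha (rad) observes a ground point at pixel (u, v); (u0, v0)
    is the principal point, dx/dy the pixel pitch (mm/pix) and f the
    focal length (mm). Returns the forward distance O3P, the lateral
    distance PQ, and the total ground distance O3Q."""
    gamma = math.atan2((v0 - v) * dy, f)   # angular offset from optical axis
    beta = alpha - gamma                   # ray depression angle
    o3p = H / math.tan(beta)               # forward ground distance (m)
    o2p1 = math.hypot(f, (v0 - v) * dy)    # camera centre to P1 (mm)
    o2p = H / math.sin(beta)               # camera centre to P (m)
    pq = (u - u0) * dx * o2p / o2p1        # lateral ground distance (m)
    return o3p, pq, math.hypot(o3p, pq)

# Target at the image centre, camera 10 m up, pitched 45 degrees down:
o3p, pq, dist = ground_offsets(320, 240, 320, 240, 0.005, 0.005, 4.0,
                               math.radians(45), 10.0)
print(round(o3p, 6), round(pq, 6), round(dist, 6))  # 10.0 0.0 10.0
```

At the image centre the ray coincides with the optical axis, so the ground distance reduces to H / tanα, which is a quick sanity check on the conventions chosen here.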
S205: The drone processes the first map model with the position information of the first object to be rescued to generate a second map model.
The second map model includes the position information of the first object to be rescued, the first object to be rescued, and the scene objects.
In one possible implementation, the drone may mark the first object to be rescued on the first map model according to its position information to generate the second map model.
For example, as shown in FIG. 5, if the position information A of the first object to be rescued A corresponds to position A of the first map model, position A of the first map model is marked. If the position information B of the first object to be rescued B corresponds to position B of the first map model, position B is marked. If the position information C of the first object to be rescued C corresponds to position C of the first map model, position C is marked.
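Placing such a mark requires turning a computed ground distance and heading into latitude and longitude. A minimal local-tangent-plane (equirectangular) approximation, adequate over the short ranges in the examples, might look as follows; the function and the metres-per-degree constant are illustrative, not the patent's projection formulas:

```python
import math

EARTH_M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def offset_position(lat, lon, heading_deg, distance_m):
    """Shift (lat, lon) by distance_m along heading_deg (0 = north,
    90 = east) using an equirectangular approximation."""
    heading = math.radians(heading_deg)
    dlat = distance_m * math.cos(heading) / EARTH_M_PER_DEG_LAT
    dlon = (distance_m * math.sin(heading)
            / (EARTH_M_PER_DEG_LAT * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

# 1113.2 m due north from (31.0 N, 121.0 E) raises latitude by ~0.01 deg:
lat, lon = offset_position(31.0, 121.0, 0.0, 1113.2)
print(round(lat, 4), round(lon, 4))  # 31.01 121.0
```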
The technical solutions provided by the above embodiments bring at least the following beneficial effects. The generation apparatus can acquire a plurality of first images and a plurality of first positions, where the first images include the first object to be rescued and the scene objects, a first position is the position at which a first image was captured, and the first positions correspond to the first images. The generation apparatus can then generate a first map model from the first images and the first positions; this model reflects the disaster scene and includes the scene objects. Furthermore, the generation apparatus can obtain the position information of the first object to be rescued and process the first map model with it to generate a second map model, which includes the position information of the first object to be rescued, the first object to be rescued, and the scene objects. In other words, the generation apparatus can mark the position information of the first object to be rescued and the objects in the disaster scene (such as the first object to be rescued and the scene objects) on the second map model.
In this way, the amount of information in the map model is increased, providing a valuable reference for rescuers in allocating rescue forces, identifying key disaster-relief areas, selecting safe rescue routes, and choosing sites for post-disaster reconstruction. Moreover, compared with a video of the disaster scene, the second map model in this technical solution involves far less data. This reduces the amount of data the generation apparatus must transmit and saves the time needed to transmit information to the terminal, thereby speeding up rescue progress and improving rescue efficiency.
In some embodiments, as shown in FIG. 6, S302 of the map model generation method may include S601-S603.
S601: The drone processes the plurality of second positions and the plurality of first images through a target preset algorithm to determine a dense point cloud.
In one possible implementation, a target preset algorithm is stored in the drone, and the target preset algorithm is used to determine the sets of coordinates corresponding to the pixels of the plurality of first images. The drone can determine these coordinate sets according to the target preset algorithm and then, taking the plurality of second positions as reference points, determine the dense point cloud from the coordinate sets corresponding to the pixels of the plurality of first images. The dense point cloud is defined by those coordinate sets.
It should be noted that the embodiments of the present application do not limit the target preset algorithm.
For example, the target preset algorithm may include a stereo-pair matching algorithm and a forward intersection algorithm. Suppose first image A corresponds to first position A, first image B corresponds to first position B, and first image A is adjacent to first image B. The stereo-pair matching algorithm determines the set A of corresponding (same-name) pixels between first image A and first image B. The forward intersection algorithm then determines the three-dimensional coordinates of the pixels in set A, yielding a three-dimensional coordinate set B. Plotting coordinate set B in the three-dimensional coordinate system determines the dense point cloud A.
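A forward intersection step of this kind amounts to finding the point closest to two viewing rays. The midpoint method below is a common textbook formulation, shown as an illustration rather than the patent's exact algorithm:

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint forward intersection: given two camera centres c1, c2 and
    ray directions d1, d2 (3-vectors as tuples), return the point halfway
    between the closest points on the two rays."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def add_scaled(a, b, t):
        return tuple(x + t * y for x, y in zip(a, b))

    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b                 # zero only for parallel rays
    s = (b * e - c * d) / denom           # parameter along ray 1
    t = (a * e - b * d) / denom           # parameter along ray 2
    p1 = add_scaled(c1, d1, s)            # closest point on ray 1
    p2 = add_scaled(c2, d2, t)            # closest point on ray 2
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two cameras 2 m apart whose rays both pass through the point (1, 0, 5):
p = triangulate_midpoint((0, 0, 0), (1, 0, 5), (2, 0, 0), (-1, 0, 5))
print(p)  # (1.0, 0.0, 5.0)
```

With noisy matches the two rays no longer intersect exactly, and the midpoint serves as the reconstructed 3-D coordinate of the same-name pixel pair.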
在一些实施例中,稠密点云是通过预设的图像降采样系数对每张图像进行降采样,且指定多个点云采样密度等级得到的。其中,点云采样密度等级包括高密度等级、中密度等级与低密度等级。除了高密度等级时才对图像进行全像素深度图生成外,其他均采用间隔像素进行深度图生成,即在中密度等级时对图像水平方向和垂直方向上,每隔一个像素进行深度图生成;在低密度等级时对图像水平方向和垂直方向上,每隔两个像素进行深度图生成。In some embodiments, the dense point cloud is obtained by down-sampling each image through a preset image down-sampling coefficient and specifying multiple point cloud sampling density levels. Among them, the point cloud sampling density level includes high density level, medium density level and low density level. Except for the high-density level, the full-pixel depth map is generated for the image, and other pixels are used for depth map generation, that is, at the medium density level, the depth map is generated for every other pixel in the horizontal and vertical directions of the image; At low density levels, the depth map is generated every two pixels in the horizontal and vertical directions of the image.
示例性的,在对第一图像A确定稠密点云的情况下,假如预设的图像降采样系数为3,指定点云采样密度等级为高密度等级、中密度等级与低密度等级,则在第一图像A中每行和每列每隔3个像素取一个像素点组成第一图像B。若第一图像B的点云采样密度等级为高密度等级,则对第一图像B进行全像素点深度图生成。若第一图像B的点云采样密度等级为中密度等级,则对第一图像B水平方向和垂直方向上,每隔一个像素进行深度图生成。若第一图像B的点云采样密度等级为低密度等级,则对第一图像B水平方向和垂直方向上,每隔两个像素进行深度图生成。Exemplarily, in the case of determining a dense point cloud for the first image A, if the preset image downsampling coefficient is 3, and the specified point cloud sampling density level is high density level, medium density level and low density level, then in In the first image A, one pixel point is taken every 3 pixels in each row and each column to form the first image B. If the point cloud sampling density level of the first image B is a high density level, a full-pixel depth map is generated for the first image B. If the point cloud sampling density level of the first image B is a medium density level, the depth map is generated for every other pixel in the horizontal direction and the vertical direction of the first image B. If the point cloud sampling density level of the first image B is a low density level, the depth map is generated for every two pixels in the horizontal direction and the vertical direction of the first image B.
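The downsampling-plus-stride scheme above can be sketched as choosing a pixel step per density level; the level names, the helper, and the way the two factors are folded into one step over the original grid are illustrative:

```python
DEPTH_STEP = {"high": 1, "medium": 2, "low": 3}  # stride per density level

def depth_sample_coords(width, height, level, downsample=1):
    """Return the (x, y) pixel coordinates at which a depth value is
    computed: the image is first downsampled by taking every
    `downsample`-th pixel, then the remaining grid is walked with the
    level's stride (every pixel at high, every other pixel at medium,
    every third pixel at low)."""
    step = DEPTH_STEP[level] * downsample
    return [(x, y)
            for y in range(0, height, step)
            for x in range(0, width, step)]

# A 6x6 image: full-pixel at high density, strided at medium and low.
print(len(depth_sample_coords(6, 6, "high")))    # 36
print(len(depth_sample_coords(6, 6, "medium")))  # 9
print(len(depth_sample_coords(6, 6, "low")))     # 4
```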
可以理解的是,通过指定降采样系数和点云采样密度,可以减少确定像素点的三维坐标的数量。如此一来,可以减少确定稠密点云的时间。It can be understood that by specifying the downsampling coefficient and the point cloud sampling density, the number of three-dimensional coordinates for determining pixel points can be reduced. This reduces the time to determine dense point clouds.
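The pixel-selection scheme above can be sketched as follows. This is an illustrative reading of the down-sampling factor and density levels, not the patent's actual implementation; the stride values 1/2/3 are assumed to correspond to the high/medium/low levels described above.

```python
# Sketch of the depth-map pixel selection (assumed behavior): a preset
# down-sampling factor first thins the image grid, then the density level
# sets the stride at which depth values are computed on the thinned grid.

def depth_sample_coords(width, height, downsample=3, level="medium"):
    """Return the (x, y) pixel coordinates for which a depth value is computed."""
    stride = {"high": 1, "medium": 2, "low": 3}[level]
    # Down-sampled grid: one pixel out of every `downsample` per row/column.
    xs = list(range(0, width, downsample))
    ys = list(range(0, height, downsample))
    # Density level: keep every `stride`-th of those pixels in each direction.
    return [(x, y)
            for j, y in enumerate(ys) if j % stride == 0
            for i, x in enumerate(xs) if i % stride == 0]
```

For an 18×18 image with factor 3 this yields a 6×6 grid, of which the high, medium, and low levels keep 36, 9, and 4 depth samples respectively.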
S602: The UAV processes the dense point cloud to generate a triangle mesh model.
Note that, for the process by which the UAV generates a triangle mesh model from the dense point cloud, reference may be made to conventional techniques for meshing a dense point cloud, which are not detailed here.
For example, the UAV may apply the Delaunay tetrahedralization algorithm to the dense point cloud to obtain a Delaunay tetrahedral partition of space, and construct a global optimization graph whose nodes are the tetrahedra of the partition and whose edges are the triangular faces shared by adjacent tetrahedra. The UAV then determines, for each point, the triangular faces of the tetrahedralization crossed by the line from that point to each camera that observes it, and accumulates a weight of 1 on the corresponding edges of the graph, yielding a visibility-constrained global optimization graph. Finally, the UAV partitions the graph with a min-cut/max-flow algorithm to determine whether each tetrahedron lies inside or outside the model surface, and extracts the triangular faces shared by adjacent inside/outside tetrahedra to form the final triangle mesh model.
S603: The UAV renders the triangle mesh model with the multiple first images to generate the first map model.
For example, for each triangular face of the mesh, the UAV may select the nearest visible camera as its associated camera. The UAV then groups spatially connected faces that share the same associated camera and extracts the corresponding image patches from that camera's image. Finally, the UAV packs all the image patches into a single texture atlas according to a packing algorithm, completing the texture mapping of the mesh and producing the first map model.
It can be understood that the UAV processes the multiple second positions and the multiple first images with the target preset algorithm to determine the dense point cloud; the target preset algorithm determines the coordinate sets corresponding to the pixels of the multiple first images, and the dense point cloud is built from those coordinate sets. Moreover, the dense point cloud generated from the second positions and their corresponding first images is more accurate than one generated from the first positions and their corresponding first images. The UAV then processes the dense point cloud to generate the triangle mesh model, and renders the mesh with the multiple first images to generate the first map model. In this way, the accuracy of the first map model is improved.
In some embodiments, the UAV may determine the feature object of each of the multiple first images, and then model each feature object from the first images that contain the same feature object.
As shown in FIG. 7, after S201 is performed, the map-model generation method may further include S701-S702.
S701: The UAV extracts M feature objects corresponding to the multiple first images using a first preset algorithm.
Here, the feature objects include the first objects to be rescued and the scene objects, the multiple first images correspond to the M feature objects, and M is a positive integer.
For example, with M = 3 the feature objects include feature object A, feature object B, and feature object C, where feature object A is first object to be rescued A (e.g., a person), feature object B is scene object A (e.g., a river), and feature object C is scene object B (e.g., a house).
Note that the number of first images may equal M or differ from it; this embodiment of the application does not limit it. Usually the number of first images is greater than M — that is, different first images may share the same feature object.
For example, the multiple first images include image A, image B, image C, and image D, where image A contains feature object A, image B contains feature object B, and images C and D both contain feature object C.
Optionally, one first image may contain multiple feature objects. For example, first image A contains first object to be rescued A and scene object A.
In this embodiment of the application, the first preset algorithm is used to extract the feature objects in the multiple first images.
Note that this embodiment does not limit the first preset algorithm. For example, it may be the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, or the Oriented FAST and Rotated BRIEF (ORB) algorithm.
Nor does this embodiment limit the feature object. For example, it may be a color feature, a texture feature, or a shape feature.
S702: The UAV processes the multiple first images according to the M feature objects to obtain M image sets.
The M image sets correspond to the M feature objects. Each image set includes at least one first image, and the first images in a set correspond to the same feature object.
In one possible implementation, the UAV matches the multiple first images according to their feature objects to obtain the image sets that share the same feature object.
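The grouping in S702 can be sketched as a simple bucketing step (an illustration only, not the patent's code; the `(image_id, feature_object)` pair format is an assumption):

```python
# Minimal sketch of S702: bucket images by the feature object they contain,
# producing one image set per feature object.

def group_by_feature(detections):
    """detections: list of (image_id, feature_object) pairs."""
    image_sets = {}
    for image_id, feature in detections:
        image_sets.setdefault(feature, []).append(image_id)
    return image_sets
```

With the example above, `group_by_feature([("A", "person"), ("B", "river"), ("C", "house"), ("D", "house")])` places images C and D in the same set because they share a feature object.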
Optionally, the UAV may set a preset pixel-size range. If a feature object's pixel size lies within the range, the first images sharing that feature object are matched; if not, the matching of those first images is cancelled.
For example, if the UAV sets the preset pixel-size range to (2000, 4000), then feature object A with a pixel size of 3000 lies within the range, so the first images sharing feature object A are matched; feature object B with a pixel size of 5000 lies outside it, so the matching of the first images sharing feature object B is cancelled.
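The pixel-size screen can be sketched as follows (a sketch under the assumption that the range (2000, 4000) is an open interval, matching the example):

```python
# Sketch of the pixel-size screen: only feature objects whose pixel size
# lies strictly inside the preset range are kept for image matching.

def features_to_match(feature_sizes, lo=2000, hi=4000):
    """feature_sizes: dict mapping feature object -> pixel size."""
    return [f for f, size in feature_sizes.items() if lo < size < hi]
```

With the example values, feature object A (size 3000) survives the screen and feature object B (size 5000) is dropped.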
In some embodiments, after matching the first images that share the same feature object, the UAV may model the M feature objects from the M image sets.
It can be understood that, by matching the multiple first images on the feature objects extracted from them, the UAV determines which first images share the same feature object, and that shared feature object can then be modeled from those images. This allows the feature object to be modeled more accurately. Moreover, setting a preset pixel-size range screens the feature objects of the first images, which can greatly reduce the time spent matching first images that share the same feature object.
The map-model generation method provided by this application is introduced below with a concrete example. As shown in FIG. 8, taking the map model of feature object A as an example, the image set corresponding to feature object A is image set A (i.e., the feature object of every first image in image set A is feature object A). The UAV processes image set A and the second position corresponding to each first image in it with the target preset algorithm to obtain the dense point cloud A of feature object A shown in FIG. 8(a). By processing dense point cloud A, the UAV generates the triangle mesh model A of feature object A shown in FIG. 8(b). By rendering mesh model A with image set A, the UAV generates the map model A of feature object A shown in FIG. 8(c).
In some embodiments, as shown in FIG. 9, before S204 the map-model generation method may further include S901-S902.
S901: The UAV obtains the second objects to be rescued.
In this embodiment of the application, the multiple first images include the second objects to be rescued, and the second objects to be rescued include the first objects to be rescued.
For example, first image A includes second object to be rescued A and second object to be rescued B, where second object to be rescued A includes first objects to be rescued A, B, and C, and second object to be rescued B includes first objects to be rescued D and E.
In one possible design, the UAV stores an object-detection algorithm used to identify the target objects (i.e., the second objects to be rescued and the scene objects) in the multiple first images and to extract their features. The UAV extracts features from the target objects in the multiple first images with the object-detection algorithm and assigns each featured target object a unique identity document (ID).
Note that this embodiment does not limit the object-detection algorithm. Examples include the region-proposal-based R-CNN family of algorithms (R-CNN, Fast R-CNN, Faster R-CNN, etc.) and one-stage algorithms (YOLO, SSD, etc.).
For example, if first image A contains people and buildings, a multi-object tracking algorithm identifies the objects to be rescued in first image A and extracts features such as hair color and posture, and identifies the scene objects and extracts features such as their color and shape. The yellow-haired person squatting on the ground in first image A is then assigned the ID "object to be rescued A", the white-haired person standing against the wall the ID "object to be rescued B", the intact white domed building the ID "scene object A", and the broken red bridge the ID "scene object B".
It can be understood that, by identifying the target objects in the multiple first images, rescuers can formulate a rescue plan combining the rescue needs of the second objects to be rescued in the disaster scene with the damage to the scene objects. This speeds up the rescue while also safeguarding the rescuers.
In one possible implementation, the UAV processes the multiple first images as they are collected in real time.
For example, the UAV collects first image A at time A and processes it; or collects first images B and C at time B and processes them; or collects first images D, E, and F at time C and processes them.
In another possible implementation, the UAV processes the multiple first images of the disaster scene once their collection is complete.
For example, after finishing collection over the disaster scene, the UAV has collected first images A, B, and C, and then processes the three images.
It can be understood that the UAV can process the first images in different ways. This broadens how the UAV can handle the first images and improves operability and flexibility.
S902: The UAV deduplicates the second objects to be rescued to obtain the first objects to be rescued.
In one possible implementation, the UAV removes the duplicates among the second objects to be rescued to obtain the first objects to be rescued.
For example, the UAV first uses the Kalman filter algorithm to determine multiple motion-state quantities for second object to be rescued A; these quantities change as the object moves. The position and velocity of second object A in the previous frame are used to predict its position and velocity in the current frame, and the two observed normally distributed states are linearly weighted to obtain the predicted current-frame state. The Hungarian algorithm then computes the similarity between the detections of second object A in two consecutive frames, giving a similarity matrix for the two frames; the intersection over union (IOU) of the two frames is used to obtain this similarity matrix, and comparing the similarity matrices of consecutive frames resolves which detections in the two frames actually match the same target.
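The prediction step described above can be sketched as a bare constant-velocity update (an illustration only: the state is reduced to position and velocity, and the Kalman covariance bookkeeping and linear weighting of observed states are omitted):

```python
# Bare constant-velocity prediction in the spirit of the Kalman step above:
# the previous frame's position and velocity predict the current position.

def predict_state(position, velocity, dt=1.0):
    """Predict the next-frame position from the previous frame's state."""
    predicted = (position[0] + velocity[0] * dt,
                 position[1] + velocity[1] * dt)
    return predicted, velocity
```

An object at (0, 0) moving with velocity (1, 2) is predicted at (1, 2) one frame later; a full tracker would then correct this prediction with the new observation.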
As shown in FIG. 10, suppose the second objects to be rescued that already have IDs are second object to be rescued A and second object to be rescued B, and the predicted second objects are predicted second object A and predicted second object B, where second object A corresponds to predicted second object A and second object B to predicted second object B. Through IOU comparison, the position and velocity of second object A are matched against the predicted position and velocity of predicted second object A. If they match successfully, the position and velocity of second object A are updated (i.e., the predicted values are used to predict the second object's position and velocity in the next frame), and the process returns to the IOU comparison. If they fail to match, the position and velocity of predicted second object B are matched against those of predicted second object A; if that also fails, predicted second object A is assigned a new ID as second object to be rescued C, its position and velocity in the next frame are predicted, and the process returns to the IOU comparison. If second object A fails to match predicted second object A while predicted second object B matches predicted second object A successfully, second object A is deleted. In this way, the first objects to be rescued are finally obtained.
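The IOU comparison used throughout this matching walkthrough is the standard intersection over union of two axis-aligned boxes; the `(x1, y1, x2, y2)` box format below is an assumption for illustration:

```python
# Standard IOU on axis-aligned boxes: overlap area divided by union area.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero width/height if the boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

A detection and a prediction whose IOU is high are treated as the same target; disjoint boxes score 0 and trigger the new-ID / deletion branches described above.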
It can be understood that the UAV assigns IDs to the target objects in the first images and deduplicates them. This reduces the number of target objects that need to be located and speeds up the rescue.
The map-model generation method provided by this application is introduced below with a concrete example. As shown in FIG. 11, the map-model generation system includes a UAV camera and an image-analysis module. The UAV camera collects the raw data (i.e., the first images and the first positions). The image-analysis module detects and classifies the targets (i.e., the first objects to be rescued and the scene objects) in the collected images, deduplicates the classified targets, and locates the deduplicated targets. The image-analysis module then displays the deduplicated and located targets on the map model (i.e., the first map model).
The solutions provided by the embodiments of this application have mainly been described above from the perspective of the computer device. It can be understood that, to realize the above functions, the computer device includes corresponding hardware structures and/or software modules for each function. Those skilled in the art will readily appreciate that the map-model generation methods of the examples described with the embodiments disclosed in this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
An embodiment of this application further provides a map-model generation apparatus. The apparatus may be a computer device, a CPU in the computer device, a processing module in the computer device for generating the map model, or a client in the computer device for generating the map model.
In the embodiments of this application, the map-model generation may be divided into functional modules or functional units according to the above method examples. For example, each function may be assigned its own functional module or unit, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module or unit. The division of modules or units in the embodiments is schematic — it is only a logical division of functions, and other divisions are possible in actual implementations.
FIG. 12 is a structural block diagram of a map-model generation apparatus provided by an embodiment of this application. The apparatus is configured to execute the map-model generation method shown in FIG. 2. The map-model generation apparatus 1200 includes an acquisition module 1201 and a processing module 1202.
The acquisition module 1201 is configured to acquire multiple first images and multiple first positions; the first images include the first objects to be rescued and the scene objects, a first position is the position at which a first image is collected, and the multiple first positions correspond to the multiple first images. The processing module 1202 is configured to generate a first map model from the multiple first images and multiple first positions; the first map model reflects the disaster scene and includes the scene objects. The acquisition module 1201 is further configured to acquire the position information of the first objects to be rescued. The processing module 1202 is specifically configured to process the first map model with the position information of the first objects to be rescued to generate a second map model, which includes the position information of the first objects to be rescued, the first objects to be rescued, and the scene objects.
Optionally, the acquisition module 1201 is further configured to acquire multiple second images, which are all the images collected. The processing module 1202 is further configured to determine the multiple first images from the multiple second images according to the first position of a target image and the multiple first positions, where the target image is any second image: the distance between the first position of each selected first image and the first position of the target image is smaller than a preset distance threshold.
Optionally, the processing module 1202 is further configured to generate multiple second positions from the multiple first positions and multiple first images; the reprojection error of a first image with respect to its second position is smaller than its reprojection error with respect to its first position, and the multiple second positions correspond to the multiple first images. The processing module 1202 is further configured to generate the first map model from the multiple second positions and multiple first images.
Optionally, the processing module 1202 is further configured to process the multiple second positions and multiple first images with the target preset algorithm to determine the dense point cloud; the target preset algorithm determines the coordinate sets corresponding to the pixels of the multiple first images, from which the dense point cloud is built. The processing module 1202 is further configured to process the dense point cloud to generate the triangle mesh model, and to render the mesh with the multiple first images to generate the first map model.
Optionally, the multiple first images include the second objects to be rescued, which include the first objects to be rescued. The processing module 1202 is further configured to deduplicate the second objects to be rescued to obtain the first objects to be rescued, and to determine the position information of the first objects to be rescued from the multiple first positions and multiple target position relationships, a target position relationship being the relationship between a first position and the position of a first object to be rescued.
FIG. 13 shows another possible structure of the map-model generation apparatus involved in the above embodiments. The apparatus includes a processor 1301 and a communication interface 1302. The processor 1301 controls and manages the actions of the apparatus — for example, executing the steps of the method flows shown in the above method embodiments and/or other processes of the techniques described herein. The communication interface 1302 supports communication between the map-model generation apparatus and other network entities. The apparatus may further include a memory 1303 and a bus 1304; the memory 1303 stores the program code and data of the apparatus.
The processor 1301 may implement or execute the various exemplary logical blocks, units, and circuits described in connection with this disclosure. It may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may also be a combination implementing computing functions, for example one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor.
The memory 1303 may include volatile memory such as random access memory; non-volatile memory such as read-only memory, flash memory, a hard disk, or a solid-state drive; or a combination of the above kinds of memory.
The bus 1304 may be an Extended Industry Standard Architecture (EISA) bus or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 13, but this does not mean there is only one bus or one type of bus.
In actual implementations, the acquisition module 1201 may be realized by the communication interface 1302 shown in FIG. 13, and the processing module 1202 may be realized by the processor 1301 shown in FIG. 13 invoking the program code in the memory 1303. For the specific execution process, refer to the description of the map-model generation method shown in FIG. 2, which is not repeated here.
From the description of the above embodiments, those skilled in the art will clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example. In practical applications, the above functions may be assigned to different functional modules as needed — that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, refer to the corresponding processes in the foregoing method embodiments, which are not repeated here.
An embodiment of this application further provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the map-model generation method in the method flows shown in the above method embodiments.
The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), a register, an optical fiber, compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, any suitable combination of the above, or any other form of computer-readable storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium; the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). In the embodiments of this application, the computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
FIG. 14 schematically shows a conceptual partial view of a computer program product provided by an embodiment of this application; the computer program product includes a computer program for executing a computer process on a computing device.
In one embodiment, the computer program product is provided using a signal-bearing medium 1400. The signal-bearing medium 1400 may include one or more program instructions that, when run by one or more processors, provide the functions — or part of the functions — described above for FIG. 2. Thus, for example, with reference to the embodiment shown in FIG. 2, one or more features of S201-S205 may be undertaken by one or more instructions associated with the signal-bearing medium 1400. The program instructions in FIG. 14 also describe example instructions.
在一些示例中,信号承载介质1400可以包含计算机可读介质1401,诸如但不限于,硬盘驱动器、紧密盘(CD)、数字视频光盘(DVD)、数字磁带、存储器、只读存储记忆体(read-only memory,ROM)或随机存储记忆体(random access memory,RAM)等等。In some examples, the signal bearing medium 1400 may include a computer readable medium 1401 such as, but not limited to, a hard drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read only memory (read only memory) -only memory, ROM) or random access memory (random access memory, RAM), etc.
In some implementations, the signal bearing medium 1400 may include a computer-recordable medium 1402 such as, but not limited to, memory, a read/write (R/W) CD, an R/W DVD, and so on.
In some implementations, the signal bearing medium 1400 may include a communication medium 1403 such as, but not limited to, a digital and/or analog communication medium (for example, a fiber optic cable, a waveguide, a wired communication link, a wireless communication link, and so on).
The signal bearing medium 1400 may be conveyed by the communication medium 1403 in a wireless form. The one or more program instructions may be, for example, computer-executable instructions or logic-implementing instructions.
In some examples, an apparatus for generating a map model, such as the one described with respect to FIG. 12, may be configured to provide various operations, functions, or actions in response to one or more of the program instructions conveyed by one or more of the computer-readable medium 1401, the computer-recordable medium 1402, and/or the communication medium 1403.
Since the apparatus for generating a map model, the computer-readable storage medium, and the computer program product in the embodiments of the present invention can be applied to the above methods, the technical effects they can achieve may also be found in the above method embodiments and are not repeated here.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (13)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210610804.8A CN114943809B (en) | 2022-05-31 | 2022-05-31 | A method, device and storage medium for generating a map model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114943809A true CN114943809A (en) | 2022-08-26 |
CN114943809B CN114943809B (en) | 2024-12-27 |
Family
ID=82909945
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210610804.8A Active CN114943809B (en) | 2022-05-31 | 2022-05-31 | A method, device and storage medium for generating a map model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114943809B (en) |
- 2022-05-31: CN application CN202210610804.8A filed; granted as patent CN114943809B (en), status Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105678754A (en) * | 2015-12-31 | 2016-06-15 | 西北工业大学 | Unmanned aerial vehicle real-time map reconstruction method |
CN109239725A (en) * | 2018-08-20 | 2019-01-18 | 广州极飞科技有限公司 | Ground mapping method and terminal based on laser ranging system |
CN110799985A (en) * | 2018-09-29 | 2020-02-14 | 深圳市大疆创新科技有限公司 | Method for identifying target object based on map and control terminal |
CN111459166A (en) * | 2020-04-22 | 2020-07-28 | 北京工业大学 | A method for constructing a scenario map with location information of trapped persons in a post-disaster rescue environment |
CN111951397A (en) * | 2020-08-07 | 2020-11-17 | 清华大学 | A method, device and storage medium for multi-machine cooperative construction of three-dimensional point cloud map |
WO2022077296A1 (en) * | 2020-10-14 | 2022-04-21 | 深圳市大疆创新科技有限公司 | Three-dimensional reconstruction method, gimbal load, removable platform and computer-readable storage medium |
CN113029169A (en) * | 2021-03-03 | 2021-06-25 | 宁夏大学 | Air-ground cooperative search and rescue system and method based on three-dimensional map and autonomous navigation |
CN113326769A (en) * | 2021-05-28 | 2021-08-31 | 北京三快在线科技有限公司 | High-precision map generation method, device, equipment and storage medium |
CN113925389A (en) * | 2021-10-15 | 2022-01-14 | 北京盈迪曼德科技有限公司 | Target object identification method and device and robot |
CN113920263A (en) * | 2021-10-18 | 2022-01-11 | 浙江商汤科技开发有限公司 | Map construction method, map construction device, map construction equipment and storage medium |
CN114459467A (en) * | 2021-12-30 | 2022-05-10 | 北京理工大学 | A target localization method based on VI-SLAM in unknown rescue environment |
Non-Patent Citations (1)
Title |
---|
HUANG Jinxin; ZHAO Yong: "An Improved Method for Real-Time 3D Map Creation by UAVs in Unknown Environments", Machinery & Electronics, no. 01, 24 January 2015 (2015-01-24) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116704386A (en) * | 2023-08-01 | 2023-09-05 | 四川开澜科技有限公司 | AI-based accurate emergency rescue method and device |
CN116704386B (en) * | 2023-08-01 | 2023-10-20 | 四川开澜科技有限公司 | AI-based accurate emergency rescue method and device |
CN117765010A (en) * | 2024-01-15 | 2024-03-26 | 武汉大学 | Tetrahedron surface marking Mesh construction method and system combined with unmanned aerial vehicle segmented image |
Also Published As
Publication number | Publication date |
---|---|
CN114943809B (en) | 2024-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102126724B1 (en) | Method and apparatus for restoring point cloud data | |
JP6255085B2 (en) | Locating system and locating method | |
US9189853B1 (en) | Automatic pose estimation from uncalibrated unordered spherical panoramas | |
KR102526542B1 (en) | 2d vehicle localizing using geoarcs | |
KR20200075727A (en) | Method and apparatus for calculating depth map | |
US11113896B2 (en) | Geophysical sensor positioning system | |
CN110703805B (en) | Method, device and equipment for planning three-dimensional object surveying and mapping route, unmanned aerial vehicle and medium | |
US10706617B2 (en) | 3D vehicle localizing using geoarcs | |
EP2856431A2 (en) | Combining narrow-baseline and wide-baseline stereo for three-dimensional modeling | |
CN114286923A (en) | Global coordinate system defined by data set corresponding relation | |
CN114943809B (en) | A method, device and storage medium for generating a map model | |
CN113610702B (en) | Picture construction method and device, electronic equipment and storage medium | |
WO2022247548A1 (en) | Positioning method, apparatus, electronic device, and storage medium | |
Sambolek et al. | Person Detection and Geolocation Estimation in Drone Images | |
CN114674328B (en) | Map generation method, map generation device, electronic device, storage medium, and vehicle | |
Farkoushi et al. | Generating Seamless Three-Dimensional Maps by Integrating Low-Cost Unmanned Aerial Vehicle Imagery and Mobile Mapping System Data | |
US20250036129A1 (en) | System and Method for Finding Prospective Locations of Drone Operators Within an Area | |
RU2759773C1 (en) | Method and system for determining the location of the user | |
JP7577608B2 (en) | Location determination device, location determination method, and location determination system | |
Liu et al. | Real-scene 3D measurement algorithm and program implementation based on Mobile terminals | |
KR20250090838A (en) | Method for retrieving object in 3d space and server performing the method | |
JP2023094344A (en) | Augmented reality display device, method, and program | |
CN119559215A (en) | Feature processing method, visual tracking method, device, equipment and medium | |
CN116863093A (en) | Terrain modeling method, apparatus, computer device and medium | |
CN119579848A (en) | Unified visual positioning architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |