
CN109073398B - Map establishing method, positioning method, device, terminal and storage medium


Info

Publication number
CN109073398B
Authority
CN
China
Prior art keywords
image data
positioning
map
view
positioning result
Prior art date
Legal status
Active
Application number
CN201880001095.5A
Other languages
Chinese (zh)
Other versions
CN109073398A (en)
Inventor
易万鑫
廉士国
林义闽
Current Assignee
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd filed Critical Cloudminds Robotics Co Ltd
Publication of CN109073398A
Application granted granted Critical
Publication of CN109073398B

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

Some embodiments of the present application provide a map establishing method, a positioning method, an apparatus, a terminal, and a storage medium. The map establishing method is applied to a terminal or a cloud and includes the following steps: acquiring image data of N different viewing angles, where N is a positive integer; combining the image data of the N different viewing angles into full-view image data; and establishing a full-view map according to the full-view image data.

Description

Map establishing method, positioning method, device, terminal and storage medium
Technical Field
The present application relates to the field of computer vision, and in particular, to a method, a positioning method, an apparatus, a terminal, and a storage medium for establishing a map.
Background
Devices such as intelligent robots and unmanned vehicles need real-time mapping and positioning to perceive their surroundings in unknown environments; functions such as navigation can be guaranteed only when mapping and positioning succeed. Currently, mapping and positioning are generally performed according to map information collected from a single viewing angle.
In studying the prior art, the inventors found that single-viewing-angle positioning has many limitations. Because the viewing angle of the single sensor device (such as a camera) used during mapping is limited, the acquired map information is limited, so a device such as an intelligent robot can only be positioned under the viewing angle used for mapping. Once such a device deviates from the original viewing angle, or the field of view at the original viewing angle is occluded, positioning fails; in mild cases this degrades the positioning performance and user experience, and in severe cases it endangers the lives of others.
Therefore, how to improve the positioning capability is a problem to be solved.
Disclosure of Invention
One technical problem to be solved by some embodiments of the present application is how to improve the positioning capability.
One embodiment of the present application provides a method of building a map, including: acquiring image data of N different visual angles; wherein N is a positive integer; forming the image data of N different visual angles into image data of a full visual angle; and establishing a full-view map according to the image data of the full view.
An embodiment of the present application further provides a positioning method, including: acquiring first image data of N different visual angles; wherein N is a positive integer; determining a first positioning result according to the first image data and the map of the N different visual angles; the map comprises a full-view-angle map, the full-view-angle map is established according to M second image data of different view angles, and M is a positive integer.
An embodiment of the present application further provides an apparatus for creating a map, including: the system comprises an acquisition module, a merging module and a graph building module; the acquisition module is used for acquiring image data of N different visual angles; wherein N is a positive integer; the merging module is used for combining the image data of the N different visual angles into image data of a full visual angle; the mapping module is used for establishing a full-view map according to the image data of the full view.
An embodiment of the present application also provides a positioning apparatus, including: the device comprises an acquisition module and a positioning module; the acquisition module is used for acquiring first image data of N different visual angles; wherein N is a positive integer; the positioning module is used for determining a first positioning result according to the first image data and the map of the N different visual angles; the map comprises a full-view-angle map, the full-view-angle map is established according to M second image data of different view angles, and M is a positive integer.
An embodiment of the present application also provides a terminal comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the method of building a map as set forth in the above embodiments.
An embodiment of the present application also provides a terminal comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the positioning method mentioned in the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the method for creating a map mentioned in the above embodiment.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and the computer program is executed by a processor to implement the positioning method mentioned in the above embodiment.
Compared with the prior art, the embodiments of the present application establish a full-view map from image data of different viewing angles. Because the full-view map is stored in the terminal, even if the sensor's viewing angle during positioning differs from its viewing angle during mapping, the terminal can still position itself according to the full-view map. This solves the problem of positioning failure caused by viewing-angle deviation or field-of-view occlusion, reduces the terminal's positioning blind areas, and improves the terminal's positioning capability.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements; the figures are not drawn to scale unless otherwise specified.
FIG. 1 is a flowchart of a map establishing method according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of a sensor distribution according to the first embodiment of the present application;
FIG. 3 is a flowchart of a map establishing method according to a second embodiment of the present application;
FIG. 4 is a flowchart of a positioning method according to a third embodiment of the present application;
FIG. 5 is a flowchart of a positioning method according to a fourth embodiment of the present application;
FIG. 6 is a schematic diagram of combining the map establishing method with the positioning method according to the fourth embodiment of the present application;
FIG. 7 is a schematic structural diagram of a map establishing apparatus according to a fifth embodiment of the present application;
FIG. 8 is a schematic structural diagram of another map establishing apparatus according to the fifth embodiment of the present application;
FIG. 9 is a schematic structural diagram of a positioning apparatus according to a sixth embodiment of the present application;
FIG. 10 is a schematic structural diagram of another positioning apparatus according to the sixth embodiment of the present application;
FIG. 11 is a schematic structural diagram of a terminal according to a seventh embodiment of the present application;
FIG. 12 is a schematic structural diagram of a terminal according to an eighth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, some embodiments of the present application will be described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The first embodiment of the present application relates to a map establishing method, which is applied to a terminal or a cloud. The terminal may be an intelligent robot, an unmanned vehicle, a navigation device for the blind, or the like. The cloud is communicatively connected to the terminal and either provides the terminal with a map for positioning or directly provides the terminal with a positioning result. In this embodiment, the terminal is taken as an example to explain the execution of the map establishing method; the process by which the cloud executes the method can refer to the contents of this embodiment. As shown in fig. 1, the map establishing method includes the following steps:
step 101: acquiring image data of N different visual angles. Wherein N is a positive integer.
Specifically, the terminal acquires image data of different perspectives of the environment surrounding the terminal through one or more sensors.
Step 102: and forming the image data of the N different visual angles into image data of a full visual angle.
Specifically, the terminal merges the image data of the N different viewing angles through image processing techniques to obtain full-view image data. Methods for merging the image data of the N different viewing angles into full-view image data include, but are not limited to, the following three:
the method comprises the following steps: the terminal determines the similar areas among the image data of the N different visual angles, and combines the image data of the N different visual angles according to the similar areas among the image data of the N different visual angles.
In one implementation, a sensor is mounted at each of several different orientations of the terminal, and each of the N viewing angles corresponds to one sensor. Among the sensors corresponding to the image data of the N different viewing angles, every two adjacent sensors share a common field of view, so a similar region exists between the image data they capture. The terminal merges the image data of the N different viewing angles according to these similar regions.
In another implementation, a single sensor is installed on the terminal, and the terminal rotates the sensor to acquire image data of N different viewing angles. For example, while establishing the map, the terminal acquires image data at preset distance intervals, performing the following operations each time: the sensor captures image data at an initial angle, then turns to a first preset angle and captures image data there, the sensor at the initial angle and at the first preset angle sharing a common field of view; after that capture, the sensor turns to a second preset angle and captures image data there, the first and second preset angles again sharing a common field of view; and so on, until the sensor has rotated back to the initial angle. The image data captured at each angle together form the image data of the N different viewing angles.
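As an illustration of the rotate-and-capture acquisition just described, here is a minimal sketch in Python; the Sensor class, its rotate_to/capture methods, and the field_of_view_deg attribute are hypothetical stand-ins for whatever camera and gimbal interface the terminal actually exposes.

    # Minimal sketch of single-sensor, rotate-and-capture acquisition.
    # Sensor, rotate_to(), capture() and field_of_view_deg are hypothetical.
    def acquire_n_views(sensor, n_views):
        """Capture image data at N angles spanning a full turn.

        Consecutive angles are spaced so that adjacent shots share a
        common field of view (the angular step must be smaller than
        the sensor's field of view).
        """
        step = 360.0 / n_views                     # e.g. 72 degrees for N = 5
        assert step < sensor.field_of_view_deg, "adjacent views must overlap"
        images = []
        for i in range(n_views):
            sensor.rotate_to(i * step)             # initial angle is 0 degrees
            images.append(sensor.capture())
        return images                              # image data of N viewing angles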
Method 2: the terminal acquires a pre-established merging model and merges the image data of the N different viewing angles according to the merging model.
In one implementation, a sensor is mounted at each of several different orientations of the terminal, and each of the N viewing angles corresponds to one sensor. The merging model indicates the merging order of the image data of the N different viewing angles, and that order is determined by the arrangement of the corresponding sensors. For example, with 5 sensors installed on the terminal and distributed as shown in fig. 2, the image data of all the sensors are merged clockwise or counterclockwise in the sensors' order of arrangement.
Method 3: combine Method 1 and Method 2. Specifically, the terminal acquires a pre-established merging model and determines from it the merging order of the image data of the N different viewing angles. The terminal arranges the image data of the N different viewing angles in that order, then merges each adjacent pair according to the similar region between them; a sketch of this merging follows.
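As a rough sketch of the merging described in Methods 1 to 3, the snippet below uses OpenCV's panorama stitcher as a stand-in for the patent's merging step; the merge_order list plays the role of the merging model, and the choice of cv2.Stitcher is an assumption rather than anything the patent specifies.

    import cv2

    def merge_to_full_view(images, merge_order=None):
        """Merge N overlapping views into one full-view image.

        merge_order plays the role of the merging model: a list of
        indices giving the sensor arrangement (e.g. clockwise).  If
        omitted, the images are merged in the order given.
        """
        if merge_order is not None:
            images = [images[i] for i in merge_order]   # arrange per the model
        # The stitcher finds similar (overlapping) regions via feature
        # matching and composites the images, much as Method 1 describes.
        stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
        status, panorama = stitcher.stitch(images)
        if status != cv2.Stitcher_OK:
            raise RuntimeError("stitching failed with status %d" % status)
        return panorama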
Step 103: and establishing a full-view map according to the image data of the full view.
Specifically, the terminal establishes the full-view map from the full-view image data through Visual Simultaneous Localization and Mapping (VSLAM) techniques, for example ORB_SLAM, a SLAM technique built on the ORB (Oriented FAST and Rotated BRIEF) binary feature descriptor.
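ORB_SLAM itself is a complete C++ system with tracking, local mapping, and loop closing; the sketch below shows only the ORB feature-extraction step at the front end of such a system, using OpenCV, to suggest what might be stored per map keyframe. The keyframe dictionary layout is an assumption for illustration.

    import cv2

    def build_keyframe(full_view_image, pose):
        """Extract ORB features from one full-view image and package
        them, together with the capture pose, as a map keyframe.

        Only the feature side of a VSLAM front end is shown; pose
        estimation, keyframe selection and loop closing are omitted.
        """
        orb = cv2.ORB_create(nfeatures=2000)
        keypoints, descriptors = orb.detectAndCompute(full_view_image, None)
        return {"pose": pose, "keypoints": keypoints, "descriptors": descriptors}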
It should be noted that, as can be understood by those skilled in the art, in practical application, the full-view map may also be created through other map creating technologies, and the method for creating the map according to the full-view data is not limited in this embodiment.
Compared with the prior art, the map establishing method provided by this embodiment establishes a full-view map from image data of different viewing angles. Because the full-view map is stored in the terminal, even if the sensor's viewing angle during positioning differs from its viewing angle during mapping, the terminal can still position itself according to the full-view map. This solves the problem of positioning failure caused by viewing-angle deviation or field-of-view occlusion, reduces the terminal's positioning blind areas, and improves the terminal's positioning capability.
The second embodiment of the present application relates to a method for building a map, and the present embodiment is a further improvement of the first embodiment, and the specific improvement is that: additional related steps are added after step 103.
As shown in fig. 3, the present embodiment includes steps 201 to 204. Steps 201 to 203 are substantially the same as steps 101 to 103 in the first embodiment, and are not described in detail, and the following differences are mainly introduced:
step 201 to step 203 are executed.
Step 204: and respectively establishing N single-view-angle maps according to the N image data with different viewing angles.
In one implementation, N sensors are installed on the terminal. After the full-view map is built, the terminal establishes a single-view map from the image data captured by each sensor, again using VSLAM techniques such as ORB_SLAM.
It should be noted that after the full-view map is established, a single-view map is also established from the image data of each viewing angle, so that the terminal can still position itself according to the single-view maps after positioning against the full-view map fails, further improving the terminal's positioning capability.
It should be noted that, for clarity, this embodiment presents step 204 after steps 202 and 203. Those skilled in the art will understand that, in practical applications, step 204 may instead precede steps 202 and 203; this embodiment does not limit the execution order of steps 202, 203, and 204.
Compared with the prior art, in the map establishing method provided by this embodiment, after the full-view map is established from the image data of the N different viewing angles, a single-view map is also established from the image data of each viewing angle, so that the terminal can fall back to the single-view maps when positioning against the full-view map fails, further improving the terminal's positioning capability.
A third embodiment of the present application relates to a positioning method, which is applied to a terminal or a cloud. The terminal may be an intelligent robot, an unmanned vehicle, a navigation device for the blind, or the like. The cloud is communicatively connected to the terminal and provides the terminal with a positioning result. In this embodiment, the terminal is taken as an example to explain the execution of the positioning method; the process by which the cloud executes the method can refer to the contents of this embodiment. As shown in fig. 4, the positioning method includes the following steps:
step 301: first image data of N different visual angles are acquired. Wherein N is a positive integer.
In a specific implementation, a plurality of sensors are installed on the terminal. The terminal controls the plurality of sensors to acquire the first image data at the same time, or the terminal controls the plurality of sensors to acquire the first image data sequentially.
In another implementation, a single sensor is mounted on the terminal. The terminal controls the sensor to capture one piece of first image data; after the capture, the sensor rotates by a preset angle in a preset direction and captures again, until first image data of N different viewing angles have been acquired.
Step 302: and determining a first positioning result according to the first image data and the map of the N different visual angles.
Specifically, the map includes a full-view map, which is established according to second image data of M different viewing angles, where M is a positive integer. The terminal matches the first image data of the N different viewing angles against the map respectively, and determines the first positioning result according to the matching results.
In one example, the sensors used during positioning are arranged in the same way as the sensors used during mapping, i.e., N equals M.
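The patent does not fix the matching procedure; a minimal sketch using brute-force Hamming matching of ORB descriptors might look as follows, where the distance cutoff and minimum match count are assumed tuning parameters, not values from the patent.

    import cv2

    def match_against_map(descriptors, map_descriptors,
                          max_distance=50, min_matches=30):
        """Match one view's ORB descriptors against a map's descriptors.

        Returns True (positioning success) when enough good matches
        are found; both thresholds are assumptions for illustration.
        """
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(descriptors, map_descriptors)
        good = [m for m in matches if m.distance < max_distance]
        return len(good) >= min_matches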
Compared with the prior art, the positioning method provided by this embodiment matches the acquired image data of different viewing angles against the full-view map, so the terminal can position itself from any viewing angle, reducing its positioning blind areas. Because the full-view map is stored in the terminal, even if the sensor's viewing angle during positioning differs from its viewing angle during mapping, the terminal can still position itself according to the full-view map; this solves the problem of positioning failure caused by viewing-angle deviation or field-of-view occlusion and improves the terminal's positioning capability.
A fourth embodiment of the present application relates to a positioning method, and this embodiment is a further refinement of the third embodiment, specifically describing step 302.
As shown in fig. 5, the present embodiment includes steps 401 to 406. Step 401 is substantially the same as step 301 in the third embodiment, and will not be described in detail here, and the following differences are mainly described:
step 402: and respectively matching the first image data of the N different visual angles with a full-visual-angle map, and determining a second positioning result according to the matching results of the first image data of the N different visual angles with the full-visual-angle map.
In one implementation, the terminal judges whether, among the matching results of the first image data of the N different viewing angles against the full-view map, there is a matching result indicating successful positioning. If so, the terminal determines that the second positioning result indicates successful positioning, and determines the pose data in the second positioning result according to the pose data in the matching result indicating successful positioning.
Step 403: and judging whether the second positioning result indicates that the positioning is successful.
Specifically, if the terminal determines that the second positioning result indicates that the positioning is successful, step 404 is executed, otherwise, step 405 is executed.
Step 404: and determining a first positioning result according to the second positioning result. The flow is then ended.
Specifically, the terminal takes the pose data included in the second positioning result as the pose data in the first positioning result.
Step 405: and matching the first image data of the N different visual angles with the M single-visual-angle maps, and determining a third positioning result according to the matching result of the first image data of the N different visual angles and the M single-visual-angle maps.
Specifically, the map further includes M single-view maps, which are respectively created based on the M second image data of different views.
The method of determining the third positioning result is exemplified below.
Method A: for each first image data, the terminal performs the following operations: matching the first image data against the M single-view maps respectively, and determining a fourth positioning result corresponding to the first image data according to the matching results, where the fourth positioning result indicates positioning success or positioning failure. The terminal then determines the third positioning result according to the fourth positioning results corresponding to the N first image data of different viewing angles.
Method B: the terminal determines the correspondence between the first image data of the N different viewing angles and the M single-view maps, and performs the following operations for each first image data: matching the first image data against the single-view map corresponding to it, and determining a fourth positioning result corresponding to the first image data according to the matching result, where the fourth positioning result indicates positioning success or positioning failure. The terminal then determines the third positioning result according to the fourth positioning results corresponding to the N first image data of different viewing angles.
In one implementation, the terminal determines the third positioning result from the fourth positioning results corresponding to the N first image data of different viewing angles as follows: the terminal judges whether, among those fourth positioning results, there is one indicating successful positioning. If so, the terminal takes the pose data contained in each fourth positioning result indicating successful positioning, calculates the average of that pose data, and uses the average as the third positioning result; if not, the terminal determines that the third positioning result indicates positioning failure.
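A minimal sketch of this averaging step, assuming each piece of pose data is a NumPy vector (for example, a position plus orientation parameters); note that naively averaging rotation angles misbehaves near wraparound, so a real system would average orientations more carefully.

    import numpy as np

    def third_result(fourth_results):
        """fourth_results: list of (success, pose) pairs, one per view,
        where pose is a NumPy vector when success is True.

        Averages the pose data of all fourth positioning results that
        indicate success; returns None to indicate positioning failure.
        """
        poses = [pose for ok, pose in fourth_results if ok]
        if not poses:
            return None                   # third result: positioning failure
        return np.mean(poses, axis=0)     # third result: averaged pose data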
Step 406: and determining the first positioning result according to the third positioning result.
Compared with the prior art, the positioning method provided by this embodiment falls back to the single-view maps after positioning against the full-view map fails, improving the terminal's positioning capability; and because the final positioning result is determined from the positioning results of multiple single-view maps, positioning accuracy is improved as well.
It should be noted that, as will be understood by those skilled in the art, in practical applications, the method for establishing a map and the positioning method mentioned in the embodiments of the present application may be used in combination. In specific implementation, a schematic diagram of a method for combining a map building method and a positioning method is shown in fig. 6.
The following describes the terminal's mapping and positioning process in an actual scenario. Five sensors (sensor 1 through sensor 5) are installed on the terminal, each with a different viewing angle. During mapping, the terminal acquires the second image data captured by the sensors and establishes one full-view map from the second image data captured by sensors 1 through 5. The terminal also establishes single-view map i from the second image data captured by sensor i, for i = 1, 2, 3, 4, 5. During positioning, the terminal acquires the 5 first image data captured by the 5 sensors, matches each against the full-view map, and determines a matching result corresponding to each first image data. If the terminal determines that a matching result indicating successful positioning exists, it determines the first positioning result according to that matching result. If no such matching result exists, the terminal matches the first image data captured by sensor i against single-view map i and determines, according to the matching result, a fourth positioning result corresponding to that first image data, where i = 1, 2, 3, 4, 5. If a fourth positioning result indicating successful positioning exists among the fourth positioning results corresponding to all the first image data, the terminal determines the first positioning result according to the fourth positioning results indicating successful positioning; otherwise, it determines that the first positioning result indicates positioning failure. A sketch of this pipeline follows.
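A sketch of the fallback pipeline in the worked example, under the same assumptions as the snippets above: match_fn(image, map) is a hypothetical helper returning a pose vector on success and None on failure.

    def locate(first_images, full_view_map, single_view_maps, match_fn):
        """Fallback positioning from the worked example: try the
        full-view map first, then per-sensor single-view maps.

        first_images[i] comes from sensor i, and single_view_maps[i]
        was built from sensor i's second image data.
        """
        # First, match every view against the full-view map.
        for image in first_images:
            pose = match_fn(image, full_view_map)
            if pose is not None:
                return pose                           # first result: success
        # Fall back: match sensor i's image against single-view map i.
        poses = [match_fn(img, m)
                 for img, m in zip(first_images, single_view_maps)]
        successes = [p for p in poses if p is not None]
        if successes:
            return sum(successes) / len(successes)    # average successful poses
        return None                                   # first result: failure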
A fifth embodiment of the present application relates to an apparatus for creating a map, as shown in fig. 7, including an obtaining module 501, a merging module 502, and a map creating module 503; the acquiring module 501 is configured to acquire image data of N different viewing angles; wherein N is a positive integer; the merging module 502 is configured to combine the image data of N different views into image data of a full view; the map building module 503 is configured to build a full-view map according to the image data of the full view.
In a specific implementation, a schematic structural diagram of another map building apparatus is shown in fig. 8, where the map building apparatus further includes N sensors 504, and the N sensors 504 are used to acquire image data from different perspectives.
It should be understood that this embodiment is a system embodiment corresponding to the first embodiment, and the two can be implemented in cooperation. The related technical details mentioned in the first embodiment are still valid in this embodiment and, to reduce repetition, are not described here again. Correspondingly, the related technical details mentioned in this embodiment can also be applied to the first embodiment.
A sixth embodiment of the present application relates to a positioning device, as shown in fig. 9, including: an acquisition module 601 and a positioning module 602; the acquiring module 601 is configured to acquire first image data of N different viewing angles; wherein N is a positive integer. The positioning module 602 is configured to determine a first positioning result according to the first image data and the map at the N different viewing angles; the map comprises a full-view-angle map, the full-view-angle map is established according to M second image data of different view angles, and M is a positive integer.
In a specific implementation, a schematic structural diagram of another positioning apparatus is shown in fig. 10, and the positioning apparatus further includes a single-view map loading module 603 and a full-view map loading module 604. The single-view map loading module 603 is configured to load single-view maps respectively established according to the M second image data of different views, and the full-view map loading module 604 is configured to load a full-view map.
It should be understood that this embodiment is a system embodiment corresponding to the third embodiment, and the two can be implemented in cooperation. The related technical details mentioned in the third embodiment are still valid in this embodiment and, to reduce repetition, are not described here again. Correspondingly, the related technical details mentioned in this embodiment can also be applied to the third embodiment.
It should be noted that the modules in the fifth and sixth embodiments are logical modules; in practical applications, a logical unit may be one physical unit, part of a physical unit, or a combination of multiple physical units. In addition, to highlight the innovative part of the present application, this embodiment does not introduce units that are not closely related to solving the technical problem proposed by the present application, but this does not mean that no other units exist in this embodiment.
A seventh embodiment of the present application is directed to a terminal, as shown in fig. 11, comprising at least one processor 701; and a memory 702 communicatively coupled to the at least one processor 701. The memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701 to enable the at least one processor 701 to perform the above-mentioned map building method.
An eighth embodiment of the present application is directed to a terminal, as shown in fig. 12, including at least one processor 801; and a memory 802 communicatively coupled to the at least one processor 801. The memory 802 stores instructions executable by the at least one processor 801, and the instructions are executed by the at least one processor 801 to enable the at least one processor 801 to perform the positioning method.
In the seventh embodiment and the eighth embodiment, the processor is exemplified by a Central Processing Unit (CPU), and the Memory is exemplified by a Random Access Memory (RAM). The processor and the memory may be connected by a bus or other means, and fig. 11 and 12 illustrate the connection by a bus. Memory, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the full-view map stored in the memory in the embodiments of the present application. The processor executes various functional applications and data processing of the device by running nonvolatile software programs, instructions and modules stored in the memory, namely, the map building method and the positioning method are realized.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be connected to the external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory and, when executed by the one or more processors, perform the method of establishing a map and the method of locating in any of the above-described method embodiments.
The above products can execute the methods provided by the embodiments of the present application and have the corresponding functional modules and beneficial effects; for technical details not described in detail in this embodiment, refer to the methods provided by the embodiments of the present application.
A ninth embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the method of building a map as described in any of the method embodiments above.
A tenth embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the positioning method described in any of the method embodiments above.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the related hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and details may be made therein without departing from the spirit and scope of the present application in practice.

Claims (7)

1. A method of positioning, comprising:
acquiring first image data of N different visual angles; wherein N is a positive integer;
determining a first positioning result according to the N first image data with different visual angles and the map; the map comprises a full-view-angle map, the full-view-angle map is established according to second image data of M different viewing angles, and M is a positive integer;
the map further comprises M single-view-angle maps, wherein the M single-view-angle maps are respectively established according to the M second image data of different view angles;
determining a first positioning result according to the N first image data and the map at different viewing angles, specifically including:
respectively matching the first image data of the N different visual angles with the full-visual-angle map, and determining a second positioning result according to the matching results of the first image data of the N different visual angles with the full-visual-angle map;
judging whether the second positioning result indicates that the positioning is successful;
if so, determining the first positioning result according to the second positioning result;
if not, matching the first image data of the N different visual angles with the M single-visual-angle maps, and determining a third positioning result according to the matching result of the first image data of the N different visual angles with the M single-visual-angle maps; and determining the first positioning result according to the third positioning result.
2. The positioning method according to claim 1, wherein the matching the first image data of the N different view angles with the M single-view maps, and determining a third positioning result according to a matching result of the first image data of the N different view angles with the M single-view maps specifically includes:
for each first image data, the following operations are respectively performed: matching the first image data with the M single-view maps, respectively; determining a fourth positioning result corresponding to the first image data according to the matching result, wherein the fourth positioning result indicates success or failure of positioning;
and determining the third positioning result according to the fourth positioning results respectively corresponding to the N first image data with different visual angles.
3. The positioning method according to claim 1, wherein the matching the first image data of the N different view angles with the M single-view maps, and determining a third positioning result according to a matching result of the first image data of the N different view angles with the M single-view maps specifically includes:
determining the corresponding relation between the first image data of the N different visual angles and the M single-visual-angle maps;
for each first image data, the following operations are respectively performed: matching the first image data with a single-view map corresponding to the first image data; determining a fourth positioning result corresponding to the first image data according to the matching result, wherein the fourth positioning result indicates success or failure of positioning;
and determining the third positioning result according to the fourth positioning results respectively corresponding to the N first image data with different visual angles.
4. The positioning method according to claim 2 or 3, wherein the determining the third positioning result according to the fourth positioning results respectively corresponding to the first image data of the N different viewing angles specifically includes:
judging whether a fourth positioning result indicating successful positioning exists in fourth positioning results respectively corresponding to the N first image data with different visual angles;
if the determination is positive, determining pose data contained in each fourth positioning result indicating successful positioning, calculating an average value of the pose data in all the fourth positioning results indicating successful positioning, and taking the average value as the third positioning result;
and if the third positioning result does not exist, determining that the third positioning result indicates positioning failure.
5. A positioning device, comprising: the device comprises an acquisition module and a positioning module;
the acquisition module is used for acquiring first image data of N different visual angles; wherein N is a positive integer;
the positioning module is used for determining a first positioning result according to the first image data and the map of the N different visual angles; the map comprises a full-view-angle map, the full-view-angle map is established according to second image data of M different viewing angles, and M is a positive integer;
the map further comprises M single-view-angle maps, wherein the M single-view-angle maps are respectively established according to the M second image data of different view angles;
the positioning module determines a first positioning result according to the N first image data and the map at different viewing angles, and specifically includes:
respectively matching the first image data of the N different visual angles with the full-visual-angle map, and determining a second positioning result according to the matching results of the first image data of the N different visual angles with the full-visual-angle map;
judging whether the second positioning result indicates that the positioning is successful;
if so, determining the first positioning result according to the second positioning result;
if not, matching the first image data of the N different visual angles with the M single-visual-angle maps, and determining a third positioning result according to the matching result of the first image data of the N different visual angles with the M single-visual-angle maps; and determining the first positioning result according to the third positioning result.
6. A terminal, comprising at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the positioning method of any one of claims 1 to 4.
7. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the positioning method according to any one of claims 1 to 4.
CN201880001095.5A 2018-07-20 2018-07-20 Map establishing method, positioning method, device, terminal and storage medium Active CN109073398B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/096374 WO2020014941A1 (en) 2018-07-20 2018-07-20 Map establishment method, positioning method and apparatus, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN109073398A CN109073398A (en) 2018-12-21
CN109073398B true CN109073398B (en) 2022-04-08

Family

ID=64789237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880001095.5A Active CN109073398B (en) 2018-07-20 2018-07-20 Map establishing method, positioning method, device, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN109073398B (en)
WO (1) WO2020014941A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109965797B (en) * 2019-03-07 2021-08-24 深圳市愚公科技有限公司 Floor sweeping robot map generation method, floor sweeping robot control method and terminal
CN110415174B (en) * 2019-07-31 2023-07-07 达闼科技(北京)有限公司 Map fusion method, electronic device and storage medium
CN114683270A (en) * 2020-12-30 2022-07-01 深圳乐动机器人有限公司 Robot-based composition information acquisition method and robot system
CN117036484B * 2023-08-25 2025-07-01 Xidian University Visual positioning and mapping method, system, equipment and medium based on geometry and semantics

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123727A (en) * 2011-11-21 2013-05-29 联想(北京)有限公司 Method and device for simultaneous positioning and map building
CN103247225A (en) * 2012-02-13 2013-08-14 联想(北京)有限公司 Instant positioning and map building method and equipment
CN103389103A (en) * 2013-07-03 2013-11-13 北京理工大学 Geographical environmental characteristic map construction and navigation method based on data mining
DE102015004923A1 (en) * 2015-04-17 2015-12-03 Daimler Ag Method for self-localization of a vehicle
WO2016119117A1 (en) * 2015-01-27 2016-08-04 Nokia Technologies Oy Localization and mapping method
CN107223244A (en) * 2016-12-02 2017-09-29 深圳前海达闼云端智能科技有限公司 Localization method and device
CN107301654A (en) * 2017-06-12 2017-10-27 西北工业大学 A kind of positioning immediately of the high accuracy of multisensor is with building drawing method
CN107885871A (en) * 2017-11-24 2018-04-06 南京华捷艾米软件科技有限公司 Synchronous superposition method, system, interactive system based on cloud computing
CN108053473A (en) * 2017-12-29 2018-05-18 北京领航视觉科技有限公司 A kind of processing method of interior three-dimensional modeling data
CN109074676A (en) * 2018-07-03 2018-12-21 深圳前海达闼云端智能科技有限公司 Map building method, positioning method, terminal and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11051000B2 (en) * 2014-07-14 2021-06-29 Mitsubishi Electric Research Laboratories, Inc. Method for calibrating cameras with non-overlapping views
CN106251399B (en) * 2016-08-30 2019-04-16 广州市绯影信息科技有限公司 A kind of outdoor scene three-dimensional rebuilding method and implementing device based on lsd-slam
CN106443687B (en) * 2016-08-31 2019-04-16 欧思徕(北京)智能科技有限公司 A kind of backpack mobile mapping system based on laser radar and panorama camera

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123727A (en) * 2011-11-21 2013-05-29 联想(北京)有限公司 Method and device for simultaneous positioning and map building
CN103247225A (en) * 2012-02-13 2013-08-14 联想(北京)有限公司 Instant positioning and map building method and equipment
CN103389103A (en) * 2013-07-03 2013-11-13 北京理工大学 Geographical environmental characteristic map construction and navigation method based on data mining
WO2016119117A1 (en) * 2015-01-27 2016-08-04 Nokia Technologies Oy Localization and mapping method
DE102015004923A1 (en) * 2015-04-17 2015-12-03 Daimler Ag Method for self-localization of a vehicle
CN107223244A (en) * 2016-12-02 2017-09-29 深圳前海达闼云端智能科技有限公司 Localization method and device
CN107301654A (en) * 2017-06-12 2017-10-27 西北工业大学 A kind of positioning immediately of the high accuracy of multisensor is with building drawing method
CN107885871A (en) * 2017-11-24 2018-04-06 南京华捷艾米软件科技有限公司 Synchronous superposition method, system, interactive system based on cloud computing
CN108053473A (en) * 2017-12-29 2018-05-18 北京领航视觉科技有限公司 A kind of processing method of interior three-dimensional modeling data
CN109074676A (en) * 2018-07-03 2018-12-21 深圳前海达闼云端智能科技有限公司 Map building method, positioning method, terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN109073398A (en) 2018-12-21
WO2020014941A1 (en) 2020-01-23

Similar Documents

Publication Publication Date Title
CN109073398B (en) Map establishing method, positioning method, device, terminal and storage medium
US11422261B2 (en) Robot relocalization method and apparatus and robot using the same
CN111612852B (en) Method and apparatus for verifying camera parameters
CN111123912B (en) Calibration method and device for travelling crane positioning coordinates
CN112136137A (en) A kind of parameter optimization method, device and control equipment, aircraft
WO2021022989A1 (en) Calibration parameter obtaining method and apparatus, processor, and electronic device
CN110969048A (en) Target tracking method and device, electronic equipment and target tracking system
CN109584299B (en) Positioning method, positioning device, terminal and storage medium
CN110490938A (en) For verifying the method, apparatus and electronic equipment of camera calibration parameter
CN113342055A (en) Unmanned aerial vehicle flight control method and device, electronic equipment and storage medium
CN109313809B (en) Image matching method, device and storage medium
CN111582296B (en) Remote sensing image comprehensive matching method and device, electronic equipment and storage medium
CN109073390B (en) Positioning method and device, electronic equipment and readable storage medium
CN109658451B (en) Depth sensing method and device and depth sensing equipment
CN116958452A (en) Three-dimensional reconstruction method and system
US20250037401A1 (en) System and methods for validating imagery pipelines
CN113450389B (en) Target tracking method and device and electronic equipment
CN110750094B (en) Method, device and system for determining posture change information of movable device
CN111639662A (en) Remote sensing image bidirectional matching method and device, electronic equipment and storage medium
KR20210030136A (en) Apparatus and method for generating vehicle data, and vehicle system
CN117437303B (en) Method and system for calibrating camera external parameters
WO2023070441A1 (en) Movable platform positioning method and apparatus
CN116740681A (en) Target detection method, device, vehicle and storage medium
CN109074676B (en) Method for establishing map, positioning method, terminal and computer readable storage medium
CN118052867A (en) Positioning method, terminal equipment, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210210

Address after: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.