CN107564064B - Positioning point, coding method thereof, positioning method and system thereof - Google Patents
- Publication number
- CN107564064B CN201710818044.9A
- Authority
- CN
- China
- Prior art keywords
- positioning
- sequence
- coding
- positioning point
- coding sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a positioning point, a coding method thereof, and a positioning method and system. The coding method comprises the following steps: dividing the brightness variation range of the positioning point into two or more brightness intervals, each brightness interval corresponding to one coding value; and converting the positioning point number into a coding sequence through a preset conversion rule, the coding sequence being composed of a plurality of sequentially arranged coding values. Because the positioning points are distinguished by their luminous brightness, all positioning points can be observed by the camera in every exposure, which guarantees observation continuity. In the image obtained by the camera, each positioning point retains its circular feature, ensuring the accuracy and stability of extracting the positioning point's center of gravity.
Description
Technical Field
The present invention relates to the field of position tracking technologies, and in particular, to a positioning point, an encoding method thereof, a positioning method and a system thereof.
Background
In general, in a VR (virtual reality) system, optical tracking and positioning of a VR device is performed by capturing the positions of positioning points arranged in advance on the device. Knowing the positions of the positioning points in the camera's image plane and the positional relationships between different positioning points, the position of the VR device relative to the camera plane can be calculated, and from that its position in the scene.
Existing optical positioning technologies based on positioning points mainly fall into two schemes. The first scheme places only one positioning point on the device; in actual use, the position of this point in the scene is solved as a stand-in for the position of the VR device. However, because there is a certain distance between the positioning point and the actual VR device, because the extraction of the positioning point's position suffers from accuracy and stability problems, and because a single positioning point is easily occluded, this scheme cannot provide a stable and accurate VR device position.
To further improve the performance of the positioning system, the second scheme uses a plurality of positioning points; when some of them are occluded or their positions are extracted incorrectly, the position of the VR device can still be calculated from the remaining points. Since multiple positioning points are used, the second scheme must not only separate the positioning points from environmental noise regions but also distinguish and identify each observed positioning point.
Prior-art solutions distinguish optical positioning points in one of several ways: positioning systems that distinguish points by visible light sources (such as the Sony PSVR helmet and PS Move handle), positioning systems that distinguish points by the wavelength of infrared light sources (Chinese patent 201521106162.X), or positioning systems that distinguish points by the shape and structure of specially designed positioning points (Chinese patent 201520857693.6).
In the process of implementing the invention, the inventor found that the prior art has the following problems. A system that distinguishes positioning points by visible light sources has relatively low robustness and is strongly affected by environmental interference of the same color. A positioning system that distinguishes points by wavelength requires an additional structure outside the camera for switching optical filters, and every additional positioning point requires another filter; within the same second, the more positioning points there are, the less observation time each one receives. The positioning information acquired in this way is discontinuous in time and space, which greatly degrades the tracking performance of the device, and when the wavelength division is too fine, misrecognition may occur.
A positioning system based on the shape and structure of specially designed reflective positioning points suffers from reduced position-calculation accuracy and stability, and may even decode incorrectly, because of the reflective characteristics of the material, the camera observation angle, the special form and other factors, which degrades the tracking performance of the whole system.
Disclosure of Invention
The technical problem mainly solved by the embodiments of the invention is to provide a positioning point, a coding method thereof, and a positioning method and system, which address the weak robustness of prior-art positioning systems and their susceptibility to decoding errors.
In order to solve the above technical problem, one technical solution adopted by the embodiments of the present invention is to provide an anchor point encoding method. The method comprises the following steps: dividing the brightness variation range of the positioning point into two or more brightness intervals, each brightness interval corresponding to one coding value; and converting the positioning point number into a coding sequence through a preset conversion rule, the coding sequence being composed of a plurality of sequentially arranged coding values.
In order to solve the above technical problem, another technical solution adopted by the embodiments of the present invention is to provide an anchor point encoding method. The method comprises the following steps: converting i and i + n respectively into a corresponding second coding sequence and third coding sequence through a preset conversion rule, where i is a positioning point number and n is a positive integer, the second and third coding sequences being composed of a plurality of sequentially arranged coding values; splicing the second coding sequence and the third coding sequence to form a combined coding sequence; and making the combined coding sequence correspond to the positioning point number.
In order to solve the above technical problem, a further technical solution adopted by the embodiments of the present invention is to provide an anchor point encoding method. The method comprises the following steps: converting i and i + n respectively into a corresponding second coding sequence and third coding sequence through a preset conversion rule, where i is a positioning point number and n is a positive integer, the second and third coding sequences being composed of a plurality of sequentially arranged coding values; splicing the second coding sequence and the third coding sequence to form a combined coding sequence; cyclically shifting the combined coding sequence to obtain the cyclic codes corresponding to the combined coding sequence; and making a coding sequence set correspond to the positioning point number, the coding sequence set comprising the combined coding sequence and all the corresponding cyclic codes.
Optionally, the predetermined conversion rule is a binary conversion.
In order to solve the above technical problem, another technical solution adopted by the embodiments of the present invention is to provide an anchor point. The anchor point has a unique anchor point number and flashes in a predetermined pattern, which is determined from the anchor point number by the anchor point coding method described above.
Optionally, the light source of the positioning point is an infrared active light source.
The embodiment of the invention adopts a technical scheme that: a positioning system is provided. The positioning system comprises a camera, a plurality of positioning points and a processor; the camera is used for acquiring a plurality of images to form an image sequence; the image comprises a plurality of positioning points; the processor is used for determining a coding value according to the area of the positioning point in the image so as to obtain a first coding sequence corresponding to the image sequence; the processor is further configured to: and determining the positioning point number of the positioning point and calculating to obtain the position of the camera according to the first coding sequence.
Optionally, the processor is specifically configured to: and searching a positioning point number corresponding to the first coding sequence in a preset database.
Optionally, the preset database includes: and the combined coding sequence is formed by splicing a second coding sequence and a third coding sequence, wherein the second coding sequence and the third coding sequence are respectively converted from i and i + n through a preset conversion rule, i is the positioning point number, and n is a positive integer.
Optionally, the preset database includes: a coding sequence set corresponding to the positioning point number, the coding sequence set comprising the combined coding sequence and all the corresponding cyclic codes;
the cyclic codes are obtained by cyclically shifting the combined coding sequence.
Another technical solution adopted by the embodiments of the present invention is to provide a positioning method. The positioning method comprises the following steps: synchronizing the exposure period of a camera with the flashing period of the positioning points, and acquiring, through the camera, an image sequence containing a plurality of images; determining a corresponding coding value from the area of a positioning point in each image and acquiring the coding sequence corresponding to the image sequence; searching a preset database for the positioning point number corresponding to the coding sequence; and calculating the current position of the camera according to the positioning point number.
The positioning points of the embodiments of the invention are distinguished by their luminous brightness, so all positioning points can be observed by the camera in every exposure, guaranteeing observation continuity. A series of images is decoded to obtain the final positioning point number, completing the position tracking of the device. In the image obtained by the camera, each positioning point retains its circular feature, ensuring the accuracy and stability of extracting the positioning point's center of gravity.
Drawings
Fig. 1 is a schematic application environment diagram of a positioning system provided in an embodiment of the present invention;
FIG. 2 is a flow chart of a method for encoding anchor points according to an embodiment of the present invention;
fig. 3 is a flowchart of a method of a positioning method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the data processing procedure of the positioning method shown in FIG. 3;
FIG. 5 is a flowchart of a method for anchor point coding according to another embodiment of the present invention;
fig. 6 is a schematic application environment diagram of a positioning system according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The positioning point and the corresponding positioning system provided by the embodiments of the invention can be applied to various position tracking systems or positioning scenarios, for example tracking the position of a specific VR device (such as a helmet or handle) in three-dimensional space in virtual reality, so that corresponding feedback can be provided to the user. In the positioning system, each positioning point has known position coordinates in three-dimensional space and a unique positioning point number. Once the positioning system determines a positioning point number, it can obtain the corresponding position coordinates and, from the relative positions of the positioning points, solve for the position coordinates of the VR device in the current three-dimensional space.
Fig. 1 is an example of an application environment of a positioning system according to an embodiment of the present invention. In the present application environment, a Virtual Reality (VR) stereo space 10, a VR device 20, several localization points 30 and a processor 50 are included. The VR device 20 is provided with an image collecting device 40 for collecting a series of image information including a positioning point.
The image capture device 40 may be any suitable electronic device having at least one light sensing element (e.g., CCD, CMOS), such as a video camera, still camera, video recorder, etc. The VR device 20 may be any type of peripheral device that interacts with or provides virtual reality services to a user, such as a VR headset, VR gamepad, etc.
The processor 50 may specifically be any suitable type of electronic computing device, such as a multi-core central processing unit, a computer, a server, or a game console. The processor 50 may receive a series of image information, decode the positioning points in a suitable manner, and track the position of the VR device in the three-dimensional space according to the positioning points.
The processor 50 provides various different types of immersive experiences for the user based on the position tracking information of the VR device, such as by detecting changes in the position of the VR gamepad, responding accordingly in the frames displayed in the virtual reality, causing the character to raise his hand or change the position of certain items in the game. In some embodiments, the processor 50 may also be disposed within the VR device 20 or may be disposed separately, and the VR device 20 may establish a communication connection with the processor 50 through wireless/wired communication.
For example, as shown in fig. 1, the VR device 20 may be a VR headset worn on the user's head, and the image capture device 40 may be a camera disposed on the front of the headset, which can acquire images of the area in front of the user's head.
Different anchor points 30 are arranged at various positions in the three-dimensional space and blink at a certain frequency in predetermined patterns. The image captured by the camera therefore contains a plurality of anchor point images (light spots), whose area is determined by the brightness of the corresponding anchor point 30.
The processor 50 can determine an anchor point number by identifying and reading the anchor point images in this image information and decoding them from an image sequence formed by a plurality of images.
Optionally, the number of images captured by the camera 40 and the capture period may be determined according to actual conditions. In other embodiments, the camera 40 may be located elsewhere on the helmet to capture images from different perspectives. Two or more cameras 40 may also be arranged to capture images in more directions (or views), providing a more accurate and stable position calculation.
In the practical application process, the positioning system can also add or omit some devices according to the practical needs. Although only 1 VR device 20 and 1 processor 50 are shown in fig. 1, one skilled in the art will appreciate that the positioning system may include any number of VR devices 20 and processors 50.
It should be noted that, in order to determine the position of the VR device in space, only the relative position between the camera 40 and the positioning point 30 needs to be acquired, and is not limited to the example shown in fig. 1. For example, as shown in fig. 6, in other embodiments, the camera 40 may also be disposed at a specific position in space, and the VR device (helmet) is disposed with a corresponding positioning point 30. The positioning system may also determine the position of the VR device in space by using the data of the positioning point 30 captured by the camera 40.
The following further describes a specific operation process of the positioning system provided by the embodiment of the present invention with reference to specific embodiments:
in the embodiment of the present invention, the positioning point 30 may adopt an infrared light source. Compared with a visible light source, the infrared light source has better environment adaptability and is not easy to have the problem of misidentification. Correspondingly, the camera may be an infrared camera corresponding to the infrared light source.
The anchor point 30 flashes continuously in a predetermined pattern; the flashing conveys its corresponding anchor point number, i.e. the anchor point number determines the blinking pattern of the anchor point 30. As shown in fig. 2, the blinking pattern may be determined by the following steps:
210: converting the positioning point number into a coding sequence through a preset conversion rule; the coding sequence is composed of a plurality of coding values which are sequentially arranged. The conversion rule may be determined according to actual conditions, for example, using binary conversion.
220: and dividing the brightness variation range of the positioning point into two or more brightness intervals, wherein the brightness intervals correspond to one coding value.
For example, for an anchor point with anchor point number 15, the number 15 may first be converted into the corresponding 6-bit binary code 001111 (step 210). The anchor point may then represent the value 1 with a larger light-source brightness (corresponding to a larger spot area in the image) and the value 0 with a smaller light-source brightness (corresponding to a smaller spot area in the image) (step 220). Finally, the anchor point flashes cyclically, at a certain period, in the pattern dark-dark-bright-bright-bright-bright corresponding to its own anchor point number 15.
Alternatively, the specific set light source brightness may be determined according to actual conditions, for example, the light source brightness value representing the value 0 may be 50, and the light source brightness value representing the value 1 may be 100. In addition, the blinking period of the anchor point 30 can also be set or adjusted according to actual conditions. Under the condition of meeting hardware and accuracy, the speed of the camera for completing image sequence acquisition can be improved by using higher flicker frequency.
In addition, other code conversion methods can be used to convert the anchor point number, for example converting it into an octal (8-ary) or senary (6-ary) code. Correspondingly, so that the flashing can fully represent the anchor point number, the brightness variation range of the anchor point can be divided into a corresponding number of intervals, for example into 6 intervals respectively representing the six code values 0-5 of the senary code.
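To make the two encoding steps concrete, the following minimal Python sketch converts a positioning point number into a fixed-width binary coding sequence (step 210) and maps each code value to a light-source brightness level (step 220). The function names are illustrative; the 6-bit width and the brightness values 50 and 100 are only the example values mentioned above, not a fixed specification.

```python
# Illustrative sketch of the encoding idea; names and constants are assumptions.

def number_to_code_sequence(anchor_number: int, bits: int = 6) -> list:
    """Step 210: convert the positioning point number into a binary coding sequence."""
    return [(anchor_number >> i) & 1 for i in range(bits - 1, -1, -1)]

def code_sequence_to_brightness(code: list, low: int = 50, high: int = 100) -> list:
    """Step 220: map each code value to a light-source brightness level."""
    return [high if bit else low for bit in code]

if __name__ == "__main__":
    code = number_to_code_sequence(15)         # [0, 0, 1, 1, 1, 1]
    print(code)
    print(code_sequence_to_brightness(code))   # [50, 50, 100, 100, 100, 100]
```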
Based on anchor points that flash cyclically as described above, the identification of an anchor point (i.e. confirming its anchor point number) can be accomplished in the following manner. Fig. 3 is a schematic method diagram of a positioning method according to an embodiment of the present invention. As shown in fig. 3, the positioning method may include the following steps:
310: and synchronizing the exposure period of the camera and the flicker period of the positioning point, and acquiring an image sequence containing a plurality of images through the camera.
As described above, the anchor point may blink at a predetermined period (e.g., 6 blinks per second). In order to accurately acquire the information output by the positioning point, the exposure period of the camera needs to be kept the same as the flicker period of the positioning point (6 images are taken every second).
During the positioning process, the camera acquires a plurality of images. The images have a certain precedence order. The term "image sequence" is used herein to denote such a set of images (as shown in fig. 4). In the embodiment of the present invention, the number of pictures included in each picture sequence is specifically determined by the number of coding bits corresponding to the anchor point number. For example, when the anchor point uses 6-bit binary coding, each image sequence contains 6 images. Of course, as shown in fig. 4, the shooting by the camera is a continuous process, and the images collected in a certain period of time may include a plurality of image sequences.
320: and determining a corresponding code value according to the area of the positioning point in the image and acquiring a code sequence corresponding to the image sequence.
As shown in fig. 4, in the image obtained by the camera, a plurality of positioning points (shown as light spots) are captured. And the size of the positioning point area (light spot area) is related to the brightness of the light source at the positioning point. Thus, the encoded value of the anchor point at that time can be determined by the area of the spot in the image. For example, when binary encoding is used, the encoded value corresponding to the anchor point at this time can be determined by determining whether the area of the light spot is larger than a predetermined threshold.
The above steps are repeated for all the images in the image sequence, and a coded sequence with several bit code values, such as 000111, is obtained. Of course, a plurality of different anchor points can be captured in one image. Therefore, the coding sequence of a plurality of different anchor points (for example, as shown in fig. 4, the coding sequence 000111 of anchor point 1, the coding sequence 010110 of anchor point 2, the coding sequence 111001 of anchor point 3, and the coding sequence 010101 of anchor point 4) can also be obtained from one image sequence.
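As a minimal sketch of this per-frame decision (assuming binary coding), the spot area measured for one positioning point in each frame of the image sequence can be compared against a threshold to recover its coding sequence. The threshold and the area values below are placeholder assumptions; a real system would calibrate them.

```python
# Illustrative sketch; the threshold and the observed areas are made-up values.

AREA_THRESHOLD = 40.0  # assumed pixel-area boundary between "dim" and "bright" spots

def areas_to_code_sequence(spot_areas, threshold=AREA_THRESHOLD):
    """Map the spot area observed in each frame to a binary code value."""
    return [1 if area > threshold else 0 for area in spot_areas]

if __name__ == "__main__":
    # Hypothetical areas of one positioning point across a 6-image sequence.
    observed = [12.5, 11.0, 13.2, 62.0, 58.4, 60.1]
    print(areas_to_code_sequence(observed))    # [0, 0, 0, 1, 1, 1]
```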
330: and searching a positioning point number corresponding to the coding sequence in a preset database.
Since each coding sequence corresponds to a specific anchor point number, the anchor point number can be determined from the coding sequence obtained in step 320. Specifically, this can be done by looking up a preset database that stores the correspondence, although decoding can also be completed in other suitable manners.
340: and calculating the current position of the camera according to the positioning point number. After the positioning point number is determined, the processor 50 may calculate, according to the position coordinates of each positioning point, the mutual relationship between the positioning points, or the position relationship between the positioning point and the VR device, the position coordinates of the VR device in the virtual displayed three-dimensional space through a suitable algorithm, so as to complete the positioning.
In the embodiment of the present invention, the operation process of step 340 may be implemented by any suitable method in the prior art, which is well known to those skilled in the art, and the operation may be adjusted to some extent according to the actual use condition.
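As one illustration of such a prior-art method (not the method prescribed by this patent), the camera pose can be recovered from the decoded positioning points with a perspective-n-point solver such as OpenCV's solvePnP. The anchor coordinates, spot centers and camera intrinsics below are made-up placeholder values.

```python
# Illustrative PnP sketch; all numeric values are placeholder assumptions.
import numpy as np
import cv2

# Known 3D coordinates of the decoded positioning points (from the preset database).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.2, 0.0],
                          [0.0, 0.2, 0.0]], dtype=np.float64)
# Centers of gravity of the same points' light spots in the current image.
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [422.0, 338.0],
                         [318.0, 340.0]], dtype=np.float64)
# Assumed pinhole intrinsics of the (infrared) camera.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()  # camera center in the positioning-point frame
    print(camera_position)
```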
In actual use, the camera usually cannot determine the starting position of the positioning point's flashing cycle. Therefore, for the same positioning point, a plurality of different coding sequences may be obtained depending on where the image sequence acquired by the camera begins. For example, for a positioning point using a 6-bit binary code, the camera may obtain 6 different coding sequences.
Therefore, in some embodiments, the coding sequences obtained by decoding these six different image sequences may all correspond to the same positioning point number, so that the image sequences do not need to be aligned with the starting position of the positioning point's flashing.
Further, since one positioning point number needs to correspond to a plurality of different coding sequences, in step 330, the corresponding positioning point number needs to be found through the coding sequence according to the preset database storing the corresponding relationship between the positioning point number and the coding sequence.
Therefore, the correspondence between coding sequences and positioning point numbers must satisfy the condition that each coding sequence corresponds to only one positioning point number; one coding sequence must never correspond to two positioning point numbers.
Fig. 5 is a flowchart of a method for anchor point coding according to an embodiment of the present invention. After the encoding method shown in fig. 5 is applied to determine the corresponding relationship between the anchor point number and the encoding sequence, such corresponding relationship may be stored in the preset database for use in step 330, and the anchor point number of the anchor point is determined by decoding.
As shown in fig. 5, the method may include the steps of:
510: respectively converting i and i + n into a corresponding second coding sequence and a corresponding third coding sequence through a preset conversion rule, wherein i is a positioning point number, and n is a positive integer; the second and third code sequences are composed of a plurality of code values which are arranged in sequence. In the embodiment of the present invention, n may specifically take any suitable positive integer value, for example, n may take a value of 1.
In this embodiment, the lengths of the second and third coding sequences may be chosen appropriately according to the actual situation. For example, if there are 20 positioning point numbers from 1 to 20, a binary code of 5 or more bits suffices (since 20 ≤ 2^5 = 32).
520: splicing the second coding sequence and the third coding sequence to form a combined coding sequence. The "splicing" refers to combining the code values of the second and third coding sequences into a new combined coding sequence of the corresponding length; this may simply be joining the second and third coding sequences end to end, or any other suitable splicing manner.
For example, suppose the anchor point has an anchor point number of 15, and its corresponding binary code sequence is 001111. Let n be 1 and the third coding sequence be 010000. The two coding sequences are then spliced together to form the combined coding sequence 001111010000.
530: and circularly shifting the combined coding sequence to obtain a cyclic code corresponding to the combined coding sequence.
In the above example, the combined coding sequence is formed to have 12 bits. Therefore, there are 12 corresponding cyclic codes (which can be obtained by cyclic shift). The number of cyclic codes actually depends on the number of bits of the code values of the combined code sequence.
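A minimal sketch of steps 510-530 for a single positioning point number, assuming binary conversion with a fixed 6-bit width and n = 1 (both choices taken from the worked example above):

```python
# Illustrative sketch of steps 510-530; the bit width and n are assumptions from the example.

def to_binary_sequence(value, bits=6):
    return format(value, f"0{bits}b")

def combined_sequence(anchor_number, n=1, bits=6):
    """Steps 510-520: splice the codes of i and i + n end to end."""
    return to_binary_sequence(anchor_number, bits) + to_binary_sequence(anchor_number + n, bits)

def cyclic_codes(sequence):
    """Step 530: all cyclic shifts of the combined coding sequence (including shift 0)."""
    return {sequence[k:] + sequence[:k] for k in range(len(sequence))}

if __name__ == "__main__":
    combined = combined_sequence(15)        # '001111' + '010000' -> '001111010000'
    print(combined)
    print(len(cyclic_codes(combined)))      # 12 distinct cyclic codes
```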
540: and corresponding a coding sequence set to the positioning point number, wherein the coding sequence set comprises: and combining the coded sequence and all the corresponding cyclic codes.
Since, for the same anchor point, the camera may acquire any of these cyclic codes, the cyclic codes are grouped into a set, stored in the preset database, and associated with the anchor point number (e.g. 15).
It can be understood by those skilled in the art that steps 510-520 (the construction of the combined coding sequence) expand the bit length of the coding sequence and avoid the situation in which one coding sequence could correspond to two anchor point numbers after cyclic shifting. For example, the combined coding sequence 000001000010 formed from the numbers 1 and 2 and the combined coding sequence 000010000011 formed from the numbers 2 and 3 share no identical cyclic code.
Through this coding scheme, the correspondence between coding sequences and positioning point numbers is unique: the coding sequence sets of different positioning point numbers share no elements, which prevents errors when the processor 50 executes the decoding step.
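The following sketch shows how such a preset database might be built and queried, under the same assumptions as above (binary conversion, 6-bit width, n = 1). It maps every cyclic code to its positioning point number and raises an error if two numbers were ever to share a coding sequence, which is exactly the uniqueness property just described.

```python
# Illustrative preset-database sketch; bit width, n and the anchor number range are assumptions.

def combined_sequence(anchor_number, n=1, bits=6):
    return format(anchor_number, f"0{bits}b") + format(anchor_number + n, f"0{bits}b")

def build_database(anchor_numbers, n=1, bits=6):
    """Steps 510-540: map every cyclic code of every combined sequence to its anchor number."""
    database = {}
    for i in anchor_numbers:
        seq = combined_sequence(i, n, bits)
        for k in range(len(seq)):                      # every cyclic shift of the code
            rotated = seq[k:] + seq[:k]
            if rotated in database and database[rotated] != i:
                raise ValueError(f"anchor numbers {database[rotated]} and {i} share a coding sequence")
            database[rotated] = i
    return database

if __name__ == "__main__":
    db = build_database(range(1, 21))                  # positioning point numbers 1..20
    seq = combined_sequence(15)                        # '001111010000'
    observed = seq[4:] + seq[:4]                       # '110100000011', as the camera might capture it
    print(db[observed])                                # -> 15
```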
In other embodiments, the correspondence between positioning point numbers and coding sequences can also be established using only the combined-coding-sequence strategy or only the cyclic-code strategy.
Using the cyclic-code construction alone ensures that an image sequence acquired by the camera at any time can be interpreted without being aligned with the starting position of the positioning point's flashing. Constructing combined coding sequences expands the number of code values in the sequence, increasing its complexity while preserving the unique correspondence between coding sequences and positioning point numbers.
In the operation process of the positioning system (as shown in fig. 1) provided by the embodiment of the present invention, the positioning point 30 periodically changes the brightness of the light source. The blinking pattern or the light source brightness variation pattern is related to the anchor point number of the anchor point 30, and the blinking pattern can be determined according to the anchor point number by using the method shown in fig. 2.
The camera 40 is synchronized with the flash period of the positioning point, collects a series of image sequences, and determines the coding sequence corresponding to the positioning point according to the image sequences.
The processor 50 queries the coding sequence in a preset database to determine the anchor point number of the anchor point. The correspondence in the preset database may be constructed with the coding method shown in fig. 5, which maps positioning point numbers to coding sequences. In other embodiments, instead of a lookup table such as the preset database, the processor 50 may directly perform a decoding process corresponding to the encoding method to determine the anchor point number for a given coding sequence.
In the embodiments of the invention, an infrared active light source is used as the positioning point, which adapts to the environment better than schemes using visible light. On the one hand, compared with distinguishing positioning points by different wavelengths, all positioning points in the embodiments of the invention can be observed by the camera in a single exposure, guaranteeing observation continuity. On the other hand, compared with distinguishing positioning points by special shapes, the positioning method of the embodiments preserves the circular feature of the positioning point in the camera image, so the accuracy and stability of extracting the positioning point's center of gravity are guaranteed. In addition, by constructing combined coding sequences and their cyclic codes, the coding sequences corresponding to each positioning point number are guaranteed not to repeat, which solves the problem of decoding errors.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (9)
1. An anchor point coding method, comprising:
respectively converting i and i + n into a corresponding second coding sequence and a corresponding third coding sequence through a preset conversion rule, wherein i is a positioning point number, and n is a positive integer; the second and third coding sequences are formed by sequentially arranging a plurality of coding values, wherein the brightness variation range of the positioning point can be divided into two or more brightness intervals, each brightness interval corresponds to one coding value, and the positioning point flickers in a preset period;
splicing the second coding sequence and the third coding sequence to form a combined coding sequence;
circularly shifting the combined coding sequence to obtain a cyclic code corresponding to the combined coding sequence;
and corresponding a coding sequence set to the positioning point number, wherein the coding sequence set comprises: and combining the coded sequence and all the corresponding cyclic codes.
2. The method of claim 1, wherein the predetermined conversion rule is a binary conversion.
3. A positioning point, characterized in that the positioning point has a unique positioning point number; the positioning point flashes in a predetermined pattern, which is determined from the positioning point number by the anchor point coding method as claimed in claim 1.
4. The localization point according to claim 3, characterized in that the light source of the localization point is an infrared active light source.
5. A positioning system, characterized in that it comprises a camera, several positioning points according to claim 3 or 4 and a processor;
the camera is used for acquiring a plurality of images to form an image sequence; the image comprises a plurality of positioning points;
the processor is used for determining a coding value according to the area of the positioning point in the image so as to obtain a first coding sequence corresponding to the image sequence;
the processor is further configured to: and determining the positioning point number of the positioning point and calculating to obtain the position of the camera according to the first coding sequence.
6. The positioning system of claim 5, wherein the processor is specifically configured to: and searching a positioning point number corresponding to the first coding sequence in a preset database.
7. The positioning system of claim 6, wherein the pre-defined database comprises: and the combined coding sequence is formed by splicing a second coding sequence and a third coding sequence, wherein the second coding sequence and the third coding sequence are respectively converted from i and i + n through a preset conversion rule, i is the positioning point number, and n is a positive integer.
8. The positioning system of claim 7, wherein the pre-defined database comprises: a set of coding sequences corresponding to the anchor point numbers, the set of coding sequences comprising: combining the coding sequence and all the corresponding cyclic codes;
the cyclic code is obtained by cyclically shifting the combined code sequence.
9. A method of positioning, comprising:
synchronizing an exposure period of a camera and a flashing period of a positioning point, and acquiring an image sequence containing a plurality of images through the camera;
determining a corresponding code value according to the area of a positioning point in the image and acquiring a code sequence corresponding to the image sequence, wherein the code sequence belongs to a code sequence set comprising a combined code sequence and all cyclic codes corresponding to the combined code sequence; the combined code sequence is formed by splicing a second code sequence and a third code sequence, and the cyclic codes are obtained by cyclically shifting the combined code sequence; the second code sequence and the third code sequence are obtained by respectively converting i and i + n according to a preset conversion rule, wherein i is the positioning point number and n is a positive integer; the second and third code sequences are formed by sequentially arranging a plurality of code values, wherein the brightness variation range of the positioning point can be divided into two or more brightness intervals, each brightness interval corresponds to one code value, and the positioning point flickers in a preset period;
searching a positioning point number corresponding to the coding sequence in a preset database;
and calculating the current position of the camera according to the positioning point number.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710818044.9A CN107564064B (en) | 2017-09-12 | 2017-09-12 | Positioning point, coding method thereof, positioning method and system thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710818044.9A CN107564064B (en) | 2017-09-12 | 2017-09-12 | Positioning point, coding method thereof, positioning method and system thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107564064A CN107564064A (en) | 2018-01-09 |
CN107564064B (en) | 2020-11-03
Family
ID=60980629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710818044.9A Active CN107564064B (en) | 2017-09-12 | 2017-09-12 | Positioning point, coding method thereof, positioning method and system thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107564064B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108261761B (en) * | 2018-01-18 | 2021-01-12 | 北京凌宇智控科技有限公司 | Space positioning method and device and computer readable storage medium |
US11902737B2 (en) | 2019-01-09 | 2024-02-13 | Hangzhou Taro Positioning Technology Co., Ltd. | Directional sound capture using image-based object tracking |
CN116689328B (en) * | 2023-08-09 | 2023-10-31 | 成都新西旺自动化科技有限公司 | Clamping control material distributing device and clamping control material distributing method for mobile phone rear cover product |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103310215A (en) * | 2013-07-03 | 2013-09-18 | 天津工业大学 | Detecting and identifying method for annular coding mark point |
CN104764440A (en) * | 2015-03-12 | 2015-07-08 | 大连理工大学 | Rolling object monocular pose measurement method based on color image |
CN104866859A (en) * | 2015-05-29 | 2015-08-26 | 南京信息工程大学 | High-robustness visual graphical sign and identification method thereof |
CN105444752A (en) * | 2014-09-02 | 2016-03-30 | 深圳市芯通信息科技有限公司 | Light source brightness-based indoor positioning method, and apparatus and system thereof |
CN105718840A (en) * | 2016-01-27 | 2016-06-29 | 西安小光子网络科技有限公司 | Optical label based information interaction system and method |
CN105931272A (en) * | 2016-05-06 | 2016-09-07 | 上海乐相科技有限公司 | Method and system for tracking object in motion |
CN106028001A (en) * | 2016-07-20 | 2016-10-12 | 上海乐相科技有限公司 | Optical positioning method and device |
CN106372702A (en) * | 2016-09-06 | 2017-02-01 | 深圳市欢创科技有限公司 | Positioning identification and positioning method thereof |
CN106447636A (en) * | 2016-09-30 | 2017-02-22 | 乐视控股(北京)有限公司 | Noise elimination method and virtual reality device |
CN106445084A (en) * | 2016-09-30 | 2017-02-22 | 乐视控股(北京)有限公司 | Positioning method and acquisition equipment |
CN206097145U (en) * | 2016-07-18 | 2017-04-12 | 南京信息工程大学 | A augmented reality visual beacon for power equipment installation jobs |
CN106651948A (en) * | 2016-09-30 | 2017-05-10 | 乐视控股(北京)有限公司 | Positioning method and handle |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1152540A4 (en) * | 1999-11-15 | 2005-04-06 | Mitsubishi Electric Corp | Error control device and method using cyclic code |
CN102169366B (en) * | 2011-03-18 | 2012-11-07 | 汤牧天 | Multi-target tracking method in three-dimensional space |
CN102749072B (en) * | 2012-06-15 | 2014-11-05 | 易程科技股份有限公司 | Indoor positioning method, indoor positioning apparatus and indoor positioning system |
CN106372556B (en) * | 2016-08-30 | 2019-02-01 | 西安小光子网络科技有限公司 | A kind of recognition methods of optical label |
- 2017-09-12: CN201710818044.9A filed in China; granted as CN107564064B (status: active)
Non-Patent Citations (2)
Title |
---|
Design and detection of coded marker points for 3D data stitching; Ma Yangbiao et al.; Journal of Tsinghua University (Science and Technology); 2006-02-28; Vol. 46, No. 2; Section 1 *
Also Published As
Publication number | Publication date |
---|---|
CN107564064A (en) | 2018-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110261823B (en) | Visible light indoor communication positioning method and system based on single LED lamp | |
CN106372702B (en) | Positioning identifier and positioning method thereof | |
CN107564064B (en) | Positioning point, coding method thereof, positioning method and system thereof | |
US11361466B2 (en) | Position information acquisition device, position information acquisition method, recording medium, and position information acquisition system | |
CN106597374A (en) | Indoor visible positioning method and system based on camera shooting frame analysis | |
US9218532B2 (en) | Light ID error detection and correction for light receiver position determination | |
CN116437062A (en) | Structured light three-dimensional (3D) depth map based on content filtering | |
JP2006030127A (en) | Camera calibration system and three-dimensional measurement system | |
TW201702989A (en) | Moving target instant detection and tracking method and target detecting device | |
WO2015144553A1 (en) | Locating a portable device based on coded light | |
WO2019020200A1 (en) | Method and apparatus for accurate real-time visible light positioning | |
CN108592823A (en) | A kind of coding/decoding method based on binocular vision color fringe coding | |
CN106446883B (en) | Scene reconstruction method based on optical label | |
CN109655014B (en) | VCSEL-based three-dimensional face measurement module and measurement method | |
JP4418935B2 (en) | Optical marker system | |
JP7505547B2 (en) | IMAGE PROCESSING DEVICE, CALIBRATION BOARD, AND 3D MODEL DATA GENERATION METHOD | |
CN111213368B (en) | Rigid body identification method, device and system and terminal equipment | |
CN106991702B (en) | Projector calibration method and device | |
CN111914716B (en) | Active light rigid body identification method, device, equipment and storage medium | |
JP6897389B2 (en) | Discrimination computer program, discriminating device and discriminating method, and communication system | |
US10989800B2 (en) | Tracking using encoded beacons | |
US20180006724A1 (en) | Multi-transmitter vlc positioning system for rolling-shutter receivers | |
Narasimman et al. | Tree-based single LED indoor visible light positioning technique | |
CN106028001B (en) | A kind of optical positioning method and device | |
CN109756667B (en) | Position acquisition system, position acquisition device, position acquisition method, and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CP03 | Change of name, title or address | Address after: 518000, Floor 1801, Block C, Minzhi Stock Commercial Center, North Station Community, Minzhi Street, Longhua District, Shenzhen City, Guangdong Province. Patentee after: Shenzhen Huanchuang Technology Co.,Ltd. Address before: 518000, Building C, 606B, Huahan Science and Technology Park, No. 16 Langshan Road, North District, Nanshan District, Shenzhen, Guangdong Province. Patentee before: SHENZHEN CAMSENSE TECHNOLOGIES Co.,Ltd. |