CN114726996B - Method and system for establishing a mapping between a spatial location and an imaging location
- Publication number: CN114726996B
- Application number: CN202110002685.3A
- Authority: CN (China)
- Prior art keywords: information, location, spatial, scene, camera
- Prior art date: 2021-01-04
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications

- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
Abstract
A method and system are provided for establishing a mapping between a spatial location in a scene, in which one or more cameras and one or more visual markers are deployed, and an imaging location in an image captured by a camera. The method comprises: determining, by a device, spatial location information when the device is at at least one location in the scene, wherein the at least one location is located in the field of view of the camera; acquiring an image by the camera while the device is at each of the at least one location; determining imaging location information of the device or its user in the image by analyzing the image; and establishing a mapping between a spatial location in the scene and an imaging location in the image captured by the camera based on the spatial location information and the imaging location information.
Description
Technical Field
The present invention relates to the field of information interaction, and in particular, to a method and system for establishing a mapping between a spatial position in a scene and an imaging position in an image captured by a camera.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art to the present disclosure.
In many scenarios, cameras are deployed in a scene to capture images of people or devices present there, to meet needs such as security, monitoring, and public services. In many cases, however, the manager or operator of the cameras only knows roughly where each camera is installed (e.g., in a first-floor hall or at a certain intersection), but does not know the mapping relationship between spatial positions in the scene and imaging positions in the images captured by the camera (e.g., the three-dimensional coordinates of the spatial position corresponding to a certain imaging position in a captured image), nor the precise position and attitude information of these cameras in the scene (hereinafter referred to as "pose information"). Pose information typically has six degrees of freedom: for a rectangular coordinate system, for example, three translational degrees of freedom along the x, y, and z axes (i.e., three position coordinates) and three rotational degrees of freedom about those axes (i.e., three attitude angles: pitch, yaw, and roll). It would therefore be advantageous to know the mapping between spatial positions in the scene and imaging positions in the images captured by the cameras, or to know the pose information of the cameras.
Disclosure of Invention
One aspect of the invention relates to a method for establishing a mapping between a spatial position in a scene, in which one or more cameras and one or more visual markers are deployed, and an imaging position in an image captured by the cameras, the method comprising: determining, by a device, spatial position information when the device is at at least one position in the scene, wherein the at least one position is located in the field of view of the camera; acquiring an image by the camera while the device is at each of the at least one position; determining imaging position information of the device or its user in the image by analyzing the image; and establishing a mapping between a spatial position in the scene and an imaging position in an image captured by the camera based on the spatial position information and the imaging position information.
Another aspect of the invention relates to a method for establishing a mapping between a spatial position in a scene, in which one or more cameras and one or more visual markers are deployed, and an imaging position in an image captured by the cameras, the method comprising: determining, by a device, spatial position information when the device is at at least one position in the scene, wherein the at least one position is located in the field of view of the camera; acquiring an image by the camera while the device is at each of the at least one position; determining imaging position information of the device or its user in the image by analyzing the image; and establishing a mapping between a spatial position in the scene and an imaging position in an image captured by the camera, wherein establishing the mapping comprises:
establishing a mapping between the spatial position information and the imaging position information based on the spatial position information and the imaging position information;
adjusting the spatial position information and establishing a mapping between the adjusted spatial position information and the imaging position information;
adjusting the imaging position information and establishing a mapping between the spatial position information and the adjusted imaging position information; or
adjusting both the spatial position information and the imaging position information and establishing a mapping between the adjusted spatial position information and the adjusted imaging position information.
Another aspect of the invention relates to a system for establishing a mapping between a spatial position in a scene and an imaging position in an image taken by a camera, the system comprising: one or more cameras deployed in the scene; one or more visual markers deployed in the scene; and a device configured to implement the method described in the embodiments of the present application.
Another aspect of the invention relates to a storage medium in which a computer program is stored which, when executed by a processor, can be used to implement the method described in the embodiments of the present application.
Another aspect of the invention relates to an electronic device comprising a processor and a memory, the memory having stored therein a computer program which, when executed by the processor, is operable to carry out the method described in the embodiments of the present application.
By adopting the scheme of the invention, the mapping relation between the space position in the scene and the imaging position in the image shot by the camera can be conveniently and rapidly determined. Additionally, in some embodiments, pose information of the camera in the scene may be further determined.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
- FIG. 1 illustrates an exemplary visual marker;
- FIG. 2 illustrates an optical communication device that may be used as a visual marker;
FIG. 3 illustrates a system for establishing a mapping between spatial locations in a scene and imaging locations in an image captured by a camera, according to one embodiment;
FIG. 4 illustrates a method for establishing a mapping between spatial locations in a scene and imaging locations in an image captured by a camera, according to one embodiment;
FIG. 5 illustrates a method for determining pose information of cameras deployed in a scene, according to one embodiment;
- FIG. 6 illustrates a method for determining pose information of cameras deployed in a scene, according to another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by the following examples with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
A visual marker refers to a marker that can be recognized by the human eye or an electronic device, and it can take a variety of forms. In some embodiments, the visual marker may be used to convey information that can be obtained by a smart device (e.g., a cell phone, smart glasses, etc.). For example, the visual marker may be an optical communication device capable of emitting coded light information, or it may be a graphic carrying coded information, such as a two-dimensional code (e.g., a QR code or Mini Program code) or a bar code. FIG. 1 shows an exemplary visual marker having a specific black-and-white pattern. FIG. 2 shows an optical communication device 100 that may be used as a visual marker, comprising three light sources (a first light source 101, a second light source 102, and a third light source 103). The optical communication device 100 further comprises a controller (not shown in FIG. 2) for selecting a respective driving mode for each light source according to the information to be conveyed. For example, in different driving modes, the controller may control the light emission of a light source using different driving signals, so that when the optical communication device 100 is photographed by a device having an imaging function, the imaging of that light source takes on different appearances (e.g., different colors, patterns, or brightness). By analyzing the imaging of the light sources in the optical communication device 100, the driving mode of each light source at that moment can be determined, and thus the information conveyed by the optical communication device 100 at that moment can be recovered.
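By way of a purely illustrative sketch (the patent does not specify a concrete encoding scheme), the following Python snippet shows how information conveyed by such an optical communication device might be recovered from the imaged appearances of its light sources. The two-symbol alphabet, the appearance labels, and the bit-assembly rule are all hypothetical assumptions:

```python
# Hypothetical decoding sketch: assume each light source's imaged appearance
# is classified into one of two symbols, so three light sources yield 3 bits
# per captured frame.

APPEARANCE_TO_BIT = {"striped": 1, "uniform": 0}  # assumed symbol alphabet

def decode_frame(light_source_appearances):
    """Map the observed appearance of each light source to one bit."""
    return [APPEARANCE_TO_BIT[a] for a in light_source_appearances]

def decode_id(frames):
    """Concatenate the bits recovered from a sequence of frames into an ID."""
    bits = [b for frame in frames for b in decode_frame(frame)]
    return int("".join(map(str, bits)), 2)

# e.g., two frames of the three light sources 101, 102, 103:
frames = [("striped", "uniform", "striped"), ("uniform", "striped", "striped")]
print(decode_id(frames))  # bits 101011 -> 43
```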
To provide users with services based on the visual markers, each visual marker may be assigned identification information (ID) by its manufacturer, manager, or user, etc., for uniquely identifying the visual marker. A user may use a device to capture an image of the visual marker to obtain the identification information it conveys, and a corresponding service can then be accessed based on that identification information, e.g., accessing a web page associated with the identification information, or obtaining other information associated with it (e.g., position or attitude information of the visual marker corresponding to the identification information), and so on. The devices referred to herein may be, for example, devices that a user carries or controls (e.g., cell phones, tablet computers, smart glasses, AR glasses, smart helmets, smart watches, automobiles, etc.), or machines capable of autonomous movement (e.g., driverless cars, robots, etc.). A device may acquire an image containing the visual marker through its image acquisition device, and by analyzing the imaging of the visual marker in that image, it may identify the information conveyed by the visual marker and determine its own position or attitude information relative to the visual marker.
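As an illustrative sketch of the last step, the following code estimates a device's position relative to a square visual marker from the imaged positions of the marker's four corners, using OpenCV's solvePnP. The marker size, the corner pixel coordinates, and the camera intrinsics are placeholder assumptions, not values from the patent:

```python
import cv2
import numpy as np

# Hypothetical 10 cm square marker; its four corners are expressed in the
# marker coordinate system (origin at the marker center, marker in z = 0).
s = 0.10
object_points = np.array([[-s / 2,  s / 2, 0.0],
                          [ s / 2,  s / 2, 0.0],
                          [ s / 2, -s / 2, 0.0],
                          [-s / 2, -s / 2, 0.0]])

# Pixel coordinates of the four corners as detected in the device's image
# (placeholder values; a real system would locate them automatically).
image_points = np.array([[310.0, 200.0], [420.0, 205.0],
                         [415.0, 318.0], [305.0, 312.0]])

# Assumed intrinsics of the device's image acquisition unit.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)
device_position_in_marker = (-R.T @ tvec).ravel()
print(device_position_in_marker)  # device position in marker coordinates
```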
FIG. 3 shows a system according to one embodiment for establishing a mapping between spatial locations in a scene and imaging locations in an image captured by a camera; the system comprises a visual marker 301, a camera 302, and a device 303. The device 303 has an image acquisition device and is capable of identifying the visual marker 301 through it. The device 303 shown in FIG. 3 is a device capable of autonomous movement or controlled by a person (e.g., a robot, an AGV cart, etc.), but it will be appreciated that the device 303 may also be a device carried by a person, such as a cell phone, smart glasses, or a smart watch.
In one embodiment, a coordinate system (which may be referred to as a visual marker coordinate system) may be established based on the visual marker 301, and the coordinate system may, for example, have the visual marker 301 as the origin of coordinates. In one embodiment, a scene coordinate system may be established for the scene and the position and pose information of the visual markers 301 in the scene coordinate system may be determined. The scene coordinate system may be, for example, a coordinate system established for a certain place (e.g., a coordinate system established for a certain room, building, park, etc.) or a world coordinate system.
The device 303 may determine its spatial location information by scanning the visual marker 301, which may be its spatial location information relative to the visual marker 301 or its spatial location information in the visual marker coordinate system, or its spatial location information in the scene coordinate system.
In one embodiment, the device 303 may capture an image of the visual marker 301; determine identification information of the visual marker 301 and spatial position information of the device 303 relative to the visual marker 301 by analyzing the captured image; obtain position and attitude information of the visual marker 301 in space (e.g., in the scene coordinate system) through the identification information of the visual marker 301; and determine spatial position information of the device 303 in the scene (e.g., in the scene coordinate system) based on the position and attitude information of the visual marker 301 in space and the spatial position information of the device 303 relative to the visual marker 301. The position and attitude information of the visual marker 301 in space may be calibrated in advance.
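A minimal sketch of the final composition step, assuming the marker's calibrated pose in the scene coordinate system is given as a rotation matrix and translation vector (the numeric values below are hypothetical calibration data):

```python
import numpy as np

def device_position_in_scene(R_marker, t_marker, p_device_rel):
    """Transform the device position from marker coordinates to scene
    coordinates, given the marker's calibrated pose (R_marker, t_marker)
    in the scene coordinate system."""
    return R_marker @ p_device_rel + t_marker

# Hypothetical calibration: marker rotated 90 degrees about z, at (5, 2, 1.5) m.
R_marker = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])
t_marker = np.array([5.0, 2.0, 1.5])
p_rel = np.array([0.8, 0.0, 2.5])  # device position relative to the marker

print(device_position_in_scene(R_marker, t_marker, p_rel))  # [5.  2.8 4. ]
```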
When the device 303 is within the field of view of the camera 302, the camera 302 may capture an image and determine the imaging location of the device 303 or its user in the image by analyzing the image.
Only one visual marker and one camera are shown in fig. 3, but it is to be understood that this is not a limitation and that in some embodiments, a system may have multiple visual markers or multiple cameras.
FIG. 4 illustrates a method, according to one embodiment, for establishing a mapping between spatial locations in a scene and imaging locations in an image captured by a camera; the method may be implemented using the system shown in FIG. 3 and comprises:
Step 401: Spatial location information is determined by the device scanning the visual markers when the device is at at least one location in the scene, wherein the at least one location is located in the field of view of the camera.
The device may determine its spatial location information by scanning the visual marker, which may be its spatial location information relative to the visual marker or its spatial location information in a scene coordinate system.
In one embodiment, when the device is at any of the at least one location, it may scan the visual marker to determine its spatial location information at that time. In one embodiment, the device need not scan the visual marker at the at least one location; for example, the device may scan the visual marker in advance, before reaching the at least one location, to determine its spatial location information at that time, after which the device may measure or track its position changes using various built-in sensors (e.g., acceleration sensors, magnetic sensors, orientation sensors, gravity sensors, gyroscopes, cameras, etc.) and methods known in the art (e.g., inertial navigation, visual odometry, SLAM, VSLAM, SFM, etc.) to determine its real-time spatial position. In this way, when the device travels to any of the at least one location, its spatial location information at that moment can be determined without scanning the visual marker again.
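The following sketch illustrates this dead-reckoning idea, abstracting away the particular sensor or algorithm (inertial navigation, visual odometry, SLAM, etc.) that supplies the displacement estimates; the numeric values are placeholders:

```python
import numpy as np

class PositionTracker:
    """Track a device's scene position after one marker scan by accumulating
    displacement estimates (e.g., from inertial navigation, a visual
    odometer, or SLAM). The displacement source is abstracted away here."""

    def __init__(self, initial_position_from_marker):
        self.position = np.asarray(initial_position_from_marker, dtype=float)

    def apply_displacement(self, delta):
        # delta: estimated motion since the last update, in scene coordinates
        self.position = self.position + np.asarray(delta, dtype=float)
        return self.position

tracker = PositionTracker([5.0, 2.8, 0.0])           # from the last marker scan
tracker.apply_displacement([0.3, 0.0, 0.0])           # one odometry step
print(tracker.apply_displacement([0.3, 0.1, 0.0]))    # [5.6 2.9 0. ]
```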
Step 402: an image is acquired by the camera while the device is in each of the at least one location.
When the device is at each of the at least one location, an indication may be sent to the camera to cause the camera to capture an image. In one embodiment, the indication may be sent to the camera by the device. In one embodiment, the indication may also be sent to the camera in other ways, for example, manually. After receiving the indication, the camera captures an image.
In one embodiment, instead of sending an indication to the camera, an image of the device in each of the at least one location may be selected from a plurality of images continuously acquired by the camera. For example, the time of acquisition of each of a plurality of images continuously acquired by the camera may be recorded, and a corresponding image may be selected from the plurality of images according to the time when the device is in each of the at least one location.
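A minimal sketch of such timestamp-based selection, assuming each captured frame is stored together with its acquisition time (the data below is hypothetical):

```python
def select_frame(frames, t_device):
    """Pick the frame whose recorded capture time is closest to the moment
    the device reported being at a given location.

    frames: list of (capture_time, image) pairs from the camera's stream.
    """
    return min(frames, key=lambda f: abs(f[0] - t_device))

# e.g., the device reports standing at a location at t = 12.42 s:
frames = [(12.0, "img_a"), (12.4, "img_b"), (12.8, "img_c")]
print(select_frame(frames, 12.42))  # (12.4, 'img_b')
```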
Step 403: the imaging position information of the device or its user in the image is determined by analyzing the image.
After the camera captures an image, the image may be analyzed to determine imaging location information of the device, or of the user carrying the device, in the image. The device or its user may be identified in the image in a variety of possible ways. In one embodiment, the device in the image may be identified based on its characteristic information (e.g., size, shape, color, etc.). In one embodiment, the device or its user in the image may be identified manually, and the imaging location information determined accordingly.
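One possible, purely illustrative realization of color-based identification, assuming the device carries a distinctively colored panel and using OpenCV; the HSV thresholds below are placeholder guesses:

```python
import cv2
import numpy as np

def find_device_by_color(image_bgr, lower_hsv, upper_hsv):
    """Locate a device in the image by a distinctive color and return the
    centroid of the largest matching region as its imaging position."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (u, v) centroid

# e.g., a device fitted with a bright orange panel (thresholds are guesses):
# pos = find_device_by_color(frame, np.array([5, 150, 150]),
#                            np.array([15, 255, 255]))
```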
Step 404: based on the spatial position information and the imaging position information, a mapping between the spatial position in the scene and the imaging position in the image captured by the camera is established.
After the spatial position information of at least one position in the scene and the imaging position information corresponding to it have been obtained, a mapping between spatial positions in the scene and imaging positions in the image captured by the camera may be established. It will be appreciated that, depending on actual requirements (e.g., different application scenarios or different accuracy requirements), a different number of spatial locations in the scene may be selected for the above process. Once the mapping between spatial locations in the scene and imaging locations in the images captured by the cameras is established, it may be used to implement various applications such as positioning, navigation, and monitoring.
The mapping between spatial positions in the scene and imaging positions in the image captured by the camera can be established in various ways, depending on actual needs. In one embodiment, a mapping between the spatial position information and the imaging position information may be established directly from the spatial position information and the imaging position information. In one embodiment, the spatial position information or the imaging position information may be appropriately adjusted or modified before the mapping is established, in order to meet actual requirements. The adjustment or modification may be performed manually or automatically according to predetermined rules. In one embodiment, the spatial position information may be adjusted and a mapping established between the adjusted spatial position information and the imaging position information. In one embodiment, the imaging position information may be adjusted and a mapping established between the spatial position information and the adjusted imaging position information. In one embodiment, both the spatial position information and the imaging position information may be adjusted and a mapping established between the adjusted spatial position information and the adjusted imaging position information. For example, for a hall scene, in order to establish a mapping relationship between spatial positions on the hall floor and imaging positions, a robot equipped with an image acquisition device may determine its spatial position information on the floor by scanning the visual markers; that spatial position information may then be adjusted according to the distance between the image acquisition device and the floor, to determine the spatial position of the floor area where the robot is located. Correspondingly, the imaging position information of the robot may also be adjusted to determine the imaging position of that floor area. A mapping relationship between spatial positions on the hall floor and imaging positions can then be established.
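As a hypothetical sketch of the adjustment described above for the hall-floor example, assuming the scene z axis is vertical and the downward pixel offset has been estimated for the particular installation:

```python
import numpy as np

def adjust_to_floor(device_pos, device_height):
    """Project the device's spatial position to the floor area directly
    below it (scene z axis assumed vertical)."""
    floor_pos = np.array(device_pos, dtype=float)
    floor_pos[2] -= device_height
    return floor_pos

def adjust_imaging_to_floor(device_pixel, pixel_offset_down):
    """Shift the device's imaging position down to where the floor area
    under it appears in the image (offset estimated per installation)."""
    u, v = device_pixel
    return (u, v + pixel_offset_down)

print(adjust_to_floor([5.0, 2.8, 1.2], 1.2))     # [5.  2.8 0. ]
print(adjust_imaging_to_floor((352, 240), 35))   # (352, 275)
```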
In one embodiment, the method may further include: determining spatial location information of another spatial location (e.g., a ground area directly below the device) based on the spatial location information of the device and a relative positional relationship between the device and that other spatial location (e.g., the height of the device above the ground); determining imaging location information of that other spatial location in the image according to the imaging location information of the device and the same relative positional relationship; and establishing a mapping between the spatial location information and the imaging location information of that other spatial location.
A method for establishing a mapping between a spatial position in a scene and an imaging position in an image captured by a camera according to another embodiment may include (some steps are similar to those shown in FIG. 4 and are not repeated here):
determining spatial location information of a device at at least one location in a scene by the device scanning the visual markers, wherein the at least one location is located in the field of view of the camera;
acquiring an image by the camera while the device is at each of the at least one location;
determining spatial position information of another spatial position according to the spatial position information of the device and the relative positional relationship between the device and that other spatial position;
determining imaging position information of that other spatial position in the image according to the imaging position information of the device and the relative positional relationship between the device and that other spatial position; and
establishing a mapping between the spatial position information of that other spatial position and the imaging position information of that other spatial position.
In one embodiment, the above process may be repeated at a plurality of different spatial locations within the camera field of view to construct three-dimensional spatial information of the scene corresponding to the image captured by the camera.
In one embodiment, the established mapping relationship between one or more spatial locations in the scene (not necessarily all of them) and one or more imaging locations in the image captured by the camera may be used, together with the imaging location of a person or object in the image captured by the camera, to determine the spatial location of that person or object. In one embodiment, the established mapping relationship between one or more spatial locations in the scene (not necessarily all of them) and one or more imaging locations in the image captured by the camera may be used, together with the spatial location information of a person or object, to determine the imaging location of that person or object in the image captured by the camera. For example, for a hall scene, a number of spatial positions on the hall floor may be selected and the imaging positions of those positions in an image captured by the camera determined, after which a mapping relationship between the spatial positions and the imaging positions may be established; based on that mapping relationship, the spatial position corresponding to a given imaging position, or the imaging position corresponding to a given spatial position, can be deduced.
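Since positions on a floor plane and their images are related by a plane-to-image projective transform, one way (among others) to realize such a bidirectional mapping is to fit a homography to the calibrated point pairs, e.g., with OpenCV; the four floor/pixel correspondences below are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical calibration data: four non-collinear floor positions (scene
# x, y in meters) and the imaging positions recorded for them (pixels).
floor_xy  = np.array([[0., 0.], [4., 0.], [4., 6.], [0., 6.]],
                     dtype=np.float32)
pixels_uv = np.array([[100., 400.], [540., 410.], [500., 120.], [140., 110.]],
                     dtype=np.float32)

H, _ = cv2.findHomography(pixels_uv, floor_xy)  # image -> floor mapping

def image_to_floor(u, v):
    """Deduce the floor position corresponding to an imaging position."""
    pt = cv2.perspectiveTransform(np.array([[[u, v]]], dtype=np.float32), H)
    return pt[0, 0]

def floor_to_image(x, y):
    """Deduce the imaging position corresponding to a floor position."""
    pt = cv2.perspectiveTransform(np.array([[[x, y]]], dtype=np.float32),
                                  np.linalg.inv(H))
    return pt[0, 0]

print(image_to_floor(320, 260))
print(floor_to_image(2.0, 3.0))
```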
It will be appreciated by those skilled in the art that, in theory, the same imaging position in an image captured by a camera may correspond to a plurality of different spatial positions; in many practical applications, however, the same imaging position corresponds to only one spatial position, since a person or object (e.g., a robot, a cart, etc.) typically moves on a plane in the scene. It will also be appreciated that, if desired, the same imaging position may be associated with a plurality of different spatial positions.
After the mapping between spatial positions in the scene and imaging positions in the image captured by the camera is established, pose information of a camera deployed in the scene can also be determined based on that mapping relationship. FIG. 5 illustrates a method, according to one embodiment, for determining pose information of a camera deployed in a scene in which one or more cameras and one or more visual markers are deployed; the method may include the following steps (some steps are similar to those in FIG. 4 and are not repeated here):
step 501: spatial location information is determined by the device scanning the visual markers when the device is in each of at least three locations of the scene, the at least three locations being located in the field of view of the camera and not collinear.
In one embodiment, the device may scan the visual marker at each of the at least three locations to determine its spatial location information at that time. In one embodiment, the device need not scan the visual marker at each of the at least three locations. For example, the device may scan the visual marker at a first location to determine its spatial location information at that time, after which it may measure or track its position changes using various built-in sensors to determine its real-time spatial position. In this way, as the device travels to the second and third locations, its spatial location information at those moments can be determined without scanning the visual marker again at the second or third location. In one embodiment, the device may also scan the visual marker in advance, before reaching the at least three locations, and then travel to the at least three locations, determining its spatial location information at each of them.
The at least three positions may be any positions in the camera's field of view that are not collinear. In one embodiment, the at least three positions include at least four positions, at least five positions, at least six positions, or more positions.
Step 502: an image is acquired by the camera when the device is in each of the at least three positions.
Step 503: imaging position information of the device in the image is determined by analyzing the image.
After the camera captures an image, the image may be analyzed to determine imaging location information of the device in the image. The identification of devices in the image may be performed in a variety of possible ways. In one embodiment, the devices in the image may be identified based on characteristic information (e.g., size, shape, color, etc.) of the devices. In one embodiment, the device in the image may be manually identified and the imaging location information of the device determined.
Step 504: based on the spatial position information and the imaging position information, a mapping between the spatial position in the scene and the imaging position in the image captured by the camera is established.
Step 505: pose information of the camera in the scene is determined based on a mapping between a spatial position in the scene and an imaging position in an image captured by the camera.
The position and pose information of the camera may be calculated using any feasible method based on the mapping between the spatial position information of the at least three positions and the at least three pieces of imaging position information corresponding to them. For example, the position and pose information of the camera may be calculated using a 3D-2D PnP (Perspective-n-Point) method. In one embodiment, the camera's intrinsic parameters, such as its focal length, are used in calculating the camera's position and pose information. The pose information of the camera may be pose information in the visual marker coordinate system (for example, pose information relative to the visual marker) or pose information in the scene coordinate system.
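A minimal sketch of this 3D-2D PnP computation using OpenCV's solvePnP. The device positions, imaging positions, and intrinsics below are hypothetical; while the method requires at least three non-collinear positions, this sketch uses five, since four or more correspondences make the solution unambiguous in practice:

```python
import cv2
import numpy as np

# Scene coordinates of the positions visited by the device (hypothetical).
points_scene = np.array([[5.0, 2.8, 0.0], [7.5, 2.8, 0.0],
                         [7.5, 5.0, 0.0], [5.0, 5.0, 0.0],
                         [6.2, 3.9, 0.0]])
# Corresponding imaging positions of the device in the camera's images.
points_image = np.array([[150., 420.], [480., 415.],
                         [455., 180.], [170., 190.], [310., 300.]])

# Assumed camera intrinsics (the focal length etc. must be known, as noted).
K = np.array([[900.,   0., 320.],
              [  0., 900., 240.],
              [  0.,   0.,   1.]])

ok, rvec, tvec = cv2.solvePnP(points_scene, points_image, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()  # camera position in scene coordinates
camera_rotation = R.T                    # camera orientation in scene coordinates
print(camera_position)
```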
In one embodiment, there may be multiple cameras in a scene. Pose information of each camera may be determined by the above method, and relative pose information between the respective cameras may be determined based on the pose information of the cameras.
The camera may be mounted in a fixed position and have a fixed orientation, but it will be appreciated that the camera may also be movable (e.g., may change position or adjust orientation). For a movable camera, it may be kept stationary (e.g., in a certain reference state) during execution of the above method to determine its current pose information (hereinafter referred to as "reference pose information"). Then, if the camera moves or rotates, new pose information of the camera can be determined according to the reference pose information and the moving or rotating data of the camera. In one embodiment, the camera's internal parameters (e.g., focal length) may be kept unchanged during the determination of the camera pose.
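A small sketch of this pose update, under the assumed convention that the camera pose is stored as a scene-to-camera rotation R_wc plus the camera center c in scene coordinates, with pan/tilt expressed in the camera frame and translation in the scene frame; other conventions would change the composition order:

```python
import numpy as np

def updated_pose(R_wc_ref, c_ref, R_delta_cam=np.eye(3), d_scene=np.zeros(3)):
    """Compose the reference pose with the camera's measured motion.

    R_wc_ref: scene-to-camera rotation in the reference state.
    c_ref:    camera center in scene coordinates in the reference state.
    R_delta_cam: rotation performed about the camera's own center
                 (e.g., a pan or tilt), expressed in camera coordinates.
    d_scene:  translation of the camera center, in scene coordinates.
    """
    R_wc_new = R_delta_cam @ R_wc_ref  # extra rotation applied in camera frame
    c_new = c_ref + d_scene            # a pure rotation leaves the center fixed
    return R_wc_new, c_new

# e.g., a 10-degree pan about the camera's vertical (y) axis:
a = np.deg2rad(10.0)
pan = np.array([[np.cos(a), 0.0, np.sin(a)],
                [0.0,       1.0, 0.0],
                [-np.sin(a), 0.0, np.cos(a)]])
R_new, c_new = updated_pose(np.eye(3), np.array([5.0, 2.0, 3.0]),
                            R_delta_cam=pan)
```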
FIG. 6 illustrates a method, according to another embodiment, for determining pose information of cameras deployed in a scene in which one or more cameras and one or more visual markers are deployed. The method may include the following steps:
step 601: spatial location information of a movement trajectory of the device in the scene is determined by the device scanning the visual markers.
In one embodiment, the locations in space traversed by the device may constitute the movement trajectory of the device. The movement trajectory of the device in the scene may include at least three positions that are not collinear in the camera's field of view, such as three, four, five, or more positions. The movement trajectory may be one set in advance for the device, or one generated by the device moving at random. The movement trajectory of the device may be continuous or discontinuous.
The device may determine its spatial location information by scanning the visual marker, which may be its spatial location information relative to the visual marker or its spatial location information in the scene coordinate system. The device may scan the visual marker before or at the beginning of the movement trajectory, at any position along the trajectory, or at its end, to determine its spatial location information. After the device determines its initial spatial location information by scanning the visual marker, it may use various built-in sensors to measure or track its position changes to determine its real-time spatial position.
Step 602: Images are acquired by the camera as the device moves along the movement trajectory.
Step 603: Imaging trajectory information of the device in the images is determined by analyzing the images.
Step 604: Based on the spatial position information of the device's movement trajectory and the imaging trajectory information of the device in the images, a mapping between spatial positions in the scene and imaging positions in the images captured by the camera is established.
It will be understood that the spatial position information of the device's movement trajectory is in fact a set of spatial position information; similarly, the imaging trajectory information of the device in the images is in fact a set of imaging position information. A mapping between spatial positions in the scene and imaging positions in the images captured by the camera may be established based on this spatial position information and imaging position information in a manner similar to that described for FIG. 4 or FIG. 5.
Step 605: pose information of the camera in the scene is determined based on a mapping between a spatial position in the scene and an imaging position in an image captured by the camera.
Pose information of the camera may be calculated using any feasible method based on the spatial position information of the device's movement trajectory and the imaging trajectory information of the device in the images. In one embodiment, rather than using the entire movement trajectory of the device, a plurality of points may be selected from it to calculate the pose information of the camera. In one embodiment, the pose information of the camera may be calculated multiple times, selecting a different set of points from the movement trajectory each time, and the resulting pose estimates may be cross-validated to reduce or eliminate errors.
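A sketch of such cross-validation, applying a PnP computation to randomly chosen subsets of trajectory points; the trial count and subset size are illustrative, and the inputs are assumed to be NumPy arrays of shape (N, 3) and (N, 2). A large spread among the estimates would indicate noisy correspondences:

```python
import cv2
import numpy as np

def camera_position_from_subset(points_scene, points_image, K, idx):
    """Estimate the camera position in scene coordinates from a subset of
    trajectory correspondences using PnP."""
    ok, rvec, tvec = cv2.solvePnP(points_scene[idx], points_image[idx], K, None)
    R, _ = cv2.Rodrigues(rvec)
    return (-R.T @ tvec).ravel()

def cross_validated_position(points_scene, points_image, K,
                             n_trials=10, subset_size=6, seed=0):
    """Repeat the pose computation on different random subsets of the
    movement trajectory and report the mean and spread of the estimates."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_trials):
        idx = rng.choice(len(points_scene), size=subset_size, replace=False)
        estimates.append(camera_position_from_subset(points_scene,
                                                     points_image, K, idx))
    estimates = np.array(estimates)
    return estimates.mean(axis=0), estimates.std(axis=0)
```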
The device involved in the embodiments of the present application may be a device capable of autonomous movement, a device controlled by a person, a device carried by a person, or the like, on which an image acquisition device is mounted. For example, the device may be a robot, a driverless vehicle, a cell phone, smart glasses, a smart watch, a tablet computer, or a vehicle.
In one embodiment, the methods described herein may be implemented by a device. In one embodiment, the methods described herein may be implemented by a server that may receive information from cameras and devices. In one embodiment, the methods described herein may be implemented by a device and a server together. It will be appreciated that the methods described herein, or one or more of the steps thereof, may also be implemented by other means.
In one embodiment of the invention, the invention may be implemented in the form of a computer program. The computer program may be stored in various storage media (e.g., a hard disk, an optical disc, or a flash memory) and, when executed by a processor, can be used to carry out the method of the invention.
In another embodiment of the invention, the invention may be implemented in the form of an electronic device. The electronic device comprises a processor and a memory, in which a computer program is stored which, when being executed by the processor, can be used to carry out the method of the invention.
Reference herein to "various embodiments," "some embodiments," "one embodiment," or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in one embodiment," or "in an embodiment" in various places throughout this document do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or characteristic described in connection with or illustrated in one embodiment may be combined, in whole or in part, with features, structures, or characteristics of one or more other embodiments without limitation, provided that the combination is logical and operable. Expressions herein such as "according to A," "based on A," "by A," or "using A" are meant to be non-exclusive; i.e., "according to A" may cover "according to A only" as well as "according to A and B," unless "according to A only" is specifically stated. In this application, some exemplary operation steps are described in a certain order for clarity of explanation, but it will be understood by those skilled in the art that not every one of these steps is essential, and some of them may be omitted or replaced by other steps. The steps need not be performed sequentially in the manner shown; rather, some steps may be performed in a different order, or concurrently, as desired, provided that the new manner of execution remains logical and operable.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention. While the invention has been described by way of several embodiments, it is not limited to the embodiments described herein, but encompasses various changes and modifications that may be made without departing from its scope.
Claims (16)
1. A method for establishing a mapping between a spatial location in a scene and an imaging location in an image captured by a camera, wherein the scene has one or more cameras and one or more visual markers deployed therein, the method comprising:
determining, by a device, spatial location information by scanning the visual markers when the device is at at least one location in the scene, wherein the at least one location is located in the field of view of the camera;
capturing, by the camera in the scene, an image while the device is at each of the at least one location;
determining imaging location information of the device or its user in the image acquired at each location by analyzing that image; and
for the at least one location, establishing a mapping between the spatial location in the scene and an imaging location in the image captured by the camera based on the spatial location information and the imaging location information determined at that location.
2. The method of claim 1, wherein the at least one location comprises at least three locations that are not collinear, and wherein the determining imaging location information of the device or a user thereof in the image by analyzing the image comprises: determining imaging position information of the device in the image by analyzing the image,
the method further comprises:
determining pose information of the camera based on a mapping between a spatial position in the scene and an imaging position in an image captured by the camera.
3. The method of claim 1, wherein the determining, by the device, spatial location information of the device at at least one location in the scene by scanning the visual markers comprises:
the visual markers are scanned at the at least one location using the device to determine current spatial location information of the device.
4. The method of claim 1, wherein the determining, by the device, spatial location information of the device at at least one location in the scene by scanning the visual markers comprises:
scanning the visual markers using the device to determine initial spatial location information of the device;
measuring or tracking a change in position of the device by a sensor in the device; and
determining spatial location information when the device is at the at least one location based on the initial spatial location information of the device and the change in position of the device.
5. The method of claim 1, wherein the determining, by the device, spatial location information by scanning the visual markers comprises:
acquiring an image of the visual marker using the device;
determining identification information of the visual marker and a location of the device relative to the visual marker by analyzing the image;
obtaining position and attitude information of the visual marker in space through the identification information of the visual marker; and
determining spatial location information of the device based on the position and attitude information of the visual marker in space and the location of the device relative to the visual marker.
6. The method of claim 2, wherein at least two cameras are deployed in the scene, the method further comprising:
determining pose information of each of the at least two cameras in the scene; and
determining relative pose information between the at least two cameras according to the pose information of each of the at least two cameras in the scene.
7. The method of claim 1, wherein the determining, by the device, spatial location information of the device at at least one location in the scene by scanning the visual markers comprises: scanning the visual markers by the device to determine spatial location information of a movement trajectory of the device in the scene.
8. The method of claim 7, wherein,
acquiring, by the camera, images including the device as the device moves along the movement trajectory;
determining imaging trajectory information of the device in the images by analyzing the images;
establishing a mapping between a spatial location in the scene and an imaging location in the images captured by the camera based on the spatial location information of the movement trajectory of the device and the imaging trajectory information of the device in the images; and
determining pose information of the camera based on the mapping between the spatial location in the scene and the imaging location in the images captured by the camera.
9. The method of claim 1, further comprising:
and constructing three-dimensional space information of a scene corresponding to the image shot by the camera.
10. The method of claim 1, further comprising:
obtaining an imaging position of a person or an object in an image captured by the camera; and
determining a spatial position of the person or object based on the imaging position and the mapping.
11. The method of claim 1, further comprising:
obtaining a spatial position of a person or object in the scene; and
determining an imaging position of the person or object in an image captured by the camera based on the spatial position and the mapping.
12. The method of claim 1, further comprising:
determining spatial position information of another spatial position according to the spatial position information of the device and the relative positional relationship between the device and the other spatial position;
determining imaging position information of the other spatial position in the image according to the imaging position information of the device and the relative positional relationship between the device and the other spatial position; and
establishing a mapping between the spatial position information of the other spatial position and the imaging position information of the other spatial position.
13. A method for establishing a mapping between a spatial location in a scene and an imaging location in an image captured by a camera, wherein the scene has one or more cameras and one or more visual markers deployed therein, the method comprising:
determining, by a device, spatial location information when the device is at at least one location in a scene, wherein the at least one location is located in the field of view of the camera;
acquiring an image by the camera while the device is in each of the at least one location;
determining spatial position information of another spatial position according to the spatial position information of the device and the relative positional relationship between the device and the other spatial position;
determining imaging position information of the other spatial position in the image according to the imaging position information of the device and the relative positional relationship between the device and the other spatial position; and
establishing a mapping between the spatial position information of the other spatial position and the imaging position information of the other spatial position.
14. A system for establishing a mapping between a spatial location in a scene and an imaging location in an image captured by a camera, the system comprising:
one or more cameras deployed in the scene;
one or more visual markers deployed in the scene; and
a device configured to implement the method of any one of claims 1-13.
15. A storage medium having stored therein a computer program which, when executed by a processor, is operable to carry out the method of any one of claims 1-13.
16. An electronic device comprising a processor and a memory, the memory having stored therein a computer program which, when executed by the processor, is operable to carry out the method of any of claims 1-13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110002685.3A CN114726996B (en) | 2021-01-04 | 2021-01-04 | Method and system for establishing a mapping between a spatial location and an imaging location |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114726996A CN114726996A (en) | 2022-07-08 |
CN114726996B (en) | 2024-03-15
Family
ID=82233903
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110002685.3A Active CN114726996B (en) | 2021-01-04 | 2021-01-04 | Method and system for establishing a mapping between a spatial location and an imaging location |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114726996B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104170367A (en) * | 2011-12-28 | 2014-11-26 | 英特尔公司 | Virtual shutter image capture |
CN104243789A (en) * | 2013-06-17 | 2014-12-24 | 鸿富锦精密工业(深圳)有限公司 | Video camera image setting system and method |
WO2015073590A2 (en) * | 2013-11-12 | 2015-05-21 | Smart Picture Technology, Inc. | Collimation and homogenization system for an led luminaire |
AU2016100647A4 (en) * | 2015-06-07 | 2016-06-23 | Apple Inc. | Devices and methods for capturing and interacting with enhanced digital images |
WO2016184255A1 (en) * | 2015-05-19 | 2016-11-24 | 北京蚁视科技有限公司 | Visual positioning device and three-dimensional mapping system and method based on same |
CN108713179A (en) * | 2017-09-18 | 2018-10-26 | 深圳市大疆创新科技有限公司 | Mobile article body controlling means, equipment and system |
CN109361859A (en) * | 2018-10-29 | 2019-02-19 | 努比亚技术有限公司 | A kind of image pickup method, terminal and storage medium |
CN109819169A (en) * | 2019-02-13 | 2019-05-28 | 上海闻泰信息技术有限公司 | Panorama shooting method, device, equipment and medium |
CN110458897A (en) * | 2019-08-13 | 2019-11-15 | 北京积加科技有限公司 | Multi-cam automatic calibration method and system, monitoring method and system |
CN111026107A (en) * | 2019-11-08 | 2020-04-17 | 北京外号信息技术有限公司 | Method and system for determining the position of a movable object |
CN111256701A (en) * | 2020-04-26 | 2020-06-09 | 北京外号信息技术有限公司 | Equipment positioning method and system |
WO2020192543A1 (en) * | 2019-03-27 | 2020-10-01 | 北京外号信息技术有限公司 | Method for presenting information related to optical communication apparatus, and electronic device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2496591B (en) * | 2011-11-11 | 2017-12-27 | Sony Corp | Camera Movement Correction |
US10789473B2 (en) * | 2017-09-22 | 2020-09-29 | Samsung Electronics Co., Ltd. | Method and device for providing augmented reality service |
Also Published As
Publication number | Publication date |
---|---|
CN114726996A (en) | 2022-07-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |