WO2015087315A1 - Methods and systems for remotely guiding a camera for self-taken photographs - Google Patents
- Publication number: WO2015087315A1 (application PCT/IL2014/050471)
- Authority: WIPO (PCT)
- Prior art keywords: georeferencing, camera, image, location, remote
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
- G01C11/08—Interpretation of pictures by comparison of two or more pictures of the same area the pictures not being supported in the same relative position as when they were taken
- G01C11/10—Interpretation of pictures by comparison of two or more pictures of the same area the pictures not being supported in the same relative position as when they were taken using computers to control the position of the pictures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B2206/00—Systems for exchange of information between different pieces of apparatus, e.g. for exchanging trimming information, for photo finishing
Abstract
Methods and systems for the guidance of remote and rotatable cameras toward a user are introduced, said guidance based solely on a standard handheld device capable of location sensing and/or orientation sensing and/or photographing, such as a smartphone. The invention overcomes the prior art's limitations, since the measurement quality of location and orientation in mass-produced appliances is incapable of delivering the accuracies required for said application. In two embodiments, both the camera-gimbal assembly and the handheld device's imagery are calibrated using pre-georeferenced imagery or calibration objects in the scenery, and put in the same axis system. The calibration enables the gimbal's measurements to be tuned to the desired accuracy. Using the calibrated measurements, exact relative angles can be calculated to guide the remote gimbal to the user, enabling a picture of the handheld device's holder to be taken.
Description
METHODS AND SYSTEMS FOR REMOTELY GUIDING A CAMERA FOR SELF- TAKEN PHOTOGRAPHS
FIELD OF THE INVENTION
The present invention relates generally to the field of controlling a capturing device and in particular to systems and methods for guiding a remote camera for self-photographing.
BACKGROUND OF THE INVENTION
Self-taken, self-depicting photographs (also known as "selfies") are typically taken with a hand-held digital camera or camera phone. Selfies are often associated with social networking, like Instagram. They are often casual, are typically taken either with a camera held at arm's length or in a mirror, and typically include either only the photographer or the photographer and as many people as can be in focus.
One apparent limitation of the selfie, beyond the limitations of the camera itself, relates to the fact that the selfie-capturing device is hand held, so guidance is limited to the point of view of a device that is "at arm's length" from the subject of the photography. Photogrammetry is the art and science of determining geometrical properties of an object from one or more images. It would be advantageous to combine the tools of photogrammetry in order to address some of the challenges imposed by selfie photography.
BRIEF SUMMARY OF THE INVENTION
According to some embodiments of the present invention, a method for remotely guiding a camera for self-taken photographs is provided herein. The method may include the following steps: capturing, using a user equipment (UE), at least one image of a scene, wherein the at least one captured image includes, in total, at least three objects having predefined locations on a specified frame of reference; sending the captured images from the UE to a remote server; georeferencing said captured images, to yield a location of the UE on the specified frame of reference; directing a remote camera, which is located at a known location on the specified frame of reference and remotely from the UE, at the UE, based on the georeferenced location of the UE; and capturing an image containing the UE, by the directed remote camera.
These additional, and/or other aspects and/or advantages of the present invention are set forth in the detailed description which follows.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the invention and in order to show how it may be implemented, references are made, purely by way of example, to the accompanying drawings in which like numerals designate corresponding elements or sections. In the accompanying drawings:
Figure 1 illustrates the outline of the scenario according to some embodiments of the present invention. It depicts a camera and a user's device, each arbitrarily arranged in their respective, not necessarily aligned axes systems;
Figure 2 illustrates the reason localization errors occur in the present data processing techniques according to some embodiments of the present invention. Figure 2A depicts remote camera pointing errors caused by localization errors of the device. Figure 2B depicts remote camera pointing errors caused by pointing errors of the device itself;
Figure 3A is a flowchart of the methodical process according to some embodiments of the present invention. Figure 3B contains a system breakdown capable of delivering said processes, also containing all of the embodiments;
Figure 4 is an example, set in a sports stadium, of step 443 in Figure 3A according to some embodiments of the present invention;
Figure 5 is an example, set in a sports stadium, of one sub-step in step 456 in Figure 3A according to some embodiments of the present invention;
Figure 6 is an example, set in a sports stadium, of another sub-step in step 456 in Figure 3A according to some embodiments of the present invention;
Figure 7 is an example, set in a sports stadium, of step 452 in Figure 3A according to some embodiments of the present invention; and Figure 8 is an example, set in a sports stadium, of step 458 in Figure 3A according to some embodiments of the present invention.
The drawings together with the following detailed description make the embodiments of the invention apparent to those skilled in the art.
DETAILED DESCRIPTION OF THE INVENTION
Prior to setting a detailed description of embodiments of the present invention, it may be helpful to set forth definitions of certain terms that will be used hereinafter.
The term "coordinate frames" as used herein, is defined as frames used to represent and measure properties of objects, such as their position and orientation. Ground frame is a three-dimensional Cartesian coordinate system (X, Y, Z) adapted for locating objects in the physical space. Ground frames may be global (e.g. WGS84) or local - determined ad hoc to support relative observations.
The term "image frame" as used herein, is defined as a two-dimensional coordinate system (x, y) related to the image plane of the camera. The origin of the image coordinate system (x, y) is located at the intersection of the camera optical axis with the image plane. A point in the image plane (x, y) may be identified by pixel index (c, r) since digital cameras use two-dimensional array sensors in the image plane to capture the incoming electromagnetic signal. The term "georeferencing" (aka camera calibration, in computer vision terminology) as used herein, refers to the process of determining the external and the internal parameters of an image. The external parameters of an image (also referred to as external orientation) refer to a position and orientation of an image in space, i.e. the position and a line of sight of the camera which had acquired the image. The external parameters are usually referred to as "6DOF" (six degrees of freedom) since these parameters comprise three rotation angles (Euler angles) that describe the rotations about three principal axes needed to rotate from the ground system into the image system (augmented with a z axis pointing along the camera optical axis) and three coordinates of the camera in the ground system.
The term "internal parameters" as used herein, refer to intrinsic properties of the camera. Internal parameters may be comprised of the camera's focal length, distortions and a geometric transformation aligning the detector array within the camera system.
The term "camera model" as used herein, refers to a mathematical formula which models a transformation from an object domain to an image domain using the internal parameters and external parameters. The camera model is usually represented by collinearity equations. Following below in Equation (1) is a non-limiting mathematical formula
representing a camera model. It is understood that embodiments of the present invention can be extended to any type of camera model.
Eq. (1)
Where: c is a column location of an image point of a projected ground object r is a row location of the image point of the object μ is a scalar; K is a camera calibration matrix, as detailed herein below; R is a rotation matrix between the reference coordinate system and the (augmented) image coordinate system C=[Cx, Cy, Cz] are the coordinates of the camera in the ground reference coordinate system; X, Y, Z are the coordinates of the object in the ground reference coordinate system. R and C are also referred to as external parameters or external orientation of an image. The camera calibration matrix may be expressed as follows in Eq. (2):
Where: fc is a focal of the camera along the column axis; fr is a focal of the camera along the row axis; s is a skewness of the digital sensor array; c0 is a column coordinate of the focal center in the image coordinate system; r0 is row coordinate of the focal center in the image coordinate system. K is also referred to as internal parameters or internal orientation of an image.
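As a minimal, non-limiting illustration of Eq. (1) and Eq. (2), the Python sketch below (using numpy) projects one ground point to pixel coordinates; every numeric value for K, R, C and the ground point is hypothetical and serves only to show how the collinearity relation is evaluated.

```python
import numpy as np

def project_point(K, R, C, ground_point):
    """Project a ground point (X, Y, Z) to pixel coordinates (c, r) via Eq. (1)."""
    p = K @ R @ (np.asarray(ground_point, dtype=float) - C)
    mu = p[2]                      # the scalar in Eq. (1)
    return p[0] / mu, p[1] / mu    # column c, row r

# Hypothetical internal orientation (Eq. (2)): focal lengths, zero skew, principal point.
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                  # image axes assumed aligned with the ground frame
C = np.array([0.0, 0.0, 1.5])  # camera assumed 1.5 m above the ground origin

c, r = project_point(K, R, C, (2.0, 1.0, 50.0))
print(f"column = {c:.1f}, row = {r:.1f}")
```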
The term "tie point" as used herein, refers to a scene point in the physical space, if this scene point can be identified in a reference georeferenced image and in an image undergoing the georeferencing process.
The term "control point" as used herein, refers to a scene point in the physical space, if this scene point has known coordinates in the ground system and can be identified in an image undergoing the georeferencing process.
The term "camera central Line of Sight" or simply "camera central LOS" as used herein, refers to a direction of a vector corresponding to an optical axis of a camera acquiring an image. It can be understood as a vector originating at the optical center of the camera and
passing through an object in the physical world appearing at a center of the image. In our case we will have this vector determining an orientation (or the normal vector) of the surface to be imaged by a remote camera. Given the rotation matrix R of section "camera model" above, the camera central LOS is specified by a vector corresponding to the third row of the R matrix multiplied by (-1).
The term "georeferencing using control points" as used herein, refers to a process that given a sufficient number of control points (X, Y, Z) <→ (R,C) the image internal and external orientation parameters (K,R and C) can be determined using many well-known linear and iterative optimization strategies. The exact number of the required control points depends on a prior knowledge of the imaging geometry and potentially additional constraints applied to the 11 parameter calibration model.
The term "georeferencing using tie points" as used herein, refers to a process that given at least two georeferenced images, RefA and RefB, (with camera parameters deemed sufficient for the sought application) and an image Imgc undergoing georeferencing, the parameters of Imgc can be computed using the "gold- standard" iterative optimization method called Bundle- Adjustment (BA). BA is defined as the problem of simultaneously refining the 3D coordinates describing the scene geometry as well as the camera parameters of the images, according to an optimality criterion involving the corresponding image projections of all points. In embodiments of the present invention case, since RefA and Refe images are already georeferenced (hence uniquely defining the ground referential) the Tie Points between Imgc and the pair of RefA and Refe images allow the computation of Imgc parameters in the ground reference system. The tie points may be obtained manually or by some image processing means.
With specific reference now to the drawings in detail, it is stressed that the particulars shown are for the purpose of example and solely for discussing the preferred embodiments of the present invention, and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention. The description taken with the drawings makes apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
Before explaining the embodiments of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following descriptions or illustrated in the drawings. The invention is applicable to other embodiments and may be practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
Referring to Figure 1, the technical goal of the system is to direct the remote camera vector (110) so that it coincides with (passes through) the base of the handheld device vector (140) (in Figure 1 the handheld device is illustrated as a smartphone, but it can be a smartphone, tablet or other device), within a tolerated error (for example up to 1 meter; in this instance, at 200 meters, this translates into a 0.3° pointing tolerance). An additional goal, in the case of multiple cameras, is the selection of the correct camera by resolving the phone vector (140).
Since the introduction of affordable smartphones and other mobile devices, such devices have been extensively utilized in self-photography. The devices, however, are limited by the user's arm reach, and otherwise require a third party to take the picture. Taking the picture from a wide perspective or from unusual angles is often impossible. Solutions exist in the form of arm extenders, remote-control photography or a third party, all having their obvious limitations - leaving the device unguarded, awkward-looking devices, and the like. Using a mobile device's location and orientation to guide a remote camera can be a satisfactory solution, but using only the raw sensors cannot yield precise enough results, as illustrated in the following paragraphs.
Two methods allow the calculation of the 3D angle between the device and the camera, and neither provides satisfactory performance.
Device-sensor-based camera guidance state of the art - Method 1 - Calculation of the 3D relative angle using the difference in location. This uses the following simple steps:
During the setup, make sure that when the camera points exactly at a point with coordinates (X, Y, Z) in a local/global frame, the corresponding PTU angular readings (PTUaz, PTUelv) are known. For outdoor cases this can easily be done using off-the-shelf methods, such as DGPS, north-finding systems and an electronic level. For indoor cases other localization techniques are being developed (e.g. Wi-Fi network triangulation).
1. Use the mobile device / smartphone to determine its location, such as from the internal GPS, and retrieve the device location vector.
2. Calculate the relative angle by:
2.1. Computing the difference vector (dx, dy, dz) between the device location and the camera location.
2.2. Converting the difference vector to two rotation angles (dAz, dElv) to be applied on the PTU, using the following Cartesian-to-polar coordinate conversion:
dr = sqrt(dx² + dy² + dz²)
dAz = atan2(dy, dx)
dElv = atan2(dz, sqrt(dx² + dy²))
This is allowed since both vectors are expressed in the same global coordinate system.
3. Move the PTU motor by dAz and dElv.
4. Take a photo and transfer it to the application.
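The angle computation in steps 2-3 can be sketched in a few lines of Python; the camera and device coordinates below are hypothetical and simply assume both are already expressed in the same ground frame.

```python
import math

def relative_ptu_angles(cam_xyz, phone_xyz):
    """Convert the location difference into pan (dAz) and tilt (dElv) angles, in degrees."""
    dx, dy, dz = (p - c for p, c in zip(phone_xyz, cam_xyz))
    dr = math.sqrt(dx**2 + dy**2 + dz**2)                      # slant range
    d_az = math.degrees(math.atan2(dy, dx))                    # pan
    d_elv = math.degrees(math.atan2(dz, math.hypot(dx, dy)))   # tilt
    return d_az, d_elv, dr

# Hypothetical camera and device locations (meters, local ground frame).
d_az, d_elv, dr = relative_ptu_angles((0.0, 0.0, 10.0), (150.0, 120.0, 2.0))
print(f"pan {d_az:.2f} deg, tilt {d_elv:.2f} deg, range {dr:.1f} m")
```

Note that at the 200-meter range mentioned above, even a 10-meter device location error maps to roughly a 3° pointing error, an order of magnitude above the 0.3° tolerance.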
The problem with this process appears when accounting for all the errors in the system. Since regular mobile devices such as smartphones calculate absolute location using low-cost GPS, and attitude using inertial and magnetic sensors, their accuracies are up to dozens of meters and several degrees RMS, respectively. Referring to Figure 2A, although the pointing error of the PTU itself (210) is negligible and the camera localization (200) is accurate, the mobile device / smartphone location error (220) does not allow for accurate pointing. For indoor cases, neither side's accuracy can be deemed adequate.
Device-sensor-based camera guidance state of the art - Method 2 - Calculation of the 3D relative angle using mobile device / smartphone pointing. This is achieved using the following simple steps:
1. Accurately measure the absolute north-bearing and leveling of the Camera-PTU assembly, yielding its direction vector (camdir). This can easily be done using off-the-shelf methods, such as by utilizing north-finding systems and electronic leveling.
2. Point the mobile device / smartphone at the camera (for example, positioning the remote camera in the center of the device's camera image). Use the smartphone to determine the absolute direction, retrieving the direction vector (phndir). This vector is referenced to the same north axis as camdir.
3. Point the camera system from camdir toward -phndir, to point back at the device.
4. Take a photo and transfer it to the application.
The problem with this process appears when accounting for system errors in the pointing of regular smartphones, which calculate absolute location using GPS and attitude using inertial and magnetic sensors; their accuracies are up to dozens of meters and several degrees, respectively. Referring to Figure 2B, although the pointing error of the PTU itself (300) is negligible, the device pointing error (220) does not allow for accurate pointing. Another problem is that identification of the camera-PTU assembly is needed in order to place it in the center of the field of view (FOV).
Therefore, what is needed in the art is a system and a method allowing a person to accurately and automatically direct a remote camera mounted on a pan-tilt unit (PTU) to photograph said person, using only a common smartphone as a guiding system. The process does not specifically require a smartphone, but rather any device that is able to capture images and 6DOF orientation. In practice, smartphones make up the vast majority of readily available devices, but other devices such as "smart glasses", cameras with geo-tagging modules, music players such as the iPod Touch, and others can be utilized.
Embodiments of the present invention provide a system and method all serving the purpose of directing a remote PTU-Camera assembly to take a picture of a remote subject given rough initial localization (originating from the user's mobile device's sensors). Presented are three embodiments of a system and three embodiments of a method, some embodiments corresponding to each other, and all enabling the required result. Referring to Figure 3A, a general method is proposed. It is divided into two distinct processes: the calibration process (439) and the main process (438). The calibration process is a prerequisite to the main process, and has to occur at least once for the main process to perform its task.
The calibration sequence begins with two processes which can be performed independently but both are required: device/smartphone calibration setup (440), which is needed for the localization of smartphones and PTU calibration setup (444) which is needed to correctly direct the PTU to the location found by the smartphone localization.
Device calibration can be one of three embodiments: arena mapping (441), by physical objects (442) or by using a georeferenced image set (443).
Embodiment I: Smartphone calibration setup via arena mapping or mathematical modeling (441) is the process of modeling the arena (the locus of all the possible places the user may be located while requesting a photo) in a given local axes system. A non-limiting example is modeling the seating arrangement of a concert hall, assigning a 3D coordinate to each seat. This can be executed, for example, by measuring seat locations, mathematical interpolation and 3D modeling.
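A toy Python sketch of such a seat-number-to-coordinate mapping follows; the section origin, row depth, row rise and seat pitch are invented values standing in for a real survey of the venue.

```python
def seat_to_ground_coordinate(section_origin, row, seat,
                              row_depth=0.85, row_rise=0.40, seat_pitch=0.55):
    """Map a (row, seat) pair to an (X, Y, Z) point in the local arena frame.

    section_origin is the surveyed coordinate of row 1, seat 1 of the section;
    the remaining parameters model a simple, linearly rising seating block.
    """
    x0, y0, z0 = section_origin
    return (x0 + (seat - 1) * seat_pitch,   # along the row
            y0 + (row - 1) * row_depth,     # away from the stage or field
            z0 + (row - 1) * row_rise)      # height gained per row

# Hypothetical section whose first seat was surveyed at (12.0, 30.0, 4.0) meters.
print(seat_to_ground_coordinate((12.0, 30.0, 4.0), row=7, seat=15))
```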
Embodiment II: Smartphone calibration setup via physical objects (442) is the process of setting or defining physical objects of known geometry in said local axes system. These objects can also hold the property of containing good features for image matching, so they will be easy to automatically trace in a smartphone image.
Embodiment III: Smartphone calibration setup via geo-referenced image set (443) is the process of recording an image set, from various locations and orientations in the arena, and further georeferencing the images to said local axes system. See Figure 4 for an example.
PTU calibration is performed by setting calibration objects (444) - measured objects in the arena - and by calibrating the PTU in the PTU + Camera assembly (445) in the following process: accurately measure the relative location and north-bearing of the Camera-PTU assembly. This can easily be done by utilizing the aforementioned calibration scenery (444), by centering on known objects in the scenery and calibrating relative to them, mechanically or analytically. This can be done without special instruments, both indoors and outdoors, independent of GPS signals.
All of this data can be stored for later use (446), regardless of the system implementation.
Beginning with the main process (438), the sequence starts with a request for a photo from the system (447); non-limiting examples are accessing a web server with the smartphone's internet browser, or using an app installed on the user's smartphone. Other initiations may be requested by a third party, or via an "opt-out" model in which users' photos are taken automatically.
Localization support data, specific to the user, needs to be gathered (448). The data depend on the embodiment: Embodiment I: location data sufficient for the arena mapping needs to be provided. A non-limiting example is supplying the exact seat number, where the arena mapping is a seat-number-to-3D-coordinate transformation.
Embodiment II: Pointing the smartphone towards the calibration physical objects and taking a picture, also recording the external orientation. Embodiment III: Pointing the smartphone towards the general area depicted by the geo-referenced set.
All the aforementioned data is gathered and sent to a server (449); the data can typically be sent wirelessly over the web, or the server can even be local (installed on the smartphone).
User identification can take place (449), connecting the request to its support data (such as contact and photo delivery details - telephone number, email). The request, identification and data are assigned a place in a queue (451). A queue manager (452) assigns requests to an available assembly or assemblies, based on their availability (operational and not handling previous requests) and on the desired angle(s), which can be determined from the previously supplied data. The queue can be utilized in a number of ways: as sequential requests, asynchronous handling, priority assignment or any other scheme.
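As one possible, purely illustrative realization of the queue manager (452), the Python sketch below pops requests by priority and pairs each with the free assembly whose bearing is closest to the desired angle; the fields and the matching rule are assumptions, not the policy mandated by the invention.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class PhotoRequest:
    priority: int                                # lower value is served first
    user_id: str = field(compare=False)
    desired_angle: float = field(compare=False)  # degrees, from the localization data

def assign(requests, free_assemblies):
    """Yield (user, assembly) pairs: requests by priority, assemblies by closest bearing."""
    heapq.heapify(requests)
    while requests and free_assemblies:
        req = heapq.heappop(requests)
        best = min(free_assemblies, key=lambda a: abs(a["bearing"] - req.desired_angle))
        free_assemblies.remove(best)
        yield req.user_id, best["name"]

queue = [PhotoRequest(2, "user-17", 140.0), PhotoRequest(1, "user-03", 35.0)]
assemblies = [{"name": "cam-north", "bearing": 20.0},
              {"name": "cam-south", "bearing": 160.0}]
print(list(assign(queue, assemblies)))   # user-03 -> cam-north, then user-17 -> cam-south
```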
After the request is sent to a specific assembly, it is received by an assembly logic module (453), which breaks the request down into camera commands (455), for instance shutter release; accessories commands (454), if applicable, for instance a spotlight or flash bulb; and a process which later becomes PTU commands (458).
In order to achieve accurate PTU commands, user localization (456) must take place. This is done according to the applicable embodiment:
Embodiment I: input the user location in said arena, and calculate the 3D location using the mapping or modeling which took place in (441). Embodiments II and III: geo-referencing of the acquired image takes place using the following method:
1. Find the pixel coordinate sets (r, c) in the acquired smartphone image, using image matching techniques:
Embodiment II: Find the calibration objects pattern in the acquired image, detecting the location of the desired calibration objects in the image, using generally known relevant techniques, for example template matching. See Figure 5 for an example.
Embodiment III: Find Corresponding points in the acquired image and the georeferenced images, using generally known relevant techniques, for example interest point matching.
2. Determine inputs:
the (r, c) coordinate sets from the previous step,
an input camera model from the raw sensors,
georeferencing information, either:
Embodiment II: coordinates of the calibration objects
Embodiment III: camera parameters of the reference georeferenced images and the pixel coordinates of the matched tie points in each image
3. Perform georeferencing with either:
Embodiment II: camera calibration using control points (see definitions and notations),
Embodiment III: using multiple images, bundle adjustment can be performed with the new smartphone image joining the bundle. Use tie points from the matched georeferenced images (see definitions and notations).
4. Determine the phone's exterior orientation (see Figure 6): the phone location, which corresponds to the camera model vector C (see definitions and notations), and the phone direction, which can be determined from the camera rotation matrix R (see definitions and notations).
In each embodiment, either the location or the orientation estimate is vastly improved, resulting in fewer errors. The PTU movement calculation (457) is then performed by calculating the angles θ, φ from the difference between the phone location and the camera location, and moving the PTU by the relevant angles.
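Putting the localization and the PTU command together, the sketch below turns the georeferenced phone location (the camera-model vector C recovered in step 4) and the calibrated camera position into pan/tilt angles; the angle convention, the calibrated bearing and the final move call are placeholders for whatever interface the PTU controller actually exposes.

```python
import math
import numpy as np

def ptu_command(phone_C, cam_position, cam_bearing_deg):
    """Pan/tilt (degrees) aiming a north-calibrated PTU at the georeferenced phone location."""
    dx, dy, dz = np.asarray(phone_C, float) - np.asarray(cam_position, float)
    pan = math.degrees(math.atan2(dy, dx)) - cam_bearing_deg   # theta, relative to the calibrated bearing
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))    # phi
    return pan, tilt

# Hypothetical values: phone location from georeferencing, surveyed camera position and bearing.
pan, tilt = ptu_command(phone_C=(85.0, 40.0, 6.0),
                        cam_position=(0.0, 0.0, 12.0),
                        cam_bearing_deg=10.0)
# ptu_controller.move(pan, tilt)   # placeholder call; the real PTU interface is not specified here
print(f"pan {pan:.2f} deg, tilt {tilt:.2f} deg")
```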
The precise orientation can be used to further select a more appropriate assembly (as shown in Figure 7). Taking the photo, several photos or a video clip using the PTU-Camera assembly (459) follows. Several pictures can be taken in sequence, at different zoom levels; see Figure 8 for an example using many zoom levels. Other algorithms can be used in this step, such as facial recognition in order to improve localization. Further "dialogue" with the user (photo retake, retouch, etc.) can take place. The last step is to store, share or distribute the result electronically (460) to the user or to social networks. Other functions can be utilized, such as billing, ad placement and so forth.
The systems refer to a common system with three embodiments, each corresponding to one of the embodiments of the methods. Furthermore, one can assign each step in the method to a module in the system, to obtain the same result.
The system comprises one or more smartphones or handheld devices (420), a calibration setup subsystem (422), a server subsystem (421), and one or more Camera-PTU assemblies (423).
Before any user request can take place, calibration must occur, using well measured (hardware) calibration objects (422) and the calibration algorithm (422). The calibration is identical to the one described in the Methods.
A user operates a smartphone (420), which sends a request to the server (421). In the server, the user interface module (425), for example a locally installed app or a web interface, interfaces with the user. A user ID manager (426) helps trace the request to the user. Other support functions can also be utilized (428). A queue manager (427) sends requests to a desired assembly, based on preconfigured logic.
Whenever a specific assembly is selected, an assembly logic unit (430) breaks the task into sub-tasks, sent to the camera controller (433) which controls the camera (436), the optional accessory controller (434) which controls the optional accessories (437), such as lighting, and the PTU controller (432) which controls the PTU. The most crucial part is calculating the precise localization of the smartphone and the desired PTU movement (431), using data supplied by the user and by the calibration. Three embodiments, corresponding to those of the methods, can be used to perform the described calculations. The calculation is used in order to control the PTU accurately, and is thus an important part of the data supplied to the PTU controller (432). After all the desired commands have been sent to the hardware, images or video clips can be taken (424) and distributed electronically (429), with or without further "dialogue" with the user (photo retake, retouch and the like).
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or an apparatus. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system."
The aforementioned flowchart and block diagrams illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be
implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the above description, an embodiment is an example or implementation of the inventions. The various appearances of "one embodiment," "an embodiment" or "some embodiments" do not necessarily all refer to the same embodiments.
Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment. Reference in the specification to "some embodiments", "an embodiment", "one embodiment" or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions. It will further be recognized that the aspects of the invention described hereinabove may be combined or otherwise coexist in embodiments of the invention.
The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures and examples.
It is to be understood that the details set forth herein do not constitute a limitation on the application of the invention. Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above.
It is to be understood that the terms "including", "comprising", "consisting" and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers or groups thereof and that the terms are to be construed as specifying components, features, steps or integers.
If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional element.
It is to be understood that where the claims or specification refer to "a" or "an" element, such reference is not to be construed as meaning that there is only one of that element.
It is to be understood that where the specification states that a component, feature, structure, or characteristic "may", "might", "can" or "could" be included, that particular component, feature, structure, or characteristic is not required to be included.
Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described.
Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks. The term "method" may refer to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques and procedures by practitioners of the art to which the invention belongs. The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.
Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined. The present invention may be implemented in the testing or practice with methods and materials equivalent or similar to those described herein.
While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention.
Claims
1. A method comprising:
capturing, using a user equipment (UE), at least one image of a scene, wherein the at least one captured image includes, in total, at least three objects having predefined locations on a specified frame of reference;
sending the captured images from the UE to a remote server;
georeferencing said captured images, to yield a location of the UE on the specified frame of reference;
directing a remote camera which is located at a known location on the specified frame of reference, and remotely from the UE, at the UE, based on the georeferenced location of the UE; and
capturing an image containing the UE, by the directed remote camera.
2. The method of claim 1, wherein said georeferencing comprises image calibration using said object locations as control points.
3. The method of claim 1, wherein said georeferencing comprises: photographing imagery depicting said objects, georeferencing said imagery using said objects as control points, and georeferencing said image using tie points to said imagery.
4. The method of claim 1, wherein said georeferencing comprises modeling the scene to yield a model, and manually pointing to a location within the scene or translating a known location in the model into a specific point coordinate.
5. The method according to claim 1, further comprising transmitting said captured image containing the UE.
6. The method according to claim 5, further comprising transmitting said captured image containing the UE, to the UE.
7. The method according to claim 1, wherein said UE comprises an image capturing device and wireless communication circuitry.
8. The method according to claim 1, wherein said UE is one of: a smart telephone; a tablet personal computer (PC); a laptop computer; a near-eye display device; and a head-worn image capturing device.
9. The method according to claim 1, wherein the georeferencing and the directing are repeatedly carried out responsive to a plurality of images sent by a plurality of UEs, wherein an order of the georeferencing and the directing is based on a scheme that optimizes an operation of the remote camera.
10. The method according to claim 1, wherein the georeferencing and the directing are repeatedly carried out responsive to a plurality of images sent by a plurality of UEs, wherein an order of the georeferencing and the directing is based on a priority scheme associating a priority order to the plurality of UEs.
11. The method according to claim 1, further comprising adjusting at least one of: the directing and the capturing by the remote camera, in response to a feedback.
12. The method according to claim 11, wherein the feedback arrives from the UE.
13. A system comprising:
a user equipment (UE) configured to:
capture at least one image of a scene, wherein the at least one captured image includes, in total, at least three objects having predefined locations on a specified frame of reference; and
send the captured images from the UE to a remote server;
a remote server configured to:
georeference the captured images, to yield a location of the UE on the specified frame of reference; and
direct a remote camera which is located at a known location on the specified frame of reference, and remotely from the UE, at the UE, based on the georeferenced location of the UE; and
a remote camera configured to capture an image containing the UE.
14. The system according to claim 13, wherein said georeferencing comprises image calibration using said object locations as control points.
15. The system according to claim 13, wherein said georeferencing comprises: photographing imagery depicting said objects, georeferencing said imagery using said objects as control points, and georeferencing said image using tie points to said imagery.
16. The system according to claim 13, wherein said georeferencing comprises modeling the scene, and manually pointing to a location within the scene.
17. The system according to claim 13, further comprising transmitting said captured image containing the UE.
18. The system according to claim 13, further comprising transmitting said captured image containing the UE, to the UE.
19. The system according to claim 13, wherein said UE comprises an image capturing device and wireless communication circuitry.
20. The system according to claim 13, wherein said UE is one of: a smart telephone; a tablet personal computer (PC); a laptop computer; a near-eye display device; and a head-worn image capturing device.
21. The system according to claim 13, wherein the remote server is configured to repeatedly carry out the georeferencing and the directing responsive to a plurality of images sent by a plurality of UEs, wherein an order of the georeferencing and the directing is based on a scheme that optimizes an operation of the remote camera.
22. The system according to claim 13, wherein the remote server is configured to repeatedly carry out the georeferencing and the directing responsive to a plurality of images sent by a plurality of UEs, wherein an order of the georeferencing and the directing is based on a priority scheme associating a priority order to the plurality of UEs.
23. The system according to claim 13, wherein the remote server is further configured to adjust at least one of: the directing and the capturing by the remote camera, in response to a feedback.
24. The system according to claim 23, wherein the feedback arrives from the UE.
25. A system comprising:
a user equipment (UE) configured to:
capture at least one image of a scene, wherein the at least one captured image includes, in total, at least three objects having predefined locations on a specified frame of reference;
georeference the captured images, to yield a location of the UE on the specified frame of reference; and
send the captured images and the georeferenced location of the UE from the UE to a remote server;
a remote server configured to:
direct a remote camera which is located at a known location on the specified frame of reference, and remotely from the UE, at the UE, based on the georeferenced location of the UE; and
a remote camera configured to capture an image containing the UE.
26. A system comprising:
a user equipment (UE) configured to initiate a self-capturing;
a georeferencing module configured to georeference a location of the UE; and a remote camera configured to capture an image containing the UE based on the georeferenced location of the UE,
wherein the UE, the remote server, and the remote camera are located in different locations.
27. The system according to claim 26, wherein the system is configured to manage synchronous and/or asynchronous ordering of remote self-photos.
28. The system according to claim 26, wherein the system allocates the most relevant camera per request from one or more cameras, taking into account the relative positions of the different cameras, the subject and the background, and the availability of the different cameras, among other considerations.
29. The system according to claim 26, wherein the system establishes a dialogue with the ordering person to coordinate the photo-taking time, including informing the customer of camera availability, the user's place in the ordering queue, an estimated time until picture taking, an alert before the actual take, user queue notifications, and other usability features.
30. The system according to claim 26, wherein the system comprises an image/face recognition feature that enables identification of human figures and/or the specific user within the picture, and is enabled to set the focus and frame content based on such identification.
31. The system according to claim 26, wherein the system enables a real-time dialogue in which one or more pictures taken are presented, enabling the user to redo or quit if they are not satisfied with the result.
32. The system according to claim 26, wherein the system provides the user with simplified registration and/or enrolment, either through a unique ID or a QR code printed on an event ticket, in a way that identifies the event, location and venue.
33. The system according to claim 26, wherein the system comprises mechanisms to charge for the delivery or usage of such material.
34. The system according to claim 26, wherein the system comprises a mechanism to charge the user or a third party for actual photos taken and/or sent and/or shared and/or printed and/or delivered.
35. The system according to claim 26, wherein the system is further configured to digitally embed a special logo, text or any other message, in full or as a watermark or similar, within or near the picture, as a memorabilia or monetization method under specific business rules.
36. The system according to claim 26, wherein the system is further configured to send the picture, or a downgraded or improved version of it, to the customer by MMS, e-mail, download link or instant messaging system, according to ordering or business rules.
37. The system according to claim 26, wherein the system is further configured to share the picture or a downgraded version of it with the user, and to condition sharing of a better or improved version on payment or another form of remuneration.
38. The system according to claim 26, wherein the system is further configured to automatically share the picture over social networks and instant messaging systems according to preset rules and/or a user request.
39. The system according to claim 26, wherein the system further enables automatic or pre-set improvements in image quality, by adding features to the picture or by removing unnecessary features.
40. The system according to claim 26, wherein the camera orientation is carried out using a physical, identifiable preset shape marked within the area, instead of or in addition to the camera orientation methods.
41. The system according to claim 26, wherein additional image manipulations are performed, including automatically stitched panoramas.
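By way of an illustrative, non-limiting sketch only (not part of the claimed subject matter), the georeferencing and directing steps recited in claims 1-4 could be realized as follows: the UE position is recovered by spatial resection from scene objects with known coordinates, and pan and tilt angles are then derived for a remote camera placed at a known location. The function and variable names below are hypothetical, the UE camera intrinsics are assumed to be known from a prior calibration, and the sketch uses OpenCV's solvePnP with at least four control points (the claimed minimum of three would call for a P3P-type solver with a disambiguation step).

```python
# Illustrative sketch only; assumed names, not part of the claims.
import numpy as np
import cv2


def georeference_ue(object_points_world, image_points_px, camera_matrix, dist_coeffs):
    """Estimate the UE position on the scene frame of reference by spatial resection.

    object_points_world: Nx3 coordinates of the known scene objects (N >= 4 here).
    image_points_px:     Nx2 pixel coordinates of those objects in the UE image.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points_world, dtype=np.float64),
        np.asarray(image_points_px, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("Resection failed; check the control points")
    rotation, _ = cv2.Rodrigues(rvec)
    # Camera (UE) position in world coordinates: C = -R^T * t
    return (-rotation.T @ tvec).ravel()


def pan_tilt_towards(remote_camera_position, ue_position):
    """Pan/tilt angles (degrees) that point the remote camera at the georeferenced UE."""
    d = np.asarray(ue_position, dtype=float) - np.asarray(remote_camera_position, dtype=float)
    pan = np.degrees(np.arctan2(d[1], d[0]))                    # azimuth in the X-Y plane
    tilt = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))   # elevation above the X-Y plane
    return pan, tilt
```

Translating the resulting pan and tilt values into actual commands would depend on the particular pan-tilt-zoom camera and its control interface.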
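Similarly, claims 9, 10, 21 and 22 recite ordering the georeferencing and directing across multiple requesting UEs. A minimal scheduling sketch, again with hypothetical names and with the georeferencing, directing and capturing steps passed in as callables, might look like this:

```python
# Illustrative sketch only: a priority queue over incoming UE requests; assumed names.
import heapq
import itertools
from dataclasses import dataclass, field


@dataclass(order=True)
class CaptureRequest:
    priority: int                       # lower value = served earlier
    seq: int                            # arrival order, used as a tie-breaker
    ue_id: str = field(compare=False)
    images: list = field(compare=False)


class CaptureScheduler:
    def __init__(self):
        self._queue = []
        self._seq = itertools.count()

    def submit(self, ue_id, images, priority=0):
        """Register images sent by a UE, with a priority per the chosen scheme."""
        heapq.heappush(self._queue, CaptureRequest(priority, next(self._seq), ue_id, images))

    def serve_next(self, georeference, direct_camera, capture):
        """Georeference, direct and capture for the highest-priority pending UE."""
        if not self._queue:
            return None
        request = heapq.heappop(self._queue)
        ue_location = georeference(request.images)   # e.g. resection as sketched above
        direct_camera(ue_location)                   # aim the remote camera at the UE
        return request.ue_id, capture()              # image containing that UE
```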
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
US201361914067P | 2013-12-10 | 2013-12-10 |
US61/914,067 | 2013-12-10 | |
Publications (1)
Publication Number | Publication Date
---|---
WO2015087315A1 (en) | 2015-06-18
Family
ID=53370700
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
PCT/IL2014/050471 WO2015087315A1 (en) | 2013-12-10 | 2014-05-26 | Methods and systems for remotely guiding a camera for self-taken photographs |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2015087315A1 (en) |
2014-05-26: Application PCT/IL2014/050471 filed as WO2015087315A1 (active, Application Filing)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060209303A1 (en) * | 2005-03-16 | 2006-09-21 | Shuto Ohta | System and method for automated positioning of camera |
US20080240697A1 (en) * | 2005-07-26 | 2008-10-02 | Marcus Brian I | Remote View And Controller For A Camera |
US20080137912A1 (en) * | 2006-12-08 | 2008-06-12 | Electronics And Telecommunications Research Institute | Apparatus and method for recognizing position using camera |
US20100103258A1 (en) * | 2007-03-21 | 2010-04-29 | Nxp, B.V. | Camera arrangement and method for determining a relative position of a first camera with respect to a second camera |
US20110115930A1 (en) * | 2009-11-17 | 2011-05-19 | Kulinets Joseph M | Image management system and method of selecting at least one of a plurality of cameras |
US20110304730A1 (en) * | 2010-06-09 | 2011-12-15 | Hon Hai Precision Industry Co., Ltd. | Pan, tilt, and zoom camera and method for aiming ptz camera |
US20120019659A1 (en) * | 2010-07-23 | 2012-01-26 | Robert Bosch Gmbh | Video surveillance system and method for configuring a video surveillance system |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017017291A (en) * | 2015-07-06 | 2017-01-19 | Tdk株式会社 | Hoop load port device |
CN112802115A (en) * | 2020-12-26 | 2021-05-14 | 长光卫星技术有限公司 | Geometric calibration method and device for multi-focal-plane spliced large-view-field off-axis camera |
CN112802115B (en) * | 2020-12-26 | 2022-03-01 | 长光卫星技术有限公司 | Geometric calibration method and device for multi-focal plane splicing large field of view off-axis camera |
CN117036506A (en) * | 2023-08-25 | 2023-11-10 | 浙江大学海南研究院 | Binocular camera calibration method |
CN117036506B (en) * | 2023-08-25 | 2024-05-10 | 浙江大学海南研究院 | Binocular camera calibration method |
Similar Documents
Publication | Title | Publication Date
---|---|---
CN109087244B (en) | Panoramic image splicing method, intelligent terminal and storage medium | |
CN105606077B (en) | Geodetic Measuring System | |
KR102143456B1 (en) | Depth information acquisition method and apparatus, and image collection device | |
JP5865388B2 (en) | Image generating apparatus and image generating method | |
JP6398472B2 (en) | Image display system, image display apparatus, image display method, and program | |
JP5620036B1 (en) | Digital camera for panoramic photography and panoramic photography system | |
KR101653041B1 (en) | Method and apparatus for recommending photo composition | |
US9939263B2 (en) | Geodetic surveying system | |
CN102932584B (en) | Display unit and display packing | |
US20160050349A1 (en) | Panoramic video | |
JP5812509B2 (en) | Map display device, map display method and program | |
CN110291777A (en) | Image acquisition method, device and machine-readable storage medium | |
TW201725899A (en) | Electronic device and photo shooting method | |
JP2011058854A (en) | Portable terminal | |
JP2015231101A (en) | Imaging condition estimation apparatus and method, terminal device, computer program and recording medium | |
WO2015087315A1 (en) | Methods and systems for remotely guiding a camera for self-taken photographs | |
JP7504688B2 (en) | Image processing device, image processing method and program | |
JP7439398B2 (en) | Information processing equipment, programs and information processing systems | |
JP2019012201A (en) | Image pick-up device, image pick-up program, and image pick-up method | |
WO2016071896A1 (en) | Methods and systems for accurate localization and virtual object overlay in geospatial augmented reality applications | |
JP2014120815A (en) | Information processing apparatus, imaging device, information processing method, program, and storage medium | |
JP6610741B2 (en) | Image display system, image display apparatus, image display method, and program | |
JP2019068429A (en) | Imaging condition estimation apparatus and method, terminal device, computer program, and recording medium | |
KR20140068416A (en) | Apparatus and method for generating three dimension image using smart device | |
CN114175631B (en) | Image processing device, image processing method, program and storage medium |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14868995; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 14868995; Country of ref document: EP; Kind code of ref document: A1