CN110418120A - System and method for stacked projector alignment - Google Patents
- Publication number
- CN110418120A
- Authority
- CN
- China
- Prior art keywords
- image
- projected image
- projector
- projected
- target position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3185—Geometric adjustment, e.g. keystone or convergence
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Geometry (AREA)
- Projection Apparatus (AREA)
- Transforming Electric Information Into Light Information (AREA)
Abstract
The application provides a system and method for stacked projector alignment. A computing device communicates with at least two projectors that project images to be stacked on one another, and with at least one sensor that acquires corresponding images of the projected images. From the sensor images, an initial position of each projected image is determined in a common region, together with a virtual projected image located at a common target position. Respective transforms are determined that move and transform each projected image to the common target position of the virtual projected image. The respective transforms are applied to further projected images to produce corresponding transformed projected images. The projectors are controlled to project the transformed projected images based on the common target position.
Description
Priority claim
This application claims priority to U.S. Application No. 15/499,954, filed on April 28, 2017, the contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of projectors and, in particular, to a system and method for stacked projector alignment.
Background
Images projected by stacked projectors typically need to be aligned. For example, each stacked projector may project the same image, and the images are aligned on a surface to increase the brightness of the image at the surface. In other cases, one or more of the stacked projectors may project different images; for example, one image may be surrounded by images projected by the other stacked projectors. However, even small differences in the alignment of the images projected by stacked projectors are easily noticed by viewers. While such alignment can be performed manually, it tends to drift over time as the environment changes after the manual alignment, so periodic manual maintenance is required. Automated alignment approaches also exist, but they are generally complex and require substantial processing and/or equipment cost.
Summary of the invention
In general, this application relates to a system and method for stacked projector alignment. The stacked projectors include, or are used with, one or more sensors, such as cameras, which acquire images of the initially unaligned images projected by each stacked projector onto, for example, a screen and/or surface. The images from the one or more sensors are used to determine the initial position of each projected image in a common region, such as a camera region. A virtual projected image is located at a common target position in the common region, and respective transforms are determined that move and/or transform and/or warp each image from the stacked projectors to the common target position. These transforms are applied to the images to be projected so that the transformed images are at the target position, and the projectors are controlled to project the transformed images, aligned and stacked, at the common target position.
In this specification, elements may be described as "configured to" perform one or more functions or "configured for" such functions. In general, an element that is configured to perform, or configured for performing, a function is enabled to perform the function, or is suitable for performing the function, or is adapted to perform the function, or is operable to perform the function, or is otherwise capable of performing the function.
It is understood that, for purposes of this specification, the language "at least one of X, Y and Z" and "one or more of X, Y and Z" can be construed as X only, Y only, Z only, or any combination of two or more of X, Y and Z (for example, XYZ, XY, YZ, ZZ, and the like). Similar logic can be applied to two or more items in any occurrence of the "at least one ..." and "one or more ..." language.
Further, in this specification, the term "corresponding model" refers to a model of a series of correspondences; a corresponding model can include a series of corresponding points and/or a respective function describing those correspondences, including, but not limited to, a best-fit polynomial.
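For illustration only, the following minimal sketch shows one way such a corresponding model could be represented in code: a series of corresponding point pairs together with a respective best-fit function evaluated from them. The point values, the affine (degree-1 polynomial) fit, and all names are assumptions of this sketch and are not part of the disclosure.

```python
import numpy as np

# Hypothetical sampled correspondences: projector pixel -> camera pixel.
proj_pts = np.array([[0, 0], [100, 0], [200, 0],
                     [0, 100], [100, 100], [200, 100]], dtype=float)
cam_pts = np.array([[12, 8], [115, 10], [218, 13],
                    [14, 112], [117, 115], [220, 118]], dtype=float)

# The corresponding model can simply be the series of corresponding points...
correspondences = list(zip(proj_pts, cam_pts))

# ...and/or a respective function fitted to them, here a best-fit degree-1
# polynomial (affine map) from projector pixels to camera pixels.
A = np.column_stack([proj_pts, np.ones(len(proj_pts))])   # design matrix [x, y, 1]
coeffs, *_ = np.linalg.lstsq(A, cam_pts, rcond=None)      # 3x2 coefficient matrix

def proj_to_cam(p):
    """Evaluate the fitted correspondence model at projector pixel p = (x, y)."""
    return np.array([p[0], p[1], 1.0]) @ coeffs

print(proj_to_cam((100, 100)))  # approximately [117, 115]
```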
An aspect of the present invention provides a system comprising: a first projector for projecting a first projected image; a second projector for projecting a second projected image stacked on the first projected image, the first projected image and the second projected image being initially unaligned; at least one sensor for acquiring a first image of the first projected image and a second image of the second projected image; and a computing device in communication with the first projector, the second projector and the at least one sensor, the computing device configured to: determine, in a common region, respective initial positions of the first projected image and the second projected image from the first image and the second image; determine, in the common region, a virtual projected image located at a common target position of the first projected image and the second projected image in the common region; determine respective transforms for the first projected image and the second projected image that respectively one or more of move and transform the first projected image and the second projected image according to the common target position of the virtual projected image; apply the respective transforms to a further first projected image and a further second projected image to respectively produce a transformed first projected image and a transformed second projected image; and control the first projector and the second projector to respectively project the transformed first projected image and the transformed second projected image based on the common target position.
In some embodiments, the respective transforms include warping of the first projected image and the second projected image. In some embodiments, the warping includes applying a predefined warp to the virtual projected image located at the common target position.
In some embodiments, the common target position is located between the initial position of the first projected image and the initial position of the second projected image.
In some embodiments, the common target position is an average of the initial position of the first projected image and the initial position of the second projected image.
In some embodiments, the virtual projected image is positioned at the common target position using a continuously adjustable parameter.
In some embodiments, the computing device is further configured to control the first projector and the second projector to respectively project the transformed first projected image and the transformed second projected image based on the common target position by adjusting image input to the first projector and the second projector.
In some embodiments, the system further comprises one or more additional projectors for projecting one or more additional projected images stacked on the first projected image and the second projected image, the one or more additional projected images being initially unaligned with one or more of the first projected image and the second projected image, the at least one sensor being further for acquiring corresponding images of the one or more additional projected images, and the computing device being in communication with the one or more additional projectors and further configured to: determine additional respective transforms for moving the one or more additional projected images to the common target position; and, using the additional respective transforms, control each of the one or more additional projectors to respectively project a transformed projected image based on the common target position. In some embodiments, the common target position is a median of the initial positions of the first projected image, the second projected image and the one or more additional projected images.
In some embodiments, the one or more sensors comprise one or more cameras, and the first image and the second image comprise respective camera images. In some embodiments, the common region comprises a camera region.
In some embodiments, the one or more sensors comprise a plurality of light sensors embedded in, or adjacent to, a surface onto which the first projected image and the second projected image are projected, and the first image and the second image comprise respective sensor images. In some embodiments, the common region comprises a sensor region.
In some embodiments, the common region comprises one or more of: a region of one of the first projector and the second projector; a camera region; a screen region; and an arbitrary mathematical region.
Further, another aspect of the present invention provides a method comprising: at a computing device in communication with a first projector, a second projector and at least one sensor, the first projector for projecting a first projected image, the second projector for projecting a second projected image stacked on the first projected image, the first projected image and the second projected image being initially unaligned, and the at least one sensor for acquiring a first image of the first projected image and a second image of the second projected image: determining, in a common region, respective initial positions of the first projected image and the second projected image from the first image and the second image; determining, in the common region, a virtual projected image located at a common target position of the first projected image and the second projected image in the common region; determining respective transforms for the first projected image and the second projected image that respectively one or more of move and transform the first projected image and the second projected image according to the common target position of the virtual projected image; applying the respective transforms to a further first projected image and a further second projected image to respectively produce a transformed first projected image and a transformed second projected image; and controlling the first projector and the second projector to respectively project the transformed first projected image and the transformed second projected image based on the common target position.
In some embodiments, the respective transforms include warping of the first projected image and the second projected image. In some embodiments, the warping includes applying a predefined warp to the virtual projected image located at the common target position.
In some embodiments, the common target position is located between the initial positions of the first projected image and the second projected image.
In some embodiments, the common target position is an average of the initial position of the first projected image and the initial position of the second projected image.
In some embodiments, the virtual projected image is positioned at the common target position using a continuously adjustable parameter.
In some embodiments, the computing device is further configured to control the first projector and the second projector to respectively project the transformed first projected image and the transformed second projected image based on the common target position by adjusting image input for the first projected image and the second projected image.
In some embodiments, the system further comprises one or more additional projectors for projecting one or more additional projected images stacked on the first projected image and the second projected image, the one or more additional projected images being initially unaligned with one or more of the first projected image and the second projected image, the at least one sensor being further for acquiring corresponding images of the one or more additional projected images, and the computing device being in communication with the one or more additional projectors and further configured to: determine additional respective transforms for moving the one or more additional projected images to the common target position; and, using the additional respective transforms, control each of the one or more additional projectors to respectively project a transformed projected image based on the common target position. In some embodiments, the common target position is a median of the initial positions of the first projected image, the second projected image and the one or more additional projected images.
In some embodiments, the one or more sensors comprise one or more cameras, and the first image and the second image comprise respective camera images. In some embodiments, the common region comprises a camera region.
In some embodiments, the one or more sensors comprise a plurality of light sensors embedded in, or adjacent to, a surface onto which the first projected image and the second projected image are projected, and the first image and the second image comprise respective sensor images. In some embodiments, the common region comprises a sensor region.
In some embodiments, the common region comprises one or more of: a region of one of the first projector and the second projector; a camera region; a screen region; and an arbitrary mathematical region.
Another aspect of this specification provides a non-transitory computer-readable medium storing a computer program, wherein execution of the computer program is for: at a computing device in communication with a first projector, a second projector and at least one sensor, the first projector for projecting a first projected image, the second projector for projecting a second projected image stacked on the first projected image, the first projected image and the second projected image being initially unaligned, and the at least one sensor for acquiring a first image of the first projected image and a second image of the second projected image: determining, in a common region, respective initial positions of the first projected image and the second projected image from the first image and the second image; determining, in the common region, a virtual projected image located at a common target position of the first projected image and the second projected image in the common region; determining respective transforms for the first projected image and the second projected image that respectively one or more of move and transform the first projected image and the second projected image according to the common target position of the virtual projected image; applying the respective transforms to a further first projected image and a further second projected image to respectively produce a transformed first projected image and a transformed second projected image; and controlling the first projector and the second projector to respectively project the transformed first projected image and the transformed second projected image based on the common target position.
Brief description of the drawings
For a better understanding of the various embodiments described herein and to show more clearly how they may be carried into effect, these exemplary embodiments are described in more detail below with reference to the accompanying drawings, in which:
Fig. 1 shows a projection system, according to non-limiting embodiments.
Fig. 2 shows a system for stacked projector alignment, which can be used with the system of Fig. 1, according to non-limiting embodiments.
Fig. 3 shows a method for stacked projector alignment, according to non-limiting embodiments.
Fig. 4 shows the projectors of the system of Fig. 2 initially projecting unaligned images, according to non-limiting embodiments.
Fig. 5 shows exemplary structured light images used to determine initial positions of the projected images in a common region, according to non-limiting embodiments.
Fig. 6 shows determination of transforms and a virtual target image used to move and/or transform the projected images into a stacked configuration (for example, to the position of the virtual target image), according to non-limiting embodiments.
Fig. 7 shows the projectors of the system of Fig. 2 projecting stacked images, according to non-limiting embodiments.
Detailed description of embodiments
System 100 shown in Fig. 1 comprises: a rendering device 101 (referred to interchangeably hereafter as device 101); a content player 103; a computing device 105; and two or more projectors 107-1, 107-2 (referred to interchangeably hereafter as projectors 107). In general, device 101 communicates with content player 103 and, optionally, with computing device 105, and content player 103 communicates with projectors 107. As depicted, device 101 and content player 103 are combined into one device 108, but in other embodiments device 101 and content player 103 can be separate devices. Computing device 105 is optionally configured to generate alignment data 109a, which comprises data for aligning the images projected by projectors 107. From alignment data 109a, device 101 can produce rendered image data 110 for projection, for example by rendering existing image data (not shown) for projection by projectors 107. In Fig. 1, solid lines connecting components show the flow of image and/or video data, while the dash-dot lines connecting computing device 105 to device 101 and/or device 108 show the flow of alignment data 109a.
When device 101 and content player 103 are separate, device 101 communicates image data 110 to content player 103, which processes and/or "plays" image data 110 by producing projection data 112-1, 112-2 (referred to interchangeably hereafter as projection data 112) suitable for processing and playing by the respective projectors 107. For example, image data 110 can include, but is not limited to, an AVI file, a series of JPG files, a PNG file, and the like. Projection data 112 can include, but is not limited to, HDMI data, VGA data, and/or video transport data. When device 101 and content player 103 are combined in device 108, device 108 can render projection data 112 (for example, image data) in real time without producing image data 110. In any event, projection data 112 is communicated to projectors 107 by content player 103, where projection data 112 is used to control projectors 107 to project images, for example onto a three-dimensional object. In some embodiments, content and/or media, such as image data 110, is rendered in advance of player 103 "playing" the media at a later time. For example, in some embodiments, system 100 is distributed: rendering of the media can occur at a location such as a production facility, while the corresponding content player 103 can be installed with each projector at a venue such as a theatre.
Device 101 generally comprises an image generator and/or renderer, for example a computing device, a server, and the like, configured to generate and/or render images as image data 110. Such image data 110 can include, but is not limited to, still images, video, and the like. Furthermore, though not depicted, device 101 can be in communication with, and/or comprise, an image generator and/or a memory storing data from which image data 110 can be generated and/or rendered. Alternatively, device 101 can generate image data 110 using algorithms and the like for generating images.
Content player 103 comprises a player for "playing" and/or rendering image data 110; for example, when image data 110 comprises video data, content player 103 plays and/or renders it by outputting projection data 112 for projection by projectors 107. Hence, content player 103 can include, but is not limited to, a video player, a video processing device, a computing device, a server, and the like. However, as described above, when device 101 and content player 103 are combined as device 108, rendering of image data 110 can be eliminated, and device 108 renders projection data 112 without producing image data 110.
Computing device 105 comprises any suitable combination of projectors (including projectors 107), cameras (not depicted in Fig. 1) and/or light sensors, and a computing device configured to automatically determine alignment data 109a for projectors 107. Non-limiting embodiments of computing device 105 and its functionality are described below with reference to Fig. 2 through Fig. 7.
Each projector 107 comprises a projector configured to project projection data 112, including, but not limited to, a digital projector, a cinema projector, an LCOS (liquid crystal on silicon) projector, a DMD (digital micromirror device) projector, and the like. Furthermore, while only two projectors 107 are depicted, system 100 can comprise two or more projectors 107, each configured to project respective projection data comprising, for example, the same image to be stacked. Regardless of the technology used in projectors 107, it is assumed that projectors 107, and/or other projectors described herein, include an image modulator comprising a plurality of individual pixel modulators; for example, when a projector comprises a DMD projector, the image modulator comprises a plurality of digital micromirrors, one digital micromirror for each pixel of an image to be projected.
As depicted, system 100 further comprises one or more 2D (two-dimensional) warping devices and/or modules 113, for example at projectors 107 (though such warping devices can alternatively be located at content player 103 and/or as standalone devices). Projection data 112 can be warped by warping module 113, for example by moving and/or adjusting pixels within projection data 112, to adjust the projection data 112 projected by projectors 107 onto a target object, the target object including, but not limited to, a screen, an object, and the like. However, as computing device 105 determines alignment data 109a and communicates with device 101 (and/or device 108), warping module 113 can be unused, optionally used, and/or eliminated from system 100. Indeed, use of warping module 113 represents how images were processed according to the prior art, and, because computing device 105 provides alignment data 109a to device 101 (and/or device 108), warping module 113 can be eliminated. In some embodiments, however, warping module 113 can be used to make fine adjustments to the images projected onto a physical object.
While device 101, content player 103, computing device 105 and projectors 107 are depicted as distinct components, in other embodiments, respective portions of one or more of device 101, content player 103, computing device 105 and projectors 107 can be implemented within the same device (for example, device 108) and/or can share processing resources between them. For example, while not depicted, system 100 comprises one or more controllers, one or more processors, one or more memories and one or more communication interfaces; for example, the controllers, memories and communication interfaces of device 101, content player 103, computing device 105 and projectors 107 can be shared among device 101, content player 103, computing device 105 and projectors 107. Indeed, in general, the components of system 100, as depicted, represent different functionality of a projection system in which alignment data 109a for projectors 107 can be automatically determined. In some embodiments, system 100 includes components and/or functionality for projection mapping onto three-dimensional objects and/or for updating alignment data 109a when one or more of projectors 107 moves and/or when the screen or object onto which images are being projected moves.
Attention is now directed to Fig. 2, which depicts a system 200 for stacked projector alignment. Indeed, computing device 105 can comprise one or more modules of system 200 and, furthermore, modules of system 100 can comprise modules of system 200 as desired. System 200 comprises: a computing device 201 (referred to interchangeably hereafter as device 201); a first projector 207-1 for projecting a first projected image; a second projector 207-2 for projecting a second projected image stacked on the first projected image, the first projected image and the second projected image being initially unaligned; at least one sensor for acquiring a first image of the first projected image and a second image of the second projected image; and a surface 215 onto which projectors 207 project images. Projectors 207-1, 207-2 are referred to interchangeably hereafter, collectively, as projectors 207 and, generically, as a projector 207.
As depicted, the at least one sensor is provided in the form of a camera 214, and the first image (for example, of the first projected image projected by projector 207-1) and the second image (for example, of the second projected image projected by projector 207-2) comprise respective camera images.
In other embodiments, the at least one sensor can comprise a plurality of light sensors embedded in, or adjacent to, surface 215, the surface onto which the first projected image and the second projected image are projected, and the first image and the second image comprise respective sensor images, for example light intensity maps from the light sensors. For example, the light sensors can be embedded in surface 215 in an array.
Clearly, while the present embodiments are described using camera 214 as an example of the at least one sensor, persons skilled in the art will appreciate that the embodiments can include other types of sensors and/or light sensors (for example, embedded in screen 215 and/or adjacent to screen 215), provided the sensors can be used to acquire images of the images projected by projectors 207. The resolution of images acquired by camera 214 can be higher than the resolution of images acquired by light sensors; for example, images acquired by camera 214 can comprise millions of pixels, while the number of pixels of images acquired by light sensors embedded in and/or adjacent to a screen depends on the number of sensors and may be as few as, for example, six pixels (for example, one sensor at each corner of screen 215 and two sensors in between).
In general, the computing device is in communication with the first projector 207-1, the second projector 207-2 and the at least one sensor (for example, camera 214).
Projectors 207 are to be stacked, and are hence positioned relative to surface 215 such that the images they project onto surface 215 can be aligned with one another, though the images are initially unaligned when projected. Furthermore, to align the images projected by projectors 207, camera 214 can be used to acquire camera images 414 of the images projected by projectors 207.
However, while the present embodiments are described with respect to surface 215 being flat (for example, surface 215 can comprise a screen and the like), in other embodiments, surface 215 can be replaced with an object, including a three-dimensional object, onto which projectors 207 can projection map.
In general, comparing Fig. 1 and Fig. 2, projectors 107 can comprise projectors 207, computing device 105 can comprise computing device 201, projectors 207 and camera 214, and, for example when resources are shared among device 101, content player 103 and computing device 105, either of device 101 and content player 103 can comprise at least a portion of computing device 201. Furthermore, while two stacked projectors 207 are depicted, system 200 can comprise more than two stacked projectors. Similarly, while only one camera 214 is depicted, system 200 can comprise a plurality of cameras 214; it is further understood that camera 214 of system 200 can comprise a digital camera capable of communicating digital camera images to computing device 201.
Device 201 can comprise any suitable computing device, including, but not limited to, a graphics processing unit (GPU), a graphics processing device, a graphics processing engine, a video processing device, a personal computer (PC), a server, and the like, and generally comprises a controller 220, a memory 222 and a communication interface 224 (referred to interchangeably hereafter as interface 224) and, optionally, any suitable combination of input devices and display devices.
Interface 224 comprises any suitable wired or wireless communication interface configured to communicate with projectors 207 and camera 214 (and with any of device 101, content player 103, computing device 105 and device 108, and/or any other sensors) in a wired and/or wireless manner as desired.
Controller 220 can comprise a processor and/or a plurality of processors, including, but not limited to, one or more central processing units (CPUs) and/or one or more processing units; either way, controller 220 comprises a hardware element and/or a hardware processor. Indeed, in some embodiments, controller 220 can comprise an ASIC (application-specific integrated circuit) and/or an FPGA (field-programmable gate array) specifically configured to, at least, determine alignment data for projectors, such as projectors 207. Hence, device 201 is preferably not a generic computing device, but a device specifically configured to implement the functionality of determining alignment data. For example, device 201 and/or controller 220 can specifically comprise a computer-executable engine configured to implement the specific alignment-data determination functionality.
Memory 222 can comprise a non-volatile storage unit (for example, Electrically Erasable Programmable Read-Only Memory ("EEPROM"), Flash Memory) and a volatile storage unit (for example, random access memory ("RAM")). Programming instructions that implement the functionality of device 201 as described herein are typically maintained, persistently, in memory 222 and used by controller 220, which makes appropriate utilization of volatile storage during the execution of such programming instructions. Those of ordinary skill in the art recognize that memory 222 is an example of computer-readable media that can store programming instructions executable by controller 220. Furthermore, memory 222 is also an example of a memory unit and/or memory module and/or a non-volatile storage medium.
In particular, when controller 220 processes a program 230 stored in memory 222, controller 220 is enabled, and/or computing device 201 is configured, to: determine, in a common region, respective initial positions of the first projected image and the second projected image from the first image (for example, acquired by camera 214 and/or other sensors) and the second image (for example, acquired by camera 214 and/or other sensors); determine, in the common region, a virtual projected image located at a common target position of the first projected image and the second projected image in the common region; determine respective transforms for the first projected image and the second projected image to respectively move the first projected image and the second projected image according to the common target position of the virtual projected image; apply the respective transforms to a further first projected image and a further second projected image to respectively produce a transformed first projected image and a transformed second projected image; and control the first projector 207-1 and the second projector 207-2 to respectively project the transformed first projected image and the transformed second projected image based on the common target position.
In any event, computing device 201 and/or controller 220 is generally configured to determine alignment data 109a for projectors 207, and alignment data 109a can comprise the respective transforms.
Attention is now directed to Fig. 3, which depicts a flowchart of a method 300 for stacked projector alignment, according to non-limiting embodiments. To assist in the explanation of method 300, it will be assumed that method 300 is performed using system 200, and specifically by controller 220 of device 201, for example when controller 220 processes program 230. Indeed, method 300 is one way in which system 200 and/or device 201 can be configured. Furthermore, the following discussion of method 300 will lead to a further understanding of device 201, system 200 and their various components. However, it is to be understood that system 200 and/or device 201 and/or method 300 can be varied and need not work exactly as discussed herein in conjunction with each other, and that such variations are within the scope of the present embodiments.
Regardless, it is emphasized that method 300 need not be performed in the exact sequence shown, unless otherwise indicated; and, likewise, various blocks can be performed in parallel rather than in sequence; hence the elements of method 300 are referred to herein as "blocks" rather than "steps". It is also to be understood, however, that method 300 can be implemented on variations of system 200. Furthermore, while computing device 201 is described as implementing and/or performing each block of method 300, it is appreciated that each block of method 300 occurs using controller 220 processing program 230.
In method 300, it is further assumed that camera 214 (and/or other sensors described above) has acquired the first image of the first projected image and the second image of the second projected image, for example by controller 220: controlling the first projector 207-1 to project at least one first projected image; controlling the second projector 207-2 to project at least one second projected image, for example onto surface 215, at a time different from the time at which the first projector 207-1 projects the at least one first projected image (for example, in a given sequence, each projector 207 projects images at a different time); and controlling camera 214 to acquire at least one first image of the at least one first projected image and at least one second image of the at least one second projected image.
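As a rough illustration only, the sequential project-and-capture step assumed above might be driven as follows; the projector and camera objects and their project() and capture() methods are hypothetical stand-ins for whatever interface controller 220 actually uses.

```python
def capture_projected_images(projectors, camera, test_image):
    """Project a test image from each projector in turn and capture each result."""
    captured = []
    for projector in projectors:
        projector.project(test_image)      # only this projector shows the test image
        captured.append(camera.capture())  # e.g. a camera image 414 of that projection
        projector.project(None)            # blank this projector before the next one
    return captured
```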
At block 301, computing device 201 and/or controller 220 determines, in a common region, respective initial positions of the first projected image and the second projected image from the first image and the second image.
At block 303, computing device 201 and/or controller 220 determines, in the common region, a virtual projected image located at a common target position of the first projected image and the second projected image in the common region.
At block 305, computing device 201 and/or controller 220 determines respective transforms for the first projected image and the second projected image that respectively one or more of move and transform the first projected image and the second projected image according to the common target position of the virtual projected image. Indeed, moving a projected image can occur on its own without any accompanying warping, but moving a projected image can also include warping and/or transforming of the image. It is noted that transforming, as described herein with respect to a projected image, includes moving the projected image to the common target position of the virtual image and, optionally, warping the projected image to the shape of the virtual image at the common target position. A transformed projected image, as described herein, can hence also be considered a moved projected image which has, optionally, undergone warping.
At block 307, computing device 201 and/or controller 220 applies the respective transforms to a further first projected image and a further second projected image to respectively produce a transformed first projected image and a transformed second projected image.
At block 309, computing device 201 and/or controller 220 controls the first projector 207-1 and the second projector 207-2 to respectively project the transformed first projected image and the transformed second projected image based on the common target position, such that the transformed first projected image and the transformed second projected image are projected in an aligned, stacked configuration, for example onto surface 215.
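A compact sketch of blocks 301 to 309 follows, under simplifying assumptions: each projected image's footprint in the common region is reduced to its four corner points, the respective transforms are homographies taking those footprints to the virtual projected image V (placed here at the average of the initial positions), and the further images are assumed to already be expressed in common-region coordinates, so the additional mapping through each projector's own correspondence model is omitted. The function names and the use of OpenCV are assumptions of this sketch, not a definitive implementation.

```python
import numpy as np
import cv2

def align_two_projectors(corners_1, corners_2, image_1, image_2, out_size):
    """corners_i: 4x2 np.float32 corners of projected image i in the common region (block 301)."""
    # Block 303: virtual projected image V at the common target position,
    # here the average of the two initial positions.
    corners_v = ((corners_1 + corners_2) / 2.0).astype(np.float32)

    # Block 305: respective transforms W1, W2 moving each footprint onto V.
    W1 = cv2.getPerspectiveTransform(corners_1, corners_v)
    W2 = cv2.getPerspectiveTransform(corners_2, corners_v)

    # Block 307: apply the transforms to the further images to be projected.
    transformed_1 = cv2.warpPerspective(image_1, W1, out_size)
    transformed_2 = cv2.warpPerspective(image_2, W2, out_size)

    # Block 309: the projectors are then driven with the transformed images,
    # which land stacked at the common target position.
    return transformed_1, transformed_2
```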
A non-limiting embodiment of method 300 is described below with reference to Fig. 4 through Fig. 7.
Fig. 4 depicts a perspective view of surface 215 with a first projected image 407-1 projected onto surface 215 by the first projector 207-1 (projection shown in broken lines) and a second projected image 407-2 projected onto surface 215 by the second projector 207-2 (projection shown in broken lines). The first projected image 407-1 and the second projected image 407-2 are referred to interchangeably hereafter, collectively, as projected images 407 and, generically, as a projected image 407. Furthermore, while Fig. 4 depicts both projected images 407 on surface 215 simultaneously, each projected image 407 can be projected at a different time. Moreover, even if projected simultaneously, the projected images 407 are unaligned with respect to one another.
Fig. 4 further depicts camera 214 positioned to acquire images of at least the projected images 407, and computing device 201 in communication with projectors 207 and camera 214. Indeed, computing device 201 and/or controller 220 can control projectors 207 to project the projected images 407, and can control camera 214 to acquire camera images 414.
Each projected image 407 can comprise one or more structured light images, used, for example, to generate point correspondences and/or mappings and/or corresponding models between projectors 207 and camera 214 and/or surface 215, and/or point correspondences and/or mappings and/or corresponding models between the first projector 207-1 and the second projector 207-2. For example, attention is directed to Fig. 5, which depicts two structured light images 507-1, 507-2 (referred to interchangeably hereafter as structured light images 507) that can be projected as the one or more projected images 407. Structured light image 507-1 comprises a pattern of horizontal stripes, and structured light image 507-2 comprises a pattern of vertical stripes. Each structured light image 507 can be projected in sequence, for example first by projector 207-1 as projected image 407-1 and then by projector 207-2 as projected image 407-2.
As each projector 207 projects the structured light images 507, camera 214 acquires corresponding camera images 414, controller 220 receives the camera images 414, and controller 220 generates correspondences and/or a corresponding model between pixels and/or regions in the camera images 414 and the structured light images 507 (for example, in a respective projector region of each projector 207).
In other words, controller 220 controls the hardware pixels (for example, pixel modulators) of each projector 207 to project image pixels of a given structured light image 507, and controller 220 identifies the corresponding pixels and/or regions in the camera images 414, which are associated with particular hardware pixels of the imaging device of camera 214. In this manner, each projected image 407 is mapped to a camera region, and/or, by transitively applying the pixel correspondences and/or corresponding models, the pixels of one projector 207 can be mapped to the pixels of another projector 207 (for example, from one projection region to another projection region).
For such a corresponding model, each projector pixel should have a corresponding point, given, for example, the resolution of the camera images 414; however, due to processing constraints (such as time and the like) and/or measurement error and/or mapping error, some of the correspondences (for example, in the corresponding model) can be filtered out, for example based on thresholds and/or as outlier points and the like.
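The following sketch illustrates, under stated assumptions, how captured structured light images could be decoded into such a correspondence, including the thresholding out of unreliable points mentioned above. It assumes binary-coded stripe patterns captured together with their inverses (one pair per bit); the stripe encoding actually used by the system is not specified in this description.

```python
import numpy as np

def decode_axis(captures, inverse_captures, min_contrast=10):
    """Decode one projector coordinate (e.g. column index) for every camera pixel."""
    shape = captures[0].shape
    code = np.zeros(shape, dtype=np.int32)
    valid = np.ones(shape, dtype=bool)
    for img, inv in zip(captures, inverse_captures):
        diff = img.astype(np.int32) - inv.astype(np.int32)
        valid &= np.abs(diff) >= min_contrast        # threshold out low-confidence pixels
        code = (code << 1) | (diff > 0).astype(np.int32)
    code[~valid] = -1                                # treat as outliers / unlit pixels
    return code

# columns = decode_axis(vertical_stripe_captures, vertical_stripe_inverses)
# rows    = decode_axis(horizontal_stripe_captures, horizontal_stripe_inverses)
# Corresponding model: camera pixel (y, x) -> projector pixel (rows[y, x], columns[y, x]).
```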
In any event, controller 220 is generally configured to use a common region, which can include, but is not limited to, one or more of: a region of one of the first projector 207-1 and the second projector 207-2; a camera region; a screen region (for example, when the physical geometry of surface 215 is configured at controller 220); an arbitrary mathematical region; and a sensor region (for example, in embodiments in which sensors for sensing the projected images 407 are embedded in and/or adjacent to surface 215, and the like). In other words, any region to which pixel positions and/or regions of the projected images 407 and the camera images 414 can be mutually mapped is within the scope of the present embodiments.
In particular, when the one or more sensors of system 200 used to acquire images comprise one or more cameras, such as camera 214, the first image and the second image of block 301 of method 300 comprise respective camera images, for example camera images 414, and the common region can comprise a camera region. Furthermore, the camera region can be used to map images from one projector 207 to a region of another projector 207, and the like.
In particular, when the one or more sensors of system 200 used to acquire images comprise a plurality of light sensors embedded in, or adjacent to, a surface (the surface onto which the first projected image 407-1 and the second projected image 407-2 are projected, such as surface 215), the first image and the second image of block 301 of method 300 comprise respective sensor images, and the common region comprises a sensor region. Furthermore, and similarly, the sensor region can be used to map images from one projector 207 to a region of another projector 207, and the like.
Attention is next directed to Fig. 6, which depicts a common region CS, for example a camera region onto which pixel corresponding models I1, I2 of the projected images 407-1, 407-2 have been mapped. In other words, the position of pixel corresponding model I1 corresponds to the position of projected image 407-1, and the position of pixel corresponding model I2 corresponds to the position of projected image 407-2. Hence, the positions of the pixel corresponding models I1, I2 in the common region CS represent an implementation of block 301 of method 300, in which controller 220 determines, in the common region CS, the respective initial positions of the first projected image 407-1 and the second projected image 407-2 (for example, the positions represented by the pixel corresponding models I1, I2) from the first image (for example, from camera 214) and the second image (for example, from camera 214).
Fig. 6 also depicts an implementation of block 303 of method 300, in which controller 220 determines, in the common region CS, a virtual projected image V located at a common target position in the common region CS, the common target position being a common target position of the first projected image 407-1 and the second projected image 407-2 represented by the pixel corresponding models I1, I2. In other words, to stack the first projected image 407-1 and the second projected image 407-2, the first projected image 407-1 and the second projected image 407-2 are to be moved to the common target position represented by the virtual projected image V.
Hence, at block 305 of method 300, controller 220 determines, based on the common target position of the virtual projected image V, respective transforms W1, W2 for the first projected image 407-1 and the second projected image 407-2, to move the first projected image 407-1 and the second projected image 407-2. The transforms W1, W2 are indicated by the corresponding arrows in Fig. 6, which show that the transforms W1, W2 are to transform and/or move the first projected image 407-1 and the second projected image 407-2 from their initial positions to the position of the virtual projected image V. As depicted, the virtual projected image V is warped relative to the projected images 407 (and/or the pixel corresponding models I1, I2); hence, the transforms W1, W2 include warping of the first projected image 407-1 and the second projected image 407-2, the warping including, but not limited to, respective keystoning of the first projected image 407-1 and the second projected image 407-2.
Furthermore, as depicted, the common target position represented by the virtual target image V is located between the initial position of the first projected image 407-1 and the initial position of the second projected image 407-2. In some embodiments, the common target position is an average of the initial position of the first projected image 407-1 and the initial position of the second projected image 407-2.
However, in other embodiments, the common target position can be located at the position of one of the first projected image 407-1 and the second projected image 407-2 (for example, one projected image 407 is moved, with or without warping, to the position of the other projected image 407). For example, the virtual target image V can be located at one of the initial position of the first projected image 407-1 and the initial position of the second projected image 407-2, i.e. at one of the depicted pixel corresponding models I1, I2, and this can be expressed using a continuously adjustable parameter. For example, when determining the common target position, an adjustable parameter between 0 and 1 can be used, where "0" represents the position of the first projected image 407-1 (for example, the position of the first projected image 407-1 is the common target position and/or "V=I1") and "1" represents the position of the second projected image 407-2 (for example, the position of the second projected image 407-2 is the common target position and/or "V=I2"). When the continuously adjustable parameter is equal to 0.5, the common target position is the average of the positions of the projected images 407-1, 407-2. The continuously adjustable parameter is described in further detail below.
Attention is next directed to Fig. 7, which depicts computing device 201 and/or controller 220 (for example, at block 307 of method 300) applying the respective transforms W1, W2 to a further first projected image 706-1 and a further second projected image 706-2 to respectively produce a transformed first projected image 707-1 and a transformed second projected image 707-2. Fig. 7 further depicts computing device 201 and/or controller 220 controlling (for example, at block 309 of method 300) the first projector 207-1 and the second projector 207-2 to respectively project the transformed first projected image 707-1 and the transformed second projected image 707-2 based on the common target position, such that the projected transformed first projected image 707-1 and transformed second projected image 707-2 are stacked on surface 215. The projection is indicated by dash-dot lines in Fig. 7. Furthermore, had the transforms W1, W2 not been applied to the further projected images 706-1, 706-2, the further projected images 706-1, 706-2 would have been projected at the respective unstacked positions of the projected images 407-1, 407-2. Moreover, the positions of the transformed first projected image 707-1 and the transformed second projected image 707-2 on surface 215 correspond to the common target position of the virtual target image V in the common region CS.
Referring again to Fig. 1, in some embodiments, the transforms W1, W2 are provided, in the form of alignment data 109a, to rendering device 101 and/or content player 103 and/or warping module 113 and/or device 108, such that one or more of rendering device 101, content player 103, warping module 113 and device 108 warps the content to be projected by projectors 107. In other words, the further projected images 706-1, 706-2 represent content to be projected, stored and/or generated at rendering device 101, such as video and the like, and the transformed first projected image 707-1 and the transformed second projected image 707-2 represent the images to be projected, for example the further projected images 706-1, 706-2 as altered by the warping of the respective transforms W1, W2 such that, when projected, they are stacked.
Referring again to Fig. 7, indeed, computing device 201 and/or controller 220 can be further configured to control the first projector 207-1 and the second projector 207-2 to respectively project the transformed first projected image 707-1 and the transformed second projected image 707-2 based on the common target position by adjusting the image input to the first projector 207-1 and the second projector 207-2.
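For illustration, adjusting the image input can amount to applying the previously determined transform to every further frame before it reaches the corresponding projector. The sketch below assumes the transforms are 3x3 homographies and uses a hypothetical send_to_projector callback in place of the actual output path (content player, warping module or rendering device).

```python
import cv2

def play_aligned(frames, transforms, projectors, out_size, send_to_projector):
    """Warp each further frame (e.g. images 706-1, 706-2) per projector and output it."""
    for frame in frames:
        for W, projector in zip(transforms, projectors):
            warped = cv2.warpPerspective(frame, W, out_size)  # transformed image 707-i
            send_to_projector(projector, warped)
```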
Persons skilled in the art will appreciate that there are yet further alternative embodiments and modifications possible. For example, while the transformed first projected image 707-1 and the transformed second projected image 707-2 are projected in alignment, in some embodiments the transformed second projected image 707-2 can be embedded in the transformed first projected image 707-1, and vice versa. In other words, in these embodiments, stacking can occur when one of the transformed projected images 707-1, 707-2 is embedded in the other (for example, the transformed first projected image 707-1 can include a blank region, and the transformed second projected image 707-2 is stacked into the blank region).
Persons skilled in the art will appreciate that many further variations and alternative embodiments are possible. For example, while the transformed first projected image 707-1 and the transformed second projected image 707-2 are projected in alignment at a common target position located between the initial positions of the projected images 407, in some embodiments the virtual projected image to which the images 707 are aligned can itself be further transformed, for example by a predefined keystone and/or warp and/or another type of distortion and/or transformation. In other words, in these embodiments, the transformed images are aligned to a predefined function of the corresponding models in order to produce the stacking. Similarly, the corresponding model of one or more of projectors 207 can represent the result of some prior transformation, rather than a correspondence involving the inherent pixels (hardware pixels) of the projector hardware. For example, in the case of two projectors, the second projector can be aligned to the warped pixel positions of the first projector, i.e. the first projector has itself been warped in advance. Such a configuration can allow, for example, a manual alignment of one projector to achieve an "artistic" effect (through a prior corresponding model), with additional stacked projectors then being automatically aligned to the first projector, thereby reducing manual labour.
Furthermore, method 300 can be extended to more than two projectors 207. For example, system 200 can further comprise one or more additional projectors 207 for projecting one or more additional projected images 407 stacked on the first projected image 407-1 and the second projected image 407-2, the one or more additional projected images 407 being initially unaligned with the first projected image 407-1 and the second projected image 407-2; the at least one sensor (such as camera 214) is further for acquiring corresponding images of the one or more additional projected images 407; and computing device 201, in communication with the one or more additional projectors 207, is further configured to: determine additional respective transforms for moving the one or more additional projected images to the common target position (for example, the position represented by the virtual projected image V in the common region, with corresponding additional pixel corresponding models representing the one or more additional projected images); and control each of the one or more additional projectors 207 (for example, in addition to projectors 207-1, 207-2) to respectively project, based on the common target position, a projected image transformed by the additional respective transforms.
Furthermore, in these embodiments, the common target position can be a median of the respective initial positions of the first projected image 407-1, the second projected image 407-2 and the one or more additional projected images 407 (for example, the images projected by the additional projectors 207, which are otherwise similar to the projected images 407-1, 407-2).
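A minimal sketch of the median common target position for n projectors follows, assuming each initial position is represented as an array of corresponding points of the same shape; the per-point median is less sensitive to a single badly misplaced projector than the mean.

```python
import numpy as np

def median_target(initial_positions):
    """initial_positions: list of n arrays (one per projector), all the same shape."""
    return np.median(np.stack(initial_positions, axis=0), axis=0)
```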
In particular, the methods described herein can be expressed mathematically as follows.
Assume a plurality of projectors P1, P2, P3...Pn, which represent n projectors (generically referred to as a projector Pi, where i is any integer from 1 to n).
Furthermore, the respective pixel corresponding models of projectors P1, P2, P3...Pn in the common region are I1, I2, I3...In.
In public target region, a series of corresponding virtual projection target images of projector P1, P2, P3...Pn are determined
For V1, V2, V3...Vn.Each virtual projection target image can be determined as any or all pixel correspond to model I1,
The function of I2, I3...In.Such as: V1=f1 (I1, I2, I3...In), V2=f2 (I1, I2, I3...In) ... Vn=fn
(I1,I2,I3...In)。
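As a concrete sketch of this notation (the representation and helper names are assumptions for illustration, not prescribed by the application), each correspondence model Ii can be held as a coordinate array and each fi as an ordinary function over the list of models:

```python
import numpy as np

def make_virtual_targets(models, fns):
    """models: [I1, ..., In] as coordinate arrays; fns: [f1, ..., fn], each
    mapping the full list of models to one virtual projection target Vi.
    Returns [V1, ..., Vn]."""
    return [f(models) for f in fns]

# Example: every projector targets the average of all initial positions,
# i.e. Vi = (I1 + I2 + ... + In) / n for every i.
average = lambda models: np.mean(np.stack(models, axis=0), axis=0)
# targets = make_virtual_targets(models, [average] * len(models))
```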
It is further understood that the virtual projection target images V1, V2, V3...Vn can all be located at the common target position when the images are to be stacked; alternatively, one or more of the virtual projection target images V1, V2, V3...Vn can be at different positions.
From these functions, the correspondent transform Wi to be applied at projector Pi can be determined, so that projector Pi projects its image according to the corresponding virtual projection target image Vi rather than according to its pixel correspondence model Ii (which represents, for example, the initial, non-stacked position).
In some embodiments, the correspondent transform Wi may include an additional transform Ki applied to each virtual projection target image Vi, so that the correspondent transform Wi causes the image projected by projector Pi to be a distortion-changed image (as determined by the additional transform Ki), the distortion-changed image including, but not limited to, a keystone-distorted image.
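The application does not tie the correspondent transform Wi to a particular parametric form; purely as an illustration, a warp could be fitted by least squares from sampled positions of the correspondence model Ii to the matching positions of the virtual target Vi. A minimal sketch, assuming matched (N, 2) coordinate arrays and an affine model:

```python
import numpy as np

def fit_affine_warp(src, dst):
    """Least-squares affine warp taking src points to dst points, where src
    holds sampled positions of a projector's correspondence model Ii and dst
    the matching positions of its virtual target Vi (both (N, 2) arrays of
    common-region coordinates). Returns a 2x3 matrix A with dst ~ [src, 1] @ A.T."""
    X = np.hstack([src, np.ones((src.shape[0], 1))])   # (N, 3) homogeneous src
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)        # (3, 2) solution
    return A.T                                         # (2, 3) affine matrix
```

A projective (homography) or freeform warp could equally be fitted; the affine form is used here only to keep the sketch short.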
The pixel correspondence model Ii is determined, for example, in the camera region (i.e. the common region); as described above, the pixel correspondence model Ii can be generated using camera images. In addition, a pixel correspondence for each pixel of each projector Pi can be mapped to a position in the camera region. As described above, these images/correspondences can be determined by acquiring a series of structured light test images projected by projector Pi (for example, structured light images 507-1, 507-2, and so on).

In any case, the correspondence model Ii is represented in the common region, for example as the pixel correspondence model used by controller 220 and/or stored in memory 222. Other examples of common regions falling within the scope of the present embodiments include, but are not limited to, a 3D world region, a 2D screen region, and so on. Furthermore (alternatively), a camera need not be used: multiple light sensors can be placed in the screen and, similarly, structured light images can then be used to determine which projector pixel(s) are projected onto each light sensor.
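The application leaves the specific structured-light coding unspecified; as one common approach (an assumption here, not a statement of the claimed method), binary Gray-code patterns and their inverses can be decoded per camera pixel into a projector coordinate, yielding a sampled correspondence between camera and projector pixels:

```python
import numpy as np

def decode_gray_code(captures, inverses):
    """captures/inverses: lists of HxW float arrays, one per bit plane, holding
    camera images of a Gray-code pattern and its inverse pattern.
    Returns, per camera pixel, the decoded projector column (or row) index."""
    bits = [(img > inv) for img, inv in zip(captures, inverses)]  # per-pixel bit
    # Gray code to binary: b[0] = g[0], b[k] = b[k-1] XOR g[k]
    binary = [bits[0]]
    for g in bits[1:]:
        binary.append(np.logical_xor(binary[-1], g))
    coord = np.zeros(bits[0].shape, dtype=np.int64)
    for b in binary:                       # pack bits, most significant first
        coord = (coord << 1) | b.astype(np.int64)
    return coord
```

Running the decode once with column patterns and once with row patterns gives, for each camera pixel, the projector pixel illuminating it; inverting that mapping gives the camera-region position associated with each projector pixel, i.e. a sampled form of Ii.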
Similarly, the common region used can be arbitrary, including but not limited to: the camera region, which is easy to implement; the projection region of one of the projectors Pi; and a screen region, which can be two-dimensional or three-dimensional when a screen model is used and/or implied.

Since the type of movement and transformation depends on the positions and shapes of the pixel correspondence models Ii as represented in the common region, the functions fi(I1, I2, I3...In) and the additional transforms Ki depend on the common region used. However, when the relative positions of the camera 214 and the projectors 207 are "close" with respect to the size of the whole system, the camera region and the projection regions may be similar, and may further produce similar results.
As described above, when the images projected by each projector Pi are stacked, the virtual projection target images V1, V2, V3...Vn are all equal and/or equal to a common virtual projection target image V: V1=V2=...=Vn=V (and likewise f1=f2=...=fn=f).

When the stack occurs at the average of the initial positions of the projected images, each virtual projection target image Vi=fi(I1,...,In)=(I1+I2+...+In)/n. Hence, when there are only two projectors (i.e. n=2), as in system 200, Vi=fi(I1, I2)=(I1+I2)/2.

In addition, when the common virtual projection target image V is equal to one of the inputs, i.e. to one of the correspondence models (for example, the initial position of one of the projected images), each virtual projection target image Vi=fi(I1,...,In)=IL, where L is one of 1...n.
As described above, when two images are projected, the correspondent transforms may include a continuously adjustable parameter for moving the first projected image and the second projected image, respectively, to the common target position. For example, assuming two projectors (i.e. n=2) with two corresponding pixel correspondence models I1, I2, a continuously adjustable parameter α, reflecting where between the pixel correspondence models I1, I2 the stack is located, can determine each virtual projection target image Vi=fi(α; I1, I2)=αI1+(1-α)I2.

When the common target position is the average, α=0.5 (so that V1=V2=0.5I1+0.5I2). When the common target position is the position of one of the initial projected images, α=0 or α=1. For example, when α=1 the common target position is located at the position of the initial image of the first projector, and V1=V2=I1; when α=0 the common target position is located at the position of the initial image of the second projector, and V1=V2=I2.
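A minimal sketch of the continuously adjustable parameter, again assuming the correspondence models are sampled as coordinate arrays on a shared grid (an illustrative representation, not mandated by the application):

```python
import numpy as np

def virtual_target(I1, I2, alpha=0.5):
    """Blend two correspondence models (coordinate arrays on a shared grid)
    into a virtual projection target: V = alpha*I1 + (1 - alpha)*I2.
    alpha=0.5 stacks at the average position; alpha=1.0 stacks on the first
    projector's initial position; alpha=0.0 on the second projector's."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    return alpha * I1 + (1.0 - alpha) * I2
```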
When the stacking includes insetting, the image of one projector can be warped into a sub-image of the other projector, expressed as: V1=f1(I1, I2)=I1, and V2=f2(I1, I2)=a sub-image of I1.
Keystone correction in the stacked two-projector example can also be described as follows. Using the common virtual projection target V, an additional keystone transform K is applied to keystone the common virtual projection target V, so that the virtual projection target image becomes K(V); the correspondent transforms W1 and W2 determined for projectors P1 and P2 then stack the images projected by each projector onto the keystoned common virtual projection target V.

The keystone example above can be generalized to other additional transforms, for example by defining K(V) as any predefined transform of the virtual projection image. Alternative functions K(V) can include, but are not limited to, an aspect-ratio correction, a two-dimensional spline (warp) applied to the target V, and so on.
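As an illustrative sketch (assuming K is expressed as a 3x3 homography over common-region coordinates, which the application does not require), the keystoned target K(V) can be computed and then used in place of V when fitting the correspondent transforms W1 and W2:

```python
import numpy as np

def apply_keystone(V, K):
    """Apply a 3x3 homography K (e.g. a keystone) to a virtual projection
    target V given as an (N, 2) array of common-region coordinates,
    returning the keystoned target K(V) with the same shape."""
    pts = np.hstack([V, np.ones((V.shape[0], 1))])  # homogeneous coordinates
    warped = pts @ K.T                              # apply the homography
    return warped[:, :2] / warped[:, 2:3]           # back to Cartesian
```

For example, W1 and W2 could then be fitted against apply_keystone(V, K) rather than against V, using a warp-fitting step such as the fit_affine_warp sketch above.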
This application describes methods and systems for stacking projected images, in which initial projected images are transformed, using a common region, into a virtual projection image at a common target position of the common region, and the determined transforms are used to cause additional projected images to be stacked.
Those skilled in the art will appreciate that, in some embodiments, the functionality of devices 101, 105, 108, 201 and content player 103 can be implemented using pre-programmed hardware or firmware elements (for example, application-specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), etc.) or other related components. In other embodiments, the functionality of devices 101, 105, 108, 201 and content player 103 can be achieved using a computing device that has access to a code memory (not shown) storing computer-readable program code for operation of the computing device. The computer-readable program code can be stored on a computer-readable storage medium that is fixed, tangible and readable directly by these components (for example, a removable diskette, CD-ROM, ROM, hard disk or USB drive). Further, it is appreciated that the computer-readable program code can be stored as a computer program product comprising a computer-usable medium. Further, a persistent storage device can comprise the computer-readable program code. Further, the computer-readable program code and/or the computer-usable medium can comprise non-transitory computer-readable program code and/or a non-transitory computer-usable medium. Alternatively, the computer-readable program code can be stored remotely but transmitted to these components via a modem or other interface device connected to a network (including, without limitation, the Internet) over a transmission medium. The transmission medium can be a non-mobile medium (for example, optical and/or digital and/or analog communication lines) or a mobile medium (for example, microwave, infrared, free-space optical or other transmission schemes) or a combination thereof.
Those skilled in the art will appreciate that many more alternative embodiments and modifications are possible within the scope of the invention, and that the embodiments of the invention are not limited to the examples disclosed above. The scope of the invention is limited only by the appended claims.
Claims (20)
1. A system, comprising:
a first projector for projecting a first projected image;
a second projector for projecting a second projected image stacked on the first projected image, the first projected image and the second projected image being initially misaligned;
at least one sensor for acquiring a first image of the first projected image and a second image of the second projected image; and
a computing device in communication with the first projector, the second projector and the at least one sensor, the computing device configured to:
determine, in a common region, respective initial positions of the first projected image and the second projected image from the first image and the second image;
determine, in the common region, a virtual projection image located at a common target position of the first projected image and the second projected image in the common region;
determine correspondent transforms for the first projected image and the second projected image, the correspondent transforms respectively moving and transforming the first projected image and the second projected image, one or more times, to the common target position according to the virtual projection image;
apply the correspondent transforms to an additional first projected image and an additional second projected image to respectively form a transformed first projected image and a transformed second projected image; and
control the first projector and the second projector to respectively project the transformed first projected image and the transformed second projected image based on the common target position.
2. The system of claim 1, characterized in that the correspondent transforms comprise distortion changes of the first projected image and the second projected image.
3. The system of claim 2, characterized in that the distortion changes comprise a predefined distortion applied to the virtual projection image located at the common target position.
4. The system of claim 1, characterized in that the common target position is located between the initial position of the first projected image and the initial position of the second projected image.
5. The system of claim 1, characterized in that the common target position is an average of the initial position of the first projected image and the initial position of the second projected image.
6. The system of claim 1, characterized in that the virtual projection image is positioned at the common target position using a continuously adjustable parameter.
7. The system of claim 1, characterized in that the computing device is further configured to control the first projector and the second projector to respectively project, based on the common target position, the transformed first projected image and the transformed second projected image by adjusting image inputs of the first projector and the second projector.
8. The system of claim 1, characterized in that the system further comprises one or more additional projectors projecting one or more additional projected images stacked with the first projected image and the second projected image, the one or more additional projected images being initially misaligned with the first projected image and the second projected image, the at least one sensor being further configured to acquire corresponding images of the one or more additional projected images, and the computing device being in communication with the one or more additional projectors, the computing device further configured to:
determine additional correspondent transforms that move the one or more additional projected images to the common target position; and
control each of the one or more additional projectors to respectively project, based on the common target position, a transformed projected image by applying the additional correspondent transforms.
9. The system of claim 8, characterized in that the common target position is a median of the initial positions of the first projected image, the second projected image and the one or more additional projected images.
10. The system of claim 1, characterized in that the at least one sensor comprises one or more cameras, and the first image and the second image comprise corresponding camera images.
11. The system of claim 10, characterized in that the common region comprises a camera region.
12. The system of claim 1, characterized in that the at least one sensor comprises one or more light sensors embedded in or adjacent to a surface onto which the first projected image and the second projected image are projected, and the first image and the second image comprise corresponding sensor images.
13. The system of claim 12, characterized in that the common region comprises a sensor region.
14. The system of claim 1, characterized in that the common region comprises one of: a projection region of one of the first projector and the second projector; a camera region; a screen region; and any mathematical region.
15. A method, comprising:
at a computing device in communication with a first projector, a second projector and at least one sensor, the first projector for projecting a first projected image, the second projector for projecting a second projected image stacked on the first projected image, the first projected image and the second projected image being initially misaligned, and the at least one sensor for acquiring a first image of the first projected image and a second image of the second projected image:
determining, in a common region, respective initial positions of the first projected image and the second projected image from the first image and the second image;
determining, in the common region, a virtual projection image located at a common target position of the first projected image and the second projected image in the common region;
determining correspondent transforms for the first projected image and the second projected image, the correspondent transforms respectively moving and transforming the first projected image and the second projected image, one or more times, to the common target position according to the virtual projection image;
applying the correspondent transforms to an additional first projected image and an additional second projected image to respectively form a transformed first projected image and a transformed second projected image; and
controlling the first projector and the second projector to respectively project the transformed first projected image and the transformed second projected image based on the common target position.
16. The method of claim 15, characterized in that the correspondent transforms comprise distortion changes of the first projected image and the second projected image, the distortion changes comprising a predefined distortion applied to the virtual projection image located at the common target position.
17. The method of claim 15, characterized in that the common target position is one or more of: between the initial position of the first projected image and the initial position of the second projected image; and an average of the initial position of the first projected image and the initial position of the second projected image.
18. The method of claim 15, characterized in that the virtual projection image is positioned at the common target position using a continuously adjustable parameter.
19. The method of claim 15, further comprising controlling, by adjusting image inputs of the first projector and the second projector, the first projector and the second projector to respectively project the transformed first projected image and the transformed second projected image based on the common target position.
20. A computer-readable storage medium storing computer instructions which, when read from the storage medium and executed by a computer, cause the computer to perform the method of any one of claims 15 to 19.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810382102.2A CN110418120A (en) | 2018-04-26 | 2018-04-26 | The system and method for stacking projector alignment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110418120A true CN110418120A (en) | 2019-11-05 |
Family
ID=68345526
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810382102.2A Pending CN110418120A (en) | 2018-04-26 | 2018-04-26 | The system and method for stacking projector alignment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110418120A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111083457A (en) * | 2019-12-27 | 2020-04-28 | 成都极米科技股份有限公司 | Method and device for correcting projection images of multiple light machines and projection instrument of multiple light machines |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1543207A * | 2003-05-02 | 2004-11-03 | Seiko Epson Corp. | Image processing system, projector and image processing method |
US7097311B2 (en) * | 2003-04-19 | 2006-08-29 | University Of Kentucky Research Foundation | Super-resolution overlay in multi-projector displays |
US8045006B2 (en) * | 2009-07-10 | 2011-10-25 | Seiko Epson Corporation | Method and apparatus for determining the best blending of overlapped portions of projected images |
US20120300044A1 (en) * | 2011-05-25 | 2012-11-29 | Thomas Clarence E | Systems and Methods for Alignment, Calibration and Rendering for an Angular Slice True-3D Display |
CN105308503A (en) * | 2013-03-15 | 2016-02-03 | 斯加勒宝展示技术有限公司 | System and method for calibrating a display system using a short-range camera |
CN106228527A (en) * | 2010-11-15 | 2016-12-14 | 斯加勒宝展示技术有限公司 | Utilize manually and semi-automated techniques calibrates the system and method for display system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110336987B (en) | Projector distortion correction method and device and projector | |
CN106454291B (en) | System and method for automatic registration and projection mapping | |
CN110191326B (en) | Projection system resolution expansion method and device and projection system | |
US9357206B2 (en) | Systems and methods for alignment, calibration and rendering for an angular slice true-3D display | |
US9693049B2 (en) | Projection mapping video pipeline | |
CN104778694B (en) | A kind of parametrization automatic geometric correction method shown towards multi-projection system | |
US10638104B2 (en) | Device, system and method for generating updated camera-projector correspondences from a reduced set of test patterns | |
CN105308503A (en) | System and method for calibrating a display system using a short-range camera | |
US9807359B1 (en) | System and method for advanced lens geometry fitting for imaging devices | |
CN105026997A (en) | Projection system, semiconductor integrated circuit and image correction method | |
WO2023207452A1 (en) | Virtual reality-based video generation method and apparatus, device, and medium | |
WO2017128887A1 (en) | Method and system for corrected 3d display of panoramic image and device | |
CN111062869A (en) | Curved screen-oriented multi-channel correction splicing method | |
JP2015534299A (en) | Automatic correction method of video projection by inverse transformation | |
CN112970044A (en) | Disparity estimation from wide-angle images | |
EP3934244B1 (en) | Device, system and method for generating a mapping of projector pixels to camera pixels and/or object positions using alternating patterns | |
CN110418120A (en) | The system and method for stacking projector alignment | |
JP5249733B2 (en) | Video signal processing device | |
EP3396948B1 (en) | System and method for aligning stacked projectors | |
KR102167836B1 (en) | Method and system for omnidirectional environmental projection with Single Projector and Single Spherical Mirror | |
Yuen et al. | Inexpensive immersive projection | |
KR20210121669A (en) | Method and apparatus for virtual viewpoint image synthesis through triangular based selective warping | |
WO2024154528A1 (en) | Three-dimensional measurement device | |
WO2025070155A1 (en) | Projection system, projection control device, and projection control program | |
JP2025059980A (en) | Projection system, projection control device, and projection control program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20191105 |