Background
With the coming of the information era and the rapid development of the digital culture industry, multimedia technology is applied throughout many industries, which increasingly demand display effects with large pictures, rich colors, high brightness and high resolution: stereoscopic cinemas and conference centers at the large end, game platforms and network television terminals at the small end. The traditional CRT display can hardly meet these requirements, so industrial researchers have begun to build large-screen display equipment.
As early as the 1980s, researchers began to study high-resolution large-screen display technology, constructing display walls from multiple monitors; but the bezels of the monitors broke the continuity of the picture, so such devices cannot really be called high-resolution large-screen displays. After the mid-1990s, with the development of computer technology and projection devices, researchers began to combine computers and projectors into seamless high-resolution large-screen display devices. As research progressed, researchers became able to build a high-resolution multi-projection display device from a cluster of general-purpose computers and commercial projectors by installing software that seamlessly splices the projector pictures.
The method for constructing a seamlessly spliced large screen overlaps the edges of the pictures projected by a group of projectors and, through image processing, splices them into a single bright, ultra-large, high-resolution picture without gaps, so that the final result looks as if it were projected by one projector. When two or more projectors jointly project a picture, part of the projected light overlaps. The main task of seamless splicing is to gradually adjust the light in the overlapped parts of the two projectors so that the brightness and contrast of the overlap area are consistent with the surrounding image; the whole picture then appears complete and uniform, and the human eye cannot distinguish the seams between the projectors.
The construction method of the high-resolution multi-projection display device comprises the following two parts:
Geometric splicing of the projector pictures. In projection display, when the projection screen is a plane and the optical axis of a projector is not perpendicular to the screen, the projected image undergoes perspective deformation, of which trapezoidal (keystone) deformation is one linear case. When the projection screen is a curved surface, such as a sphere or a cylinder, or even two planes meeting at an angle, the projected image deforms nonlinearly even if the optical axis is perpendicular to the screen. Moreover, the images of the projectors are inevitably misaligned and inconsistently scaled. Therefore, geometric alignment of the projector pictures is necessary. In general, the alignment operation must be performed on top of geometric correction of the projector pictures: without geometric correction, the pixel proportions of the two pictures to be aligned are very uneven and originally regular areas are heavily deformed by projection, so the alignment operation cannot be carried out naturally.
Brightness uniformity of the projector pictures. When the projector pictures are projected onto the screen, the pictures of adjacent projectors overlap, producing bright bands on the projection screen that spoil the overall impression of the picture; brightness inconsistency among the projectors also creates visible seams. Together these divide the screen of a multi-projection display device into isolated cells, so the brightness of the projector pictures needs to be adjusted.
Geometric splicing is the fundamental step of the whole construction method; brightness uniformity adjustment can be performed only after geometric splicing is completed.
Large-screen splicing technology has developed to the point where geometric splicing methods can be classified into 3 types according to their level of automation:
1. Mechanical, purely manual correction
The mechanical correction and splicing method mainly involves building a mechanical mount, placing the projector on it, adjusting the projector's angle and position by controlling the mount, and simultaneously adjusting the projector's internal parameters to achieve alignment and splicing.
2. Human-assisted software correction
The manual method usually projects identification stripes with software, stretches the stripes under manual control until they are visually aligned with each other, thereby achieving stitching alignment between projectors; the stripe parameters are recorded by software or by hand and later used for rendering.
3. Software correction method
To increase the automation of correction, researchers have studied a class of software correction methods. For regular projection screens, such as planes, cylinders and spheres, the calculations can be expressed by mathematical formulas, and during correction only the coefficients of the formula need to be solved to obtain the image coordinate transformation relation between the projection screen and the projector. From that relation, the image coordinate transformation relation between projector and projector is easily derived. With these two transformation relations, image distortion correction and image alignment between projectors can be performed.
The other approach does not obtain the image coordinate transformation relation between the projection screen and the projector, but instead the relation between the projector and the camera; the correction result is then specific to one viewpoint, generally located at the shooting position of the camera.
In fact, all the above methods need a camera to shoot the picture on the projection screen. In the first method, mathematical formulas are used to calculate the image coordinate transformation relations T_{s→p}, T_{p→s} between the projection screen and the projector: first T_{c→p}, T_{p→c} and T_{s→c}, T_{c→s} are calculated, and then these two pairs of relations are composed to obtain T_{s→p}, T_{p→s}. The second method directly uses the relations T_{c→p}, T_{p→c}.
Thus, obtaining T_{c→p}, T_{p→c} is a key problem in large-screen splicing technology. When the scale of a multi-projection large-screen system is small, T_{c→p}, T_{p→c} are easy to obtain: the projector projects an image carrying coded information, the camera photographs it, and decoding tells which point in the projection image corresponds to each point in the photo, yielding T_{c→p}, T_{p→c}.
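As a minimal sketch (not the patent's implementation), the composition used by the first method, T_{s→p} = T_{c→p} composed with T_{s→c}, can be expressed in code; the affine coefficients below are invented placeholders, where real maps would come from solving the calibration formulas:

```python
def make_affine(a, b, c, d, tx, ty):
    """Return a 2D affine map (x, y) -> (a*x + b*y + tx, c*x + d*y + ty)."""
    def apply(pt):
        x, y = pt
        return (a * x + b * y + tx, c * x + d * y + ty)
    return apply

def compose(f, g):
    """Return the composition f o g, i.e. pt -> f(g(pt))."""
    return lambda pt: f(g(pt))

# Hypothetical screen->camera and camera->projector maps (placeholder numbers).
T_s_to_c = make_affine(0.5, 0.0, 0.0, 0.5, 10.0, 20.0)
T_c_to_p = make_affine(2.0, 0.0, 0.0, 2.0, -5.0, -8.0)

# First method of the text: T_{s->p} is T_{c->p} applied after T_{s->c}.
T_s_to_p = compose(T_c_to_p, T_s_to_c)
```

A screen point is first mapped into the camera image and then into the projector image; with real calibrated maps the same two-step composition applies.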
However, the pursuit of larger and higher-resolution display devices is endless. As large-screen systems grow, the screen size of the final system exceeds the range a single camera can shoot, and the single-camera method can no longer find T_{c→p}, T_{p→c}. There are two ways to solve this problem. One is to use multiple cameras and stitch their captured results into the picture of a single reference camera. The other is to splice, each time, the projector pictures within the range one camera can shoot, merging adjacent projector pictures into the picture of an already-spliced projector set, gradually increasing the number of spliced projectors until all projector pictures are spliced.
Both methods are only suitable for regular projection screens, and no splicing method suitable for ultra-large-scale irregular screens exists.
Disclosure of Invention
The invention provides a geometric splicing method of a multi-projection large screen system capable of realizing seamless splicing.
A multi-projection large-screen splicing method based on a rotary table comprises the following steps:
(1) Collecting and solving calibration data;
Control a projector to project an image with coded information on the projection screen; control the turntable so that the camera mounted on it faces the screen; photograph the coded image with the camera; and solve the coordinate transformation relation between the coded image the projector projects on the screen and the image shot by the camera.
In operation, the camera is mounted on a two-dimensional turntable (one that can rotate about two axes); the included angles between the X and Y axes of the camera's CCD and the two rotation axes of the turntable are measured and recorded, and the optical center of the camera coincides with the intersection of the extension lines of the two rotation axes of the turntable.
The projector is controlled to project an image with coded information on the projection screen, and the camera photographs the picture the projector forms on the screen. The images shot by the camera are called calibration data; the calibration data comprises not only the photos shot by the camera but also the rotation angle data of the numerically controlled turntable, and calibration data captured by this process contains all the information required for large-screen splicing.
Control the turntable so that the camera faces the screen, and denote this orientation D_0. For an arbitrary projector P_i, turn the camera to an orientation D_i that guarantees it can photograph the entire picture P_i projects on the screen. Control P_i to project the image with coded information and photograph it with the camera; denote this group of images I_i, which carries the coded information of P_i. From I_i, find the coordinate transformation relation between the image coordinates of P_i and the image coordinates of I_i. This transformation relation can be used to obtain projection picture distortion correction data and projection picture alignment data.
Continuing at orientation D_i, photograph the pictures of other projectors near P_i, such as P_j, obtaining a camera image I_j^i. From I_j^i and I_i, the overlap region of the projector pictures can be found.
(2) Correcting distortion of a projection picture;
Using the transformation relation between the image coordinates of P_i and the image coordinates of I_i obtained in step (1), correct the distortion of the projection picture as follows:
Select key points from the camera image and interpolate them to obtain a curved polygon I_1. Apply a similarity transformation (scaling) to I_1 to find a similar curved polygon I_2 in the projection image. Find the corresponding points of I_1's key points in the projection image, deform the key points of I_2 to those corresponding points, and at the same time interpolate the other points of I_2, obtaining an image I_3. I_3 is the projection picture with its distortion corrected.
(3) Aligning the projection pictures;
aligning the projection pictures by using the coordinate transformation relation obtained in the step (1);
According to the rotation angle of the turntable, the internal parameters obtained by camera calibration, and the position in the photo corresponding to the image of P_i, determine the position of P_i's image in the whole projection picture and perform the projection picture alignment operation.
During image solving, the set of sight lines corresponding to each area of the picture is obtained. To align the projection pictures, it suffices to obtain the picture corresponding to this sight-line set; alignment then guarantees that every sight line sees the same content in the pictures of the different projectors.
Alignment of the projection pictures is realized in two types of application. One is a real-time rendering system: a rendering cluster performs distributed rendering, and the rendered pictures are spliced together at projection time. The other projects an already stitched picture onto the projection screen.
(4) Positioning an overlapping area and adjusting the brightness;
Using the coordinate transformation relation obtained in step (1), calculate the regions where images projected by different projectors overlap on the projection screen, and apply a brightness uniformity transition to the overlap regions to finish seamless splicing of the multi-projection large screen. The brightness uniformity transition reduces the brightness of the pictures projected into the overlap region so that the sum of their brightnesses equals the brightness of a single projector's picture before reduction.
While the camera is at orientation D_i, it also photographs the pictures of other projectors that overlap with P_i. When the picture projected by P_j overlaps the picture projected by P_i on the projection screen, control P_j to project an image with coded information and photograph it with the camera; denote this group of images I_j^i, which carries the coded information of P_j. Since I_j^i and I_i are taken by the camera from the same place, the same coordinates in I_j^i and I_i correspond to the same point on the projection screen. The correspondences of points in the camera image to points in the projection images of P_i and P_j are thus obtained, so the region where the images of P_i and P_j overlap can be found; the obtained overlap region is then used for the brightness uniformity transition to finish seamless splicing of the multi-projection large screen.
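The brightness reduction in the overlap region can be sketched as a linear cross-fade; the linear ramp is an illustrative choice (the text only requires that the attenuated brightnesses sum to the original single-projector brightness):

```python
def blend_weights(width, x):
    """Linear brightness ramp across an overlap region `width` pixels wide.
    Returns (w_left, w_right): attenuation factors for the two projectors at
    column x inside the overlap. They always sum to 1, so the combined
    brightness equals one projector's brightness before reduction."""
    t = x / (width - 1)          # 0.0 at the left edge, 1.0 at the right edge
    return (1.0 - t, t)
```

At the left edge only the left projector contributes; at the right edge only the right one; in between the two contributions cross-fade smoothly.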
The whole system for realizing the method comprises a set of shooting hardware equipment with a numerical control rotary table, a set of multi-projection large-screen splicing system and a set of multi-projection large-screen system.
The method involves the following three kinds of images:
Images sent to the projectors, called projection images for short
Images captured by the camera, called camera images for short
The picture formed on the projection screen by a projector, called the projected picture for short
In this method, the camera is mounted on a high-precision numerically controlled turntable, and the orientation of the camera is changed by rotating the turntable, so the camera can cover a larger and more complete area; this guarantees that the camera can photograph every region of the large screen while maintaining quite high precision.
The installation of the camera must satisfy the following two requirements:
(a) The included angles between the X and Y axes of the CCD (the camera's photosensitive sensor) and the two rotation axes of the turntable are obtained through calibration, measurement and recording; or the X and Y axes of the CCD are kept parallel or perpendicular to the two rotation axes (i.e. the included angle is 0 or 90 degrees);
(b) The optical center of the camera coincides with the intersection of the extension lines of the two rotating shafts of the turntable.
The method can complete geometric correction and geometric alignment, can position the overlapping area and prepare for brightness uniformity. The method of the invention has a very high degree of automation.
Detailed Description
Calibration data acquisition and solution
Calibration data acquisition
The projector is controlled to project an image with structured-light codes on the projection screen, and the camera, at a suitable position, photographs the picture the projector forms on the screen. Decoding the captured image yields the correspondence between the photo shot by the camera and the image in the projector; the photo shot by the camera is one part of the calibration data. In the invention, the calibration data comprises not only the photos shot by the camera but also the rotation angle data of the numerically controlled turntable; image data captured by this process contains all the information required for splicing, and this information is obtained by solving the calibration data.
The acquisition of calibration data is carried out according to the following process:
(1) Arrange the turntable and adjust the camera's height and orientation so that the camera's initial position and orientation coincide with the audience's optimal observation position and direction;
(2) Judge whether the camera can photograph the picture of the projector currently to be shot; if not, rotate the turntable until the entire area of the picture projected by the current projector P_i can be photographed. Record the turntable's rotation angles φ_i and α_i relative to the initial orientation in its two dimensions; these two angles are the rotations of the turntable in the horizontal and vertical directions respectively;
(3) After shooting the picture of projector P_i, keep the camera still and photograph the images of other projectors that overlap with P_i; this step does not require that the whole area of those projectors can be photographed;
(4) Photograph the coded images in sequence according to the above steps, until the coded images of all projectors, and of the projectors overlapping them, have been captured.
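The four-step capture procedure above can be sketched as a control loop. The projector ids, the angle table and the crude "nearby" test below are invented stand-ins for real turntable and camera drivers, which the patent leaves to the hardware:

```python
def acquire_calibration_data(projectors, angles):
    """projectors: list of projector ids; angles: id -> (phi, alpha) that
    brings that projector's whole picture into view (step 2). Returns, per
    projector, the recorded turntable angles plus the shots taken."""
    data = {}
    for p in projectors:
        phi, alpha = angles[p]           # step 2: rotate toward P_i
        shots = [("coded", p)]           # shoot P_i's coded images
        # step 3: camera stays put; also shoot overlapping neighbours.
        # "Overlapping" is approximated here by a 15-degree angular window.
        for q in projectors:
            if q != p and abs(angles[q][0] - phi) < 15:
                shots.append(("coded", q))
        data[p] = {"phi": phi, "alpha": alpha, "shots": shots}
    return data

calib = acquire_calibration_data(
    ["P0", "P1", "P2"], {"P0": (0, 0), "P1": (10, 0), "P2": (40, 0)})
```

The recorded (phi, alpha) pairs are exactly the φ_i, α_i of step (2), later consumed by the alignment formulas.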
Calibration data solving
By solving the calibration data, the data for projection picture distortion correction, projection picture alignment, and overlap region positioning can be obtained.
(1) Generation of projection picture distortion correction data:
the projected picture distortion correction data is actually the transformation relation T between the projector image and the coordinates on the camera shot picture c→p ,T p→c . For regular projection screens such as flat surfaces and quadric surfaces, the transformation relationship can be described by a mathematical formula, while for irregular projection screens, the transformation relationship can be determined only by using a key point interpolation method, such as a circle in fig. 2 as a key point, and a dotted line represents a point generated by interpolation. In either method, several sampling points are calculated first to find their corresponding points in two images (camera image and projection image).
Many algorithms exist for finding such correspondences, but structured-light algorithms are the most common. Active feature codes are added to the projected images; after projection and shooting, the points on the photo carry the same feature codes, and points with the same coding information are corresponding points of the photo and the projection image. The present invention uses a time-multiplexed code, which projects a set of images over a period of time such that, combined, each point carries distinct coding information, as reported in "Overview of coded light projection technologies for automatic 3D profiling" by J. Pagès, J. Salvi, R. García and C. Matabosch, ICRA, 2003. For example, the method projects 4 frames at times t0, t1, t2 and t3: the image at t0 contains one black-white stripe pair; at t1 each stripe of t0 is cut in two, so two black-white stripe pairs are projected; and so on, with 4 pairs at t2 and 8 pairs at t3. A point lying in a black stripe of an image is coded 1, and one lying in a white stripe is coded 0. With each of the four images at t0, t1, t2, t3 occupying one binary digit, their combination yields 16 codes from 0000 to 1111, distinguishing 16 image regions. Projecting more images yields more codes and distinguishes more image regions, until the image coordinates of each point are uniquely determined. When the black and white stripes alternate along the horizontal direction, the horizontal coordinate of the corresponding point is obtained; Fig. 3 is a schematic diagram of the abscissa correspondence of corresponding points. Similarly, projecting black and white stripes alternating along the vertical direction gives the ordinate correspondence of corresponding points.
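The time-multiplexed stripe code can be illustrated with a small decoder. The 256-pixel width and the 4 frames are assumptions for the example, and the black = 1 / white = 0 convention follows the text; with 4 frames the decoder recovers which of 16 horizontal regions a column lies in:

```python
WIDTH = 256
FRAMES = 4

def stripe_bit(frame, x):
    """Bit contributed by `frame` (0-based) at column x. Frame k splits the
    width into 2**(k+1) alternating bands: 2, 4, 8, 16 bands for t0..t3."""
    bands = 2 ** (frame + 1)
    band_width = WIDTH // bands
    return (x // band_width) % 2      # alternating 0/1 bands

def decode_column(x):
    """Combine the per-frame bits (t0 most significant) into a region id."""
    code = 0
    for frame in range(FRAMES):
        code = (code << 1) | stripe_bit(frame, x)
    return code
```

With these parameters the recovered code equals x // 16, i.e. the index of the 16-pixel-wide region containing the column; finer frames would pin the coordinate down further.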
(2) Generation of projection screen alignment data:
before the projection picture alignment operation, the positions of the pictures of the projectors on the projection screen and the mutual relations of the pictures of the projectors on the projection screen need to be known. The intersection point of the sight line of the human eyes and the projection screen is the picture content seen in the sight line direction, the picture content of the intersection point is probably the result of the action of a plurality of projection pictures, and the alignment of the projection pictures ensures that the contents of each sight line and a plurality of intersection points of the plurality of projection pictures are the same, so that the geometric continuity between the projection pictures is ensured. The projection picture alignment data in the invention is the corresponding relation between each projection picture and the sight of human eyes.
Fig. 4 shows the photographing setup for the calibration data: the coordinate system O-XYZ is the coordinate system of the human-eye viewpoint, and the straight line PP' is the direction of the camera's optical axis during one shot (the optical axis is the line through the optical center perpendicular to the camera's CCD). When the camera is at its initial position, its optical axis coincides with the Z axis of O-XYZ and its optical center with the origin; since the camera's rotation angles relative to the initial position are known for each shot, the position of the line PP' in O-XYZ is known.
And then, the position of the sight line corresponding to the projection picture on the coordinate system O-XYZ needs to be calculated according to the PP' and the position of the projection picture in the camera image.
As shown in Fig. 5, the outer shaded rectangular area is the camera image and the white area in the center is the photographed image of one projector; the apex of the rectangular pyramid is the optical center of the camera lens, and the point (u_0, v_0) is the intersection of the optical axis with the imaging plane. What must be obtained is the set of sight lines corresponding to the white area, i.e. to the projected picture. In practice, the sight-line set of the region enclosed by the circumscribed rectangle of the white area (the dotted line in the figure) can be obtained first, giving a view frustum, and the sight lines of points inside the white area can then be obtained by interpolation; this greatly reduces the computation and improves the speed of the final system.
The sight-line set of the rectangular area can be represented in the same way as the view frustum in OpenGL; since the camera simulates the human eye, the human-eye view frustum is obtained from the camera parameters and the position of the line PP' in the coordinate system O-XYZ. The calculation proceeds as follows:
determining camera parameters fcx, fcy, u using calibration software 0 ,v 0 And the like.
Whereindx and dy represent the physical size of a pixel on the CCD in the picture, respectively, and f is the distance from the optical center to the CCD.
From the trigonometric relations of Fig. 5 it follows that tan θ = (u_0 − u_1)/fc_x, i.e. θ is the angle between the plane OAB and the optical axis (a half-angle of the view frustum); the angles between the other side faces of the frustum and the optical axis are obtained likewise, while the base of the frustum is perpendicular to the optical axis. This yields the sight-line set corresponding to the rectangular region in the figure.
Next, the sight-line set corresponding to the projected picture is needed. Direct interpolation gives the sight-line direction corresponding to any point of the picture; from this the corresponding position of the rectangular area within the whole projection image is obtained, and interpolating within that position gives the final picture corresponding to the projector. The interpolation method is explained below.
(3) Overlap area positioning data generation:
As shown in Fig. 6, the left image shows projector P_i and the right image shows projector P_j photographed from the same position with the camera unmoved, so the same coordinates in the left and right images correspond to the same point on the projection screen; for example, the same region of the projection screen corresponds to the photo region in the green frame. Therefore the intersection of the white areas of the two images gives the overlap region produced on the projection screen by the projections of P_i and P_j, as imaged in the photo of Fig. 7.
By decoding the coded images, the position of the overlap region of Fig. 7 in the projected pictures of P_i and P_j can be obtained, i.e. the position of the overlap region within the picture of P_i, and likewise within the picture of P_j.
In the same way, the other overlap regions involving P_i can be obtained.
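The overlap localization of Figs. 6-7 reduces to intersecting the lit areas of two photos taken from one camera pose. The tiny grey-value grids and the threshold of 128 below are illustrative assumptions, not values from the patent:

```python
def lit_pixels(photo):
    """Return the set of (row, col) pixels above a threshold. `photo` is a
    2D list of grey values; 128 is an arbitrary illustrative threshold
    separating a projector's lit (white) area from the dark background."""
    return {(r, c)
            for r, row in enumerate(photo)
            for c, v in enumerate(row) if v > 128}

def overlap_region(photo_i, photo_j):
    """Camera pixels covered by both projectors' pictures: since both photos
    share one camera pose, equal coordinates mean equal screen points."""
    return lit_pixels(photo_i) & lit_pixels(photo_j)

# Toy 3x3 photos: projector i lights the left 2 columns, j the right 2.
photo_i = [[200, 200, 0], [200, 200, 0], [0, 0, 0]]
photo_j = [[0, 200, 200], [0, 200, 200], [0, 0, 0]]
```

Decoding the structured-light codes at the intersected pixels then gives the overlap's position inside each projector's own picture.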
Projection picture distortion correction
Projection picture distortion correction makes the picture seen by the viewer free of distortion; Figs. 1 and 2 show the picture observed by the viewer and the projected picture before processing, and the correspondence between points of the distorted picture and the projection image is obtained by calculation. The method for correcting the distortion of the projected picture is divided into 4 steps, as shown in Fig. 8.
(1) Obtain a curved polygon I_1 from the key points in the picture and key-point interpolation; apply a similarity transformation (scaling) to I_1 to find a similar curved polygon I_2 in the projection image;
(2) Take this curved polygon together with the points inside it;
(3) Find the corresponding points of I_1's key points in the projection image; deform the key points of I_2 to those corresponding points and interpolate the other points, obtaining image I_3;
(4) After I_3 is projected, the human eye observes I_1; since I_1 and I_2 are similar, the distortion of the picture no longer exists.
The correction of the distortion of the projector picture is completed through the steps.
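Steps (1) and (3) above can be sketched as follows. The key points, the scale factor and the corresponding points are invented for illustration; real correspondences would come from the decoded structured-light data:

```python
def similarity(points, scale):
    """Step (1): uniform scaling of curved-polygon key points (I_1 -> I_2)."""
    return [(scale * x, scale * y) for x, y in points]

def warp_edge(keys, targets, samples):
    """Step (3) along one polygon edge: key point k is sent to targets[k],
    and a point a fraction t of the way between keys k and k+1 is sent t of
    the way between targets[k] and targets[k+1] (linear interpolation of
    the deformation).  samples: list of (k, t) pairs."""
    out = []
    for k, t in samples:
        (x0, y0), (x1, y1) = targets[k], targets[k + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

i1 = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]       # key points in camera image
i2 = similarity(i1, 100.0)                       # scaled into projection image
# Invented corresponding points for the first two key points, and one
# interpolated point halfway along that edge:
corrected = warp_edge(i1, [(10.0, 5.0), (90.0, 8.0)], [(0, 0.5)])
```

A full implementation would interpolate over the polygon's interior as well, producing the whole deformed image I_3.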
Projection picture alignment
During image solving, the set of sight lines corresponding to each area of the picture is obtained. To align the projection pictures, first obtain the sight-line set corresponding to each projected picture, then obtain the corresponding pictures from the sight-line set so that the intersections of each sight line with the different projected pictures are identical; the pictures then transition smoothly into one another, and since each projected picture is internally continuous, the whole spliced picture is continuous. This completes the alignment of the projected pictures.
The alignment of the projection pictures is realized in two types of application. One is a real-time rendering system: a rendering cluster performs distributed rendering, and the pictures rendered by the rendering nodes are spliced together at projection time. The other cuts an already complete picture, projects the pieces on the projection screen, and thus stitches the picture back into a whole.
For a real-time rendering system, each rendering node drives one projector; the whole system has a single viewpoint and main sight-line direction, hence a unified viewpoint coordinate system, from which the viewpoint and main sight-line direction of each node can be determined. In an OpenGL rendering system, the problem is posed as follows:
it is known that:
(1) The position of the optical axis in the viewpoint coordinate system, obtained during calibration data acquisition: the negative Z direction first rotates about the positive Y axis by the angle φ_i, then rotates about the common perpendicular, through the origin of the viewpoint coordinate system, of the Y axis and the current negative-Z direction, until its angle to the Y axis is α_i. The projection of the unit vector along the optical axis onto the Y axis is cos α_i, its projected length on the ZOX plane is sin α_i, and the further projections onto the X and Z axes are −sin α_i sin φ_i and −sin α_i cos φ_i respectively. That is, the unit vector of the optical axis in the viewpoint coordinate system is

(−sin α_i sin φ_i, cos α_i, −sin α_i cos φ_i)  (Formula 1)

Referring to Fig. 9, the unit vector of the upward direction is found by the same reasoning to be

(sin(90° − α_i) cos φ_i, cos(90° − α_i), sin(90° − α_i) sin φ_i)  (Formula 2)
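Formulas 1 and 2 translate directly into code. The function names `optic_axis` and `up_vector` are chosen here, and the turntable angles are taken in degrees:

```python
import math

def optic_axis(phi_deg, alpha_deg):
    """Formula 1: unit vector of the optical axis in the viewpoint
    coordinate system, from turntable angles phi (horizontal) and alpha."""
    p, a = math.radians(phi_deg), math.radians(alpha_deg)
    return (-math.sin(a) * math.sin(p),
            math.cos(a),
            -math.sin(a) * math.cos(p))

def up_vector(phi_deg, alpha_deg):
    """Formula 2: unit vector of the camera's upward direction."""
    p = math.radians(phi_deg)
    b = math.radians(90.0 - alpha_deg)
    return (math.sin(b) * math.cos(p),
            math.cos(b),
            math.sin(b) * math.sin(p))
```

For example, phi = 0 and alpha = 90 degrees place the optical axis along the negative Z axis with the up direction along positive Y, i.e. the camera's initial orientation.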
(2) The relation between the optical axis and the circumscribed rectangle of the area of the photo occupied by the projected picture. In terms of the imaging pyramid, this relation is: the angle between the left side face of the rectangular pyramid and the optical axis is θ_1, and the angles between the right, lower and upper side faces and the optical axis are θ_2, θ_3, θ_4 respectively.
Solve: the parameters of the gluLookAt function and of the glFrustum function. Result: the eye point of gluLookAt is the common viewpoint of the whole system; its middle three parameters are the world-coordinate values of the viewpoint-coordinate point (−sin α_i sin φ_i, cos α_i, −sin α_i cos φ_i); its last three parameters are the world-coordinate position, after transformation, of the viewpoint-coordinate point (sin(90° − α_i) cos φ_i, cos(90° − α_i), sin(90° − α_i) sin φ_i).
The parameters of the glFrustum function are tied to the near plane: if the distance from the near plane to the viewpoint is d, then the parameters left, right, bottom, top of the function are −d tan θ_1, d tan θ_2, −d tan θ_3, d tan θ_4.
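The glFrustum parameters follow directly from the four half-angles and the near distance d. The sketch below only computes the numbers that would be passed to glFrustum; it does not call OpenGL:

```python
import math

def frustum_params(d, t1_deg, t2_deg, t3_deg, t4_deg):
    """glFrustum's (left, right, bottom, top) from the near distance d and
    the four half-angles theta_1..theta_4 (left, right, bottom, top faces),
    given in degrees: (-d tan t1, d tan t2, -d tan t3, d tan t4)."""
    t1, t2, t3, t4 = (math.radians(t) for t in (t1_deg, t2_deg, t3_deg, t4_deg))
    return (-d * math.tan(t1), d * math.tan(t2),
            -d * math.tan(t3), d * math.tan(t4))
```

With all four half-angles equal to 45 degrees and d = 2, the frustum is the symmetric square (-2, 2, -2, 2), as expected for a 90-degree field of view.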
To project a complete picture after cutting it, one method is to paste the picture on the inner surface of a cylinder or the faces of a cube and display it by the rendering method above; its setup is similar to the real-time rendering application. The other method intersects the sight lines directly with the image S to be displayed, thereby determining the position in S corresponding to each point of the projector picture. The calculation steps are as follows:
(1) Calculate the normals of the 4 side faces of the pyramid formed by the projector's bundle of sight lines. The calculation for the lower and left side faces is described below (see FIG. 10):
The normal vectors of the lower and upper side faces lie in the same plane as OY and OV, where OV is the optical-axis direction of the projector, so the included angle between the normal vector of the lower side face and the Y axis is α_i + θ_3 + 90, from which the normal vector is found to be
(-sin(α_i+θ_3+90) sin φ_i, cos(α_i+θ_3+90), -sin(α_i+θ_3+90) cos φ_i) (formula 3)
The normal vector of the upper side face can be obtained in the same way:
(-sin(α_i-θ_2+90) sin φ_i, cos(α_i-θ_2+90), -sin(α_i-θ_2+90) cos φ_i) (formula 4)
The normal vector of the left side face is the vector
(-sin(θ_1+φ_i), 0, -cos(θ_1+φ_i)) (formula 5)
rotated by an angle α_i around the vector (cos φ_i, 0, -sin φ_i), which gives the vector of
(formula 6)
In the same way, the normal vector of the right side face is obtained as (formula 7).
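The normal calculations of step (1) can be sketched as follows. Formulas 3 and 4 are transcribed directly; since the bodies of formulas 6 and 7 are not reproduced in the source, the rotation helper below uses Rodrigues' rotation formula as an illustrative assumption of how "rotated by an angle α_i around a vector" could be carried out:

```python
import math

def side_normals(alpha_deg, phi_deg, theta2_deg, theta3_deg):
    """Unit normals of the lower and upper side faces of the view pyramid
    (formulas 3 and 4 of the text). All angles in degrees."""
    p = math.radians(phi_deg)

    def normal(angle_deg):
        a = math.radians(angle_deg)
        return (-math.sin(a) * math.sin(p), math.cos(a), -math.sin(a) * math.cos(p))

    lower = normal(alpha_deg + theta3_deg + 90.0)  # formula 3
    upper = normal(alpha_deg - theta2_deg + 90.0)  # formula 4
    return lower, upper

def rotate_about_axis(v, k, angle_deg):
    """Rodrigues' rotation of vector v about unit axis k by angle_deg degrees.
    An assumed realization of the rotation used for the left/right normals."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    dot = sum(vi * ki for vi, ki in zip(v, k))
    cross = (k[1] * v[2] - k[2] * v[1],
             k[2] * v[0] - k[0] * v[2],
             k[0] * v[1] - k[1] * v[0])
    return tuple(vi * c + cri * s + ki * dot * (1.0 - c)
                 for vi, cri, ki in zip(v, cross, k))
```

For the left side face, the formula-5 vector would be passed to `rotate_about_axis` with axis (cos φ_i, 0, -sin φ_i) and angle α_i.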
(2) Calculate the intersection lines of the 4 side faces with the picture to be projected; the area enclosed by these lines is the picture that the projector is to project. The image to be projected may be a curved surface or a plane in the three-dimensional coordinate system; the calculation method is described taking a planar projection image as an example.
The normal vector of the upper side face of the view volume is
(-sin(α_i-θ_2+90) sin φ_i, cos(α_i-θ_2+90), -sin(α_i-θ_2+90) cos φ_i) (formula 8)
so its plane equation is
(-sin(α_i-θ_2+90) sin φ_i)·x + cos(α_i-θ_2+90)·y - sin(α_i-θ_2+90) cos φ_i·z = 0 (formula 9)
The picture to be projected is set on the plane
z = z_1 (formula 10)
Substituting formula 10 into formula 9, their intersection line is
(-sin(α_i-θ_2+90) sin φ_i)·x + cos(α_i-θ_2+90)·y - sin(α_i-θ_2+90) cos φ_i·z_1 = 0, with z = z_1
The intersection lines of the other side faces with the image to be projected can be determined in the same way.
By changing the value of z_1, the picture can be scaled. From these intersection lines, the picture corresponding to the lines of sight within the dotted-line rectangular area in FIG. 5 can be obtained.
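The intersection of a side-face plane with the plane z = z_1 can be sketched as follows (Python for illustration; the function and coefficient names are assumptions, not from the source). Since each side-face plane passes through the viewpoint at the origin, its equation is n·(x, y, z) = 0, and substituting z = z_1 reduces it to a line in that plane:

```python
def intersect_with_plane_z(n, z1):
    """Intersection of the side-face plane A*x + B*y + C*z = 0 (formula 9,
    normal n = (A, B, C), passing through the viewpoint at the origin) with
    the projection plane z = z1 (formula 10). Returns coefficients (A, B, D)
    of the intersection line A*x + B*y + D = 0 within the plane z = z1."""
    A, B, C = n
    return (A, B, C * z1)
```

Applying this to all four side-face normals yields the four lines bounding the region the projector must display.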
(3) After the picture corresponding to the lines of sight in the dotted-line rectangular area is obtained, the picture corresponding to the central white area can be obtained by two-dimensional linear interpolation; stretching it to the size of the projector picture then yields the picture the projector is to project.
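The two-dimensional linear (bilinear) interpolation of step (3) can be sketched as follows (an illustrative helper, not the source's implementation; `corners` and the (u, v) parametrization are assumptions):

```python
def bilerp(corners, u, v):
    """Two-dimensional linear interpolation inside a rectangle.
    corners = (c00, c10, c01, c11): values at the four rectangle corners,
    (u, v) in [0, 1] x [0, 1]: normalized position inside the rectangle."""
    c00, c10, c01, c11 = corners
    return (c00 * (1.0 - u) * (1.0 - v) + c10 * u * (1.0 - v)
            + c01 * (1.0 - u) * v + c11 * u * v)
```

Evaluating this for every destination pixel of the projector picture stretches the sampled region to the projector's resolution.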
Positioning of overlapping area and brightness adjustment
After the overlap region of P_i and P_j is obtained, as shown in FIG. 7, the brightness of the overlap region can be adjusted (reduced). The purpose of the reduction is to make the sum of the brightnesses of the pictures projected by P_i and P_j in the overlap area equal to the brightness that a single projector would project there before the reduction.
As shown in FIG. 11, the left diagram is a schematic of the brightness adjustment of the part of the overlap region belonging to P_i, and the right diagram of the corresponding part belonging to P_j. The values 0 and 1 are brightness-adjustment weights, representing the ratio of the adjusted brightness to the original brightness: 1 means no adjustment, 0 means adjusted to black. The brightness weight increases from the edge of each picture toward its center: within the overlap region, the boundary closest to the image center has weight 1 and the boundary farthest from it has weight 0. The boundary with weight 1 is called the inner boundary and the boundary with weight 0 the outer boundary; the 0 and 1 marked in the diagram indicate the outer and inner boundaries respectively. The brightness weight of a point between these boundaries of the overlap region is calculated from its distance to the boundary as follows:
Let x'_i be a point in the pictures of projectors p_m to p_n, where m ≤ i ≤ n. The brightness weight of x'_i can then be set according to d(x'_i), where d(x'_i) denotes the distance from the point x'_i in the projector picture to the outer boundary of the nearest overlap region, normalized so that 0.0 ≤ d(x'_i) ≤ 1.0.
The brightness weight of each such point is assigned to the corresponding point in the projector picture, giving the brightness weight of every point of the projector (points in non-overlapping areas have weight 1.0). The brightness of the projector picture is reduced by these weights, so that the brightness of the overlap areas of the picture projected onto the screen is finally consistent with that of the other areas.
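The distance-based weight described above can be sketched as follows. The source does not reproduce the exact weight formula, so a linear ramp in the normalized distance d(x'_i) is assumed here; it has the property that two projectors' complementary weights sum to 1, which is exactly the condition stated for the overlap brightness:

```python
def brightness_weight(d):
    """Brightness weight for a point whose normalized distance to the outer
    boundary of the nearest overlap region is d (0 <= d <= 1).
    ASSUMPTION: a linear ramp (weight 0 on the outer boundary, 1 on the inner
    boundary); the source text does not give the exact formula."""
    return min(max(d, 0.0), 1.0)  # clamp, then linear in d
```

With this choice, a point at distance d from one projector's outer boundary sits at distance 1-d from the other's, so the two weights sum to 1 and the combined brightness matches that of a single projector.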