CN118509566B - Projector control parameter analysis method, device and equipment - Google Patents
Projector control parameter analysis method, device and equipment Download PDFInfo
- Publication number
- CN118509566B CN118509566B CN202410775868.2A CN202410775868A CN118509566B CN 118509566 B CN118509566 B CN 118509566B CN 202410775868 A CN202410775868 A CN 202410775868A CN 118509566 B CN118509566 B CN 118509566B
- Authority
- CN
- China
- Prior art keywords
- image
- projection
- target
- projector
- hand
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3182—Colour adjustment, e.g. white balance, shading or gamut
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3185—Geometric adjustment, e.g. keystone or convergence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3188—Scale or resolution adjustment
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Geometry (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Projection Apparatus (AREA)
- Position Input By Displaying (AREA)
Abstract
The application relates to the technical field of image processing and discloses a projector control parameter analysis method, a projector control parameter analysis device and projector control parameter analysis equipment. The method comprises the following steps: the method comprises the steps of performing initial image acquisition on a projection plane of a target projector to obtain a first projection image, and performing geometric and photometric calibration to obtain initialized projection parameters; acquiring a hand interaction image, and performing hand shadow segmentation and linear scanning fusion to obtain a target hand image; performing 3D fingertip position analysis and touch detection to obtain a touch detection result, and performing gesture track reconstruction and gesture pattern matching to obtain gesture pattern information; performing real-time response and collecting a second projection image of the projection plane, and performing feature point change monitoring and image compensation synthesis to obtain a target projection image; the application realizes the dynamic projection parameter response of the projector and improves the image projection display effect of the projector.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for analyzing control parameters of a projector.
Background
With the development of interactive projection technology, users can not only watch high-quality projection images, but also interact with projection contents directly in an interactive mode such as gestures, so that application scenes of the projection technology are greatly enriched, such as education, entertainment, conference demonstration and other fields. However, with the widespread use of interactive projection technology, image processing methods thereof face increasing challenges and demands, especially in terms of image quality control, interactive response speed, and optimization of user interaction experience.
When the existing projector image processing method is used for processing the problems of image deformation, color distortion and the like caused by gesture interaction of users, the ideal effect cannot be achieved. In particular, in complex interaction scenes, such as accurate segmentation of hand shadows, accurate recognition and tracking of gesture actions, and the like, the conventional image processing algorithm is difficult to meet the dual requirements of high precision and real-time performance. In addition, factors such as lens distortion, ambient illumination change and the like can also influence the quality of the projection image, and further increase the difficulty of image processing.
Disclosure of Invention
The application provides a projector control parameter analysis method, a projector control parameter analysis device and projector control parameter analysis equipment.
In a first aspect, the present application provides a projector control parameter analysis method, including:
The method comprises the steps of performing initial image acquisition on a projection plane of a target projector to obtain a first projection image, and performing geometric and photometric calibration on the first projection image to obtain initialized projection parameters of the target projector;
Acquiring a hand interaction image between a target user and the projection plane through the target projector, and performing hand shadow segmentation and linear scanning fusion on the hand interaction image to obtain a target hand image;
3D fingertip position analysis and touch detection are carried out on the target hand image to obtain a touch detection result, and gesture track reconstruction and gesture pattern matching are carried out on the target hand image according to the touch detection result to obtain gesture pattern information;
Responding the gesture mode information in real time through the target projector, collecting a second projection image of the projection plane, and carrying out feature point change monitoring and image compensation synthesis on the second projection image to obtain a target projection image;
And carrying out lens distortion correction and projection parameter optimization on the target projector according to the target projection image, and generating target projection parameters of the target projector.
In a second aspect, the present application provides a projector control parameter analysis apparatus, comprising:
The calibration module is used for carrying out initial image acquisition on a projection plane of the target projector to obtain a first projection image, and carrying out geometric and photometric calibration on the first projection image to obtain initialized projection parameters of the target projector;
The acquisition module is used for acquiring a hand interaction image between a target user and the projection plane through the target projector, and carrying out hand shadow segmentation and linear scanning fusion on the hand interaction image to obtain a target hand image;
The detection module is used for carrying out 3D fingertip position analysis and touch detection on the target hand image to obtain a touch detection result, and carrying out gesture track reconstruction and gesture pattern matching on the target hand image according to the touch detection result to obtain gesture pattern information;
the synthesizing module is used for responding to the gesture mode information in real time through the target projector, collecting a second projection image of the projection plane, and carrying out feature point change monitoring and image compensation synthesis on the second projection image to obtain a target projection image;
And the optimization module is used for carrying out lens distortion correction and projection parameter optimization on the target projector according to the target projection image and generating target projection parameters of the target projector.
A third aspect of the present application provides an electronic device, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the electronic device to perform the projector control parameter analysis method described above.
According to the technical scheme provided by the application, the initialized projection parameters can be effectively obtained by carrying out initial image acquisition and geometric and photometric calibration on the projection plane, and the geometric distortion of the projection image can be obviously improved by detecting and matching the characteristic points and carrying out geometric correction based on an affine transformation matrix, so that the accurate alignment of the projection image and the real restoration of colors are ensured, and the overall projection quality is improved. By utilizing the hand shadow segmentation and linear scanning fusion technology, the target hand image can be accurately segmented and identified, so that not only is the accuracy of gesture identification improved, but also the response capability of the system to complex gestures is enhanced, and the interaction between a user and projection content is more visual and smooth. Through 3D fingertip position analysis and touch detection, and gesture track reconstruction and pattern matching, a gesture instruction of a user can be rapidly and accurately identified, and a projection image can be adjusted in real time according to the gesture instruction. Feature point change monitoring and image compensation synthesis are carried out on the second projection image, so that image deformation or color deviation caused by user interaction can be effectively compensated, and consistency and stability of visual effects of the projection image are ensured. The definition and accuracy of the projection image are improved by comprehensively correcting the lens distortion and comprehensively optimizing the projection parameters of the target projection image. Through the omnibearing image optimization processing, the projector can keep excellent projection performance under various different environments and conditions, so that dynamic projection parameter response of the projector is realized, and the image projection display effect of the projector is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained based on these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an embodiment of a projector control parameter analysis method according to an embodiment of the application;
fig. 2 is a schematic diagram of an embodiment of a projector control parameter analysis apparatus according to an embodiment of the application.
Detailed Description
The embodiment of the application provides a projector control parameter analysis method, a projector control parameter analysis device and projector control parameter analysis equipment. The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present application is described below with reference to fig. 1, and an embodiment of a projector control parameter analysis method in an embodiment of the present application includes:
step 101, carrying out initial image acquisition on a projection plane of a target projector to obtain a first projection image, and carrying out geometric and photometric calibration on the first projection image to obtain initialized projection parameters of the target projector;
it is to be understood that the execution body of the present application may be a projector control parameter analysis device, and may also be a terminal or a server, which is not limited herein. The embodiment of the application is described by taking a server as an execution main body as an example.
Specifically, initial image acquisition is performed on a projection plane of a target projector, so as to obtain a first projection image. And performing calibration and adjustment on the first projection image to ensure that the quality and the projection effect of the image can reach the optimal state. Feature point detection is performed on the first projection image, and a representative feature point set is identified from the image, wherein the feature points represent important information and structures of the image. And obtaining a second characteristic point set of the template projection image, wherein the template image is used as a reference object and comprises characteristic point distribution in an ideal state, and the deviation degree between the current image and the ideal state is estimated by comparing and matching the two characteristic point sets. Feature point matching uses a specific function to quantify the similarity measure between feature points. The function takes into account descriptor vectors of feature points and maps these vectors to a high-dimensional feature space by nonlinear mapping to capture complex similarity relationships. The similarity calculation is based on the similarity degree between the weighted Euclidean distance evaluation feature points, and the measurement standard is adjusted by considering the covariance matrix, so that the matching process is more accurate and reliable. After obtaining the feature point similarity measurement, carrying out geometric correction on the first projection image based on the affine transformation matrix and the feature point similarity measurement. The geometric correction aims at adjusting the shape and position of the image to compensate for distortions and offsets that may occur during projection, resulting in initial geometric correction parameters. And carrying out parameter precision adjustment on the initial geometric correction parameters through the sub-pixel characteristic point positioning function to obtain target geometric correction parameters. The positions of the feature points are precisely positioned at the sub-pixel level by calculating the Hessian matrix of the feature points and the gradient vector of the feature response function, so that the accuracy of geometric correction is greatly improved. And carrying out photometric calibration on the first projection image through local histogram equalization, and adjusting the brightness and contrast of the image, so that the image can be kept clear and consistent under the condition of uneven illumination, and the target photometric calibration parameters are obtained. And generating initialized projection parameters of the target projector according to the target geometric correction parameters and the target photometric calibration parameters, wherein the parameters ensure the accuracy and definition of the projected image and the fluency of user interaction.
102, Acquiring a hand interaction image between a target user and a projection plane through a target projector, and performing hand shadow segmentation and linear scanning fusion on the hand interaction image to obtain a target hand image;
Specifically, an interactive image between the hand and the projection plane is acquired by the target projector. The hand interaction image contains the full view of the user's hand interacting with the projection plane under different lighting conditions, including the hand itself and shadows formed by the hand blocking the projection light. In order to accurately separate the hand image of the user from the interactive images, color feature extraction is performed on the hand interactive images, and average color features representing hand colors are obtained by analyzing color distribution in the images. The structural feature extraction is carried out on the image by calculating the local gradient, the structural difference between the hand and the background and the shadow is captured, and the hand and other parts can be distinguished more accurately by combining the two features. And carrying out hand and shadow segmentation on the hand interaction image according to the average color features and the average structural features. By comparing the deviation of the color and structural features of each pixel in the hand image from the average, it is determined which parts belong to the hand and which parts belong to the shadow. And (3) performing hand shadow segmentation similarity calculation on the hand segmentation image, and evaluating the segmentation quality between the hand and the shadow based on the segmentation effect. And performing linear scanning fusion measurement on the hand segmentation image based on the hand shadow segmentation similarity measurement. The hand segmentation image is scanned row by row or column by column, and the segmentation parameters are dynamically adjusted according to the similarity of the hand and the shadow segmentation, so that the segmentation effect is optimized, and the limit between the hand and the shadow is ensured to be clearer. And processing the hand image according to the result of the linear scanning fusion measurement, wherein the processing comprises the enhancement of the hand and shadow boundary and the refinement of the hand image. Noise points and unnecessary details in the image are removed through a thinning algorithm, so that the finally obtained target hand image is finer and clearer. And finally, accurately separating the hands of the user from the hand interaction image to obtain a target hand image.
Step 103, performing 3D fingertip position analysis and touch detection on the target hand image to obtain a touch detection result, and performing gesture track reconstruction and gesture pattern matching on the target hand image according to the touch detection result to obtain gesture pattern information;
Specifically, 3D fingertip position analysis is performed on the target hand image, and position data of the fingertip in the three-dimensional space are obtained. The coordinates are converted into 3D position data in the real world coordinate system by a 3D fingertip position analysis function according to the pixel coordinates on the image plane and their corresponding depth values, in combination with the rotation matrix and translation vector of the projector. And carrying out touch detection on the fingertip 3D position data based on the touch judging function, and judging whether the finger of the user is contacted with the projection plane or not. The touch decision function is based on the distance of the fingertip from the projection plane, which contains a number of parameters for adjusting the sensitivity of the function. Adjustment of these parameters allows the system to respond to touch actions of different users, whether tapping or pressing, in a highly flexible manner, to be accurately captured. Meanwhile, the touch judgment process also considers the dynamic property of user operation, and the accuracy and the instantaneity of touch detection are ensured through adjustment of the amplitude, the frequency and the phase. And when the touch action is detected, calculating gesture track vectors of the target hand image, and solving integral track vectors during gesture. The calculation of the gesture track vector reflects the overall movement direction and distance of the gesture, and captures the dynamic change of the gesture. By integrating the instantaneous velocity vector between the start and end time points of the gesture, a vector is obtained that fully describes the gesture dynamics. And carrying out similarity calculation on the integral track vector during the gesture and a preset gesture template vector. Based on the angles between the vectors, the similarity between the gesture and the predefined template is quantified by an inverse cosine function. The result of the similarity calculation is a quantified measure that describes the degree of matching between the user gesture and the system's preset gesture templates. And finally, generating gesture mode information of the target hand image according to the similarity calculation result. The information includes the type of gesture, as well as the strength and duration of the gesture, etc.
104, Responding to the behavior mode information in real time through the target projector, collecting a second projection image of the projection plane, and carrying out feature point change monitoring and image compensation synthesis on the second projection image to obtain a target projection image;
Specifically, the target projector is controlled to perform real-time response operation according to gesture mode information of the user, which requires high sensitivity and fast processing capability of the system, so that each change of the gesture of the user can be timely fed back. At the same time, a second projection image on the projection plane is acquired, capturing changes in image content due to user interaction. And monitoring the characteristic point change of the second projection image and the first projection image to obtain the characteristic point change quantity. The feature point variation amount between the first projection image and the second projection image is calculated. The displacement condition of the image feature points is captured by comparing the coordinate changes of the feature points at two moments in the three-dimensional space, including the position changes and the depth direction changes on the two-dimensional plane. And carrying out image rotation and deformation compensation parameter analysis on the second projection image based on the Rodrigues rotation function and the calculated characteristic point variation. By analyzing the displacement condition of the image characteristic points, specific parameters of rotation and deformation compensation required to be carried out on the image are determined so as to correct image distortion caused by user interaction or other external factors. Image rotation and distortion compensation is performed based on image data acquired in real time and accurate feature point variation. And (3) calculating a color adjustment coefficient of the second projection image to obtain the color adjustment coefficient, and ensuring the consistency and naturalness of the image in color. The calculation of the color adjustment coefficient comprises the comparison analysis of the pixel color intensity in the first projection image and the second projection image, and the color change is balanced in a weighting mode so as to achieve the optimal visual effect. And carrying out image compensation synthesis on the second projection image according to the obtained image rotation and deformation compensation parameters and the color adjustment coefficient to obtain a final target projection image. And comprehensively considering the variation of the feature points, the rotational deformation compensation of the image and the color adjustment result, and fusing the processing effects together through an image processing algorithm to generate a final image output.
And 105, performing lens distortion correction and projection parameter optimization on the target projector according to the target projection image, and generating target projection parameters of the target projector.
Specifically, by analyzing the target projection image, various distortions due to the physical characteristics of the lens, including radial distortion and tangential distortion, are identified, both of which can significantly affect the accuracy and sharpness of the image. To correct these distortions, a lens radial distortion correction is performed on the target projector from the target projection image. And calculating corrected image coordinates according to the original coordinates and the distance from the original coordinates to the center of the image by applying a specific radial distortion correction function. The radial distortion coefficient controls the amount of correction for different degrees of radial distortion. The lens tangential distortion correction corrects tangential distortion of the image in the x-axis and y-axis directions by a tangential distortion correction function. The tangential distortion coefficient enables the system to accurately adjust specific tangential distortion characteristics of the image, ensures that the proportion and the position of the image in the horizontal and vertical directions are more accurate, and improves the overall quality of the image. By combining the radial distortion correction parameters and tangential distortion correction parameters, a full parameter calibration model is constructed that contains correction information for the distortion and incorporates an understanding of the optical and digital processing characteristics of the projector itself. And carrying out projection parameter optimization on the initialized projection parameters of the target projector based on the full-parameter calibration model. Factors ranging from distortion correction to color accuracy, brightness uniformity to contrast optimization and the like are considered, so that the target projector can be ensured to generate target projection parameters which are matched with the original image content as much as possible and reach the best performance in visual effect.
And carrying out adjustment range prediction on the initialized projection parameters of the target projector based on the full-parameter calibration model to obtain a projection parameter adjustment range set. And randomly initializing in a predicted adjustment range to generate a plurality of candidate projection parameters. These candidate parameters cover a wide range of possibilities, aimed at finding the best projection parameter settings by experimentation and evaluation. For each set of candidate projection parameters, corresponding test projection images are acquired, and quality evaluation is carried out on the images, wherein the quality evaluation comprises two key indexes of image definition and color accuracy. The image sharpness evaluation is obtained by calculating the total number of pixels in the image and the luminance value and the luminance gradient of each pixel point. The overall brightness level of the image is considered, and the brightness change of each pixel point in the image in the horizontal and vertical directions is analyzed, so that the definition of the image is evaluated, and therefore, the candidate parameters can be identified to generate a projection image with high definition. And meanwhile, evaluating the color accuracy of the test projection image by calculating the average color difference in the CIELAB color space. Color accuracy evaluates the overall color appearance of the image of interest and analyzes the brightness differences, red-green contrast differences, and yellow-blue contrast differences for each pixel point in the image in the CIELAB color space. And comprehensively evaluating all candidate projection parameters based on the image definition evaluation index and the color accuracy evaluation index, and selecting an optimal projection parameter capable of meeting the requirements of high image definition and high color accuracy simultaneously through comparison and analysis. The optimization selection process is based on quantized evaluation indexes, and the mutual influence among parameters and the overall optimization strategy are considered, so that the finally generated target projection parameters can be ensured to improve the quality and viewing experience of the projection image to the greatest extent.
In the embodiment of the application, the initialized projection parameters can be effectively obtained by carrying out initial image acquisition and geometric and photometric calibration on the projection plane, and the geometric distortion of the projection image can be obviously improved by detecting and matching the characteristic points and carrying out geometric correction based on an affine transformation matrix, so that the accurate alignment of the projection image and the true restoration of the color are ensured, and the overall projection quality is improved. By utilizing the hand shadow segmentation and linear scanning fusion technology, the target hand image can be accurately segmented and identified, so that not only is the accuracy of gesture identification improved, but also the response capability of the system to complex gestures is enhanced, and the interaction between a user and projection content is more visual and smooth. Through 3D fingertip position analysis and touch detection, and gesture track reconstruction and pattern matching, a gesture instruction of a user can be rapidly and accurately identified, and a projection image can be adjusted in real time according to the gesture instruction. Feature point change monitoring and image compensation synthesis are carried out on the second projection image, so that image deformation or color deviation caused by user interaction can be effectively compensated, and consistency and stability of visual effects of the projection image are ensured. The definition and accuracy of the projection image are improved by comprehensively correcting the lens distortion and comprehensively optimizing the projection parameters of the target projection image. Through the omnibearing image optimization processing, the projector can keep excellent projection performance under various different environments and conditions, so that dynamic projection parameter response of the projector is realized, and the image projection display effect of the projector is improved.
In a specific embodiment, the process of executing step 101 may specifically include the following steps:
(1) Initial image acquisition is carried out on a projection plane of a target projector, so that a first projection image is obtained;
(2) Detecting characteristic points of the first projection image to obtain a first characteristic point set in the first projection image;
(3) Obtaining a second characteristic point set of the template projection image, and carrying out characteristic point matching on the first characteristic point set and the second characteristic point set to obtain a characteristic point similarity measure, wherein the characteristic point matching function is as follows:
;
Wherein, Representing feature pointsAnd feature pointsThe feature point similarity measure between them,Representing feature points in the first set of feature points,Representing feature points in the second set of feature points,Respectively represent characteristic pointsAnd feature pointsIs used to determine the feature descriptor vector of (a),A nonlinear mapping function, mapping the original descriptor space to a high-dimensional feature space to capture complex similarity relationships,Is a weighted Euclidean distance in whichIs a covariance matrix;
(4) Performing geometric correction on the first projection image based on the affine transformation matrix and the feature point similarity measurement to obtain initial geometric correction parameters;
(5) Parameter precision adjustment is carried out on the initial geometric correction parameters through a sub-pixel characteristic point positioning function, and target geometric correction parameters are obtained, wherein the sub-pixel characteristic point positioning function is as follows:
;
Wherein, The feature point positions representing sub-pixel accuracy,The pixel positions of the original feature points are represented,Representing a Hessian matrix for describingRelative to positionIs used to determine the local curvature of the lens,Representing characteristic response functionsRelative toAnd the second derivative of (c) represents the rate of change of the feature point positions,Representing characteristic response functionsIndicating the direction of maximum growth of the feature point positions;
(6) Performing photometric calibration on the first projection image by adopting local histogram equalization to obtain target photometric calibration parameters of a target projector;
(7) And generating initialized projection parameters of the target projector according to the target geometric correction parameters and the target photometric calibration parameters.
Specifically, initial image acquisition is performed on a projection plane of a target projector, and a first projection image is obtained. The definition and detail of the image are ensured by the high-precision image capturing device. And detecting the characteristic points of the first projection image to obtain a first characteristic point set. Feature point detection can identify unique points in an image that can represent the content of the image. These feature points need not only be consistent from image to image, but also remain stable as the image changes (e.g., rotates, scales, changes in brightness, etc.). And acquiring a second characteristic point set of the template projection image, and carrying out characteristic point matching on the first characteristic point set and the second characteristic point set. Using a similarity metric function, euclidean distances between feature points are considered and matching accuracy is enhanced by nonlinear mapping and weighted distances. In this way, the same or similar feature points are effectively matched, even if there is a slight distortion or displacement between the two images. And carrying out geometric correction on the first projection image by using the affine transformation matrix and the feature point similarity measure, so as to ensure that the image keeps correct proportion and position in the projection process. Any distortion that may occur during projection, such as lens distortion or unevenness of the projection surface, is compensated for by adjusting the shape and size of the image. The feature point positioning function is further enhanced by utilizing higher derivative and gradient information to adjust geometric correction parameters so as to realize higher-precision feature point positioning. The first projection image is photometrically calibrated by local histogram equalization. Photometric calibration is a process of adjusting the brightness and contrast of an image to ensure that the projected image maintains good visibility and consistent visual effects under different lighting conditions. The visual quality of the whole image is further improved by adjusting the local brightness distribution of the image.
In a specific embodiment, the process of executing step 102 may specifically include the following steps:
(1) Acquiring a hand interaction image between a target user and a projection plane through a target projector;
(2) Extracting color features of the hand interaction image to obtain average color features, and calculating structural features of the hand interaction image through local gradients to obtain average structural features;
(3) Performing hand and shadow segmentation on the hand interaction image according to the average color features and the average structure features to obtain a hand segmentation image;
(4) Performing hand shadow segmentation similarity calculation on the hand segmentation image to obtain hand shadow segmentation similarity measurement;
(5) Performing linear scanning fusion measurement on the hand segmentation image based on the hand shadow segmentation similarity measurement to obtain linear scanning fusion measurement;
(6) And carrying out hand and shadow boundary enhancement and hand image refinement on the hand segmentation image according to the linear scanning fusion metric to obtain a target hand image.
Specifically, a hand interaction image between a target user and a projection plane is acquired through a target projector. And extracting color features of the hand interaction image, and calculating average color features of a hand region in the image. The color feature extraction is based on the assumption of hand skin color or wearing gloves of a specific color, and by analyzing the color distribution, the hand can be effectively distinguished from the background. Meanwhile, the average structural characteristics are obtained by calculating the local gradient of the image, and the distinguishing degree of the hand and the background is enhanced by utilizing the image edge and texture information. The local gradient emphasizes the light and shade changes of the hand edge in the image, which is helpful for identifying the outline and structure of the hand. Based on the average color features and the average structural features, the hand interaction image is analyzed, and effective segmentation of the hand and the shadow is realized. The hand movements may create complex shadows that are visually very similar to the hand, easily causing misrecognitions. By comprehensively considering the color and the structural characteristics, the hand and the shadow are more accurately distinguished, and an accurate hand segmentation image is obtained. And (3) carrying out hand shadow segmentation similarity calculation on the hand segmentation image, and calculating the similarity measurement of hand shadow segmentation by comparing the characteristic difference of the hand region and the surrounding shadow region. Based on the hand shadow segmentation similarity measurement, the hand image is processed by adopting a linear scanning fusion measurement. The linear scanning fusion is an image fusion technology, and by scanning the images row by row or column by column, feature differences between adjacent pixels are comprehensively considered, so that smoother and natural hand and background segmentation is realized. And carrying out hand and shadow boundary enhancement and hand image refinement processing on the hand segmentation image according to the linear scanning fusion metric. The boundary enhancement can strengthen the definition of the hand contour line, and the hand image refinement is to keep the basic shape and the characteristics of the hand by removing redundant pixel points, so that the hand contour is finer and clearer, and the finally obtained target hand image can reduce the interference of shadows on the recognition result while keeping the hand detail.
In a specific embodiment, the process of executing step 103 may specifically include the following steps:
(1) 3D fingertip position analysis is carried out on the target hand image, fingertip 3D position data are obtained, and a 3D fingertip position analysis function is as follows:
;
Wherein, The 3D position data of the finger tip is represented,A rotation matrix representing the projector to the world coordinate system for adjusting the rotation of points in the target hand image to the real world coordinate system,Representing depth mapping functions, includingPosition on a planeThe depth value of the direction is set to be,Representing the inverse of the reference matrix in the projector,A translation vector representing the projector to the world coordinate system, for adjusting the origin of coordinates,Representing pixel coordinates on the image plane, 1 for homogeneity;
(2) Touch detection is carried out on the fingertip 3D position data based on a touch judging function, a touch detection result is obtained, the touch detection result comprises a touch action and a non-touch action, and the touch judging function is as follows:
;
Wherein, Representing a distance-based touch decision function,Is the shortest distance of the fingertip to the projection plane,Representing a base touch threshold, a minimum accepted distance of the fingertip to the projection plane,In order to adjust the parameters of the sensitivity of the function,Indicating that the amplitude of the modulation is to be adjusted,Indicating that the frequency of adjustment is to be adjusted,Representing the adjustment phase;
(3) If the touch detection result is that the touch action is generated, performing gesture track vector calculation on the target hand image to obtain an integral track vector in the gesture period, wherein the gesture track vector calculation function is as follows:
;
Wherein, For the integrated trajectory vector during the gesture, the overall movement direction and distance of the gesture is represented,Time of presentationIs provided, the temporal derivative of the fingertip position,A point in time representing the beginning and end of a gesture;
(4) And carrying out similarity calculation on the integral track vector and a preset gesture template vector during the gesture to obtain a similarity calculation result, wherein the similarity calculation function is as follows:
;
Wherein, Representing calculating the angle between the two vectors to quantify the similarity calculation between the gesture and the predefined template,Representing a preset gesture template vector,Representing an inverse cosine function;
(5) And generating gesture mode information of the target hand image according to the similarity calculation result.
Specifically, an interactive image of a user's hand is captured by a projector, and the fingertip position in the image is analyzed by using a 3D fingertip position analysis function, so as to obtain accurate 3D position data of the fingertip. The pixel coordinates on the image plane are converted to 3D positions in the real world coordinate system by combining the depth mapping function, the rotation matrix of the projector, the inverse of the reference matrix, and the translation vector. This conversion is dependent on the depth information and can be obtained by depth sensors or structured light techniques. And adopting a touch judging function to carry out touch detection on the fingertip 3D position data. And judging whether a touch action occurs or not by comparing the actual distance from the fingertip to the projection plane with a preset touch threshold value. The touch determination function takes into account dynamic changes when a user touches at different speeds and directions, and adapts to different touch intensities and speeds by adjusting parameters in the function. When the distance between the fingertip and the projection plane is detected to be smaller than the set threshold value, the system determines that the touch action is generated. If the touch action is detected, calculating a gesture track, and acquiring an integral track vector during the gesture. The overall direction and distance of movement of the gesture is characterized by integrating the instantaneous velocity vector of the fingertip, i.e. the rate of change of the fingertip position over time. The calculation of the gesture trajectory vector involves continuous monitoring of the speed of movement of the fingertip and integration of the speed vectors at the start and end times to obtain a net movement effect of the whole gesture process. And comparing the integrated track vector during the gesture with a preset gesture template vector through a similarity calculation function to calculate the similarity between the integrated track vector and the preset gesture template vector. By calculating the angle between the two vectors, the similarity between the gesture and the predefined template is quantified using an arccosine function. The similarity calculation may help the system identify a particular gesture performed by the user, such as a swipe, click, or other more complex gesture action. And generating gesture mode information of the target hand image according to the similarity calculation result. The information comprises the gesture type of the user, the speed and direction of executing the gesture and other data, and provides basis for the final command execution. For example, a gesture that slides left is recognized as a page turn command, and a gesture that lifts quickly upward is recognized as a pause command.
In a specific embodiment, the process of executing step 104 may specifically include the following steps:
(1) Controlling the target projector to perform real-time response operation according to the gesture mode information, and collecting a second projection image of the projection plane;
(2) And monitoring the characteristic point change of the second projection image and the first projection image to obtain the characteristic point change quantity, wherein the characteristic point change monitoring function is as follows:
;
Wherein, The variation amount of the feature points is represented,Representing the first in the second projection imageCoordinates of the individual feature points in three-dimensional space,Representing the first projection imageCoordinates of the individual feature points in three-dimensional space,A coefficient for balancing the influence of the depth change on the overall change amount;
(3) Performing image rotation and deformation compensation parameter analysis on the second projection image based on the Rodrigues rotation function and the characteristic point variation to obtain image rotation and deformation compensation parameters;
(4) And calculating a color adjustment coefficient of the second projection image to obtain the color adjustment coefficient, wherein the color adjustment coefficient calculation function is as follows:
;
Wherein, For the color adjustment factor(s),Representing the first projection image and the second projection image respectivelyThe color intensity of the individual pixels is determined,A weight representing the pixel level is given,Representing the total number of pixels in the second projection image,Representing a total number of pixels in the first projection image;
(5) And carrying out image compensation synthesis on the second projection image according to the image rotation and deformation compensation parameters and the color adjustment coefficient to obtain a target projection image.
Specifically, the target projector is controlled to perform corresponding real-time response operation according to the identified gesture mode information. Possibly including changing the projected content, resizing the image, moving the image position, etc. And meanwhile, the adjusted second projection image on the projection plane is acquired, the latest data is provided for subsequent image analysis, and the system is ensured to be optimized based on the latest projection state. And monitoring the characteristic point change of the first projection image and the second projection image. And calculating the position change quantity of the same characteristic point in the projection images at two moments, including the changes in the horizontal, vertical and depth directions. The feature point change monitoring function focuses on the movement of feature points on a two-dimensional plane, and adds a third dimension for the calculation of the change amount by introducing the consideration of depth change, so that the monitoring result is more accurate and comprehensive. And performing image rotation and deformation compensation parameter analysis by using the Rodrigues rotation function and the characteristic point variation. The rondrigas rotation function is a mathematical tool for describing and realizing rotation in a three-dimensional space, and can calculate a necessary rotation angle and direction according to a change in feature points. In combination with the feature point variation, the rotation and deformation compensation required by the image can be accurately analyzed to eliminate any unwanted image distortion or positional offset caused by user interaction. And simultaneously, calculating a color adjustment coefficient of the second projection image, and adjusting the color intensity of the projection image to adapt to the change of the projection environment or meet the visual preference of a user. And obtaining a global color adjustment coefficient by calculating the ratio of the color intensities of corresponding pixels in the first projection image and the second projection image. The coefficient reflects the average adjustment amplitude of the color intensity of the entire image. And carrying out comprehensive image compensation synthesis on the second projection image according to the obtained image rotation and deformation compensation parameters and the color adjustment coefficient. The geometric distortion of the image is corrected by applying the rotation and deformation compensation parameters, and the color intensity of the image is uniformly adjusted by utilizing the color adjustment coefficients, so that the visual consistency and the aesthetic property of the projection image are ensured. And generating an optimized target projection image through image processing and optimization.
In a specific embodiment, the process of executing step 105 may specifically include the following steps:
(1) And carrying out radial distortion correction on the lens of the target projector according to the target projection image to obtain radial distortion correction parameters, wherein the radial distortion correction function of the lens is as follows:
;
;
Wherein, Representing the image coordinates after the radial distortion correction,Representing the original coordinates in the projected image of the target,Representing the square of the distance of the original coordinates to the center of the image,Representing radial distortion coefficients, and controlling correction of radial distortion of different degrees;
(2) Performing lens tangential distortion correction on the target projector according to the target projection image to obtain tangential distortion correction parameters, wherein the lens tangential distortion correction function is as follows:
;
;
Wherein, Expressing tangential distortion coefficients, and controlling tangential distortion of a target projection image in the directions of an x axis and a y axis;
(3) Constructing a full-parameter calibration model of the target projector based on the radial distortion correction parameters and the tangential distortion correction parameters;
(4) And carrying out projection parameter optimization on the initialized projection parameters of the target projector based on the full-parameter calibration model, and generating target projection parameters of the target projector.
Specifically, lens radial distortion correction is performed on the target projector according to the target projection image, and radial distortion correction parameters are obtained. Radial distortion is typically manifested as a curvature of the image edge that increases with distance from the center of the image. The key to correcting this distortion is to adjust each point of the image to be closer to where it would otherwise be. The radial distortion correction function uses a plurality of predefined distortion coefficients) These coefficients are obtained by analyzing the difference between the actual projected image and the ideal image. This analysis process typically needs to be done by specialized correction software or correction patterns. By applying these coefficients, the position of each point is adjusted accordingly to its distance from the center of the image, thereby reducing or eliminating radial distortion. And correcting tangential distortion of the lens of the target projector according to the target projection image to obtain tangential distortion correction parameters. Tangential distortion is typically caused by the fact that the lens and imaging sensor planes are not perfectly parallel, resulting in an image that is offset in one direction. Tangential distortion correction function is performed by a set of different coefficients) To adjust each point in the image to compensate for this offset. Also, the determination of these coefficients needs to be made by analyzing the actual performance of the projected image. And constructing a full-parameter calibration model of the target projector based on the radial distortion correction parameters and the tangential distortion correction parameters. The model combines all correction parameters and can make comprehensive corrections to the target projected image, including adjusting the geometry, location and color of the image. The construction of the full-parameter calibration model requires precisely matching the differences between the actual projection image and the ideal image, and comprehensively analyzing and correcting the differences. And optimizing the initialized projection parameters of the target projector based on the full-parameter calibration model to generate the target projection parameters of the target projector. The projector is adjusted in various aspects such as an optical system, light source intensity, color balance and the like. The optimization of projection parameters aims to ensure that the projected image provides an optimal visual experience when viewed by a user, achieving high standards in terms of sharpness, color accuracy, and geometric accuracy.
In a specific embodiment, the performing step optimizes the projection parameters of the initialization projection parameters of the target projector based on the full parameter calibration model, and the process of generating the target projection parameters of the target projector may specifically include the following steps:
(1) Adjusting range prediction is carried out on initialized projection parameters of the target projector based on the full-parameter calibration model, and a projection parameter adjusting range set is obtained;
(2) Randomly initializing the initialized projection parameters based on the projection parameter adjustment range set to generate a plurality of candidate projection parameters;
(3) Respectively collecting test projection images under each candidate projection parameter, and carrying out image definition evaluation on the test projection images to obtain image definition evaluation indexes, wherein the image definition evaluation functions are as follows:
;
Wherein, An average sharpness index representing an image,Representing the total number of pixels in the image,Represent the firstThe luminance value of the individual pixel points,Is the firstA luminance gradient of each pixel point in the horizontal direction,Is the firstA brightness gradient of each pixel point in the vertical direction;
(4) Performing color accuracy evaluation on the test projection image to obtain a color accuracy evaluation index, wherein the color accuracy evaluation function is as follows:
;
Wherein, Represents the average color difference in the CIELAB color space, is a color accuracy evaluation index for measuring the color accuracy of the image,Representing the total number of pixels,Is the firstBrightness difference of each pixel point in CIELAB color space,Is the firstRed-green contrast difference of each pixel point in CIELAB color space,Is the firstYellow-blue contrast difference of each pixel point in CIELAB color space;
(5) And optimizing and selecting a plurality of candidate projection parameters according to the image definition evaluation index and the color accuracy evaluation index to generate target projection parameters of the target projector.
Specifically, the adjustment range prediction is performed on the initialized projection parameters of the target projector based on the full-parameter calibration model by analyzing the optical characteristics of the projector, the limitations of the imaging system and the specific conditions of the projection environment. The full parameter calibration model contains all the key parameters affecting the projection quality, such as focal length, aperture size, color balance, etc., as well as the specific impact of these parameters on the image quality. By this model, the system can predict the reasonable range of each parameter adjustment, which is set on the premise of ensuring that the image quality is not impaired by the parameter adjustment. And randomly initializing the initialized projection parameters according to the predicted parameter adjustment range to generate a plurality of candidate projection parameters. The best parameter combination is found by exploring the image quality difference under different parameter configurations. And respectively acquiring test projection images aiming at each group of candidate parameters, and comprehensively evaluating the test images. And respectively acquiring test projection images under each candidate projection parameter, and carrying out image definition evaluation on the test projection images. And measuring the overall definition of the image by calculating the brightness gradient of each pixel point in the test image by adopting a specific image definition evaluation function. Images with larger intensity gradients typically have higher definition and better represent detail and contours. This evaluation focuses not only on the global sharpness of the image, but also on the sharpness changes of the local areas, thereby evaluating the image quality more comprehensively. And simultaneously, evaluating the color accuracy of the test projection image. The color accuracy is evaluated based on color differences in the CIELAB color space. The accuracy of color reproduction is quantified by calculating the average color difference between the test image and the reference image. The smaller the color difference, the closer the color reproduction of the representation image is to the original scene or the intended effect, and thus the more accurate the color representation. And comparing and analyzing all candidate projection parameters based on the evaluation results of the image definition and the color accuracy. By comprehensively considering the definition evaluation index and the color accuracy evaluation index, the parameter configuration capable of generating the highest image quality under the current setting and the environmental conditions is identified. This process may involve multiple rounds of evaluation and adjustment to ensure that the final selected projection parameters not only perform well in a single dimension (e.g., sharpness or color accuracy), but also are optimally balanced across all important image quality metrics. And generating target projection parameters of the target projector according to the optimized evaluation result. These parameters will be used to adjust the actual projection settings of the projector to ensure that the final projected image is optimally represented in terms of sharpness, color accuracy, contrast, etc.
The method for analyzing the projector control parameters in the embodiment of the present application is described above, and the apparatus for analyzing the projector control parameters in the embodiment of the present application is described below, referring to fig. 2, where an embodiment of the apparatus for analyzing the projector control parameters in the embodiment of the present application includes:
The calibration module 201 is configured to perform initial image acquisition on a projection plane of the target projector to obtain a first projection image, and perform geometric and photometric calibration on the first projection image to obtain initialized projection parameters of the target projector;
the acquisition module 202 is configured to acquire a hand interaction image between a target user and a projection plane through a target projector, and perform hand shadow segmentation and linear scanning fusion on the hand interaction image to obtain a target hand image;
The detection module 203 is configured to perform 3D fingertip position analysis and touch detection on the target hand image to obtain a touch detection result, and perform gesture track reconstruction and gesture pattern matching on the target hand image according to the touch detection result to obtain gesture pattern information;
The synthesizing module 204 is configured to respond to the gesture mode information in real time by using the target projector, collect a second projection image of the projection plane, and perform feature point change monitoring and image compensation synthesis on the second projection image to obtain a target projection image;
the optimization module 205 is configured to perform lens distortion correction and projection parameter optimization on the target projector according to the target projection image, and generate target projection parameters of the target projector.
Through the cooperation of the components, the initialized projection parameters can be effectively obtained by carrying out initial image acquisition and geometric and photometric calibration on the projection plane, and the geometric distortion of the projection image can be obviously improved by detecting and matching characteristic points and carrying out geometric correction based on an affine transformation matrix, so that the accurate alignment of the projection image and the true restoration of colors are ensured, and the overall projection quality is improved. By utilizing the hand shadow segmentation and linear scanning fusion technology, the target hand image can be accurately segmented and identified, so that not only is the accuracy of gesture identification improved, but also the response capability of the system to complex gestures is enhanced, and the interaction between a user and projection content is more visual and smooth. Through 3D fingertip position analysis and touch detection, and gesture track reconstruction and pattern matching, a gesture instruction of a user can be rapidly and accurately identified, and a projection image can be adjusted in real time according to the gesture instruction. Feature point change monitoring and image compensation synthesis are carried out on the second projection image, so that image deformation or color deviation caused by user interaction can be effectively compensated, and consistency and stability of visual effects of the projection image are ensured. The definition and accuracy of the projection image are improved by comprehensively correcting the lens distortion and comprehensively optimizing the projection parameters of the target projection image. Through the omnibearing image optimization processing, the projector can keep excellent projection performance under various different environments and conditions, so that dynamic projection parameter response of the projector is realized, and the image projection display effect of the projector is improved.
The present application also provides an electronic device including a memory and a processor, where the memory stores computer readable instructions that, when executed by the processor, cause the processor to perform the steps of the projector control parameter analysis method in the above embodiments.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, systems and units may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in whole or in part in the form of a software product stored in a storage medium, comprising several instructions for causing an electronic device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (8)
1. A projector control parameter analysis method, characterized in that the projector control parameter analysis method comprises:
the method comprises the steps of performing initial image acquisition on a projection plane of a target projector to obtain a first projection image, and performing geometric and photometric calibration on the first projection image to obtain initialized projection parameters of the target projector; the method specifically comprises the following steps: initial image acquisition is carried out on a projection plane of a target projector, so that a first projection image is obtained; detecting characteristic points of the first projection image to obtain a first characteristic point set in the first projection image; obtaining a second characteristic point set of the template projection image, and carrying out characteristic point matching on the first characteristic point set and the second characteristic point set to obtain a characteristic point similarity measure, wherein the characteristic point matching function is as follows:
;
Wherein, Representing feature pointsAnd feature pointsThe feature point similarity measure between them,Representing feature points in the first set of feature points,Representing feature points in the second set of feature points,Respectively represent characteristic pointsAnd feature pointsIs used to determine the feature descriptor vector of (a),A nonlinear mapping function, mapping the original descriptor space to a high-dimensional feature space to capture complex similarity relationships,Is a weighted Euclidean distance in whichIs a covariance matrix; performing geometric correction on the first projection image based on an affine transformation matrix and the feature point similarity measurement to obtain initial geometric correction parameters; parameter precision adjustment is carried out on the initial geometric correction parameters through a sub-pixel characteristic point positioning function to obtain target geometric correction parameters, wherein the sub-pixel characteristic point positioning function is as follows:
;
Wherein, The feature point positions representing sub-pixel accuracy,The pixel positions of the original feature points are represented,Representing a Hessian matrix for describingRelative to positionIs used to determine the local curvature of the lens,Representing characteristic response functionsRelative toAnd the second derivative of (c) represents the rate of change of the feature point positions,Representing characteristic response functionsIndicating the direction of maximum growth of the feature point positions; performing photometric calibration on the first projection image by adopting local histogram equalization to obtain target photometric calibration parameters of the target projector; generating initialization projection parameters of the target projector according to the target geometric correction parameters and the target photometric calibration parameters;
Acquiring a hand interaction image between a target user and the projection plane through the target projector, and performing hand shadow segmentation and linear scanning fusion on the hand interaction image to obtain a target hand image;
3D fingertip position analysis and touch detection are carried out on the target hand image to obtain a touch detection result, and gesture track reconstruction and gesture pattern matching are carried out on the target hand image according to the touch detection result to obtain gesture pattern information;
Responding the gesture mode information in real time through the target projector, collecting a second projection image of the projection plane, and carrying out feature point change monitoring and image compensation synthesis on the second projection image to obtain a target projection image;
And carrying out lens distortion correction and projection parameter optimization on the target projector according to the target projection image, and generating target projection parameters of the target projector.
2. The projector control parameter analysis method of claim 1, wherein the obtaining, by the target projector, a hand interaction image between a target user and the projection plane, and performing hand shadow segmentation and linear scan fusion on the hand interaction image, to obtain a target hand image, includes:
Acquiring a hand interaction image between a target user and the projection plane through the target projector;
Extracting color features of the hand interaction image to obtain average color features, and calculating structural features of the hand interaction image through local gradients to obtain average structural features;
performing hand and shadow segmentation on the hand interaction image according to the average color features and the average structural features to obtain a hand segmentation image;
Performing hand shadow segmentation similarity calculation on the hand segmentation image to obtain hand shadow segmentation similarity measurement;
performing linear scanning fusion measurement on the hand segmentation image based on the hand shadow segmentation similarity measurement to obtain linear scanning fusion measurement;
and carrying out hand and shadow boundary enhancement and hand image refinement on the hand segmentation image according to the linear scanning fusion metric to obtain a target hand image.
3. The projector control parameter analysis method of claim 1, wherein the performing 3D fingertip position analysis and touch detection on the target hand image to obtain a touch detection result, and performing gesture track reconstruction and gesture pattern matching on the target hand image according to the touch detection result to obtain gesture pattern information includes:
3D fingertip position analysis is carried out on the target hand image to obtain fingertip 3D position data, wherein a 3D fingertip position analysis function is as follows:
;
Wherein, The 3D position data of the finger tip is represented,A rotation matrix representing the projector to the world coordinate system for adjusting the rotation of points in the target hand image to the real world coordinate system,Representing depth mapping functions, includingPosition on a planeThe depth value of the direction is set to be,Representing the inverse of the reference matrix in the projector,A translation vector representing the projector to the world coordinate system, for adjusting the origin of coordinates,Representing pixel coordinates on the image plane, 1 for homogeneity;
performing touch detection on the fingertip 3D position data based on a touch judging function to obtain a touch detection result, wherein the touch detection result comprises a touch action and a non-touch action, and the touch judging function is as follows:
;
Wherein, Representing a distance-based touch decision function,Is the shortest distance of the fingertip to the projection plane,Representing a base touch threshold, a minimum accepted distance of the fingertip to the projection plane,In order to adjust the parameters of the sensitivity of the function,Indicating that the amplitude of the modulation is to be adjusted,Indicating that the frequency of adjustment is to be adjusted,Representing the adjustment phase;
If the touch detection result is that the touch action is generated, performing gesture track vector calculation on the target hand image to obtain an integral track vector in the gesture period, wherein the gesture track vector calculation function is as follows:
;
Wherein, For the integrated trajectory vector during the gesture, the overall movement direction and distance of the gesture is represented,Time of presentationIs provided, the temporal derivative of the fingertip position,A point in time representing the beginning and end of a gesture;
And performing similarity calculation on the integral track vector during the gesture and a preset gesture template vector to obtain a similarity calculation result, wherein the similarity calculation function is as follows:
;
Wherein, Representing calculating the angle between the two vectors to quantify the similarity calculation between the gesture and the predefined template,Representing a preset gesture template vector,Representing an inverse cosine function;
and generating gesture mode information of the target hand image according to the similarity calculation result.
4. The method according to claim 1, wherein the responding the gesture mode information in real time by the target projector and collecting a second projection image of the projection plane, and performing feature point change monitoring and image compensation synthesis on the second projection image to obtain a target projection image, includes:
Controlling the target projector to perform real-time response operation according to the gesture mode information, and collecting a second projection image of the projection plane;
And monitoring the characteristic point change of the second projection image and the first projection image to obtain the characteristic point change quantity, wherein the characteristic point change monitoring function is as follows:
;
Wherein, The variation amount of the feature points is represented,Representing the first in the second projection imageCoordinates of the individual feature points in three-dimensional space,Representing the first projection imageCoordinates of the individual feature points in three-dimensional space,A coefficient for balancing the influence of the depth change on the overall change amount;
Performing image rotation and deformation compensation parameter analysis on the second projection image based on the Rodrigues rotation function and the characteristic point variation to obtain image rotation and deformation compensation parameters;
and calculating a color adjustment coefficient of the second projection image to obtain the color adjustment coefficient, wherein the color adjustment coefficient calculation function is as follows:
;
Wherein, For the color adjustment factor(s),Representing the first projection image and the second projection image respectivelyThe color intensity of the individual pixels is determined,A weight representing the pixel level is given,Representing the total number of pixels in the second projection image,Representing a total number of pixels in the first projection image;
And carrying out image compensation synthesis on the second projection image according to the image rotation and deformation compensation parameters and the color adjustment coefficients to obtain a target projection image.
5. The projector control parameter analysis method of claim 1, wherein the performing lens distortion correction and projection parameter optimization on the target projector according to the target projection image, generating target projection parameters of the target projector, comprises:
And correcting radial distortion of a lens of the target projector according to the target projection image to obtain a radial distortion correction parameter, wherein the radial distortion correction function of the lens is as follows:
;
;
Wherein, Representing the image coordinates after the radial distortion correction,Representing the original coordinates in the projected image of the target,Representing the square of the distance of the original coordinates to the center of the image,Representing radial distortion coefficients, and controlling correction of radial distortion of different degrees;
Performing lens tangential distortion correction on the target projector according to the target projection image to obtain tangential distortion correction parameters, wherein the lens tangential distortion correction function is as follows:
;
;
Wherein, Expressing tangential distortion coefficients, and controlling tangential distortion of a target projection image in the directions of an x axis and a y axis;
Constructing a full-parameter calibration model of the target projector based on the radial distortion correction parameters and the tangential distortion correction parameters;
And carrying out projection parameter optimization on the initialized projection parameters of the target projector based on the full-parameter calibration model, and generating target projection parameters of the target projector.
6. The projector control parameter analysis method of claim 5, wherein the performing projection parameter optimization on the initialized projection parameters of the target projector based on the full parameter calibration model, generating target projection parameters of the target projector, comprises:
Performing adjustment range prediction on the initialized projection parameters of the target projector based on the full-parameter calibration model to obtain a projection parameter adjustment range set;
Randomly initializing the initialized projection parameters based on the projection parameter adjustment range set to generate a plurality of candidate projection parameters;
Respectively collecting test projection images under each candidate projection parameter, and carrying out image definition evaluation on the test projection images to obtain image definition evaluation indexes, wherein the image definition evaluation functions are as follows:
;
Wherein, An average sharpness index representing an image,Representing the total number of pixels in the image,Represent the firstThe luminance value of the individual pixel points,Is the firstA luminance gradient of each pixel point in the horizontal direction,Is the firstA brightness gradient of each pixel point in the vertical direction;
performing color accuracy evaluation on the test projection image to obtain a color accuracy evaluation index, wherein a color accuracy evaluation function is as follows:
;
Wherein, Represents the average color difference in the CIELAB color space, is a color accuracy evaluation index for measuring the color accuracy of the image,Representing the total number of pixels,Is the firstBrightness difference of each pixel point in CIELAB color space,Is the firstRed-green contrast difference of each pixel point in CIELAB color space,Is the firstYellow-blue contrast difference of each pixel point in CIELAB color space;
and optimally selecting the plurality of candidate projection parameters according to the image definition evaluation index and the color accuracy evaluation index to generate target projection parameters of the target projector.
7. A projector control parameter analysis device, characterized in that the projector control parameter analysis device comprises:
The calibration module is used for carrying out initial image acquisition on a projection plane of the target projector to obtain a first projection image, and carrying out geometric and photometric calibration on the first projection image to obtain initialized projection parameters of the target projector; the method specifically comprises the following steps: initial image acquisition is carried out on a projection plane of a target projector, so that a first projection image is obtained; detecting characteristic points of the first projection image to obtain a first characteristic point set in the first projection image; obtaining a second characteristic point set of the template projection image, and carrying out characteristic point matching on the first characteristic point set and the second characteristic point set to obtain a characteristic point similarity measure, wherein the characteristic point matching function is as follows:
;
Wherein, Representing feature pointsAnd feature pointsThe feature point similarity measure between them,Representing feature points in the first set of feature points,Representing feature points in the second set of feature points,Respectively represent characteristic pointsAnd feature pointsIs used to determine the feature descriptor vector of (a),A nonlinear mapping function, mapping the original descriptor space to a high-dimensional feature space to capture complex similarity relationships,Is a weighted Euclidean distance in whichIs a covariance matrix; performing geometric correction on the first projection image based on an affine transformation matrix and the feature point similarity measurement to obtain initial geometric correction parameters; parameter precision adjustment is carried out on the initial geometric correction parameters through a sub-pixel characteristic point positioning function to obtain target geometric correction parameters, wherein the sub-pixel characteristic point positioning function is as follows:
;
Wherein, The feature point positions representing sub-pixel accuracy,The pixel positions of the original feature points are represented,Representing a Hessian matrix for describingRelative to positionIs used to determine the local curvature of the lens,Representing characteristic response functionsRelative toAnd the second derivative of (c) represents the rate of change of the feature point positions,Representing characteristic response functionsIndicating the direction of maximum growth of the feature point positions; performing photometric calibration on the first projection image by adopting local histogram equalization to obtain target photometric calibration parameters of the target projector; generating initialization projection parameters of the target projector according to the target geometric correction parameters and the target photometric calibration parameters;
The acquisition module is used for acquiring a hand interaction image between a target user and the projection plane through the target projector, and carrying out hand shadow segmentation and linear scanning fusion on the hand interaction image to obtain a target hand image;
The detection module is used for carrying out 3D fingertip position analysis and touch detection on the target hand image to obtain a touch detection result, and carrying out gesture track reconstruction and gesture pattern matching on the target hand image according to the touch detection result to obtain gesture pattern information;
the synthesizing module is used for responding to the gesture mode information in real time through the target projector, collecting a second projection image of the projection plane, and carrying out feature point change monitoring and image compensation synthesis on the second projection image to obtain a target projection image;
And the optimization module is used for carrying out lens distortion correction and projection parameter optimization on the target projector according to the target projection image and generating target projection parameters of the target projector.
8. An electronic device, the electronic device comprising: a memory and at least one processor, the memory having instructions stored therein;
The at least one processor invokes the instructions in the memory to cause the electronic device to perform the projector control parameter analysis method of any one of claims 1-6.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410775868.2A CN118509566B (en) | 2024-06-17 | 2024-06-17 | Projector control parameter analysis method, device and equipment |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410775868.2A CN118509566B (en) | 2024-06-17 | 2024-06-17 | Projector control parameter analysis method, device and equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118509566A CN118509566A (en) | 2024-08-16 |
| CN118509566B true CN118509566B (en) | 2024-11-22 |
Family
ID=92246539
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410775868.2A Active CN118509566B (en) | 2024-06-17 | 2024-06-17 | Projector control parameter analysis method, device and equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118509566B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119831907B (en) * | 2025-03-17 | 2025-07-08 | 深圳市大屏影音技术有限公司 | A method and system for image projection distortion correction |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106774846A (en) * | 2016-11-24 | 2017-05-31 | 中国科学院深圳先进技术研究院 | Alternative projection method and device |
| CN114111633A (en) * | 2021-10-27 | 2022-03-01 | 深圳市纵维立方科技有限公司 | Projector lens distortion error correction method for structured light three-dimensional measurement |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4703476B2 (en) * | 2006-05-02 | 2011-06-15 | 株式会社日立製作所 | Adjustment method of video display device |
| US8007110B2 (en) * | 2007-12-28 | 2011-08-30 | Motorola Mobility, Inc. | Projector system employing depth perception to detect speaker position and gestures |
| CN102508574B (en) * | 2011-11-09 | 2014-06-04 | 清华大学 | Projection-screen-based multi-touch detection method and multi-touch system |
| CN103854293A (en) * | 2014-02-26 | 2014-06-11 | 奇瑞汽车股份有限公司 | Vehicle tracking method and device based on feature point matching |
| US10210607B1 (en) * | 2015-04-08 | 2019-02-19 | Wein Holding LLC | Digital projection system and method for workpiece assembly |
| CN106650554A (en) * | 2015-10-30 | 2017-05-10 | 成都理想境界科技有限公司 | Static hand gesture identification method |
| CN107305431A (en) * | 2016-04-25 | 2017-10-31 | 钦赛勇 | Projection demonstration device and method based on gesture operation |
| US10296812B2 (en) * | 2017-01-04 | 2019-05-21 | Qualcomm Incorporated | Systems and methods for mapping based on multi-journey data |
| CN110300292B (en) * | 2018-03-22 | 2021-11-19 | 深圳光峰科技股份有限公司 | Projection distortion correction method, device, system and storage medium |
| CN109375833B (en) * | 2018-09-03 | 2022-03-04 | 深圳先进技术研究院 | A method and device for generating a touch command |
| CN116645275A (en) * | 2022-02-15 | 2023-08-25 | 深圳光峰科技股份有限公司 | Method, device, projector and storage medium for correcting projection image |
| CN115474033A (en) * | 2022-09-19 | 2022-12-13 | 卓谨信息科技(常州)有限公司 | Realization method of virtual screen for intelligent recognition |
| CN116896617B (en) * | 2023-06-14 | 2024-04-05 | 宁波锦辉光学科技有限公司 | Projection image automatic correction method and system |
-
2024
- 2024-06-17 CN CN202410775868.2A patent/CN118509566B/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106774846A (en) * | 2016-11-24 | 2017-05-31 | 中国科学院深圳先进技术研究院 | Alternative projection method and device |
| CN114111633A (en) * | 2021-10-27 | 2022-03-01 | 深圳市纵维立方科技有限公司 | Projector lens distortion error correction method for structured light three-dimensional measurement |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118509566A (en) | 2024-08-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7598917B2 (en) | Virtual facial makeup removal, fast face detection and landmark tracking | |
| CN105760826B (en) | Face tracking method and device and intelligent terminal | |
| CN110619628B (en) | Face image quality assessment method | |
| US8879847B2 (en) | Image processing device, method of controlling image processing device, and program for enabling computer to execute same method | |
| EP3241151B1 (en) | An image face processing method and apparatus | |
| CN104202547B (en) | Method, projection interactive approach and its system of target object are extracted in projected picture | |
| CN116664620B (en) | Picture dynamic capturing method and related device based on tracking system | |
| CN111460976A (en) | A data-driven real-time hand motion evaluation method based on RGB video | |
| CN110807427A (en) | Sight tracking method and device, computer equipment and storage medium | |
| CN111586424B (en) | Video live broadcast method and device for realizing multi-dimensional dynamic display of cosmetics | |
| CN115951783A (en) | A computer human-computer interaction method based on gesture recognition | |
| US11676357B2 (en) | Modification of projected structured light based on identified points within captured image | |
| CN118570865B (en) | Face recognition analysis method and system based on artificial intelligence | |
| CN112634125A (en) | Automatic face replacement method based on off-line face database | |
| CN118509566B (en) | Projector control parameter analysis method, device and equipment | |
| CN119847330A (en) | Intelligent control method and system for AR equipment | |
| JP5051671B2 (en) | Information processing apparatus, information processing method, and program | |
| Tian et al. | Robust facial marker tracking based on a synthetic analysis of optical flows and the YOLO network | |
| CN108255298B (en) | Infrared gesture recognition method and device in projection interaction system | |
| KR101357581B1 (en) | A Method of Detecting Human Skin Region Utilizing Depth Information | |
| Karaoglu et al. | Point light source position estimation from RGB-D images by learning surface attributes | |
| CN110880186A (en) | Real-time human hand three-dimensional measurement method based on one-time projection structured light parallel stripe pattern | |
| CN105740848B (en) | A kind of fast human-eye positioning method based on confidence level | |
| JP2005010900A (en) | Color image processing apparatus and method | |
| CN120235946B (en) | Interactive Spatial Target Localization Method and Device for Multiple VR Headsets |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |