CN104202547B - Method for extracting a target object from a projection picture, projection interaction method, and systems thereof - Google Patents
Method for extracting a target object from a projection picture, projection interaction method, and systems thereof
- Publication number: CN104202547B (application CN201410429157.6A)
- Authority: CN (China)
- Prior art keywords: image, projection, pixel, pixel value, target object
- Legal status: Expired - Fee Related (status as listed by the database; not a legal conclusion)
Abstract
The present invention provides a method and a system for extracting a target object from a projection picture. The method includes: acquiring the projection image of a projector and generating, according to a preset position conversion relationship and a preset pixel value conversion relationship, a projection prediction image of the projection image on the projection surface; capturing, through a camera device, the display image currently shown on the projection surface; and comparing the differences between the pixel values of pixels at the same position in the display image and the projection prediction image, extracting the pixels in the display image whose difference exceeds a preset threshold, and thereby obtaining the target object. The invention can accurately extract the target object from the projection picture. The present invention also provides a projection interaction method and system, which can effectively improve the efficiency of interactive processing during projection.
Description
Technical Field
The invention relates to the technical field of projection, in particular to a method for extracting a target object from a projection picture, a system for extracting the target object from the projection picture, a projection interaction method and a projection interaction system.
Background
Although multi-channel human-computer interaction technologies that combine vision, hearing, touch, smell, taste and the like are increasingly applied, the two hands, as important models of action and perception, still play an irreplaceable role in virtual reality systems. At present, the touch screen, as the newest class of computer input device, is the simplest, most convenient and most natural mode of human-computer interaction; it gives multimedia a brand-new appearance and is a highly attractive new kind of interactive multimedia device. With the progress of science and technology, projectors are widely used in settings such as training conferences, classroom teaching and movie theaters; a projector is convenient to use and can turn almost any plane into a display screen.
With the development of science and technology, cameras and projectors have gradually entered everyday life, and projection is now applied in many areas such as teaching and conferences of all kinds. Automatic gesture recognition with a projector and a camera has become a current research focus: better human-computer interaction is achieved through automatic gesture recognition, which makes projection more convenient to use.
In an interactive projection system, recognizing a gesture first requires detecting a target region, namely the hand region, from the projection image. In the prior art, computer-vision methods are used to detect the target region, most commonly by relying on the color and shape of the hand. However, detection based on skin color has two major disadvantages in a projection interaction system: first, when the light emitted by the projector falls on the arm, the color of the arm changes, which makes detection difficult; second, when the projection picture itself contains hands, false detections occur.
In summary, in the field of projection interaction, for a method for detecting a target area in a projection image, the accuracy of the detection result is low, so that the recognition capability of the motion behavior of a target object in the projection image is poor, and the efficiency of projection interaction processing is not high.
Disclosure of Invention
Based on the above, the invention provides a method for extracting a target object from a projection picture, which can accurately extract the target object from the projection picture.
The invention also provides a projection interaction method which can effectively improve the interaction processing efficiency during projection.
A method for extracting a target object from a projection picture comprises the following steps:
acquiring a projection image of a projector, and generating a projection prediction image of the projection image on a projection surface according to a preset position conversion relation and a pixel value conversion relation;
acquiring a display image displayed on the projection surface currently through a camera device;
and comparing the difference value of the pixel values of the pixel points at the same position in the display image and the projection prediction image, and extracting the pixel points of which the difference value is greater than a preset threshold value in the display image to obtain the target object.
A projection interaction method comprises the method for extracting the target object from the projection picture, and further comprises the following steps:
detecting the space-time characteristics of the target object from the display images of the frames acquired by the camera device;
and inputting the space-time characteristics to a preset movement behavior classifier, identifying the movement behavior of the target object, and executing a preset control instruction corresponding to the movement behavior according to the movement behavior.
A system for extracting a target object from a projection screen, comprising:
the generating module is used for acquiring a projection image of the projector and generating a projection prediction image of the projection image on a projection surface according to a preset position conversion relation and a pixel value conversion relation;
the extraction module is used for acquiring a display image displayed on the projection surface currently through the camera device;
and the target object extraction module is used for comparing the difference value of the pixel values of the pixel points at the same position in the display image and the projection prediction image, and extracting the pixel points of which the difference value is greater than a preset threshold value in the display image to obtain the target object.
A projection interaction system comprises the system for extracting the target object from the projection picture, and further comprises:
a spatiotemporal feature detection module, configured to detect the spatiotemporal features of the target object from each frame of the display images acquired by the camera device;
and the identification module is used for inputting the space-time characteristics to a preset movement behavior classifier, identifying the movement behavior of the target object and executing a preset control instruction corresponding to the movement behavior according to the movement behavior.
According to the method and the system for extracting the target object from the projection picture, a projection prediction image of the projection image relative to a projection plane is generated according to a preset position conversion relation and a pixel value conversion relation; acquiring a display image displayed on the projection surface currently by a camera device; comparing the projection prediction image with the display image to obtain pixel points of which the difference value of the pixel values of the pixel points at the same position is greater than a preset threshold value, and extracting the pixel points as target objects; according to the invention, the projected image is predicted, the predicted projected image obtained through prediction is compared with an actual display image, and if the pixel values of pixel points are different, the target object on the projection surface can be judged, so that the target object area can be accurately extracted from the display image, and the detection result has high precision.
The projection interaction method and the projection interaction system can accurately extract the target object region, so that the recognition capability of the motion behavior of the target object in the projection picture is better, and the projection interaction processing efficiency is obviously improved.
Drawings
Fig. 1 is a view of an application scenario of the method for extracting a target object from a projection image according to a first embodiment of the present invention.
Fig. 2 is a flowchart illustrating a method for extracting a target object from a projection screen according to a second embodiment of the present invention.
Fig. 3 is a schematic flow chart of the process of obtaining the pixel position conversion relationship in fig. 2.
Fig. 4 is a schematic diagram of a position detection image.
Fig. 5 is a schematic flow chart of the process of obtaining the pixel value conversion relationship of the pixel point in fig. 2.
Fig. 6 is a schematic flow chart of the projection prediction image generated from the projection image in fig. 2.
Fig. 7 is a flowchart illustrating a projection interaction method according to a third embodiment of the invention.
Fig. 8 is a schematic flow chart of detecting the motion characteristic of the target object in fig. 7.
Fig. 9 is a schematic structural diagram of a system for extracting a target object from a projection screen according to a fourth embodiment of the present invention.
Fig. 10 is a schematic structural diagram of a projection interaction system according to a fifth embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Example I,
As shown in fig. 1, the application scene diagram of the method for extracting a target object from a projection screen according to an embodiment of the present invention includes a computing device 12, a projection apparatus 13, and an image capturing apparatus 11; the projection device 13 and the camera device 11 are respectively connected with the computing equipment 12, and the computing equipment 12 is used for storing projection contents, controlling the projection device 13, receiving data input by the camera device 11 and performing image processing analysis; the projection device 13 is used for projecting the projection content onto the projection surface 14, and the camera device 11 is used for shooting the projection surface 14 to collect video data and image data.
Example II,
Fig. 2 is a schematic flow chart of a method for extracting a target object from a projection screen according to an embodiment of the present invention, and the embodiment is described by applying the method to a computing device, and may include the following steps:
s21, acquiring a projection image of the projector, and generating a projection prediction image of the projection image on a projection surface according to a preset position conversion relation and a pixel value conversion relation;
s22, acquiring a display image displayed on the projection surface currently through a camera device;
s23, comparing the difference value of the pixel values of the pixel points at the same position in the display image and the projection prediction image, and extracting the pixel points of which the difference value is greater than a preset threshold value in the display image to obtain the target object.
In this embodiment, in a system formed by the projection device and the camera device, the projection content is stored in the computing device, and the computing device can obtain the content of the projected picture, so it can estimate and predict the image that will be read by the camera device. If the projection picture is not blocked by a target object, the image read by the camera device is close to the image predicted by the computing device; if some position of the projection picture is blocked by a target object, the image actually read by the camera device differs greatly from the predicted image at the blocked position. Based on this information, the system can accurately judge, without interference from the projected picture itself, whether the projection picture contains the target object to be detected, and can accurately locate the position of the target object in the projection picture.
For step S21, acquiring a projection image of the projector, and generating a projection prediction image of the projection image on the projection surface according to a preset position conversion relationship and a pixel value conversion relationship;
wherein the projected image, i.e., the projected content stored in the computing device; projecting a prediction image, namely an image obtained by prediction estimation of a computing device according to the projection image;
in order to predict an image on a projection surface acquired by a camera device, a conversion corresponding relation between a projection image and a projection prediction image needs to be obtained, wherein two pieces of information, namely a pixel point position conversion relation and a pixel point pixel value conversion relation, are related in the image, and accordingly two conversion relations, namely the pixel point position conversion relation and the pixel point pixel value conversion relation, need to be determined, and the projection prediction image of the projection image can be generated according to the position conversion relation and the pixel value conversion relation.
In order to obtain the projection prediction image of a projection image, the conversion correspondence between the projection image and the projection prediction image needs to be established. The position conversion relationship is obtained by projecting a preset position detection image onto the projection surface, acquiring through the camera device the first image formed on the projection surface, and comparing the differences between pixel points in the position detection image and in the first image. This embodiment takes the process of establishing the position relationship between the pixel points of the two images as an example; as shown in fig. 3, the method may further include the following steps:
s31, controlling the projection device to project a preset position detection image to the projection surface, wherein the position detection image is a preset grid image, and the grid image is divided into a plurality of grids with alternate black and white colors;
s32, acquiring a first image formed by the position detection image on the projection surface through the camera device;
s33, calculating to obtain the position conversion relation according to the proportion of the coordinates of the corner points of the position detection image and the coordinates of the corner points of the first image;
in this embodiment, the projection image is input to the projection device and projected onto the projection surface, and the camera device then captures the projection surface to acquire the current display image. During this process the projection surface may be uneven, and system errors between the projector and the camera device may cause the projection image and the display image to be not completely consistent; for example, if the projection surface is not flat, the display image may be stretched, compressed or distorted. Therefore, the position conversion relationship of the pixel points between the two needs to be determined.
Firstly, controlling the projection device to project a preset position detection image to the projection surface; the position detection image is a preset grid image, and the grid image is divided into a plurality of grids with alternate black and white colors; the grid image is adopted, and because the image is provided with a plurality of grids which are provided with four vertexes, the calibration of the geometric position of the pixel point is facilitated; as shown in fig. 4, a schematic diagram of a position detection image is shown; in the grid image of the embodiment, the number of the squares in the grid image can be determined according to actual needs, which is not limited in the embodiment, and the more the number of the squares is, the higher the accuracy of the processing result is; in the grid image, the colors of adjacent squares are different and are black and white, and the black and white color in this embodiment refers to the colors with gray values of 0 and 255 respectively; the difference between the gray values of the black color and the white color is the largest, which is beneficial to improving the processing precision.
After the position detection image is projected onto the projection surface, the position detection image is collected by a camera device to obtain a first image formed on the projection surface;
a corner point in this embodiment refers to a pixel point where the gray-value gradient changes rapidly; in the position detection image and the first image, because adjacent squares alternate between black and white, the gradient changes fastest at the vertices of the squares, so the corner points are the vertices of the squares. The corner points of the position detection image and of the first image are obtained, and the coordinates of each corner point in the two images are recorded;
the corner points of the position detection images correspond to the corner points of the first image, and if the corner points have coordinate difference in the two images, the position conversion relation can be obtained according to the coordinate proportion of the two images.
Specifically, the position conversion relationship is in fact a mapping of each pixel point of the position detection image into the first image, and it can be represented by a matrix. Let P be any point of the position detection image; the corresponding point P' in the first image is obtained according to P' = K × P, where K is the position conversion matrix. Since K is a 3 × 3 matrix, it has 9 unknown elements; substituting the coordinates of the corner points of the position detection image and of the corresponding corner points of the first image into P' = K × P yields a system of equations, and solving these equations gives the matrix, i.e., the position conversion relationship.
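As an illustration only, the following sketch shows how the position conversion matrix K might be estimated in practice with OpenCV, by detecting the square vertices (corner points) of the projected grid image in the captured first image and solving the over-determined system P' = K × P by least squares. The grid size, the function names and the use of RANSAC are assumptions of this sketch, not part of the patent.

```python
import cv2
import numpy as np

def estimate_position_conversion(position_detection_img, first_img, grid=(7, 7)):
    """Estimate the 3x3 position conversion matrix K such that a point P of the
    projected position-detection image maps to P' = K * P in the first image
    captured by the camera. The inner-corner grid size is an assumption."""
    ok1, corners_proj = cv2.findChessboardCorners(position_detection_img, grid)
    ok2, corners_cam = cv2.findChessboardCorners(first_img, grid)
    if not (ok1 and ok2):
        raise RuntimeError("grid corners not found in one of the images")
    # Each corner correspondence contributes two linear equations in the 9
    # unknown elements of K; findHomography solves them by least squares
    # (RANSAC additionally rejects mis-detected corners).
    K, _ = cv2.findHomography(corners_proj.reshape(-1, 2),
                              corners_cam.reshape(-1, 2), cv2.RANSAC)
    return K
```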
The projected image and the projected predicted image also have a pixel value conversion relation, the pixel value conversion relation is obtained by projecting a preset pixel value detection image to a projection plane, acquiring a second image formed by the projection plane through a camera device and comparing pixel values of pixel points in the pixel value detection image and the second image; the present embodiment takes the process of establishing the pixel value conversion relationship between the two pixels as an example for explanation; as shown in fig. 5, the method may further include the following steps:
s51, controlling the projector to project a preset pixel value detection image to the projection surface, wherein the pixel value detection image comprises three primary color images, a black image and a white image;
s52, acquiring a second image formed by the pixel value detection image on the projection surface through the camera device;
s53, calculating to obtain the pixel value conversion relation according to the proportion of the pixel value of each pixel point of the pixel value detection image to the pixel value of each pixel point of the second image;
wherein, the pixel value conversion relationship may be:
C=A(VP+F);
c is the pixel value of the pixel point M in the second image, A is the reflectivity of the projection surface, V is a color mixing matrix, P is the pixel value of the pixel point M 'of the pixel value detection image, F is the contribution of the ambient light, and the position of the pixel point M is the same as the position of the pixel point M'.
In this embodiment, the projection image is input to the projection device, projected onto the projection surface, and the projection surface is then captured by the camera device to acquire the current display image. During this process, non-uniform exposure of the camera, distortion of the camera lens, the influence of ambient light and so on may cause the same color to exhibit different pixel values at the edge and at the center of the camera image, producing color differences between the projection image and the display image; therefore, the pixel value conversion relationship between the two also needs to be determined.
Firstly, controlling the projection device to project a preset color detection image to the projection surface; the pixel value detection image comprises a three-primary-color image, a black image (RGB color values: R: 0, G: 0, B: 0) and a white image (RGB color values: R: 255, G: 255, B: 255); the three primary color images are a red image (RGB color values: R: 255, G: 0, B: 0), a green image (RGB color values: R: 0, G: 255, B: 0), and a blue image (RGB color values: R: 0, G: 0, B: 255).
After the pixel value detection image is projected onto the projection surface, the pixel value detection image is collected by a camera device to obtain a second image formed on the projection surface; and calculating to obtain the pixel value conversion relation according to the proportion of the pixel value of each pixel point of the pixel value detection image to the pixel value of each pixel point of the second image.
In this embodiment, the pixel value conversion relationship may be a mathematical model, as shown in the following formula:
C=A(VP+F)
wherein,
the vector C represents the pixel value of a pixel point in the second image; the vector P represents the pixel value of the corresponding pixel point in the pixel value detection image; A represents the reflectivity of the projection surface; the vector F represents the contribution of the ambient light; and the matrix V, called the color mixing matrix, describes the interaction between the individual color channels of the system. Using the pixel value detection images described above, A, V and F can be calculated, which yields the pixel value conversion relationship.
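A minimal sketch of how the pixel value conversion relationship could be calibrated is given below, assuming the five detection images (black, white, red, green, blue) have been projected and captured. To keep the per-pixel fit linear, the sketch folds A·V into a single 3 × 3 matrix M and A·F into an offset o, so that C ≈ M·P + o; the variable names and the least-squares formulation are assumptions of this sketch rather than the patent's own procedure.

```python
import numpy as np

# RGB pixel values of the five projected pixel-value detection images.
PROJECTED = np.array([
    [0, 0, 0],        # black
    [255, 255, 255],  # white
    [255, 0, 0],      # red
    [0, 255, 0],      # green
    [0, 0, 255],      # blue
], dtype=np.float64)

def calibrate_pixel_value_model(captured):
    """captured: (5, H, W, 3) array of the second images grabbed by the camera
    for the five detection projections, in the same order as PROJECTED.
    Returns per-pixel M (H, W, 3, 3) and o (H, W, 3) with C ~= M @ P + o,
    where M absorbs A*V and o absorbs A*F."""
    n, h, w, _ = captured.shape
    X = np.hstack([PROJECTED, np.ones((n, 1))])          # rows: [P, 1]
    Y = captured.astype(np.float64).reshape(n, -1)       # (5, H*W*3)
    # Solve X @ B = Y for all pixels and channels at once (least squares).
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)            # (4, H*W*3)
    B = B.reshape(4, h * w, 3)
    M = np.transpose(B[:3], (1, 2, 0)).reshape(h, w, 3, 3)  # per-pixel 3x3
    o = B[3].reshape(h, w, 3)
    return M, o
```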
In a preferred embodiment, when the position conversion relationship and the pixel value conversion relationship are determined, a corresponding projection prediction image can be generated according to a projection image, and the step of generating the projection prediction image of the projection image according to the preset position conversion relationship and the pixel value conversion relationship can include:
s61, converting the position of each pixel point in the projected image according to the position conversion relation to obtain the converted position of the pixel point;
s62, converting the pixel value of each pixel point of the projected image according to a preset pixel value conversion relation to obtain the converted pixel value of each pixel point;
s63, setting corresponding pixel points according to the converted positions and the converted pixel values to obtain the projection prediction image;
in this embodiment, according to the position conversion relationship and the pixel value conversion relationship, the projection prediction image can be obtained quickly through corresponding conversion.
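The sketch below strings the two conversion relationships together to produce the projection prediction image, reusing K from the position-calibration sketch and M, o from the colour-calibration sketch above (all of them assumed helpers rather than parts of the patent).

```python
import cv2
import numpy as np

def predict_projection(projection_img, K, M, o, cam_size):
    """Generate the projection prediction image for one projection image.
    cam_size is (width, height) of the camera image; M and o must have been
    calibrated at the same camera resolution."""
    # S61: position conversion - warp the projection image into camera coordinates.
    warped = cv2.warpPerspective(projection_img, K, cam_size).astype(np.float64)
    # S62: pixel value conversion, applied independently at every pixel.
    predicted = np.einsum('hwij,hwj->hwi', M, warped) + o
    # S63: assemble the prediction image from the converted positions and values.
    return np.clip(predicted, 0, 255).astype(np.uint8)
```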
For step S22, acquiring, by a camera device, a display image currently displayed on the projection surface;
the projection image is projected on the projection surface through the projection device, the projection surface is shot through the camera device, the shot video data is composed of a multi-frame image sequence, and the display image on the projection surface is obtained.
For step S23, comparing the difference between the pixel values of the pixel points at the same position in the display image and the projected prediction image, and extracting the pixel points in the display image for which the difference is greater than a preset threshold value to obtain a target object;
in this embodiment, each pixel point in the projected prediction image may be compared with each pixel point at the same corresponding position in the display image, and the pixel values of the two pixels may be compared; if the projection plane does not have the detection target, the similarity between the projected prediction image and the display image is higher; if a detection target appears on the projection plane and the detection target is projected on the projection picture, the acquired display image on the projection picture is different from the projection prediction image in the detection target area; therefore, by comparing the pixel values of the pixel points at the same position, if the difference value of the pixel values is larger, the pixel point can be used as a target pixel point; specifically, the pixel value difference may be compared with a preset threshold, and the preset threshold may be set according to actual needs, which is not limited in this embodiment; and obtaining all target pixel points in the display image, and obtaining the target object to be detected.
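A minimal sketch of step S23 under these assumptions follows: the captured display image is differenced against the prediction image and thresholded to keep the target pixels. The threshold value of 40 and the morphological clean-up are illustrative choices, not values fixed by the patent.

```python
import cv2
import numpy as np

def extract_target_mask(display_img, predicted_img, threshold=40):
    """Return a binary mask of the pixels whose difference between the display
    image and the projection prediction image exceeds the preset threshold."""
    diff = cv2.absdiff(display_img, predicted_img)
    # Sum the per-channel differences so a change in any colour channel counts.
    diff_total = diff.sum(axis=2) if diff.ndim == 3 else diff.astype(np.int32)
    mask = (diff_total > threshold).astype(np.uint8) * 255
    # Remove isolated false detections with a small morphological opening.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```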
In the method for extracting a target object from a projection picture according to this embodiment, a projection prediction image of the projection image relative to a projection plane is generated according to a preset position conversion relationship and a pixel value conversion relationship; acquiring a display image displayed on the projection surface currently by a camera device; comparing the projection prediction image with the display image to obtain pixel points of which the difference value of the pixel values of the pixel points at the same position is greater than a preset threshold value, and extracting the pixel points as target objects; according to the embodiment, the projected image is predicted, the predicted projected image obtained through prediction is compared with an actual display image, and if pixel values of pixel points are different, the target object on the projection surface can be judged, so that the target object area can be accurately extracted from the display image, and the detection result is high in precision.
Example III,
As shown in fig. 7, the present invention further provides a projection interaction method, which is described in this embodiment by taking an example that the projection interaction method is applied to a computing device, and the method includes a method for extracting a target object from a projection picture in the second embodiment, including the following steps:
s21, acquiring a projection image of the projector, and generating a projection prediction image of the projection image on a projection surface according to a preset position conversion relation and a pixel value conversion relation;
s22, acquiring a display image displayed on the projection surface currently through a camera device;
s23, comparing the difference value of the pixel values of the pixel points at the same position in the display image and the projected prediction image, and extracting the pixel points of which the difference value is greater than a preset threshold value in the display image to obtain a target object;
s74, detecting the space-time characteristics of the target object from the display images of the frames collected by the camera device;
s75, inputting the space-time characteristics to a preset movement behavior classifier, identifying the movement behavior of the target object, and executing a preset control instruction corresponding to the movement behavior according to the movement behavior;
the implementation of steps S21-S23 in this embodiment can be as described in embodiment two, and will not be described herein again.
Detecting a spatiotemporal feature of the target object from the display image of each frame acquired by the camera in step S74;
if a target object appears on the projection picture and the target object has a motion behavior, the motion characteristic of the target object can be identified from the continuous multi-frame display images in the video data because the target object can be extracted from the display images.
In a preferred embodiment, as shown in fig. 8, the step of detecting the motion characteristic of the target object from the display image of each frame in the video data comprises:
s81, extracting a SURF characteristic point set and an optical flow characteristic point set from the target object of the display image with N continuous frames to obtain the interest point of the target object; the interest point is the intersection of the SURF feature point set and the optical flow feature point set, and N is a preset frame number detection unit;
s82, constructing a plurality of Delaunay triangles of the interest point by utilizing a Delaunay triangle rule;
s83, performing weighted calculation on the SURF feature vector and the optical flow feature of each Delaunay triangle according to a preset weighting coefficient to obtain the space-time feature;
the target object is composed of target pixel points in a display image, and the principle of extracting SURF (speeded up robust features) features in the image is as follows:
1) extracting characteristic points:
Assuming a function f(x, y), the Hessian matrix H is composed of the partial derivatives of the function. First, the Hessian matrix of a pixel point I(x, y) in the image is defined as:
H(f(x, y)) = [∂²f/∂x², ∂²f/∂x∂y; ∂²f/∂x∂y, ∂²f/∂y²]
Therefore, each pixel point has a Hessian matrix, and the discriminant of the Hessian matrix is:
det(H) = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²
The value of this discriminant is used to classify all pixel points: according to whether the value is positive or negative, a pixel point is judged to be an extreme point or not.
Then a filter is selected; this embodiment uses the second-order standard Gaussian function. The second-order partial derivatives are computed by convolution with specific kernels, so the three matrix elements Lxx, Lxy and Lyy of the H matrix can be calculated, giving:
H = [Lxx, Lxy; Lxy, Lyy]
with the value of its determinant:
det(H) = Lxx·Lyy − Lxy²
then, detecting a target pixel point, wherein the following two steps are mainly adopted:
A. Gaussian filtering: templates generated with different values of σ (i.e., filter templates of different sizes) are used to perform convolution operations on the target object region of the image;
B. searching corresponding peak values in the position space and the scale space of the target object area in the image.
In this embodiment, the concept of an image stack is introduced: a group of images of the same size is arranged along the Z-axis from small to large according to the sizes of the Gaussian second-derivative filter templates, so that the neighborhood of each pixel point in a middle layer is 3 × 3 × 3 (including the layer above and the layer below). If the feature value of a pixel point is the maximum among these 27 points, the point is taken as a SURF feature point.
2) Matching of the feature points:
A. Finding the feature vectors: to match feature points, their feature vectors are first extracted, and the similarity of two vectors is used to judge whether two points correspond across the two images.
Calculating the main direction of the feature points: in this embodiment, a template with a side length of 4 may be adopted, and for any target pixel point, the template is applied to the point, and then a difference between black and white pixel values in the Haar feature is calculated as a pixel value of the point, so as to obtain a feature vector harrx in the horizontal direction and a feature vector harry in the vertical direction.
To ensure rotation invariance, SURF does not compute a gradient histogram but instead counts Haar features in the neighborhood of the feature point: taking the feature point as the center, within a radius of 6 scale units, the responses of all points inside a 60-degree sector are summed in the horizontal and vertical directions and given different Gaussian weights. Concretely, a circular region with a radius of 6 pixels is built around the feature point, yielding 109 pixel points in the region; the direction of the vector of each of these 109 points is calculated, the angles are assigned by the nearest-neighbor principle into 6 bins (60, 120, ..., 300, 360 degrees), and the harrx and harry values of the pixel points falling into the same bin are added up. To reflect the larger influence of nearby pixel points, the Gaussian weight coefficient is also taken into account. The largest harrx and largest harry obtained in this way constitute the principal direction vector.
B. Constructing the SURF feature point descriptor: in SURF, a square box is also taken around the feature point, with a side length of 8 scale units. The gradient magnitude and direction of 4 × 4 pixel blocks are calculated (the harrx and harry computed in step A can be used), and the region with side length 8 is divided into four regions T1, T2, T3 and T4, each of which includes 4 smaller blocks consisting of 4 pixels. harrx and harry are the direction vectors obtained by subtracting the gray value of the black part of the Haar template from the gray value of the white part. A total of 16 such vectors are obtained; their direction angles are merged into 8 directions (up, down, left, right and the four diagonals), and the values of these 8 directions are calculated in T1, T2, T3 and T4, so the feature descriptor is composed of 32 feature values in total.
C. Matching of the feature points: the simplest criterion is used, taking the maximum inner product of the two vectors as the best match; a threshold is set, and two feature points are considered matched only when this maximum value is larger than the threshold.
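For illustration, a short sketch of SURF detection and matching with OpenCV follows. SURF lives in the non-free xfeatures2d contrib module, so not every OpenCV build includes it; the Hessian threshold and the ratio test are assumptions of this sketch (the text above describes a simpler inner-product threshold).

```python
import cv2

def detect_surf_points(frame, mask=None, hessian_threshold=400):
    """Detect SURF feature points inside the target-object region of one frame
    and compute their 64-dimensional descriptors."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints, descriptors = surf.detectAndCompute(frame, mask)
    return keypoints, descriptors

def match_surf_points(desc1, desc2, ratio=0.75):
    """Match descriptors between two frames; Lowe's ratio test is used here in
    place of the plain inner-product threshold as an implementation choice."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc1, desc2, k=2)
    return [m for m, n in matches if m.distance < ratio * n.distance]
```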
The principle of extracting optical flow feature points in an image is as follows:
the optical flow refers to the instantaneous speed of the pixel motion of a spatial moving object on an observation imaging plane, and is a method for finding the corresponding relation between the previous frame and the current frame by using the change of the pixels in an image sequence on a time domain and the correlation between adjacent frames so as to calculate the motion information of the object between the adjacent frames.
The optical flow method infers the moving speed and direction of an object by detecting how the intensity of image pixel points changes with time. Each time instant has a two- or multi-dimensional vector field; for example, the vector at (x, y, t) represents the instantaneous velocity of that coordinate at time t. Let I(x, y, t) be the intensity at position (x, y) and time t; if x and y increase by Δx and Δy within a very short time Δt, a first-order Taylor expansion gives:
I(x+Δx, y+Δy, t+Δt) ≈ I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt
Assuming that the object located at point (x, y) at time t is located at point (x+Δx, y+Δy) at time t+Δt with unchanged brightness, then:
I(x+Δx,y+Δy,t+Δt)=I(x,y,t)
Thus
(∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt = 0
Suppose u = dx/dt and v = dy/dt, and write Ix = ∂I/∂x, Iy = ∂I/∂y, It = ∂I/∂t. Then
Ix·u + Iy·v = −It
which is the optical flow constraint equation. Assuming that (u, v) is constant within a small local area, the flow is obtained by minimizing, over that area, the error:
E(u, v) = Σ(Ix·u + Iy·v + It)²
namely:
(u, v) = argmin E(u, v)
The purpose of the optical flow calculation is to find the (u, v) that makes E minimal; this minimizer gives the direction and magnitude of the optical flow.
The SURF method is adopted in the embodiment to detect SURF points in the target object area in each frame of display image, and SURF feature description is carried out on the extracted SURF feature points to obtain SURF feature vectors of the SURF feature points; in the present embodiment, N consecutive frames of display images in the video data are used as a detection unit, and may be, for example, 4 frames, 5 frames, or 6 frames, which may be set according to actual needs.
Meanwhile, optical flow feature points are obtained through an optical flow method, optical flow features are calculated from the 1 st frame to the N th frame in the N frames of images, and optical flow feature points and optical flow feature vectors of the optical flow feature points are obtained.
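As a sketch of this step, the interest points can be tracked through the N frames with pyramidal Lucas-Kanade optical flow; cv2.calcOpticalFlowPyrLK and the handling of lost points are assumptions of this sketch rather than choices stated in the patent.

```python
import cv2
import numpy as np

def track_points(frames, pts0):
    """Track points through N consecutive grayscale frames and return the
    per-interval displacements.
    frames: list of N grayscale images; pts0: (n, 1, 2) float32 start points."""
    tracks = [pts0]
    displacements = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, tracks[-1], None)
        lost = status.ravel() == 0
        nxt[lost] = tracks[-1][lost]          # keep lost points in place
        displacements.append((nxt - tracks[-1]).reshape(-1, 2))
        tracks.append(nxt)
    return np.stack(displacements)            # shape (N-1, n_points, 2)
```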
Then obtaining an interest point, wherein the interest point is obtained by screening the obtained SURF point and optical flow characteristic point, namely the characteristic point of both the SURF point and the optical flow characteristic point is the interest point, namely the intersection of the SURF point set and the optical flow characteristic point set;
the Delaunay triangle rules are used to construct the Delaunay triangles of the interest point, and the Delaunay triangle rules are used to constrain the obtained interest point. Thus, spatio-temporal features can be extracted from a set of feature points rather than an independent feature point. In each Delaunay triangle, a triangular region can be formed at every three points, and the spatio-temporal features are extracted by taking the triangular region as a unit.
Optical flow features are extracted from each of the interest points in the N-1 frames of the N-frame display image, and then each of the interest points is tracked based on the optical flow features.
As local motion features, motion feature points are first estimated from the matrix obtained by the optical flow method, and a 5-dimensional feature vector is then used to represent a motion interest point in each video segment. The 5-dimensional feature vector consists of the components x+, x−, y+, y− and x0 (no optical flow). The motion features of each video segment are normalized so that the sum of all components is approximately 1. The 5-dimensional feature vectors obtained from all (N−1) frame intervals are combined into one motion vector, so the dimension of the motion feature is (N−1) × 5.
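A sketch of the 5-dimensional motion vector for one frame interval is given below; the "no motion" cut-off eps is an assumed parameter.

```python
import numpy as np

def motion_histogram(displacements, eps=0.5):
    """Quantise the optical-flow displacements of one frame interval into the
    vector (x+, x-, y+, y-, x0) and normalise it so the components sum to 1."""
    dx, dy = displacements[:, 0], displacements[:, 1]
    hist = np.array([
        np.sum(dx > eps),                                   # x+
        np.sum(dx < -eps),                                  # x-
        np.sum(dy > eps),                                   # y+
        np.sum(dy < -eps),                                  # y-
        np.sum((np.abs(dx) <= eps) & (np.abs(dy) <= eps)),  # x0: no optical flow
    ], dtype=np.float64)
    total = hist.sum()
    return hist / total if total > 0 else hist
```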
In the last step, a space-time feature vector can be obtained by integrating the SURF feature and the motion feature of each Delaunay triangle after normalization.
SURF descriptors of the three feature points of the Delaunay triangle are adopted as local texture features. The SURF descriptors of the three points are combined in descending order of their absolute values. Since the SURF descriptor is 64-dimensional, the dimension of the texture feature is 64 × 3 = 192.
Then, the (N−1) × 5-dimensional motion vectors are likewise combined into one local motion feature in descending order of their absolute values.
Since all points are subjected to Delaunay triangulation, three points in each triangle are taken as a whole, and thus the feature vector is 192-dimensional, i.e., 3 × 64;
The SURF features and the motion feature vectors are integrated and weighted to obtain a spatio-temporal feature vector. The weighting coefficient w may be set according to actual needs, for example determined through experiments. The dimension of the spatio-temporal feature vector is 192 + (N−1) × 3 × 5, where "× 3" denotes merging the motion vectors of the 3 points of the Delaunay triangle.
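Putting the two parts together, a sketch of the per-triangle spatio-temporal vector might look as follows; the weighting scheme (w for the texture part, 1 − w for the motion part) and the default w = 0.5 are assumptions, since the patent only states that w is set according to actual needs.

```python
import numpy as np

def spatio_temporal_vector(tri_descriptors, tri_motion, w=0.5):
    """tri_descriptors: (3, 64) SURF descriptors of the triangle's vertices.
    tri_motion: (N-1, 3, 5) motion histograms of the 3 vertices per interval.
    Returns the weighted 192 + (N-1)*3*5 dimensional spatio-temporal feature."""
    # Texture part: concatenate the three descriptors in descending order of norm.
    order = np.argsort(-np.linalg.norm(np.asarray(tri_descriptors), axis=1))
    texture = np.asarray(tri_descriptors)[order].ravel()   # 192-dim
    motion = np.asarray(tri_motion).ravel()                 # (N-1)*3*5-dim
    return np.concatenate([w * texture, (1.0 - w) * motion])
```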
The present embodiment re-subdivides the extracted interest points using Delaunay triangulation. Therefore, the subsequent space-time characteristics are extracted from each triangular region instead of each point, so that the information of the space-time characteristics is more robust and richer, and the accuracy of motion behavior identification is improved.
For step S75, inputting the spatio-temporal features into a preset motion behavior classifier, identifying a motion behavior of the target object, and executing a preset control instruction corresponding to the motion behavior according to the motion behavior;
the motion behavior classifier is provided with a plurality of preset space-time characteristics and corresponding motion behaviors, and after the space-time characteristics of the target object are extracted, the space-time characteristics are input into the classifier to be matched, so that the motion behaviors are recognized, and then corresponding control instructions are executed according to the motion behaviors.
In the embodiment, a Support Vector Machine (SVM) method is adopted to perform classification and identification of space-time features:
the present embodiment uses radial basis functions, i.e. gaussian kernel functions:
the principle is as follows:
Assume a set of positive and negative training samples, labeled {xi, yi}, i = 1, ..., l, with yi ∈ {−1, 1} and xi ∈ Rd. Suppose there is a hyperplane H: w·x + b = 0 that can correctly separate these samples, together with two hyperplanes H1 and H2 parallel to H:
w·x+b=1
w·x+b=-1
the positive and negative samples nearest to H are made to fall exactly on H1 and H2, and such samples are support vectors. Then all other training samples will be outside of H1 and H2, i.e., the following constraints are satisfied:
w·xi+b≥1,yi=1
w·xi+b≤-1,yi=-1
These two constraints can be written as one uniform inequality:
yi(w·xi+b)-1≥0
the SVM algorithm mainly comprises the following steps:
A. given a training set:
T={(x1,y1),(x2,y2),...,(xn,yn)}
B. Solve the quadratic programming problem:
max W(α) = Σi αi − (1/2) Σi Σj αi αj yi yj (xi·xj)
subject to Σi αi yi = 0 and αi ≥ 0, i = 1, ..., n;
the solution obtained is α* = (α1*, ..., αn*).
C. Calculate the parameter w* = Σi αi* yi xi, select a positive component αj* of α*, and compute b* = yj − Σi αi* yi (xi·xj).
D. Construct the decision boundary: g(x) = (w*·x) + b* = 0; the decision function is thus determined:
f(x)=sgn(g(x))
In this embodiment, data from several scenes are selected according to the motion behavior to be identified, together with some scenes that do not contain the behavior. After the low-level motion features and SURF features are extracted from each sample video, they are clustered to generate video vectors; these operations are performed on all video samples in turn, so that each video sample yields a group of video vectors, and SVM training with the Gaussian kernel function is then performed on all samples.
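For completeness, a minimal sketch of the classifier stage with scikit-learn is shown below; the library, the value of C and the "scale" setting for gamma are assumptions, and in practice these hyper-parameters would be tuned on the training samples.

```python
import numpy as np
from sklearn import svm

def train_behaviour_classifier(features, labels):
    """Train the motion-behaviour classifier with a Gaussian (RBF) kernel SVM,
    K(x, x') = exp(-gamma * ||x - x'||^2), on spatio-temporal feature vectors."""
    clf = svm.SVC(kernel='rbf', C=10.0, gamma='scale')
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf

def recognise_behaviour(clf, feature_vector):
    """Predict the motion behaviour of one spatio-temporal feature; the
    interaction system then executes the preset control instruction mapped
    to that behaviour."""
    return clf.predict(np.asarray(feature_vector).reshape(1, -1))[0]
```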
The method comprises the steps of extracting features of input video data, generating space-time features, judging certain movement behaviors through a classifier, and executing corresponding control instructions according to the recognized movement behaviors, so that projection interaction is realized.
The projection interaction method can accurately extract the target object region and perform feature extraction on the target region, and the SURF feature points and the optical flow detection method are adopted, so that the motion behavior of the target object can be identified more simply, quickly and effectively, the identification capability of the motion behavior of the target object in a projection picture is better, and the projection interaction processing efficiency is obviously improved.
Example IV,
Fig. 9 is a schematic structural diagram of a system for extracting a target object from a projection screen according to an embodiment of the present invention, including:
the generating module 91 is configured to acquire a projection image of the projector, and generate a projection prediction image of the projection image on a projection surface according to a preset position conversion relationship and a pixel value conversion relationship;
an extracting module 92, configured to acquire, by the camera device, a display image currently displayed on the projection surface;
and the target object extraction module 93 is configured to compare a difference between pixel values of pixels at the same position in the display image and the projection prediction image, and extract a pixel in the display image where the difference is greater than a preset threshold, so as to obtain a target object.
In a preferred embodiment, the method further comprises:
the first control module is used for controlling the projection device to project a preset position detection image to the projection surface, wherein the position detection image is a preset grid image, and the grid image is divided into a plurality of grids with alternate black and white colors;
the first acquisition module is used for acquiring a first image formed by the position detection image on the projection surface through the camera device;
and the position conversion relation calculation module is used for calculating to obtain the position conversion relation according to the proportion of the coordinates of the corner points of the position detection image and the coordinates of the corner points of the first image.
In a preferred embodiment, the method further comprises:
the second control module is used for controlling the projection device to project a preset pixel value detection image to the projection surface, wherein the pixel value detection image comprises three primary color images, a black image and a white image;
the second acquisition module is used for acquiring a second image formed by the pixel value detection image on the projection surface through the camera device;
the pixel value conversion relationship calculation module is used for calculating the pixel value conversion relationship according to the ratio of the pixel value of each pixel point of the pixel value detection image to the pixel value of each pixel point of the second image;
wherein, the pixel value conversion relationship may be:
C=A(VP+F);
c is the pixel value of the pixel point M in the second image, A is the reflectivity of the projection surface, V is a color mixing matrix, P is the pixel value of the pixel point M 'of the pixel value detection image, F is the contribution of the ambient light, and the position of the pixel point M is the same as the position of the pixel point M'.
In a preferred embodiment, the generating module 91 is further configured to: converting the position of each pixel point in the projected image according to the position conversion relation to obtain the converted position of the pixel point; converting the pixel value of each pixel point of the projected image according to a preset pixel value conversion relation to obtain the converted pixel value of each pixel point; setting corresponding pixel points according to the converted position and the converted pixel value to obtain the projection prediction image;
in this embodiment, according to the position conversion relationship and the pixel value conversion relationship, the projection prediction image can be obtained quickly through corresponding conversion.
In the system for extracting the target object from the projection picture, a projection prediction image of the projection image relative to a projection plane is generated according to a preset position conversion relation and a pixel value conversion relation; acquiring a display image displayed on the projection surface currently by a camera device; comparing the projection prediction image with the display image to obtain pixel points of which the difference value of the pixel values of the pixel points at the same position is greater than a preset threshold value, and extracting the pixel points as target objects; according to the invention, the projected image is predicted, the predicted projected image obtained through prediction is compared with an actual display image, and if the pixel values of pixel points are different, the target object on the projection surface can be judged, so that the target object area can be accurately extracted from the display image, and the detection result has high precision.
Example V,
Fig. 10 is a schematic structural diagram of a projection interaction system in a fifth embodiment of the present invention, including a system for extracting a target object from a projection screen according to a fourth embodiment, including the following modules:
the generating module 91 is configured to acquire a projection image of the projector, and generate a projection prediction image of the projection image on a projection surface according to a preset position conversion relationship and a pixel value conversion relationship;
an extracting module 92, configured to acquire, by the camera device, a display image currently displayed on the projection surface;
a target object extraction module 93, configured to compare a difference between pixel values of pixels at the same position in the display image and the projection prediction image, and extract a pixel in the display image where the difference is greater than a preset threshold, so as to obtain a target object;
a spatiotemporal feature detection module 101, configured to detect spatiotemporal features of the target object from the display images of the frames acquired by the camera;
in a preferred embodiment, the spatio-temporal feature detection module 101 is further configured to extract a SURF feature point set and an optical flow feature point set from the target object of N consecutive display images to obtain a point of interest of the target object; the interest point is the intersection of the SURF feature point set and the optical flow feature point set, and N is a preset frame number detection unit; constructing a plurality of Delaunay triangles for the point of interest using a Delaunay triangle rule; and performing weighted calculation on the SURF feature vector and the optical flow feature of each Delaunay triangle according to a preset weighting coefficient to obtain the space-time feature.
The identification module 102 is configured to input the spatio-temporal features into a preset motion behavior classifier, identify a motion behavior of the target object, and execute a preset control instruction corresponding to the motion behavior according to the motion behavior;
the projection interaction system can accurately extract the target object region, performs feature extraction on the target region, and can identify the motion behavior of the target object more simply, quickly and effectively by adopting the SURF feature points and the optical flow detection method, so that the identification capability of the motion behavior of the target object in the projection picture is better, and the projection interaction processing efficiency is obviously improved.
The invention relates to a method and a system for extracting a target object from a projection picture, which are used for generating a projection prediction image of a projection image relative to a projection plane according to a preset position conversion relation and a pixel value conversion relation; acquiring a display image displayed on the projection surface currently by a camera device; comparing the projection prediction image with the display image to obtain pixel points of which the difference value of the pixel values of the pixel points at the same position is greater than a preset threshold value, and extracting the pixel points as target objects; according to the invention, the projected image is predicted, the predicted projected image obtained through prediction is compared with an actual display image, and if the pixel values of pixel points are different, the target object on the projection surface can be judged, so that the target object area can be accurately extracted from the display image, and the detection result has high precision.
The projection interaction method and the projection interaction system can accurately extract the target object region, so that the recognition capability for the motion behavior of the target object in the projection picture is better, and the projection interaction processing efficiency is obviously improved.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (8)
1. A method for extracting a target object from a projection picture is characterized by comprising the following steps:
acquiring a projection image of a projector, and generating a projection prediction image of the projection image on a projection surface according to a preset position conversion relation and a pixel value conversion relation;
acquiring a display image displayed on the projection surface currently through a camera device;
comparing the difference value of the pixel values of the pixel points at the same position in the display image and the projection prediction image, and extracting the pixel points of which the difference value is greater than a preset threshold value in the display image to obtain a target object;
also comprises the following steps:
controlling the projection device to project a preset pixel value detection image to the projection surface, wherein the pixel value detection image comprises three primary color images, a black image and a white image;
acquiring a second image formed by the pixel value detection image on the projection surface through the camera device;
calculating to obtain the pixel value conversion relation according to the proportion of the pixel value of each pixel point of the pixel value detection image to the pixel value of each pixel point of the second image;
the mathematical model of the pixel value conversion relation is as follows:
C=A(VP+F);
C is the pixel value of the pixel point M in the second image, A is the reflectivity of the projection surface, V is a color mixing matrix, P is the pixel value of the pixel point M' of the pixel value detection image, F is the contribution of the ambient light, and the position of the pixel point M is the same as the position of the pixel point M'.
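By way of illustration only, the sketch below shows one way the per-pixel model C = A(VP + F) could be fitted from the captured detection images, assuming the three primary-color images are projected at full intensity (255) and that the combined matrix M = A·V and offset O = A·F are estimated jointly (A, V and F never need to be separated to predict C); the captured white image can serve as a consistency check. The function names are illustrative, not taken from the patent.

```python
import numpy as np

def fit_color_model(cam_black, cam_red, cam_green, cam_blue):
    """Estimate M = A*V (3x3 per pixel) and O = A*F (3-vector per pixel).

    cam_*: HxWx3 arrays captured while projecting black and the three
           primaries at full intensity (255).
    Returns (M, O) with M of shape (H, W, 3, 3) and O of shape (H, W, 3).
    """
    O = cam_black.astype(np.float64)                      # black frame: C = A*F
    # Each projected primary isolates one column of A*V (scaled by 255).
    col_r = (cam_red.astype(np.float64) - O) / 255.0
    col_g = (cam_green.astype(np.float64) - O) / 255.0
    col_b = (cam_blue.astype(np.float64) - O) / 255.0
    M = np.stack([col_r, col_g, col_b], axis=-1)          # (H, W, 3, 3)
    return M, O

def predict_pixels(M, O, projected):
    """Apply C = M*P + O to a frame already expressed at camera resolution."""
    P = projected.astype(np.float64)
    C = np.einsum('hwij,hwj->hwi', M, P) + O
    return np.clip(C, 0, 255).astype(np.uint8)
```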
2. The method for extracting the target object from the projection screen according to claim 1, further comprising the steps of:
controlling the projection device to project a preset position detection image to the projection surface, wherein the position detection image is a preset grid image, and the grid image is divided into a plurality of cells with alternating black and white colors;
acquiring a first image formed by the position detection image on the projection surface through the camera device;
and calculating to obtain the position conversion relation according to the proportion of the coordinates of the corner points of the position detection image and the coordinates of the corner points of the first image.
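The claim only requires that the position conversion relation be computed from the proportion between the corner coordinates of the two images; a common concrete realization, sketched below under that assumption, is a planar homography estimated from detected grid corners with OpenCV. The pattern size and function name are illustrative.

```python
import cv2

def estimate_position_conversion(projected_grid, captured_grid, pattern_size=(9, 6)):
    """Estimate a 3x3 homography mapping projector pixels to camera pixels.

    projected_grid: the grid image as sent to the projector (grayscale).
    captured_grid:  the same grid as seen by the camera (grayscale).
    pattern_size:   inner-corner count of the black/white grid (illustrative).
    """
    ok_p, corners_p = cv2.findChessboardCorners(projected_grid, pattern_size)
    ok_c, corners_c = cv2.findChessboardCorners(captured_grid, pattern_size)
    if not (ok_p and ok_c):
        raise RuntimeError("grid corners not found in one of the images")

    # Homography from projector coordinates to projection-surface (camera) coordinates.
    H, _ = cv2.findHomography(corners_p, corners_c, cv2.RANSAC)
    return H
```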
3. The method for extracting a target object from a projection screen according to claim 2, wherein the step of generating a projection prediction image of the projection image based on a preset position conversion relationship and a pixel value conversion relationship comprises:
converting the position of each pixel point in the projected image according to the position conversion relation to obtain the converted position of the pixel point;
converting the pixel value of each pixel point of the projected image according to a preset pixel value conversion relation to obtain the converted pixel value of each pixel point;
and setting corresponding pixel points according to the converted position and the converted pixel value to obtain the projection prediction image.
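A minimal sketch of this prediction step, reusing the homography H and the per-pixel color model (M, O) from the sketches above; the function name and the choice of warping at camera resolution are assumptions, not requirements of the claim.

```python
import cv2
import numpy as np

def build_projection_prediction(projected, H, M, O, cam_shape):
    """Predict how the projected frame should appear in the camera image.

    projected: frame sent to the projector (HxWx3, uint8).
    H:         3x3 projector-to-camera homography (position conversion relation).
    M, O:      per-pixel color model, M of shape (Hc, Wc, 3, 3), O of shape (Hc, Wc, 3).
    cam_shape: (Hc, Wc) of the camera image.
    """
    hc, wc = cam_shape
    # Position conversion: move every projector pixel to where the camera sees it.
    warped = cv2.warpPerspective(projected, H, (wc, hc)).astype(np.float64)
    # Pixel value conversion: apply C = M*P + O at every camera pixel.
    predicted = np.einsum('hwij,hwj->hwi', M, warped) + O
    return np.clip(predicted, 0, 255).astype(np.uint8)
```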
4. A projection interaction method, comprising the method for extracting the target object from the projection picture according to any one of claims 1 to 3, further comprising the following steps:
detecting the space-time characteristics of the target object from the frames of display images acquired by the camera device;
and inputting the space-time characteristics to a preset movement behavior classifier, identifying the movement behavior of the target object, and executing a preset control instruction corresponding to the movement behavior according to the movement behavior.
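The claim leaves the classifier open; the description mentions a support vector machine, so the sketch below uses scikit-learn's SVC under that assumption, with the behavior labels and the command table being purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative mapping from recognized motion behaviors to control instructions.
COMMANDS = {"swipe_left": "previous_page", "swipe_right": "next_page", "tap": "select"}

def train_behaviour_classifier(feature_vectors, labels):
    """Train the movement behavior classifier on spatio-temporal feature vectors."""
    clf = SVC(kernel="rbf")
    clf.fit(np.asarray(feature_vectors), labels)
    return clf

def recognise_and_dispatch(clf, feature_vector):
    """Classify one feature vector and return the associated control instruction."""
    behaviour = clf.predict(np.asarray(feature_vector, dtype=float).reshape(1, -1))[0]
    return COMMANDS.get(behaviour)
```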
5. The projection interaction method as claimed in claim 4, wherein the step of detecting the space-time characteristics of the target object from the frames of display images acquired by the camera device comprises:
extracting a SURF feature point set and an optical flow feature point set from the target object in N consecutive display images to obtain the points of interest of the target object, the points of interest being the intersection of the SURF feature point set and the optical flow feature point set, where N is a preset frame-number detection unit;
constructing a plurality of Delaunay triangles over the points of interest using the Delaunay triangulation rule;
and performing a weighted calculation on the SURF feature vector and the optical flow feature of each Delaunay triangle according to a preset weighting coefficient to obtain the space-time characteristics.
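A rough sketch of this pipeline under several stated assumptions: SURF requires the opencv-contrib build, the intersection of the SURF and optical-flow point sets is approximated by keeping SURF keypoints whose dense Farneback flow exceeds a small magnitude threshold, and the weighting between the SURF and flow parts of each triangle feature is a free parameter, since the claim fixes none of these details.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

def spatiotemporal_features(frames_gray, mask, alpha=0.5, flow_thresh=1.0):
    """Sketch of the claim-5 feature pipeline over N consecutive grayscale frames.

    frames_gray: list of N grayscale frames (uint8) covering the detection unit.
    mask:        binary mask of the target object region (uint8, 255 = target).
    alpha:       illustrative weighting coefficient between SURF and flow parts.
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # opencv-contrib build
    kps, descs = surf.detectAndCompute(frames_gray[-1], mask)
    if descs is None:
        return None

    # Dense optical flow between the last two frames of the detection unit.
    flow = cv2.calcOpticalFlowFarneback(frames_gray[-2], frames_gray[-1], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]

    # Approximate intersection: keep SURF keypoints that also exhibit motion.
    pts, pt_descs, pt_flows = [], [], []
    for kp, d in zip(kps, descs):
        x = min(max(int(round(kp.pt[0])), 0), w - 1)
        y = min(max(int(round(kp.pt[1])), 0), h - 1)
        fx, fy = flow[y, x]
        if np.hypot(fx, fy) >= flow_thresh:
            pts.append(kp.pt)
            pt_descs.append(d)
            pt_flows.append((fx, fy))
    if len(pts) < 3:
        return None

    # Delaunay triangulation over the retained interest points.
    tri = Delaunay(np.asarray(pts))
    features = []
    for simplex in tri.simplices:
        surf_part = np.mean([pt_descs[i] for i in simplex], axis=0)
        flow_part = np.mean([pt_flows[i] for i in simplex], axis=0)
        features.append(np.concatenate([alpha * surf_part, (1 - alpha) * flow_part]))
    return np.asarray(features)
```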
6. A system for extracting a target object from a projection picture, comprising:
the generating module is used for acquiring a projection image of the projector and generating a projection prediction image of the projection image on a projection surface according to a preset position conversion relation and a pixel value conversion relation;
the acquisition module is used for acquiring the display image currently displayed on the projection surface through a camera device;
the target object extraction module is used for comparing the difference value of the pixel values of the pixel points at the same position in the display image and the projection prediction image, and extracting the pixel points of which the difference value is greater than a preset threshold value in the display image to obtain a target object;
further comprising:
the second control module is used for controlling the projection device to project a preset pixel value detection image to the projection surface, wherein the pixel value detection image comprises three primary color images, a black image and a white image;
the second acquisition module is used for acquiring a second image formed by the pixel value detection image on the projection surface through the camera device;
the pixel value conversion relation calculation module is used for calculating the pixel value conversion relation according to the proportion of the pixel value of each pixel point of the pixel value detection image to the pixel value of each pixel point of the second image;
the mathematical model of the pixel value conversion relation is as follows:
C=A(VP+F);
C is the pixel value of the pixel point M in the second image, A is the reflectivity of the projection surface, V is a color mixing matrix, P is the pixel value of the pixel point M' of the pixel value detection image, F is the contribution of the ambient light, and the position of the pixel point M is the same as the position of the pixel point M'.
7. The system for extracting target object from projection picture according to claim 6, further comprising:
the first control module is used for controlling the projection device to project a preset position detection image to the projection surface, wherein the position detection image is a preset grid image, and the grid image is divided into a plurality of cells with alternating black and white colors;
the first acquisition module is used for acquiring a first image formed by the position detection image on the projection surface through the camera device;
and the position conversion relation calculation module is used for calculating to obtain the position conversion relation according to the proportion of the coordinates of the corner points of the position detection image and the coordinates of the corner points of the first image.
8. A projection interactive system, comprising the system for extracting the target object from the projection picture according to any one of claims 6 to 7, further comprising:
the space-time characteristic detection module is used for detecting the space-time characteristics of the target object from the frames of display images acquired by the camera device;
and the identification module is used for inputting the space-time characteristics to a preset movement behavior classifier, identifying the movement behavior of the target object and executing a preset control instruction corresponding to the movement behavior according to the movement behavior.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410429157.6A CN104202547B (en) | 2014-08-27 | 2014-08-27 | Method, projection interactive approach and its system of target object are extracted in projected picture |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410429157.6A CN104202547B (en) | 2014-08-27 | 2014-08-27 | Method, projection interactive approach and its system of target object are extracted in projected picture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104202547A CN104202547A (en) | 2014-12-10 |
CN104202547B true CN104202547B (en) | 2017-10-10 |
Family
ID=52087768
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410429157.6A Expired - Fee Related CN104202547B (en) | 2014-08-27 | 2014-08-27 | Method, projection interactive approach and its system of target object are extracted in projected picture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104202547B (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106559627B (en) * | 2015-09-25 | 2021-05-11 | 中兴通讯股份有限公司 | Projection method, device and equipment |
CN106559629B (en) * | 2015-09-29 | 2020-08-04 | 中兴通讯股份有限公司 | Projection method, device and equipment |
CN106611004B (en) * | 2015-10-26 | 2019-04-12 | 北京捷泰天域信息技术有限公司 | Points of interest attribute display methods based on vector regular quadrangle grid |
CN106998463A (en) * | 2016-01-26 | 2017-08-01 | 宁波舜宇光电信息有限公司 | The method of testing of camera module based on latticed mark version |
CN107133911B (en) * | 2016-02-26 | 2020-04-24 | 比亚迪股份有限公司 | Method and device for displaying reversing image |
CN106774827B (en) * | 2016-11-21 | 2019-12-27 | 歌尔科技有限公司 | Projection interaction method, projection interaction device and intelligent terminal |
CN106713884A (en) * | 2017-02-10 | 2017-05-24 | 南昌前哨科技有限公司 | Immersive interactive projection system |
CN110244840A (en) * | 2019-05-24 | 2019-09-17 | 华为技术有限公司 | Image processing method, relevant device and computer storage medium |
CN110827289B (en) * | 2019-10-08 | 2022-06-14 | 歌尔光学科技有限公司 | Method and device for extracting target image in projector definition test |
CN111447171B (en) * | 2019-10-26 | 2021-09-03 | 四川蜀天信息技术有限公司 | Automated content data analysis platform and method |
CN112040208B (en) * | 2020-09-09 | 2022-04-26 | 南昌虚拟现实研究院股份有限公司 | Image processing method, image processing device, readable storage medium and computer equipment |
CN111932686B (en) * | 2020-09-09 | 2021-01-01 | 南昌虚拟现实研究院股份有限公司 | Mapping relation determining method and device, readable storage medium and computer equipment |
CN114520894B (en) * | 2020-11-18 | 2022-11-15 | 成都极米科技股份有限公司 | Projection area determining method and device, projection equipment and readable storage medium |
CN113421313B (en) * | 2021-05-14 | 2023-07-25 | 北京达佳互联信息技术有限公司 | Image construction method and device, electronic equipment and storage medium |
CN113949853B (en) * | 2021-10-13 | 2022-10-25 | 济南景雄影音科技有限公司 | Projection system with environment adaptive adjustment capability |
CN117664730B (en) * | 2023-12-12 | 2024-05-17 | 青岛中科鲁控燃机控制系统工程有限公司 | Testing arrangement based on decentralized control system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1666248A (en) * | 2002-06-26 | 2005-09-07 | Vkb有限公司 | Multifunctional integrated image sensor and application to virtual interface technology |
CN101140661A (en) * | 2007-09-04 | 2008-03-12 | 杭州镭星科技有限公司 | Real time object identification method taking dynamic projection as background |
CN102063618A (en) * | 2011-01-13 | 2011-05-18 | 中科芯集成电路股份有限公司 | Dynamic gesture identification method in interactive system |
CN102184008A (en) * | 2011-05-03 | 2011-09-14 | 北京天盛世纪科技发展有限公司 | Interactive projection system and method |
CN103677274A (en) * | 2013-12-24 | 2014-03-26 | 广东威创视讯科技股份有限公司 | Interactive projection method and system based on active vision |
CN103988150A (en) * | 2011-03-25 | 2014-08-13 | 奥布隆工业有限公司 | Fast fingertip detection for initializing vision-based hand tracker |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9128366B2 (en) * | 2012-05-22 | 2015-09-08 | Ricoh Company, Ltd. | Image processing system, image processing method, and computer program product |
- 2014-08-27: CN CN201410429157.6A patent/CN104202547B/en, status: not active (Expired - Fee Related)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1666248A (en) * | 2002-06-26 | 2005-09-07 | Vkb有限公司 | Multifunctional integrated image sensor and application to virtual interface technology |
CN101140661A (en) * | 2007-09-04 | 2008-03-12 | 杭州镭星科技有限公司 | Real time object identification method taking dynamic projection as background |
CN102063618A (en) * | 2011-01-13 | 2011-05-18 | 中科芯集成电路股份有限公司 | Dynamic gesture identification method in interactive system |
CN103988150A (en) * | 2011-03-25 | 2014-08-13 | 奥布隆工业有限公司 | Fast fingertip detection for initializing vision-based hand tracker |
CN102184008A (en) * | 2011-05-03 | 2011-09-14 | 北京天盛世纪科技发展有限公司 | Interactive projection system and method |
CN103677274A (en) * | 2013-12-24 | 2014-03-26 | 广东威创视讯科技股份有限公司 | Interactive projection method and system based on active vision |
Non-Patent Citations (1)
Title |
---|
"Image Matching Based on SURF Features and Delaunay Triangular Mesh" (基于SURF特征和Delaunay三角网格的图像匹配); Yan Zigeng et al.; Acta Automatica Sinica (自动化学报); 2014-06-30; Sections 1-2 *
Also Published As
Publication number | Publication date |
---|---|
CN104202547A (en) | 2014-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104202547B (en) | Method, projection interactive approach and its system of target object are extracted in projected picture | |
CN109684924B (en) | Face living body detection method and device | |
US11893789B2 (en) | Deep neural network pose estimation system | |
CN108292362B (en) | Gesture recognition for cursor control | |
US9710698B2 (en) | Method, apparatus and computer program product for human-face features extraction | |
JP6438403B2 (en) | Generation of depth maps from planar images based on combined depth cues | |
CN109684925B (en) | Depth image-based human face living body detection method and device | |
JP6428266B2 (en) | COLOR CORRECTION DEVICE, COLOR CORRECTION METHOD, AND COLOR CORRECTION PROGRAM | |
CN104166841B (en) | The quick detection recognition methods of pedestrian or vehicle is specified in a kind of video surveillance network | |
JP2016015045A (en) | Image recognition device, image recognition method, and program | |
CN104036284A (en) | Adaboost algorithm based multi-scale pedestrian detection method | |
JP2018120283A (en) | Information processing device, information processing method and program | |
Lamer et al. | Computer vision based object recognition principles in education | |
Arunkumar et al. | Estimation of vehicle distance based on feature points using monocular vision | |
GB2522259A (en) | A method of object orientation detection | |
CN106682582A (en) | Compressed sensing appearance model-based face tracking method and system | |
Agarwal et al. | Weighted Fast Dynamic Time Warping based multi-view human activity recognition using a RGB-D sensor | |
Molina-Giraldo et al. | Video segmentation based on multi-kernel learning and feature relevance analysis for object classification | |
Fihl et al. | Invariant gait continuum based on the duty-factor | |
Goyal et al. | Moving object detection in video streaming using improved DNN algorithm | |
Agrafiotis et al. | HDR Imaging for Enchancing People Detection and Tracking in Indoor Environments. | |
CN118524289B (en) | Target tracking shooting method, device, equipment and storage medium | |
JP2013120504A (en) | Object extraction device, object extraction method and program | |
Yeh et al. | Vision-based virtual control mechanism via hand gesture recognition | |
Jadav et al. | Dynamic shadow detection and removal for vehicle tracking system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
CP03 | Change of name, title or address |
Address after: No. 233 Kezhu Road, Guangzhou High-tech Industrial Development Zone, Guangzhou, Guangdong Province, 510670. Patentee after: VTRON GROUP Co.,Ltd.
Address before: No. 233 Kezhu Road, Guangzhou High-tech Industrial Development Zone, Guangzhou, Guangdong Province, 510670. Patentee before: VTRON TECHNOLOGIES Ltd.
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2017-10-10. Termination date: 2021-08-27.