CN102508544A - Intelligent television interactive method based on projection interaction - Google Patents
Intelligent television interactive method based on projection interaction
- Publication number
- CN102508544A CN2011103253978A CN201110325397A
- Authority
- CN
- China
- Prior art keywords
- user
- characteristic target
- user interactions
- intelligent television
- interactive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses an intelligent television interaction method based on projection interaction, and relates to intelligent-television interaction technology. The invention aims to integrate micro-projection technology into an intelligent television, so that a user can interact with the television through a variety of body movements and obtain a good interactive experience. The technical points of the method are as follows: a micro-projector inside the intelligent television projects an interactive background onto a given region; the camera of the intelligent television captures and preprocesses the interactive background image in the projection region; feature targets in the interactive background image are extracted; the feature targets are calibrated to obtain their coordinate positions in the interactive background image; the camera of the intelligent television then captures the user-interaction scene and recognizes the user's position in the scene from the region and temporal order in which the feature targets are occluded, together with the resulting motion trajectory; finally, the intelligent television interprets the user's interactive instruction in combination with predefined interaction-scene definitions and responds to it.
Description
Technical field
The present invention relates to intelligent-television interaction technology, and in particular to an intelligent-television interaction method based on projection interaction.
Background technology
With the rapid development of intelligent televisions, their user base keeps growing. Unlike traditional digital television, intelligent televisions add many new functions such as web browsing, sending and receiving e-mail, TV shopping, remote teaching, telemedicine, stock trading, and information consultation. These new functions demand ever richer user interaction, yet conventional digital television relies on an infrared remote control and cannot give the user a good interactive experience. To improve the experience of intelligent-television users, manufacturers have begun adopting new interaction techniques such as mobile-phone control and gesture recognition, but each has shortcomings: phone-based interaction still requires the user to hold an interactive medium, while gesture recognition, although hands-free, is limited to hand gestures and is affected by ambient illumination, so the interactive experience remains poor.
Summary of the invention
Projection interaction is a new interaction technique that has emerged in recent years. Constrained by traditional projection equipment, it has so far been confined to image display in venues such as museums, exhibition halls, and shopping malls. As micro-projectors mature, however, projection technology is beginning to be integrated into consumer electronics products.
The object of the present invention is to integrate micro-projection technology into an intelligent television, using projection to create an augmented-reality interaction scene in which the user can interact immersively, so that the intelligent-television user obtains a good interactive experience.
The technical scheme adopted by the present invention comprises:
Step 101: a micro-projector inside the intelligent television projects an interactive background onto a given spatial region. Feature targets are placed in the interactive background; each feature target consists of color blocks of preset gray levels, and the gray levels of the blocks differ markedly from one another.
Step 102: the camera of the intelligent television captures the interactive background in said projection region and filters the captured interactive background image to remove noise.
Step 103: extract the feature targets from the interactive background image.
Step 104: calibrate the feature targets to obtain their coordinate positions in the interactive background image.
Step 105: the camera of the intelligent television captures the user-interaction scene and filters the captured user-interaction scene image to remove noise.
Step 106: extract the feature targets from the user-interaction scene image.
Step 107: calibrate the feature targets in the user-interaction scene image to obtain their coordinate positions in that image.
Step 108: compare the coordinate positions of the feature targets in the user-interaction scene image with those in said interactive background image, identify the position and size of the region where the feature targets are occluded by the user, and thereby determine the user's position in the interaction scene.
Steps 105 to 108 are repeated to obtain the temporal order in which the feature targets are occluded by the user, and thereby the user's motion trajectory in the interaction scene. Finally, based on the user's position and motion trajectory, and in combination with predefined user body-movement definitions, the television interprets the user's interactive instruction and responds to it.
Preferably, the method further comprises a step of hiding the feature targets in said interactive background by adding an invisible watermark, and a step of separating the watermark before extracting the feature targets from the interactive background and user-interaction scene images.
Preferably, the filtering in said step 102 or step 105 is median filtering.
Preferably, the feature targets are extracted from said interactive background or user-interaction scene image as follows: first determine the gray value of each pixel in the image; then set pixels whose gray value equals one of the preset target gray levels to black, and the remaining pixels to white.
In summary, owing to the above technical scheme, the beneficial effects of the invention are as follows:
The present invention integrates micro-projection technology into an intelligent television, uses projection to create an augmented-reality interactive background for the user, recognizes the user's interaction process with suitable image-processing and pattern-recognition algorithms, and gives correct feedback to the user's interactive instructions. Interaction is thus no longer limited to hand gestures: the user can interact with the television through a variety of body movements within the interactive background created by the micro-projector, and the intelligent-television user obtains a better interactive experience.
Description of drawings
The present invention is explained by way of example with reference to the accompanying drawings, in which:
Fig. 1 is the interactive background image before the feature targets are extracted.
Fig. 2 is the interactive background image after the feature targets are extracted.
Fig. 3 shows the calibration result of the feature targets in the interactive background.
Fig. 4 shows the feature targets of the interactive background marked in the coordinate system.
Embodiment
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any manner, except for combinations of mutually exclusive features and/or steps.
Unless expressly stated otherwise, any feature disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose. That is, unless expressly stated otherwise, each feature is only one example of a series of equivalent or similar features.
The projection-based intelligent-television interaction method of the present invention comprises: a step of creating an augmented-reality interactive background with micro-projection; a step of acquiring and preprocessing the interactive background image; a step of extracting the feature targets from the interactive background; a step of calibrating the feature targets in the interactive background; a step of acquiring and preprocessing the user-interaction scene image; a step of extracting the feature targets from the user-interaction scene; a step of calibrating the feature targets in the user-interaction scene; a step of determining the user's position and motion trajectory in the user-interaction scene; and a step of responding to the user's interactive instruction.
1. The step of creating an augmented-reality interactive background with micro-projection is as follows:
A micro-projector embedded in the intelligent television projects the interactive background required for user interaction onto a given spatial region. To detect the user's interaction position and trajectory accurately and improve interaction precision, feature targets are placed at the key interaction positions of the scene. Each feature target consists of preset color blocks whose gray-level distributions are clearly separated on the gray histogram of the image. For a better interactive experience, an additive invisible-watermarking algorithm can be used to hide the feature targets in the interactive background image, so that the user does not see the unattractive feature targets while interacting.
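The specification does not state which invisible-watermarking algorithm is used. As an illustrative stand-in only, the following Python/NumPy sketch hides a binary target mask in the least-significant bit (LSB) of the projected background, which is visually imperceptible (at most one gray level of change) and exactly invertible; the function names are hypothetical, not part of the patent:

```python
import numpy as np

def embed_target_mask(background, target_mask):
    """Hide a binary (0/1) target mask in the least-significant bit of the
    8-bit background image -- an LSB stand-in for the patent's unspecified
    invisible-watermark algorithm. Changes each pixel by at most 1 gray level."""
    return (background & 0xFE) | target_mask

def extract_target_mask(watermarked):
    """Inverse operation: recover the hidden mask before target extraction."""
    return watermarked & 0x01

bg = np.array([[200, 201], [130, 131]], dtype=np.uint8)
mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)
wm = embed_target_mask(bg, mask)
print(extract_target_mask(wm).tolist())  # [[1, 0], [0, 1]] -- mask recovered exactly
```

The embedding distorts each pixel by at most one gray level, so the projected background looks unchanged to the viewer while the extraction step recovers the targets losslessly.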
2. The step of acquiring and preprocessing the interactive background image is as follows:
A CCD camera embedded in the intelligent television captures the image of the interactive background region. Because the raw captured image always contains some degree of noise, which distorts the image and masks useful image features, the captured image must be preprocessed. Since the captured background region typically suffers from impulse and point-like noise, this method applies nonlinear median filtering to suppress such noise in the captured image; median filtering also preserves image edges well.
The mathematical expression of median filtering is given by Formula 1:
f̂(x, y) = median { g(s, t) : (s, t) ∈ S_xy }    (1)
where S_xy denotes the set of coordinates of an m × n rectangular sub-image window centered at (x, y), and g(s, t) is the image data within the region S_xy.
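As an illustrative implementation of Formula 1 (not code from the patent), the following Python/NumPy sketch replaces each pixel with the median of its m × n window S_xy, padding edges by replication:

```python
import numpy as np

def median_filter(img, m=3, n=3):
    """Nonlinear median filter (Formula 1): each output pixel is the median
    of the m x n window S_xy centered on (x, y). Edges are padded by
    replicating the border pixels."""
    ph, pw = m // 2, n // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for x in range(h):
        for y in range(w):
            out[x, y] = np.median(padded[x:x + m, y:y + n])
    return out

# A single bright impulse ("salt" noise) in a flat region is suppressed:
noisy = np.full((5, 5), 10, dtype=np.uint8)
noisy[2, 2] = 255
print(median_filter(noisy)[2, 2])  # 10
```

This illustrates why the method suits impulse and point-like noise: an isolated outlier never becomes the window median, while a genuine edge (where half the window shares each side's value) survives largely intact.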
3. The step of extracting the feature targets from the interactive background is as follows:
If the feature targets in the interactive background were hidden with an additive invisible-watermarking algorithm, the inverse operation of that algorithm must first be applied to recover the feature-target image within the interaction scene. Since the gray value of each color block of a feature target is known in advance, a suitable segmentation threshold can be chosen to separate the feature targets from the background directly. The mathematical expression is given by Formula 2:
b(x, y) = 0, if T1 ≤ f(x, y) ≤ T2;  b(x, y) = 1, otherwise    (2)
where f(x, y) is the gray value of the original image at (x, y), [T1, T2] is the gray-value interval of the feature targets, and b(x, y) is the resulting binary image, in which 0 denotes a feature-target pixel (rendered black) and 1 denotes a background pixel (rendered white). Fig. 1 and Fig. 2 show the result of extracting the circular feature targets from the background.
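Formula 2 can be sketched in a few lines of Python/NumPy (an illustrative implementation; as the text states, the interval bounds T1 and T2 are assumed known in advance):

```python
import numpy as np

def segment_targets(f, t1, t2):
    """Binary segmentation per Formula 2: pixels whose gray value falls in
    the known target interval [t1, t2] become 0 (feature target, shown
    black); all other pixels become 1 (background, shown white)."""
    return np.where((f >= t1) & (f <= t2), 0, 1).astype(np.uint8)

frame = np.array([[200, 40],
                  [45, 220]], dtype=np.uint8)
print(segment_targets(frame, 30, 60).tolist())  # [[1, 0], [0, 1]]
```

In the example, the pixels with gray values 40 and 45 fall inside the target interval [30, 60] and are mapped to 0, while the bright background pixels are mapped to 1.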
4. The step of calibrating the feature targets in the interactive background is as follows:
Each feature target is a connected region composed of many pixels. The characteristic parameters of these regions must be measured, that is, the coordinate position of each feature target in the interaction-scene region must be calibrated. Calibration traverses the whole interactive background image by means of a queue and assigns a label to each feature target. The present invention adopts an existing prior-art connected-region calibration method based on 4-neighborhood region growing, which effectively saves resources such as embedded-processor memory. Fig. 3 and Fig. 4 show the circular feature targets after calibration, marked on the coordinate map.
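The queue-based 4-neighborhood region-growing calibration described above can be sketched as follows (an illustrative Python implementation; reporting the region centroid as the target's calibrated coordinate is an assumption, since the specification does not say which characteristic parameter serves as the position):

```python
from collections import deque

def label_targets(binary):
    """Label each 4-connected region of target pixels (value 0) by
    queue-based region growing, returning a label image and, for each
    label, the region centroid used here as its calibrated coordinate."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    targets = {}
    next_label = 1
    for sx in range(h):
        for sy in range(w):
            if binary[sx][sy] == 0 and labels[sx][sy] == 0:
                queue = deque([(sx, sy)])       # seed a new region
                labels[sx][sy] = next_label
                pixels = []
                while queue:                     # grow via 4-neighborhood
                    x, y = queue.popleft()
                    pixels.append((x, y))
                    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                        if 0 <= nx < h and 0 <= ny < w \
                                and binary[nx][ny] == 0 and labels[nx][ny] == 0:
                            labels[nx][ny] = next_label
                            queue.append((nx, ny))
                cx = sum(p[0] for p in pixels) / len(pixels)
                cy = sum(p[1] for p in pixels) / len(pixels)
                targets[next_label] = (cx, cy)   # centroid as calibrated position
                next_label += 1
    return labels, targets

binary = [[0, 1, 0],
          [0, 1, 0],
          [1, 1, 1]]
_, targets = label_targets(binary)
print(len(targets))   # 2 -- two separate feature targets
print(targets[1])     # (0.5, 0.0) -- centroid of the left target
```

Only a queue and the label image are kept in memory, which matches the memory-saving motivation mentioned for embedded processors.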
5. The step of acquiring and preprocessing the user-interaction scene image is as follows:
The CCD camera embedded in the intelligent television captures the user-interaction scene image and filters it to suppress the impulse and point-like noise of the captured image; median filtering may be used.
6. The feature targets are extracted from the user-interaction scene by the same method as described above for the interactive background: pixels whose gray value falls within the feature-target gray-value interval are set to 0, and the gray values of the remaining pixels are set to 1.
7. The feature targets in the user-interaction scene are calibrated by the same method as described above for the interactive background, which is not repeated here.
8. The step of determining the user's position and motion trajectory in the user-interaction scene is as follows:
Compare the coordinate positions of the feature targets in the user-interaction scene image with those in said interactive background image, identify the position and size of the region where the feature targets are occluded by the user, and thereby determine the user's position in the interaction scene.
Steps 5 to 8 above are repeated to obtain the temporal order in which the feature targets are occluded by the user, and thereby the user's motion trajectory in the interaction scene.
Finally, based on the user's position and motion trajectory in the interaction scene, and in combination with predefined user body-movement definitions, the television interprets the user's interactive instruction and responds to it.
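Steps 5 to 8 can be sketched as follows (illustrative Python only; matching calibrated coordinates within a small tolerance is an assumption about how "compare the coordinate positions" might be realized, since the specification gives no matching criterion):

```python
def occluded_targets(background_targets, scene_targets, tol=2.0):
    """Targets calibrated in the interactive background but not re-found
    near their expected coordinates in the user-interaction frame are
    taken to be occluded by the user's body."""
    occluded = []
    for label, (bx, by) in background_targets.items():
        found = any(abs(bx - sx) <= tol and abs(by - sy) <= tol
                    for sx, sy in scene_targets.values())
        if not found:
            occluded.append(label)
    return occluded

def user_trajectory(background_targets, frames, tol=2.0):
    """Repeating the comparison over successive frames (steps 5-8) yields
    the temporal order in which targets are covered, i.e. a coarse
    trajectory of the user across the projected background."""
    return [occluded_targets(background_targets, f, tol) for f in frames]

# Three calibrated targets; the user covers target 1, then target 2:
bg = {1: (10.0, 10.0), 2: (10.0, 50.0), 3: (10.0, 90.0)}
frames = [{2: (10.0, 50.0), 3: (10.0, 90.0)},
          {1: (10.0, 10.0), 3: (10.0, 90.0)}]
print(user_trajectory(bg, frames))  # [[1], [2]] -> movement from target 1 toward target 2
```

The per-frame occlusion lists are then matched against the predefined body-movement definitions to yield the interactive instruction to which the television responds.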
The present invention is not limited to the foregoing embodiments. It extends to any new feature, or any new combination of features, disclosed in this specification, and to any new method or process step, or any new combination thereof, disclosed herein.
Claims (4)
1. An intelligent-television interaction method based on projection interaction, characterized by comprising:
Step 101: a micro-projector inside the intelligent television projects an interactive background onto a given spatial region, feature targets being placed in the interactive background, each feature target consisting of color blocks of preset gray levels, the gray levels of the blocks differing markedly from one another;
Step 102: the camera of the intelligent television captures the interactive background in said projection region and filters the captured interactive background image to remove noise;
Step 103: extract the feature targets from the interactive background image;
Step 104: calibrate the feature targets to obtain their coordinate positions in the interactive background image;
Step 105: the camera of the intelligent television captures the user-interaction scene and filters the captured user-interaction scene image to remove noise;
Step 106: extract the feature targets from the user-interaction scene image;
Step 107: calibrate the feature targets in the user-interaction scene image to obtain their coordinate positions in that image;
Step 108: compare the coordinate positions of the feature targets in the user-interaction scene image with those in said interactive background image, identify the position and size of the region where the feature targets are occluded by the user, and thereby determine the user's position in the interaction scene;
wherein steps 105 to 108 are repeated to obtain the temporal order in which the feature targets are occluded by the user, and thereby the user's motion trajectory in the interaction scene; and finally, based on the user's position and motion trajectory, and in combination with predefined user body-movement definitions, the television interprets the user's interactive instruction and responds to it.
2. The projection-interaction-based intelligent-television interaction method according to claim 1, characterized by further comprising a step of hiding the feature targets in said interactive background by adding an invisible watermark, and a step of separating the watermark before extracting the feature targets from the interactive background and user-interaction scene images.
3. The projection-interaction-based intelligent-television interaction method according to claim 2, characterized in that the filtering in said step 102 or step 105 is median filtering.
4. The projection-interaction-based intelligent-television interaction method according to claim 2 or 3, characterized in that the feature targets are extracted from said interactive background or user-interaction scene image as follows: first determine the gray value of each pixel in the image; then set pixels whose gray value equals one of the preset target gray levels to black, and the remaining pixels to white.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103253978A CN102508544A (en) | 2011-10-24 | 2011-10-24 | Intelligent television interactive method based on projection interaction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103253978A CN102508544A (en) | 2011-10-24 | 2011-10-24 | Intelligent television interactive method based on projection interaction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102508544A true CN102508544A (en) | 2012-06-20 |
Family
ID=46220644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011103253978A Pending CN102508544A (en) | 2011-10-24 | 2011-10-24 | Intelligent television interactive method based on projection interaction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102508544A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105915987A (en) * | 2016-04-15 | 2016-08-31 | 济南大学 | Implicit interaction method facing smart television set |
CN108563981A (en) * | 2017-12-31 | 2018-09-21 | 广景视睿科技(深圳)有限公司 | A kind of gesture identification method and device based on projector and camera |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101183276A (en) * | 2007-12-13 | 2008-05-21 | 上海交通大学 | Interactive system based on camera projector technology |
CN101359250A (en) * | 2007-08-03 | 2009-02-04 | 俞前 | Virtual keyboard system |
CN102200834A (en) * | 2011-05-26 | 2011-09-28 | 华南理工大学 | television control-oriented finger-mouse interaction method |
CN102221879A (en) * | 2010-04-15 | 2011-10-19 | 韩国电子通信研究院 | User interface device and method for recognizing user interaction using same |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105915987A (en) * | 2016-04-15 | 2016-08-31 | 济南大学 | Implicit interaction method facing smart television set |
CN108563981A (en) * | 2017-12-31 | 2018-09-21 | 广景视睿科技(深圳)有限公司 | A kind of gesture identification method and device based on projector and camera |
CN108563981B (en) * | 2017-12-31 | 2022-04-15 | 广景视睿科技(深圳)有限公司 | Gesture recognition method and device based on projector and camera |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110136229B (en) | Method and equipment for real-time virtual face changing | |
CN109804622B (en) | Recoloring of infrared image streams | |
US8698796B2 (en) | Image processing apparatus, image processing method, and program | |
US20080181507A1 (en) | Image manipulation for videos and still images | |
CN102567727B (en) | Method and device for replacing background target | |
CN105096321B (en) | A kind of low complex degree Motion detection method based on image border | |
Charles et al. | Learning shape models for monocular human pose estimation from the Microsoft Xbox Kinect | |
CN102081918A (en) | Video image display control method and video image display device | |
US20150243031A1 (en) | Method and device for determining at least one object feature of an object comprised in an image | |
US11159717B2 (en) | Systems and methods for real time screen display coordinate and shape detection | |
KR20180087918A (en) | Learning service Method of virtual experience for realistic interactive augmented reality | |
CN111145135B (en) | Image descrambling processing method, device, equipment and storage medium | |
WO2007076890A1 (en) | Segmentation of video sequences | |
CN110827193A (en) | Panoramic video saliency detection method based on multi-channel features | |
US20200304713A1 (en) | Intelligent Video Presentation System | |
CN105718885B (en) | A kind of Facial features tracking method | |
CN113132800B (en) | Video processing method and device, video player, electronic equipment and readable medium | |
CN102915542A (en) | Image processing apparatus, image processing method, and program | |
CN110858277A (en) | Method and device for obtaining attitude classification model | |
CN106780757B (en) | A method of augmented reality | |
CN109191398B (en) | Image processing method, apparatus, computer-readable storage medium and electronic device | |
CN102508544A (en) | Intelligent television interactive method based on projection interaction | |
Ibrahim et al. | A GAN-based blind inpainting method for masonry wall images | |
KR101305725B1 (en) | Augmented reality of logo recognition and the mrthod | |
Shao et al. | Imagebeacon: Broadcasting color images over connectionless bluetooth le packets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20120620 |