
CN102667858B - A simulator including a method and apparatus for determining the co-ordinates of an object in two dimensions - Google Patents

A simulator including a method and apparatus for determining the co-ordinates of an object in two dimensions

Info

Publication number
CN102667858B
CN102667858B (application CN201180003615.4A)
Authority
CN
China
Prior art keywords
image
controller
adaptive
training image
correlation filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201180003615.4A
Other languages
Chinese (zh)
Other versions
CN102667858A (en)
Inventor
简·特伦斯基
普拉杰·卡马特
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
J T CONSULTANCY Ltd
Original Assignee
J T CONSULTANCY Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by J T CONSULTANCY Ltd filed Critical J T CONSULTANCY Ltd
Publication of CN102667858A
Application granted
Publication of CN102667858B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a simulator. The simulator includes a method and apparatus for determining the two-dimensional coordinates of an object. A camera for capturing images of a controller and an infrared camera are provided. The infrared image is processed using an active contour model to produce a training image from the image from the camera. An adaptive correlation filter is built from the training image, and the adaptive correlation filter is correlated with images from the camera to measure the position of the controller.

Description

A simulator including a method and apparatus for determining the co-ordinates of an object in two dimensions
Technical field
The present invention relates to a simulator including a method and apparatus for determining the co-ordinates of an object in two dimensions. More specifically, but not exclusively, the present invention relates to a simulator for training tradespeople.
Background art
Traditionally, tradespeople such as plumbers have learned their vocational skills as apprentices. An apprentice learns the various requisite skills by attempting to replicate the work of their master. An apprenticeship provides a focused, individual training experience. However, because the master's time is spread across multiple students, this form of training does not scale. Furthermore, at least in the early stages of training, apprentices make mistakes, which increases the cost to the employer and can discourage employers from taking on apprentices in the future.
In an attempt to shorten the costly initial stage of an apprenticeship, vocational training courses have been developed to give students the initial experience needed to start working as a tradesperson. However, because students make a large number of mistakes early on, these courses incur considerable expense.
Summary of the invention
According to a first aspect of the invention, there is provided an apparatus for determining the two-dimensional coordinates of an object, comprising: a first camera for producing an infrared image of the object; a second camera for producing a calibration image of the object and an image stream of the object; a first computing module for producing a training image, configured to use an active contour model to extract a vector corresponding to the edge of the object from the infrared image, and to apply the vector to the calibration image to produce the training image; an adaptive correlation filter built from the training image; and a second computing module configured to correlate at least one image of the image stream from the second camera with the adaptive correlation filter, and to determine the x-y coordinates of the object by comparing the maximum peak in the correlation plane of the adaptive correlation filter with a detection threshold.
By applying the active contour model to the image from the infrared camera, the apparatus can accurately track the object using only one infrared camera, the image from the infrared camera being used to create the training image for the adaptive correlation filter. In the prior art, multiple infrared cameras are used to track the object. The present invention therefore reduces the cost of tracking an object by eliminating the need for multiple expensive infrared cameras. The active contour model works together with the infrared camera: the contrast between the object and the human user makes it possible to produce an accurate vector corresponding to the edge of the object. The apparatus therefore works well in cluttered environments such as a living room.
The present invention can therefore use a standard camera, such as a VGA colour camera, which provides images in the visible spectrum for use by the apparatus.
Preferably, a plurality of rotated training images are generated, the rotated training images being built by rotating the training image, and the adaptive filter is built from the rotated training images.
By rotating the training image, the apparatus can maintain tracking of the object, i.e. the position of the object can be measured accurately and consistently over a longer period without updating the adaptive correlation filter with a new training image. The rotated training images increase the tolerance of the apparatus to changes in orientation, size and position.
Preferably, the training image or rotated training images are produced periodically to update the adaptive correlation filter. The adaptive correlation filter may be updated every 0.5 to 1.5 seconds or, more preferably, every second.
By updating the adaptive correlation filter with a new training image and a new set of rotated training images, the adaptive correlation filter is retrained. This improves the accuracy of the adaptive correlation filter for subsequent images from the second camera.
The adaptive correlation filter may be, for example, a MACH filter or an OT-MACH filter.
According to a second aspect of the invention, there is provided a method for determining the two-dimensional coordinates of an object, the method comprising the steps of: obtaining from an infrared camera an infrared image including the object; obtaining from a camera an image stream including the object and a calibration image including the object; producing, from the infrared image and using an active contour model, a vector corresponding to the edge of the object; producing a training image by extracting the object from the calibration image using the vector; building an adaptive correlation filter using the training image; and determining the x-y coordinates of the object in a correlation plot by correlating at least one image of the image stream with the adaptive correlation filter.
The method may further comprise the steps of generating a plurality of rotated training images and building the adaptive correlation filter from the rotated training images, the set of rotated training images being built by rotating the training image.
The training image or the plurality of rotated training images may be produced periodically to update the adaptive correlation filter. The adaptive correlation filter may be updated every 0.5 to 1.5 seconds or, more preferably, every second.
A computer program presented on a computer-readable medium may be configured to perform the method according to the second aspect of the invention.
Description of the drawings
Embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Fig. 1 shows a simulator including a computer, a head-mounted display, a controller and a camera unit;
Fig. 2 shows a flow chart illustrating the method of measuring the x-axis and y-axis coordinates of the controller according to an embodiment of the present invention;
Fig. 3 shows the OT-MACH filter of the embodiment of Fig. 2; and
Fig. 4 shows, for reference only, the method of measuring the z-axis coordinate of the controller.
Specific embodiment
Fig. 1 shows an overview of the simulator 1. The simulator 1 comprises a controller 100, a computer 200, a camera unit 300 and a head-mounted display 400. For the purposes of this description, the computer 200 is configured to run a computer program that simulates a training scenario such as using a blowtorch or bending a pipe.
The computer 200 receives data from the controller 100 and the camera unit 300. The controller 100 includes various sensors that measure spatial properties such as acceleration and orientation, and that measure user input. The controller 100 outputs the data from the sensors to the computer 200. The camera unit 300 comprises a first camera 310 for image acquisition and an infrared second camera 320. The camera unit 300 outputs image data to the computer 200.
The computer 200 is configured to process the data from the controller 100 and the camera unit 300 as input variables to the computer program. The controller 100 provides spatial data such as acceleration and orientation together with user input, and the camera unit 300 provides images that can be processed according to the method of the invention to track the controller 100. The computer program simulating the training scenario can therefore provide the user with an immersive and accurate simulation of a real-world skill such as using a blowtorch or bending a pipe. The method of tracking the controller 100 is described in more detail below.
Tracking
In normal use, the simulator 1 is set up in a room with the camera unit 300 facing the controller 100. Typically, the camera unit 300 is placed against a wall, facing the controller 100 located in the centre of the room. The controller 100 is held by the user.
For the purposes of this description, the three-dimensional space is represented by Cartesian x, y and z axes, where the z-axis runs in the direction from the camera unit 300 to the controller 100 (i.e. the axis is parallel to the floor). The x-axis and y-axis are both orthogonal to the z-axis and orthogonal to each other. The computer 200 is configured to calculate the x-axis and y-axis coordinates of the controller via a first method, and to calculate the z-axis coordinate via a second method.
The first method, i.e. the method of measuring the x-axis and y-axis coordinates of the controller 100, is now described with reference to Figs. 2 and 3. The method is performed on the computer 200 using image data from the camera unit 300. The camera unit 300 obtains a calibration image via the first camera 310 and an infrared image via the infrared second camera 320. As described above, the camera unit 300 faces the controller 100 held by the user. The calibration image and the infrared image therefore include both the controller 100 and the user.
An overview of the first method is shown in Fig. 2. As a preliminary step, the controller 100 and the user are distinguished from the static background via time-differenced background subtraction of the infrared image. This produces a processed infrared image, containing only the controller and the user, that is suitable for the subsequent steps.
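By way of illustration, the sketch below shows one way such time-differenced background subtraction could be implemented with OpenCV in Python; the function name and the difference threshold are assumptions for the sketch, not part of the patented method.

import cv2
import numpy as np

def subtract_background(prev_ir_frame, ir_frame, diff_threshold=25):
    # Keep only the moving foreground (controller and user) in an IR frame.
    # prev_ir_frame, ir_frame: consecutive greyscale infrared frames (uint8).
    # diff_threshold is an illustrative value: pixels whose temporal
    # difference falls below it are treated as static background.
    diff = cv2.absdiff(ir_frame, prev_ir_frame)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    # Apply the mask so the processed image contains only controller and user.
    return cv2.bitwise_and(ir_frame, ir_frame, mask=mask)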
The active contour model is applied to the processed infrared image to produce an accurate vector delineating the outline of the edge of the controller 100. Because an IR-reflective coating is used on the controller 100, the controller 100 can easily be distinguished from the user in the processed infrared image.
The active contour model works on the principle of energy minimisation to determine the vector of the edge of the controller 100 in the processed infrared image. The energy of each vector point is calculated based on its neighbouring pixels. A difference-of-Gaussians (DoG) filtered image is calculated to emphasise the edges of the controller 100. The energy minimisation process is iterative and continues until the vector of the front edge of the controller has been accurately calculated. The energy function calculated and iterated for each vector point is described by the following formula, where i is the iteration index, running from 1 to n, n is the number of points on the vector, and E_i is the energy of the vector point being calculated.
The computer 200 includes a configuration file for changing the number of iterations i needed to accurately calculate the vector of the front edge of the controller.
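As an illustration of the energy-minimisation loop described above, the sketch below computes a difference-of-Gaussians edge image and performs one greedy minimisation pass over the contour points. The sigma values, the weighting of the internal term and the greedy neighbourhood search are assumptions for the sketch; the patent does not specify these details.

import cv2
import numpy as np

def difference_of_gaussians(ir_image, sigma_small=1.0, sigma_large=3.0):
    # DoG image used to emphasise the edges of the controller (sigmas assumed).
    img = ir_image.astype(np.float32)
    return (cv2.GaussianBlur(img, (0, 0), sigma_small)
            - cv2.GaussianBlur(img, (0, 0), sigma_large))

def snake_iteration(points, dog, internal_weight=0.5):
    # One greedy pass of the energy minimisation: each contour point (row of an
    # N x 2 integer array) moves to the neighbouring pixel that minimises its
    # internal smoothness energy minus the DoG edge strength at that pixel.
    h, w = dog.shape
    new_points = points.copy()
    n = len(points)
    for i in range(n):
        x, y = points[i]
        px, py = points[i - 1]
        nx, ny = points[(i + 1) % n]
        best, best_energy = (x, y), np.inf
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cx = int(np.clip(x + dx, 0, w - 1))
                cy = int(np.clip(y + dy, 0, h - 1))
                # Internal energy penalises departure from the midpoint of the
                # neighbouring points; the external term rewards strong edges.
                internal = np.hypot(cx - (px + nx) / 2.0, cy - (py + ny) / 2.0)
                energy = internal_weight * internal - dog[cy, cx]
                if energy < best_energy:
                    best, best_energy = (cx, cy), energy
        new_points[i] = best
    return new_points

The number of greedy passes run over the contour corresponds to the iteration count i, which in this sketch would be read from the configuration file described above.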
Once the vector of the edge of the controller 100 has been calculated, the vector is applied to the calibration image from the first camera to extract the controller 100. The extracted controller 100 is then applied to the centre of a blank background suitable for the OT-MACH filter, forming the training image.
In this embodiment, the training image is further processed to produce a plurality of rotated training images for the OT-MACH filter. For example, the training image is rotated in two-degree increments between -6 degrees and +6 degrees, giving seven rotated training images. The rotated training images are multiplexed and input to the OT-MACH filter.
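A short sketch of this rotation step is given below, using OpenCV in Python; the interpolation mode and the zero border value are illustrative assumptions.

import cv2

def make_rotated_training_images(training_image, max_angle=6, step=2):
    # Rotate the training image about its centre to build the rotated set
    # (-6, -4, ..., +6 degrees gives seven images for the defaults above).
    h, w = training_image.shape[:2]
    centre = (w / 2.0, h / 2.0)
    rotated = []
    for angle in range(-max_angle, max_angle + 1, step):
        matrix = cv2.getRotationMatrix2D(centre, angle, 1.0)
        rotated.append(cv2.warpAffine(training_image, matrix, (w, h),
                                      flags=cv2.INTER_LINEAR, borderValue=0))
    return rotated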
The operation of the OT-MACH filter is now described in more detail with reference to Fig. 3. The OT-MACH (optimal trade-off maximum average correlation height) filter is implemented on the computer 200 using the FFTW ("Fastest Fourier Transform in the West") library. The FFTW library is a C subroutine library for computing discrete Fourier transforms (DFTs) in one or more dimensions. The FFTW library is interfaced with the Intel (RTM) OpenCV computer-vision library, making the OT-MACH filter more efficient in processing time and frequency.
As shown in the left-hand column of Fig. 3, the OT-MACH filter receives the set of rotated training images T_i, i = 1 to N, where N is the number of rotated training images. A Fourier transform FT(T_i) is taken of each rotated training image. The output of FFTW is an unshifted FFT. The zero-frequency component of the FFT is shifted to the centre of the spectrum using the following function, written in C:
cvFFTWShift()
This function has the effect of exchanging the upper-left quadrant with the lower-right quadrant, and the upper-right quadrant with the lower-left quadrant.
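For illustration only, an equivalent quadrant swap can be written as follows (Python with NumPy, assuming even image dimensions, which is typical for FFT work); numpy.fft.fftshift performs the same operation in the general case. This sketch stands in for the original C implementation and is not the patentee's code.

import numpy as np

def fft_quadrant_shift(spectrum):
    # Move the zero-frequency component to the centre of the spectrum by
    # swapping diagonally opposite quadrants (even dimensions assumed).
    h2, w2 = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    shifted = np.empty_like(spectrum)
    shifted[:h2, :w2] = spectrum[h2:, w2:]   # lower-right -> upper-left
    shifted[h2:, w2:] = spectrum[:h2, :w2]   # upper-left  -> lower-right
    shifted[:h2, w2:] = spectrum[h2:, :w2]   # lower-left  -> upper-right
    shifted[h2:, :w2] = spectrum[:h2, w2:]   # upper-right -> lower-left
    return shifted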
The OT-MACH filter is expressed by the following formula, where m_x is the mean of the rotated training image vectors x_1 to x_N in the frequency domain, C is the diagonal power spectral density matrix of a chosen noise model, D_x is the diagonal average power spectral density of the rotated training images, and S_x is the similarity matrix of the set of rotated training images:

h = m_x / (αC + βD_x + γS_x)

These parameters can be derived from the training images. α, β and γ are non-negative optimal trade-off parameters, which allow the OT-MACH filter to be tuned for external conditions such as light levels. α, β and γ can be changed in the configuration file.
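For illustration, a minimal sketch of synthesising the OT-MACH filter from the Fourier-transformed rotated training images is given below (Python with NumPy). The white-noise model used for C, the element-wise treatment of the similarity term S_x, and the α, β, γ defaults are assumptions standing in for values that the configuration file would supply.

import numpy as np

def build_otmach_filter(rotated_training_images, alpha=0.1, beta=0.1, gamma=0.8):
    # Build the OT-MACH filter h = m_x / (alpha*C + beta*D_x + gamma*S_x)
    # in the frequency domain, treating the diagonal matrices element-wise.
    spectra = np.array([np.fft.fft2(img.astype(np.float32))
                        for img in rotated_training_images])
    m_x = spectra.mean(axis=0)                  # mean training spectrum
    c = np.ones(m_x.shape, dtype=np.float32)    # white-noise PSD model (assumption)
    d_x = (np.abs(spectra) ** 2).mean(axis=0)   # average power spectral density
    s_x = (np.abs(spectra - m_x) ** 2).mean(axis=0)  # deviation from the mean spectrum
    return m_x / (alpha * c + beta * d_x + gamma * s_x + 1e-12)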
The computer 200 receives the image stream from the first camera. As shown in Fig. 3, a set of sub-images S_k, k = 1 to N, where N is the number of sub-images, is drawn from one image of the image stream. A Fourier transform FT(S_k) is taken of each sub-image. The Fourier-transformed sub-images are correlated with the OT-MACH filter in the frequency domain via the following function:
conj(FT(h)) FT(S_k)
Each sub-image is then classified as in-class or out-of-class by comparing the maximum peak in its correlation plane with a detection threshold. The detection threshold is given by the following formula.
A correlation plot is produced for each in-class sub-image. The position of the controller 100 in the x and y directions corresponds to the peak in the correlation plot.
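As an illustration of this correlation and peak-detection step, the sketch below (Python with NumPy, offered as an assumption rather than the patentee's implementation) correlates one sub-image with the filter in the frequency domain and compares the resulting peak with a detection threshold; the threshold value is a placeholder, since the patent's threshold formula is not reproduced above.

import numpy as np

def locate_controller(sub_image, otmach_filter, detection_threshold=0.5):
    # Correlate one sub-image with the OT-MACH filter and return the (x, y)
    # peak location if the peak exceeds the detection threshold, else None.
    spectrum = np.fft.fft2(sub_image.astype(np.float32), s=otmach_filter.shape)
    # Frequency-domain correlation: conj(FT(h)) * FT(S_k), then inverse FFT.
    correlation = np.real(np.fft.ifft2(np.conj(otmach_filter) * spectrum))
    peak_y, peak_x = np.unravel_index(np.argmax(correlation), correlation.shape)
    if correlation[peak_y, peak_x] < detection_threshold:
        return None          # out-of-class: controller not found in this sub-image
    return peak_x, peak_y    # in-class: peak gives the controller position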
The OT-MACH filter is applied to every mth image from the first camera to generate a correlation plot and determine the position of the controller 100. The parameter m can be changed in the configuration file. The OT-MACH filter can be updated, either in real time or at a frequency determined by a parameter in the configuration file, with a newly acquired set of rotated training images applied to the OT-MACH filter.
The second method, i.e. the method of measuring the z-axis coordinate of the controller 100, will now be described with reference to Fig. 4; it is included for reference only. The z-axis coordinate is the distance from the centroid of the first camera and the second camera to the controller 100.
The half angle θ1 of the first camera and the half angle θ2 of the second camera are calculated using the following formula, where D is the field stop of the first camera or the second camera, and f is the focal length of the first camera or the second camera:

θ1,2 = tan⁻¹(D / 2f)
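For illustration, the half angle of view implied by these definitions can be computed as follows (Python); the relation θ = tan⁻¹(D / 2f) is the standard field-of-view formula, and the numerical values in the example are assumptions.

import math

def half_angle_of_view(field_stop_d, focal_length_f):
    # Half angle of view (radians) for a camera with field stop D and focal
    # length f, using the standard relation theta = atan(D / (2 * f)).
    return math.atan(field_stop_d / (2.0 * focal_length_f))

# Example (assumed values): a 6.4 mm field stop with a 4 mm focal length
# gives a half angle of about 38.7 degrees.
theta_1 = half_angle_of_view(6.4, 4.0)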
With reference to Fig. 4, the z-axis coordinate can be determined according to the following formula, where α1,2 can be measured using the half angles of view and the x-axis and y-axis position of the controller 100 calculated using the first method.
Alternatively, if the first camera and the second camera are calibrated, the intrinsic and extrinsic camera parameters can be found using OpenCV functions.
Those skilled in the art will appreciate that rotation multiplexing, i.e. rotating the training image to produce a plurality of rotated training images, is not an essential feature of the invention. Instead, the OT-MACH filter can be built from a single training image. Those skilled in the art will, however, appreciate that building the OT-MACH filter from a plurality of rotated training images is preferred, because it provides tolerance in the OT-MACH filter between filter updates, so that the accuracy of the position identification is increased and the computer is less likely to lose track of the controller 100.
Those skilled in the art will also appreciate that updating the OT-MACH filter, i.e. producing a new set of rotated training images, is not an essential feature. Instead, the OT-MACH filter can be built from a first set of training images and not updated. Of course, those skilled in the art will also appreciate that updating the OT-MACH filter is most preferred, because it provides more accurate identification of the position of the controller 100.
Furthermore, updating the OT-MACH filter once every 25 images in the image stream from the first camera (i.e. once per second for an ordinary camera capturing 25 frames per second) is not an essential feature. Those skilled in the art will appreciate that the frequency at which the OT-MACH filter is updated can be changed by modifying the configuration file.
Those skilled in the art will appreciate that the filter need not be an OT-MACH filter. On the contrary, any form of adaptive correlation filter can be used; for example, a MACH filter, ASEF filter, UMACE filter or MOSSE filter can be used instead.
Those skilled in the art will also appreciate that the simulator 1 is not limited to the scenario described above. On the contrary, the simulator 1 can be used for various forms of virtual-reality scenario, such as other training, entertainment or industrial contexts. In particular, the object-tracking method described above can be applied to other scenarios, such as industrial processes.
In the embodiment described above, the computer 200 includes a computer program. Those skilled in the art will appreciate that the computer program can be presented on a computer-readable medium, such as a compact disc or a USB flash drive, or can be downloadable via the Internet. The computer program can also be stored on a server at a remote location, and the user's personal computer can send data to and receive data from the server via a network connection.
Those skilled in the art will also appreciate that the head-mounted display is not an essential feature. On the contrary, the computer 200 can output graphics to, for example, a computer monitor, projector, TV, HDTV or 3DTV. The head-mounted display is a preferred feature because it provides the user with an immersive experience, and can also provide data related to the orientation of the user's head, which can then be used by the simulator 1.
Those skilled in the art will appreciate that any combination of features is possible without departing from the scope of the invention as defined by the claims.

Claims (12)

1. An apparatus for determining the two-dimensional coordinates of a controller operated by a human user, the apparatus comprising:
a first camera for producing an infrared image of the controller and the user;
a second camera for producing a calibration image of the controller and the user and an image stream of the controller and the user;
a first computing module for producing a training image, configured to use an active contour model to extract a vector of the outline of the edge of the controller from the infrared image of the controller and the user, and to apply the vector to the calibration image to produce the training image, thereby distinguishing the controller from the user;
an adaptive correlation filter built from the training image; and
a second computing module configured to correlate at least one image of the image stream from the second camera with the adaptive correlation filter, and to determine the x-y coordinates of the controller by comparing the maximum peak in the correlation plane of the adaptive correlation filter with a detection threshold.
2. The apparatus according to claim 1, wherein a plurality of rotated training images are generated, the rotated training images being built by rotating the training image, and the adaptive correlation filter is built from the rotated training images.
3. The apparatus according to claim 1, wherein the training image is produced periodically to update the adaptive correlation filter.
4. The apparatus according to claim 2, wherein the rotated training images are produced periodically to update the adaptive correlation filter.
5. The apparatus according to claim 3 or 4, wherein the adaptive correlation filter is updated every 0.5 to 1.5 seconds.
6. The apparatus according to any one of claims 1 to 4, wherein the adaptive correlation filter is an OT-MACH filter.
7. A method for determining the two-dimensional coordinates of a controller operated by a human user, the method comprising the steps of:
obtaining from an infrared camera an infrared image including the controller and the user;
obtaining from a camera an image stream including the controller and the user and a calibration image including the controller and the user;
producing, using an active contour model, a vector of the outline of the edge of the controller from the infrared image including the controller and the user, thereby distinguishing the controller from the user;
producing a training image by extracting the controller, relative to the user, from the calibration image using the vector;
building an adaptive correlation filter using the training image; and
determining the x-y coordinates of the controller in a correlation plot by correlating at least one image of the image stream with the adaptive correlation filter.
8. The method according to claim 7, further comprising the steps of generating a plurality of rotated training images and building the adaptive correlation filter from the rotated training images, the rotated training images being built by rotating the training image.
9. The method according to claim 7, wherein the training image is produced periodically to update the adaptive correlation filter.
10. The method according to claim 8, wherein the rotated training images are produced periodically to update the adaptive correlation filter.
11. The method according to any one of claims 7 to 10, wherein the adaptive correlation filter is updated every 0.5 to 1.5 seconds.
12. The method according to any one of claims 7 to 10, wherein the adaptive correlation filter is an OT-MACH filter.
CN201180003615.4A 2010-11-10 2011-11-10 A simulator including a method and apparatus for determining the co-ordinates of an object in two dimensions Active CN102667858B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB1018974.4 2010-11-10
GB1018974.4A GB2485359B (en) 2010-11-10 2010-11-10 A simulator including a method and apparatus for determining the co-ordinates of an object in two dimensions
PCT/GB2011/052187 WO2012063068A1 (en) 2010-11-10 2011-11-10 A simulator including a method and apparatus for determining the co-ordinates of an object in two dimensions

Publications (2)

Publication Number Publication Date
CN102667858A CN102667858A (en) 2012-09-12
CN102667858B true CN102667858B (en) 2017-03-15

Family

ID=43414641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201180003615.4A Active CN102667858B (en) A simulator including a method and apparatus for determining the co-ordinates of an object in two dimensions

Country Status (5)

Country Link
CN (1) CN102667858B (en)
GB (2) GB2485359B (en)
IN (1) IN2012DN00884A (en)
RU (1) RU2608350C2 (en)
WO (1) WO2012063068A1 (en)

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5267329A (en) * 1990-08-10 1993-11-30 Kaman Aerospace Corporation Process for automatically detecting and locating a target from a plurality of two dimensional images
US5870486A (en) * 1991-12-11 1999-02-09 Texas Instruments Incorporated Method of inferring sensor attitude through multi-feature tracking
US6529614B1 (en) * 1998-08-05 2003-03-04 California Institute Of Technology Advanced miniature processing handware for ATR applications
US6757422B1 (en) * 1998-11-12 2004-06-29 Canon Kabushiki Kaisha Viewpoint position detection apparatus and method, and stereoscopic image display system
US6546117B1 (en) * 1999-06-10 2003-04-08 University Of Washington Video object segmentation using active contour modelling with global relaxation
US6785402B2 (en) * 2001-02-15 2004-08-31 Hewlett-Packard Development Company, L.P. Head tracking and color video acquisition via near infrared luminance keying
US7399969B2 (en) * 2003-01-21 2008-07-15 Suren Systems, Ltd. PIR motion sensor
US7483569B2 (en) * 2003-05-29 2009-01-27 Carnegie Mellon University Reduced complexity correlation filters
GB0404269D0 (en) * 2004-02-26 2004-03-31 Leuven K U Res & Dev Time-dependent three-dimensional musculo-skeletal modeling based on dynamic surface measurements of bodies
KR100657915B1 (en) * 2004-11-26 2006-12-14 삼성전자주식회사 Corner detection method and corner detection device
US7693331B2 (en) * 2006-08-30 2010-04-06 Mitsubishi Electric Research Laboratories, Inc. Object segmentation using visible and infrared images
JP2008271024A (en) * 2007-04-18 2008-11-06 Fujifilm Corp Image processing apparatus and method and program
CN101383004A (en) * 2007-09-06 2009-03-11 上海遥薇实业有限公司 Passenger target detecting method combining infrared and visible light images

Also Published As

Publication number Publication date
RU2608350C2 (en) 2017-01-18
GB2485471B (en) 2016-10-12
WO2012063068A1 (en) 2012-05-18
RU2012105335A (en) 2014-12-20
GB2485359B (en) 2012-10-31
IN2012DN00884A (en) 2015-07-10
CN102667858A (en) 2012-09-12
GB2485359A (en) 2012-05-16
GB201018974D0 (en) 2010-12-22
GB201119413D0 (en) 2011-12-21
GB2485471A (en) 2012-05-16

Similar Documents

Publication Publication Date Title
CN105426827B (en) Living body verification method, device and system
CN109325437A (en) Image processing method, device and system
CA3042819A1 (en) Method and system for postural analysis and measuring anatomical dimensions from a digital image using machine learning
CN112925223B (en) Unmanned aerial vehicle three-dimensional tracking virtual test simulation system based on visual sensing network
CN104656890A (en) Virtual realistic intelligent projection gesture interaction all-in-one machine
CN102893124A (en) Projecting patterns for high resolution texture extraction
DE15864375T1 (en) SHAPING A BODY
CN108369785A (en) Activity determination
US20180025664A1 (en) Computerized methods and systems for motor skill training
WO2013087084A1 (en) Method and device for estimating a pose
EP2325725A1 (en) Method for producing an effect on virtual objects
WO2022080165A1 (en) Movement visualization system and movement visualization method
US20120035498A1 (en) Apparatus and method for improving eye-hand coordination
Caarls et al. Augmented reality for art, design and cultural heritage—system design and evaluation
CN110443884B (en) Method and device for hand motion reconstruction
KR20130067856A (en) Apparatus and method for performing virtual musical instrument on the basis of finger-motion
CN110477921B (en) Height measurement method based on skeleton broken line Ridge regression
CN205028239U (en) Interactive all -in -one of virtual reality intelligence projection gesture
CN107247466A (en) Robot head gesture control method and system
CN102667858B (en) Including for determining the simulator of the method and apparatus of the two-dimensional coordinate of object
JP2010238134A (en) Image processor and program
CN109801346A (en) A kind of original painting neural network based auxiliary painting methods and device
CN110298917A (en) A kind of facial reconstruction method and system
JPWO2019150431A1 (en) Information processing device
KR20180129301A (en) Method and apparatus for emulating behavior of robot

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant