CN109523567A - A kind of auxiliary urheen practitioner's fingering detection method based on computer vision technique - Google Patents
A kind of auxiliary urheen practitioner's fingering detection method based on computer vision technique
- Publication number
- CN109523567A (application CN201811248452.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- practitioner
- urheen
- finger
- fingering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B15/00—Teaching music
- G09B15/06—Devices for exercising or strengthening fingers or arms; Devices for holding fingers or arms in a proper position for playing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physical Education & Sports Medicine (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a fingering detection method for assisting urheen (erhu) practitioners based on computer vision technology, and relates to the technical field of computer vision. The method comprises the following steps: a depth sensor and a visible-light camera each acquire background image information from different angles; the depth sensor performs a primary segmentation of the background image and extracts the image region containing the practitioner; the depth sensor and the visible-light camera then cooperate to segment the background image further and extract the image region containing the urheen strings; finally the visible-light camera and the depth sensor cooperate to segment the background image further, extract the image region containing the fingers, and judge whether the practitioner's fingers are placed accurately. By combining a depth sensor and a visible-light camera, the invention detects the urheen practitioner's playing, reveals defects in the practitioner's fingering, lightens the teacher's workload, and improves the practitioner's learning efficiency.
Description
Technical field
The invention belongs to the technical field of computer vision, and more particularly relates to a fingering detection method for assisting urheen practitioners based on computer vision technology.
Background technique
With the gradual improvement of quality of life, people are slowly shifting from material pursuits to spiritual ones. The urheen is a bowed string instrument consisting mainly of a neck, a bow, and two strings; the modern urheen is tuned in a pure fifth and belongs to the mid-to-high range of Chinese fiddles, with each region developing differently shaped bowed string instruments according to its needs and conditions.
In the traditional way of learning the urheen, one teacher instructs several students. Because the teacher's attention is limited, small defects in the students' playing can go unnoticed, which lowers the students' learning efficiency. Moreover, when students practise on their own, the lack of a teacher's supervision also hinders their progress.
Summary of the invention
The purpose of the present invention is to provide a fingering detection method for assisting urheen practitioners based on computer vision technology. By combining a depth sensor and a visible-light camera, the method detects the urheen practitioner's playing, reveals defects in the practitioner's fingering, lightens the teacher's workload, and improves the practitioner's learning efficiency.
In order to solve the above technical problems, the present invention is achieved through the following technical solutions:
The present invention is a fingering detection method for assisting urheen practitioners based on computer vision technology, comprising the following steps:
SS01 Information acquisition: depth sensors and visible-light cameras each acquire background image information from different angles;
SS02 Use of depth information and primary segmentation: a depth sensor performs a primary segmentation of the background image and extracts the image region containing the practitioner;
SS03 Urheen string positioning: the depth sensor and the visible-light camera cooperate to segment the background image further and extract the image region containing the urheen strings;
SS04 Visible-light finger detection and fingering analysis: the visible-light camera and the depth sensor cooperate to segment the background image further and extract the image region containing the fingers; then judge whether the practitioner's fingers are placed accurately.
Further, the method comprises the following steps:
SS021 Obtain the gray-value range: let the acquired image be M, and let the maximum and minimum pixel values of M be Mmax and Mmin respectively; the pixel values then lie in [Mmin, Mmax], which is the abscissa range of the histogram, the ordinate being the number of pixels with the corresponding abscissa value;
SS022 Draw the gray-level histogram: traverse every pixel of image M and draw the corresponding gray-level histogram;
SS023 Find the threshold: analyse the histogram and take the abscissa at the discontinuity in the ordinate values as the segmentation threshold k;
SS024 Image segmentation: traverse image M pixel by pixel using threshold k; if a pixel value is greater than k, the pixel belongs to the background region and its value is set to 0; otherwise the pixel belongs to the foreground region, i.e., the region of the practitioner and the urheen;
SS025 Obtain the segmented image: the resulting image M' is used in the next step;
SS026 Repeat: segment the image N obtained by the other depth sensor according to steps SS021-SS025 to obtain image N'.
Further, the method comprises the following steps:
SS031 Image grayscaling: let the image acquired by the visible-light camera be T, and let the grayscaled image be T';
SS032 Background filtering: segment the image background using the depth information;
SS033 String detection: detect the straight lines of the urheen strings;
SS034 Angle detection: detect the angles of the urheen strings;
SS035 Real-time detection: obtain the positions of the urheen strings in real time.
Further, the method comprises the following steps:
SS041 Visible-light finger-position detection: including left-hand positioning and detection, and precise finger positioning;
SS042 Fingering analysis: analyse whether the practitioner's fingering is accurate.
Further, the precise positioning of the fingers comprises the following steps:
SS0411 Obtain the segmented image: repeat steps SS031-SS032 to obtain the segmented image;
SS0412 Obtain the hand region;
SS0413 Locate the centre of the back of the palm;
SS0414 Detect the fingers;
SS0415 Determine the finger order: judge whether the practitioner's fingers are placed correctly.
The invention has the following advantages: using computer vision technology, the cooperation of a depth sensor and a visible-light camera detects the urheen practitioner's playing, so that defects in the practitioner's fingering can be found; the placement of the practitioner's fingers can be detected and judged in real time, and mistakes in the practitioner's fingering corrected promptly. This greatly reduces the teacher's workload and improves the practitioner's learning efficiency.
Of course, a product implementing the invention does not necessarily need to achieve all of the above advantages at the same time.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain further drawings from them without creative effort.
Fig. 1 is a functional block diagram of the fingering detection method for assisting urheen practitioners based on computer vision technology;
Fig. 2 is a flow diagram of the depth-information segmentation;
Fig. 3 is a flow diagram of the urheen string positioning;
Fig. 4 is a flow diagram of the visible-light finger-position detection and fingering analysis;
Fig. 5 is a flow diagram of the precise finger positioning.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
Referring to Fig. 1, the present invention is a fingering detection method for assisting urheen practitioners based on computer vision technology, comprising the following steps:
SS01 Information acquisition: two depth sensors and two visible-light cameras acquire background image information from different angles. The first depth sensor is located 1 m in front of the practitioner at a height of 0.5 m above the ground, facing the practitioner; the second depth sensor is located 1 m to the practitioner's left at a height of 0.5 m, facing the practitioner; the first visible-light camera is located 1 m in front of the practitioner at a height of 0.5 m, facing the practitioner; the second visible-light camera is located 1 m to the practitioner's right at a height of 0.8 m, facing the practitioner. Acquiring images of the urheen practitioner from different angles, front and side, lays the foundation for segmenting the regions related to the practitioner out of the background images. The depth sensors acquire the background images using Kinect technology: a Kinect has three cameras in total, the two outer ones being depth cameras, so once the Kinect is connected to power it can acquire the background images with its depth cameras.
SS02 Use of depth information and primary segmentation: a depth sensor performs a primary segmentation of the background image and extracts the image region containing the practitioner, avoiding interference from walls or other people in the background.
SS03 Urheen string positioning: the depth sensor and the visible-light camera cooperate to segment the background image further and extract the image region containing the urheen strings.
SS04 Visible-light finger detection and fingering analysis: the visible-light camera and the depth sensor cooperate to segment the background image further and extract the image region containing the fingers; then judge whether the practitioner's fingers are placed accurately.
Referring to Fig. 2, the segmentation comprises the following steps:
SS021 Obtain the gray-value range: let the image obtained by the depth sensor be M, and let the maximum and minimum pixel values of M be Mmax and Mmin respectively; the pixel values then lie in [Mmin, Mmax], which is the abscissa range of the histogram, the ordinate being the number of pixels with the corresponding abscissa value;
SS022 Draw the gray-level histogram: traverse every pixel of image M and draw the corresponding gray-level histogram;
SS023 Find the threshold: analyse the histogram and take the abscissa at the discontinuity in the ordinate values as the segmentation threshold k;
SS024 Image segmentation: traverse image M pixel by pixel; if a pixel value is greater than threshold k, the pixel belongs to the background region and its value is set to 0; otherwise the pixel belongs to the foreground region, i.e., the region of the practitioner and the urheen;
SS025 Obtain the segmented image: the resulting image M' is used in the next step;
SS026 Repeat: segment the image N obtained by the other depth sensor according to steps SS021-SS025 to obtain image N'.
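Steps SS021-SS025 can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation; in particular, reading the "value discontinuity" of SS023 as the widest empty run in the histogram is an assumption.

```python
import numpy as np

def find_gap_threshold(depth):
    """SS023 sketch: take the midpoint of the widest empty run in the
    gray-level histogram as the segmentation threshold k (an assumed
    reading of the 'value discontinuity' the patent describes)."""
    vals = np.sort(np.unique(depth))
    gaps = np.diff(vals)
    i = int(np.argmax(gaps))
    return (vals[i] + vals[i + 1]) / 2.0

def segment_depth(depth, k):
    """SS024: pixels with a value greater than k are background and set
    to 0; the rest is foreground (practitioner and urheen)."""
    out = depth.copy()
    out[depth > k] = 0
    return out

# Toy depth image: near foreground (~50) and far background (~200)
M = np.array([[50, 52, 200],
              [51, 210, 205]])
k = find_gap_threshold(M)      # falls inside the 52..200 gap
M_prime = segment_depth(M, k)  # background zeroed, foreground kept
```

The same two calls, applied to the side depth sensor's image N, yield N' as in SS026.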
Referring to Fig. 3, the string positioning comprises the following steps:
SS031 Image grayscaling: let the image acquired by the visible-light camera be T, and let the grayscaled image be T'.
SS032 Background filtering: segment the image background using the depth information. The first depth sensor and the first visible-light camera directly in front of the practitioner acquire images of the same size, so the background is segmented by traversing the pixels one by one: if pixel (x, y) has value 0 in image M', that pixel belongs to the background region, so the corresponding pixel (x, y) in image T' also belongs to the background and its value is set to 0; otherwise the pixel value remains unchanged. Looping over all pixels yields T'', the image with the background segmented out, where 0 ≤ x ≤ W-1, 0 ≤ y ≤ H-1, and W and H are the width and height of image M'.
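The pixel-by-pixel masking of SS032 is a single vectorised operation in NumPy. A minimal sketch, assuming the two images are the same size and pixel-aligned as the text states:

```python
import numpy as np

def filter_background(T_gray, M_seg):
    """SS032 sketch: wherever the segmented depth image M' is 0
    (background), zero the co-registered grayscale image T' as well;
    all other pixels keep their values."""
    T2 = T_gray.copy()
    T2[M_seg == 0] = 0
    return T2

M_seg = np.array([[0, 7],
                  [9, 0]])        # segmented depth image M'
T_gray = np.array([[100, 120],
                   [130, 140]])   # grayscaled visible image T'
T2 = filter_background(T_gray, M_seg)  # the image T'' after filtering
```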
SS033 String detection: detect the straight lines of the urheen strings. The strings are straight, so a line-detection method based on the Hough transform is used. Let a line be y = kx + b, where k and b are its slope and intercept. Transformed into the polar-coordinate space, the line becomes r = x cos θ + y sin θ, where (x, y) is the coordinate of a point on the line, r is the distance from the line to the origin, and θ is the angle between r and the positive X axis. Image T'' is traversed pixel by pixel and each pixel is mapped into the polar space, giving one (r, θ) group per pixel; substituting the most-repeated (r, θ) groups into the polar line equation yields three line segments in T'': the bow and the strings. One segment is inclined and the other two are vertical. The implementation is prior art; line detection can be realised by calling the corresponding routine directly.
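The r = x cos θ + y sin θ voting can be illustrated with a small accumulator. A real system would call an existing routine (for example a library Hough-lines function), as the text notes; this is only a sketch of the principle.

```python
import numpy as np

def hough_peak(points, shape, n_theta=180):
    """Vote every point into (r, theta) cells via r = x*cos(t) + y*sin(t)
    and return the most-voted line; repeated (r, theta) pairs correspond
    to collinear points, as in SS033."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in points:
        rs = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rs + diag, np.arange(n_theta)] += 1
    r_idx, t_idx = np.unravel_index(int(np.argmax(acc)), acc.shape)
    return r_idx - diag, thetas[t_idx]

# A vertical "string": all points share x = 5, so r = 5 at theta = 0
pts = [(5, y) for y in range(30)]
r, theta = hough_peak(pts, (40, 40))
```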
SS034 Angle detection: detect the angles of the urheen strings. After the three segments have been detected in T'', angle detection is used to identify the real strings: the strings lie along the neck and are therefore perpendicular to the horizontal, while the bow is held by the practitioner at a tilt. The angle between each segment and the vertical is measured; the two segments whose angle with the vertical is less than 10 degrees correspond to the strings and are denoted Z1 and Z2. Because a finger rests on the string, the string appears broken in the acquired image; the upper segment is denoted Z1 and the lower segment Z2.
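The near-vertical filter of SS034 can be sketched as follows; segment endpoints and the 10-degree bound come from the text, while the top-to-bottom ordering by mean image y (y grows downward) is an implementation assumption.

```python
import numpy as np

def select_strings(segments, max_deg=10.0):
    """SS034 sketch: keep only segments within max_deg of vertical (the
    strings); the bow, held at a larger tilt, is filtered out. Segments
    are ((x1, y1), (x2, y2)); the result is sorted top-to-bottom so the
    first is Z1 (above the finger) and the second is Z2 (below)."""
    strings = []
    for (x1, y1), (x2, y2) in segments:
        ang = abs(np.degrees(np.arctan2(x2 - x1, y2 - y1)))  # from vertical
        ang = min(ang, 180.0 - ang)
        if ang < max_deg:
            strings.append(((x1, y1), (x2, y2)))
    return sorted(strings, key=lambda s: (s[0][1] + s[1][1]) / 2.0)

segs = [((100, 60), (101, 120)),   # lower string segment
        ((100, 0), (100, 50)),     # upper string segment
        ((20, 0), (90, 100))]      # the tilted bow (~35 deg off vertical)
z1, z2 = select_strings(segs)
```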
SS035 Real-time detection: obtain the urheen string positions in real time. The sensors around the practitioner acquire images continuously. For a given moment i, the images acquired by the first depth sensor and the first visible-light camera in front of the practitioner are Mi and Ti respectively. Steps SS021-SS024 are first applied to image Mi to obtain Mi'; steps SS031-SS034 are then applied to Ti and Mi', yielding the string segments Z1i and Z2i at moment i.
Referring to Fig. 4, the detection and analysis comprise the following steps:
SS041 Visible-light finger-position detection: including left-hand positioning and detection, and precise finger positioning. Left-hand positioning and detection: step SS035 has provided the two segments Z1i and Z2i at moment i, the two parts of the string separated by the finger. The position where the left hand is placed is then given by the proportion h of formula (1), where |Z1i| and |Z2i| are the lengths of the two segments. When the value of h lies within the set range, the left hand is placed accurately; otherwise the placement is inaccurate. The depth sensor acquires many frames continuously within 1 s, so the image Mi acquired at moment i in step SS035 is the i-th frame acquired by the depth sensor; one image is taken and analysed every 1000 frames to determine whether the left hand is currently placed correctly.
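Formula (1) itself appears only as an image in the original and is not recoverable from the text. Given that h is computed from the two segment lengths |Z1i| and |Z2i| and compared against a set range, one plausible reading is the proportion of the visible string above the hand; the sketch below uses that assumed form, and the range bounds are illustrative, not from the patent.

```python
def hand_position_ratio(len_z1, len_z2):
    """Assumed form of formula (1): the fraction of the visible string
    lying above the left hand, |Z1| over the total |Z1| + |Z2|."""
    return len_z1 / (len_z1 + len_z2)

def left_hand_ok(len_z1, len_z2, lo=0.2, hi=0.5):
    """SS041: the placement is accurate when h falls within a set range.
    The bounds lo and hi here are hypothetical placeholders."""
    return lo <= hand_position_ratio(len_z1, len_z2) <= hi

h = hand_position_ratio(30.0, 70.0)  # hand 30% of the way down the string
```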
SS042 Fingering analysis: analyse whether the practitioner's fingering is accurate.
Referring to Fig. 5, the precise positioning of the fingers comprises the following steps:
SS0411 Obtain the segmented image: repeat steps SS031-SS032 to obtain the segmented image Ti'' for the current moment i.
SS0412 Obtain the hand region: repeat step SS035 on image Ti'' to obtain the two separated segments. Let the length of the gap between the two segments be ll; centred on the midpoint of the gap, a region of length 1.5*ll and width ll is drawn and denoted ROIi. This rectangular region corresponds to the actual area of the left hand.
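The ROI construction of SS0412 is simple geometry. A sketch, assuming the 1.5*ll dimension runs along the (vertical) string, which the text does not state explicitly:

```python
def hand_roi(gap_top, gap_bottom):
    """SS0412 sketch: the gap between the two string segments has length
    ll; centred on the gap midpoint, take a rectangle of height 1.5*ll
    (assumed to run along the string) and width ll as the left-hand
    region ROI_i. Returns (x, y, width, height) in image coordinates."""
    ll = gap_bottom[1] - gap_top[1]
    cx = (gap_top[0] + gap_bottom[0]) / 2.0
    cy = (gap_top[1] + gap_bottom[1]) / 2.0
    w, h = ll, 1.5 * ll
    return (cx - w / 2.0, cy - h / 2.0, w, h)

roi = hand_roi((100, 40), (100, 80))  # a 40 px gap along a vertical string
```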
SS0413 Locate the centre of the back of the palm: draw the contour of the left hand within region ROIi (the drawing method is prior art and can be called directly), then fit an ellipse to the contour curve; the centre of the fitted ellipse is the centre of the back of the palm. The fitting method is likewise prior art and can be called directly.
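The text treats ellipse fitting as prior art to be called directly (a library fitter such as OpenCV's fitEllipse would be typical). A dependency-free moment-based approximation shows the idea: the contour centroid gives the ellipse centre, i.e. the palm-back centre of SS0413.

```python
import numpy as np

def ellipse_from_contour(contour):
    """SS0413 sketch: approximate the fitted ellipse by moments. The
    centre is the contour centroid; rough relative axis lengths come
    from the eigenvalues of the point covariance. A real system would
    call a library ellipse fitter instead."""
    pts = np.asarray(contour, dtype=float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    major, minor = np.sqrt(eigvals)  # axis lengths up to a scale factor
    return center, major, minor

# Contour points on a circle of radius 10 around (5, 5): the "ellipse"
# centre recovered should be (5, 5), with equal axes
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
contour = np.stack([5 + 10 * np.cos(t), 5 + 10 * np.sin(t)], axis=1)
center, major, minor = ellipse_from_contour(contour)
```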
SS0414 Detect the fingers: let the minor-axis radius of the ellipse obtained in step SS0413 be ri and its centre Oi. A circle of radius R = 1.5*ri is drawn around centre Oi; this circle divides the left-hand contour obtained in step SS0413 into several small half-elliptical contours, and each half-elliptical contour corresponds to the contour of one finger.
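The circle-cut of SS0414 can be sketched by classifying contour points as inside or outside the R = 1.5*ri circle and counting the outside runs; treating each run as one finger arc is an assumed reading of the "half-elliptical contours".

```python
import numpy as np

def count_fingers(contour, center, r_minor):
    """SS0414 sketch: a circle of radius R = 1.5 * r_minor around the
    palm centre O_i cuts the hand contour; each run of contour points
    outside the circle is taken as one finger arc."""
    pts = np.asarray(contour, dtype=float)
    d = np.linalg.norm(pts - np.asarray(center, dtype=float), axis=1)
    outside = d > 1.5 * r_minor
    # count entry/exit transitions around the closed contour
    flips = int(np.sum(outside != np.roll(outside, 1)))
    return flips // 2

# Toy hand: palm contour at radius 12 with three finger bumps at radius 25
t = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
r = np.full_like(t, 12.0)
for lo, hi in [(10, 18), (40, 48), (70, 78)]:
    r[lo:hi] = 25.0
contour = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
n = count_fingers(contour, (0.0, 0.0), r_minor=12.0)  # R = 18
```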
SS0415 Determine the finger order: compare the finger contours obtained in step SS0414 with the actual finger order to judge whether the practitioner's fingers are placed correctly.
In the description of this specification, reference to terms such as "one embodiment", "example", or "specific example" means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
The preferred embodiments of the invention disclosed above are intended only to help illustrate the invention. The preferred embodiments do not describe every detail, nor do they limit the invention to the specific implementations described. Obviously, many modifications and variations can be made in light of this specification. These embodiments were chosen and described in detail in order to better explain the principles and practical applications of the invention, so that those skilled in the art can better understand and use it. The invention is limited only by the claims, together with their full scope and equivalents.
Claims (5)
1. A fingering detection method for assisting urheen practitioners based on computer vision technology, characterised by comprising the following steps:
SS01 Information acquisition: depth sensors and visible-light cameras each acquire background image information from different angles;
SS02 Use of depth information and primary segmentation: a depth sensor performs a primary segmentation of the background image and extracts the image region containing the practitioner;
SS03 Urheen string positioning: the depth sensor and the visible-light camera cooperate to segment the background image further and extract the image region containing the urheen strings;
SS04 Visible-light finger detection and fingering analysis: the visible-light camera and the depth sensor cooperate to segment the background image further and extract the image region containing the fingers; judge whether the practitioner's fingers are placed accurately.
2. The fingering detection method for assisting urheen practitioners based on computer vision technology according to claim 1, characterised by comprising the following steps:
SS021 Obtain the gray-value range: let the acquired image be M, and let the maximum and minimum pixel values of M be Mmax and Mmin respectively; the pixel values then lie in [Mmin, Mmax], which is the abscissa range of the histogram, the ordinate being the number of pixels with the corresponding abscissa value;
SS022 Draw the gray-level histogram: traverse every pixel of image M and draw the corresponding gray-level histogram;
SS023 Find the threshold: analyse the histogram and take the abscissa at the discontinuity in the ordinate values as the segmentation threshold k;
SS024 Image segmentation: traverse image M pixel by pixel using threshold k; if a pixel value is greater than k, the pixel belongs to the background region and its value is set to 0; otherwise the pixel belongs to the foreground region, i.e., the region of the practitioner and the urheen;
SS025 Obtain the segmented image: the resulting image M' is used in the next step;
SS026 Repeat: segment the image N obtained by the other depth sensor according to steps SS021-SS025 to obtain image N'.
3. The fingering detection method for assisting urheen practitioners based on computer vision technology according to claim 1, characterised by comprising the following steps:
SS031 Image grayscaling: let the image acquired by the visible-light camera be T, and let the grayscaled image be T';
SS032 Background filtering: segment the image background using the depth information;
SS033 String detection: detect the straight lines of the urheen strings;
SS034 Angle detection: detect the angles of the urheen strings;
SS035 Real-time detection: obtain the positions of the urheen strings in real time.
4. The fingering detection method for assisting urheen practitioners based on computer vision technology according to claim 1, characterised by comprising the following steps:
SS041 Visible-light finger-position detection: including left-hand positioning and detection, and precise finger positioning;
SS042 Fingering analysis: analyse whether the practitioner's fingering is accurate.
5. The fingering detection method for assisting urheen practitioners based on computer vision technology according to claim 4, characterised in that the precise positioning of the fingers comprises the following steps:
SS0411 Obtain the segmented image: repeat steps SS031-SS032 to obtain the segmented image;
SS0412 Obtain the hand region;
SS0413 Locate the centre of the back of the palm;
SS0414 Detect the fingers;
SS0415 Determine the finger order: judge whether the practitioner's fingers are placed correctly.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811248452.6A CN109523567A (en) | 2018-10-25 | 2018-10-25 | A kind of auxiliary urheen practitioner's fingering detection method based on computer vision technique |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN109523567A true CN109523567A (en) | 2019-03-26 |
Family
ID=65774048
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811248452.6A Pending CN109523567A (en) | 2018-10-25 | 2018-10-25 | A kind of auxiliary urheen practitioner's fingering detection method based on computer vision technique |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109523567A (en) |
Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2009288342A (en) * | 2008-05-27 | 2009-12-10 | Roland Corp | Tuning device |
| CN103108105A (en) * | 2011-11-11 | 2013-05-15 | 株式会社Pfu | Image processing apparatus, and line detection method |
| CN103235641A (en) * | 2013-03-17 | 2013-08-07 | 浙江大学 | 6-dimensional sensory-interactive virtual keyboard instrument system and realization method thereof |
| CN103390168A (en) * | 2013-07-18 | 2013-11-13 | 重庆邮电大学 | Intelligent wheelchair dynamic gesture recognition method based on Kinect depth information |
| CN103598870A (en) * | 2013-11-08 | 2014-02-26 | 北京工业大学 | Optometry method based on depth-image gesture recognition |
| CN105976800A (en) * | 2015-03-13 | 2016-09-28 | 三星电子株式会社 | Electronic device, method for recognizing playing of string instrument in electronic device |
| CN106485984A (en) * | 2015-08-27 | 2017-03-08 | 中国移动通信集团公司 | A kind of intelligent tutoring method and apparatus of piano |
| CN106909216A (en) * | 2017-01-05 | 2017-06-30 | 华南理工大学 | A kind of Apery manipulator control method based on Kinect sensor |
| CN106910381A (en) * | 2017-04-11 | 2017-06-30 | 华东交通大学 | A kind of piano intelligent tutoring system |
| CN107341439A (en) * | 2017-03-20 | 2017-11-10 | 长沙理工大学 | Finger number identification method |
Non-Patent Citations (1)
| Title |
|---|
| Liu Guohua: "HALCON Digital Image Processing", Xidian University Press * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20190326 |