CN110334620A - Method, device, storage medium and electronic equipment for assessing teaching quality - Google Patents
Method, device, storage medium and electronic equipment for assessing teaching quality
- Publication number
- CN110334620A CN110334620A CN201910547359.3A CN201910547359A CN110334620A CN 110334620 A CN110334620 A CN 110334620A CN 201910547359 A CN201910547359 A CN 201910547359A CN 110334620 A CN110334620 A CN 110334620A
- Authority
- CN
- China
- Prior art keywords
- video
- sets
- facial pose
- frame
- posture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06395—Quality analysis or management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Educational Administration (AREA)
- Strategic Management (AREA)
- Tourism & Hospitality (AREA)
- Economics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Development Economics (AREA)
- Entrepreneurship & Innovation (AREA)
- General Business, Economics & Management (AREA)
- Educational Technology (AREA)
- Marketing (AREA)
- Human Computer Interaction (AREA)
- Operations Research (AREA)
- Game Theory and Decision Science (AREA)
- Primary Health Care (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application disclose a method, device, storage medium and electronic equipment for assessing teaching quality, belonging to the field of online education. On the one hand, the application can assess teaching quality in real time, so that the real-time feedback provides a reference for subsequent teaching and allows problems in the teaching process to be corrected promptly, improving the quality and efficiency of the entire teaching process. On the other hand, teaching quality is assessed through facial pose; as a significant biological characteristic, facial pose can be recognized without contact and with a simple recognition algorithm, which improves the accuracy of the recognition process.
Description
Technical field
This application relates to the field of online education, and in particular to a method, device, storage medium and electronic equipment for assessing teaching quality.
Background technique
With the development of the Internet, online education has become more and more popular. Online education allows students to study at any time and in any place, fully improving their own skills. Compared with the traditional fixed classroom, it is more convenient and flexible, and its images and audio are more visual and more attractive.
In existing teaching quality evaluation methods, the observation and analysis of teaching behavior mostly rely on manually checking the instructional video stream in real time, or replaying the classroom video to record the states of students and teachers, or judging the states of teachers and students from feedback submitted by the student side. However, such judgments of student and teacher states lag behind in time, the collected data are scarce, and the evaluation results are rather subjective. How to evaluate teaching quality in real time is therefore an urgent problem to be solved.
Summary of the invention
The method, device, storage medium and terminal for assessing teaching quality provided by the embodiments of the present application can solve the problem that the teaching quality of an online teaching process cannot be assessed in real time. The technical solution is as follows:
In a first aspect, the embodiments of the present application provide a method for assessing teaching quality, the method comprising:
obtaining a first video stream collected by a first terminal device and a second video stream collected by a second terminal device;
extracting video frames from the first video stream to obtain a first video frame set, and extracting video frames from the second video stream to obtain a second video frame set;
identifying the facial pose of each video frame in the first video frame set, and identifying the facial pose of each video frame in the second video frame set;
assessing teaching quality according to the facial pose of each video frame in the first video frame set and/or the facial pose of each video frame in the second video frame set.
In a second aspect, the embodiments of the present application provide a device for assessing teaching quality, the device comprising:
a video acquisition unit for obtaining a first video stream collected by a first terminal device and a second video stream collected by a second terminal device;
a video extraction unit for extracting video frames from the first video stream to obtain a first video frame set and extracting video frames from the second video stream to obtain a second video frame set;
a pose recognition unit for identifying the facial pose of each video frame in the first video frame set and identifying the facial pose of each video frame in the second video frame set;
a teaching evaluation unit for assessing teaching quality according to the facial pose of each video frame in the first video frame set and/or the facial pose of each video frame in the second video frame set.
In a third aspect, the embodiments of the present application provide a computer storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the above method steps.
In a fourth aspect, the embodiments of the present application provide an electronic equipment, which may include a processor and a memory, wherein the memory stores a computer program suitable for being loaded by the processor to execute the above method steps.
The technical solutions provided by some embodiments of the present application bring at least the following beneficial effects:
A student video stream and/or a teacher video stream collected during class is obtained, video frames are extracted from the student video stream and the teacher video stream, the facial pose of each video frame is identified, and teaching quality is assessed according to the facial poses. On the one hand, the application can assess teaching quality in real time, so that the real-time feedback provides a reference for subsequent teaching and allows problems in the teaching process to be corrected promptly, improving the quality and efficiency of the entire teaching process. On the other hand, teaching quality is assessed through facial pose; as a significant biological characteristic, facial pose can be recognized without contact and with a simple recognition algorithm, which improves the accuracy of the recognition process.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a network architecture diagram provided by an embodiment of the present application;
Fig. 2 is a flow diagram of the method for assessing teaching quality provided by an embodiment of the present application;
Fig. 3 is another flow diagram of the method for assessing teaching quality provided by an embodiment of the present application;
Fig. 4 is a timing diagram of acquiring a video stream provided by an embodiment of the present application;
Fig. 5 is a user interface diagram of a terminal device provided by an embodiment of the present application;
Fig. 6A~6C are schematic illustrations of identifying a facial pose provided by an embodiment of the present application;
Fig. 7 is a structural diagram of a device provided by an embodiment of the present application;
Fig. 8 is another structural diagram of a device provided by the present application.
Specific embodiment
To make the purposes, technical schemes and advantages of the application clearer, the embodiments of the present application are described in further detail below in conjunction with the accompanying drawings.
Fig. 1 shows an exemplary system architecture 100 to which the method or device for assessing teaching quality of the application can be applied.
As shown in Fig. 1, the system architecture 100 may include a first terminal device 100, a first network 101, a server 102, a second network 103 and a second terminal device 104. The first network 101 provides the medium of the communication link between the first terminal device 100 and the server 102, and the second network 103 provides the medium of the communication link between the second terminal device 104 and the server 102. The first network 101 and the second network 103 may include various types of wired communication links or wireless communication links; for example, a wired communication link includes optical fiber, twisted pair or coaxial cable, and a wireless communication link includes a Bluetooth communication link, a wireless fidelity (Wireless-Fidelity, Wi-Fi) communication link or a microwave communication link, etc.
The first terminal device 100 communicates with the second terminal device 104 through the first network 101, the server 102 and the second network 103: the first terminal device 100 sends a message to the server 102, and the server 102 forwards the message to the second terminal device 104; the second terminal device 104 sends a message to the server 102, and the server 102 forwards the message to the first terminal device 100, thereby achieving communication between the first terminal device 100 and the second terminal device 104. The message types exchanged between the first terminal device 100 and the second terminal device 104 include control data and business data.
In this application, the first terminal device 100 is the terminal on which a student attends class and the second terminal device 104 is the terminal on which a teacher teaches; or the first terminal device 100 is the terminal on which the teacher teaches and the second terminal device 104 is the terminal on which the student attends class. For example, the business data is a video stream: the first terminal device 100 collects, through its camera, a first video stream while the student attends class, and the second terminal device 104 collects, through its camera, a second video stream while the teacher teaches. The first terminal device 100 transmits the first video stream to the server 102, the server 102 forwards the first video stream to the second terminal device 104, and the second terminal device 104 displays the first video stream and the second video stream on its interface; likewise, the second terminal device 104 transmits the second video stream to the server 102, the server 102 forwards the second video stream to the first terminal device 100, and the first terminal device 100 displays the first video stream and the second video stream.
The class mode of the application can be one-to-one or one-to-many, i.e. one teacher corresponds to one student or one teacher corresponds to multiple students. Correspondingly, in the one-to-one mode, one terminal used by the teacher for teaching communicates with one terminal used by a student for attending class; in the one-to-many mode, one terminal used by the teacher for teaching communicates with multiple terminals used by students for attending class.
Various communication client applications can be installed on the first terminal device 100 and the second terminal device 104, such as: video recording applications, video playing applications, voice interaction applications, search applications, instant messaging tools, mailbox clients, social platform software, etc.
The first terminal device 100 and the second terminal device 104 can be hardware or software. When they are hardware, they can be various electronic equipment with a display screen, including but not limited to smart phones, tablet computers, laptop portable computers and desktop computers, etc. When the first terminal device 100 and the second terminal device 104 are software, they can be installed in the electronic equipment listed above; they may be implemented as multiple softwares or software modules (for example, to provide distributed services), or as a single software or software module, which is not specifically limited here.
When the first terminal device 100 and the second terminal device 104 are hardware, a display device and a camera may also be installed on them; the display device can be any of various equipment able to achieve the display function, and the camera is used to collect a video stream. For example, the display device can be a cathode ray tube display (Cathode Ray Tube display, abbreviated CRT), a light-emitting diode display (Light-Emitting Diode display, abbreviated LED), an electronic ink screen, a liquid crystal display (Liquid Crystal Display, abbreviated LCD), a plasma display panel (Plasma Display Panel, abbreviated PDP), etc. Users can use the display devices on the first terminal device 100 and the second terminal device 104 to check displayed information such as text, pictures and video.
It should be noted that the method for assessing teaching quality provided by the embodiments of the present application is generally executed by the server 102; correspondingly, the device for assessing teaching quality is generally located in the server 102. For example, the server 102 detects the facial pose of the student in the first video stream collected by the first terminal device 100 and the facial pose of the teacher in the second video stream collected by the second terminal device 104, and assesses the teaching quality information according to the facial pose of the student and the facial pose of the teacher. In addition, when the duration for which the student's facial pose is continuously an abnormal pose exceeds a duration threshold, the server 102 sends prompt information to the first terminal device 100 to prompt that the student's attention is not concentrated; when the duration for which the teacher's facial pose is continuously an abnormal pose exceeds the duration threshold, the server 102 sends prompt information to the second terminal device 104 to prompt that the teacher's teaching quality is not good.
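The duration-threshold check behind these prompts can be sketched as follows. This is a minimal illustration only: the 10-second threshold and the one-frame-per-second pacing are assumed values, not figures from the patent.

```python
# Sketch: track how long a facial pose has continuously been abnormal
# and decide when a prompt should be sent. The threshold and per-frame
# duration are illustrative assumptions.

def should_prompt(pose_sequence, frame_seconds=1.0, threshold=10.0):
    """Return True once any run of abnormal poses exceeds the threshold."""
    run = 0.0
    for pose in pose_sequence:
        run = run + frame_seconds if pose == "abnormal" else 0.0
        if run > threshold:
            return True
    return False

print(should_prompt(["abnormal"] * 12))            # True (12 s > 10 s)
print(should_prompt(["abnormal", "normal"] * 10))  # False (runs reset)
```

Resetting the run on any normal pose matches the "continuously abnormal" wording: only an unbroken run past the threshold triggers a prompt.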
The server 102 can be a server providing various services, and can be hardware or software. When the server 102 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server 102 is software, it may be implemented as multiple softwares or software modules (for example, to provide distributed services), or as a single software or software module, which is not specifically limited here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are only illustrative. According to implementation needs, there can be any number of terminal devices, networks and servers.
The method for assessing teaching quality provided by the embodiments of the present application is introduced in detail below in conjunction with Fig. 2 to Fig. 7. The device for assessing teaching quality in the embodiments of the present application can be the server shown in Fig. 2 to Fig. 7.
Refer to Fig. 2, which is a flow diagram of a method for assessing teaching quality provided by an embodiment of the present application. As shown in Fig. 2, the method of the embodiment of the present application may include the following steps:
S201, obtain a first video stream collected by a first terminal device and a second video stream collected by a second terminal device.
A video stream is continuous time-base media transmitted with streaming technology over the Internet or an intranet. Streaming video does not download the entire file before playing: only the beginning content is stored in memory, and the data stream is played as it is transmitted, with only some delay at the start. The first terminal device collects a video stream using an internal or external camera and transmits the collected video stream to the server; the second terminal device does the same using its internal or external camera.
For example, the first terminal device is the terminal on which a student attends class and the second terminal device is the terminal on which a teacher teaches. The first terminal device transmits the collected first video stream of the student attending class to the server, and the second terminal device transmits the collected second video stream of the teacher teaching to the server. After receiving the first video stream from the first terminal device and the second video stream from the second terminal device, the server can splice the first video stream and the second video stream into one video stream, which the first terminal device and the second terminal device play in a single playback window; or the server can simply forward the first video stream and the second video stream, which the first terminal device and the second terminal device each play in two playback windows. The first video stream and the second video stream carry different user type identifiers, so that the server can distinguish which stream contains the teacher and which contains the student. For example, the packet header of the first video stream carries the user type identifier "student", and the packet header of the second video stream carries the user type identifier "teacher"; the server determines the role of each stream according to this identifier.
The server can periodically acquire the first video stream and the second video stream; the two streams correspond to the same start time and end time, and their length is a preset length. The preset length and the acquisition period depend on actual needs, which the application does not restrict. For example, the server acquires, every 6 minutes, a first video stream and a second video stream whose length is 5 minutes.
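The periodic acquisition described above can be sketched as follows. This is a minimal illustration under the stated assumptions: the 6-minute period and 5-minute window are just the example values from the text, and both streams share the same start and end times per window.

```python
# Sketch: compute the shared capture windows for the two video streams.
# Period and window length are the example values (every 6 minutes,
# 5-minute windows), expressed in seconds.

def capture_windows(session_seconds, period=360, length=300):
    """Return (start, end) windows, identical for both video streams."""
    windows = []
    start = 0
    while start + length <= session_seconds:
        windows.append((start, start + length))
        start += period
    return windows

windows = capture_windows(session_seconds=1500)
# Each (start, end) pair is applied to the first and second stream alike,
# so the extracted segments stay time-aligned.
```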
S202, extract video frames from the first video stream to obtain a first video frame set, and extract video frames from the second video stream to obtain a second video frame set.
The first video stream and the second video stream each include multiple video frames. Taking the first video stream as an example, the server can extract all video frames in the first video stream as the first video frame set, or extract part of the video frames in the first video stream according to a preset rule as the first video frame set.
In one embodiment, the server extracts only the key frames in the first video stream, using the extracted I-frames as the first video frame set, and at the same time extracts only the key frames in the second video stream as the second video frame set. Further, the server can extract key frames from the first video stream at a preset sampling interval as the first video frame set, and extract key frames from the second video stream at the same preset sampling interval as the second video frame set. The video frames in the first video frame set and the second video frame set may or may not include a face.
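The sampling-interval extraction above can be sketched as follows; the plain frame list and the interval value are illustrative, and real key-frame (I-frame) selection would come from the codec rather than from every decoded frame.

```python
# Sketch: pick frames from a decoded frame sequence at a preset
# sampling interval. In practice only the codec's I-frames would be
# candidates; here every frame stands in for a key frame.

def extract_frame_set(frames, interval):
    """Return every `interval`-th frame as the video frame set."""
    return frames[::interval]

first_set = extract_frame_set(list(range(10)), interval=3)
# first_set == [0, 3, 6, 9]
```

Applying the same interval to both streams keeps the two frame sets comparable frame-for-frame.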
After extracting the first video frame set and the second video frame set, the server can detect whether a face exists in each video frame. The cases in which a face exists can be divided into: a complete face, a partial face, and a face blocked by a barrier. The server can detect whether a face exists in a video frame according to a face detection algorithm, and mark the face if one is detected. Face detection algorithms include recognition algorithms based on facial feature points, recognition algorithms based on the whole facial image, recognition algorithms based on templates, algorithms using neural networks, and algorithms using support vector machines; the application does not restrict the choice.
According to the user type identifier of the first video stream and the user type identifier of the second video stream, the server distinguishes that the first video stream was generated by recording the student attending class and the second video stream was generated by recording the teacher teaching.
S203, identify the facial pose of each video frame in the first video frame set, and identify the facial pose of each video frame in the second video frame set.
A facial pose expresses the degree of deflection of a person's head relative to some reference axis; identifying a facial pose is the process by which the server estimates the facial deflection angle from a video frame, where the deflection angle is the angle of the face around the y-axis. By estimation precision, pose estimation can be divided into two major classes: coarse estimation and fine estimation. The embodiment of the present application identifies the deflection angle of a face only when a complete, unobstructed face appears in the video frame; that is, the horizontal deflection angle is estimated only for a complete face, and the face is identified as a frontal pose or a side-face pose according to the deflection angle. If a video frame includes a partial face, the server identifies whether the partial face is caused by blocking or by being above the acquisition range of the camera; if no face appears in the video frame, the facial pose is identified as a no-face pose.
In coarse estimation, facial pose estimation is the process of roughly estimating the direction of a person's facial deflection, for example: the face is deflected to the left or upward.
In fine estimation, facial pose estimation is a precise estimate of the deflection angle in three-dimensional space, that is, the deflection angle of the head relative to some coordinate plane. Ideally, over the three reference axes, the facial pose range around the X-axis is -90°~90°, the range around the Y-axis is -90°~90°, and the range around the Z-axis is -90°~90°.
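A coarse classification of the horizontal (yaw) deflection angle described above might look like the sketch below; the ±30° frontal band is an assumed threshold for illustration, not a value given in the text.

```python
# Sketch: classify a yaw angle (deflection around the y-axis) into the
# poses named in the text. The +/-30 degree frontal band is an assumption.

def classify_pose(yaw_degrees, frontal_band=30.0):
    """Return 'frontal', 'side', or 'no_face' for a yaw angle in [-90, 90]."""
    if yaw_degrees is None:            # no face detected in the frame
        return "no_face"
    if abs(yaw_degrees) <= frontal_band:
        return "frontal"
    return "side"

print(classify_pose(10.0))   # frontal
print(classify_pose(-65.0))  # side
```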
In one embodiment, a template-based method estimates the facial pose: the image to be identified is compared with known facial pose templates to obtain the facial pose.
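A minimal version of this template comparison, assuming flattened grayscale images and a sum-of-squared-differences distance (both illustrative choices, not specified in the text):

```python
# Sketch: nearest-template pose estimation. Each template is a
# (pose_label, flattened_image) pair; the closest template's label wins.

def nearest_template_pose(image, templates):
    def ssd(a, b):  # sum of squared differences between two image vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda t: ssd(image, t[1]))[0]

templates = [("frontal", [9, 9, 9, 9]), ("side", [1, 1, 9, 9])]
print(nearest_template_pose([8, 9, 9, 9], templates))  # frontal
```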
In one embodiment, the facial pose is estimated based on a detector array: multiple different face detectors are trained from sample images, so as to achieve face detection at different angles. For facial pose, the principle of the detector array method is to train multiple facial pose classifiers by support vector machines or by the AdaBoost cascade iteration algorithm, so as to detect different facial poses.
In one embodiment, facial pose is estimated based on an elastic model. In facial pose estimation, the images of two different people are never completely identical, because the positions of facial features vary from person to person. A deformable graph of local feature points (eye corner points, nose tip point, mouth corner points, etc.) is therefore used as the template. To train this algorithm, facial feature points are manually labeled on every training face image, and a descriptor is extracted at each local feature using Gabor jets. These feature points are extracted from multiple different viewpoints, and extra invariance is obtained from the series of descriptors stored at each node; these descriptions are called elastic bunch graphs. To compare a bunch graph with a new face image, the bunch graph is placed on the face image, and the position of each graph node is adjusted by exhaustive or repeated deformation to find the shortest distance between corresponding feature points; this process is elastic graph matching.
In facial pose estimation, a different bunch graph is created for every pose, and each bunch graph is matched against the face image to be tested; the bunch graph with the maximum similarity assigns a discrete head pose. Because elastic graph matching (EGM) uses the feature points of key facial locations, it greatly reduces the variation between individuals, so the similarity between models corresponds to facial pose more closely than when unadjusted face positions are used.
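The maximum-similarity pose assignment described above can be sketched as follows. Here each bunch graph is reduced to a single descriptor vector and cosine similarity stands in for elastic graph matching, so this illustrates only the final selection step, not the graph deformation itself:

```python
import math

def cosine(u, v):
    # Cosine similarity between two descriptor vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# One bunch graph per discrete pose, reduced here to a single descriptor
# vector per pose (a real bunch graph stores Gabor jets at every node).
BUNCH_GRAPHS = {
    "frontal": [1.0, 0.1, 0.0],
    "side": [0.1, 1.0, 0.2],
}

def assign_pose(descriptor):
    # Assign the discrete pose whose bunch graph has maximum similarity.
    return max(BUNCH_GRAPHS, key=lambda p: cosine(descriptor, BUNCH_GRAPHS[p]))
```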
S204: assess teaching quality according to the facial pose of each video frame in the first video frame set and/or the facial pose of each video frame in the second video frame set.
In general, teaching quality is a measure of the effectiveness of the teaching process. Teaching quality can be represented by multiple quality levels, and the number of quality levels may be set according to actual needs; this application imposes no restriction.
For example, teaching quality information may be divided into two quality levels, normal and poor; or into three quality levels, excellent, average, and poor. The server may use a pre-trained teaching quality evaluation model to evaluate the teaching quality information corresponding to the first video stream and/or the second video stream. The teaching quality evaluation model is obtained by training on a training sample set, which includes training samples of multiple facial poses together with quality level labels; a quality level label indicates the quality level of a training sample, for example: 0 indicates poor, 1 indicates average, and 2 indicates excellent.
In one embodiment, teaching quality information is divided into two quality levels. The facial pose of the first video frame is also called the first facial pose, and the facial pose of the second video frame is also called the second facial pose. When the server detects that the duration for which the first facial pose and/or the second facial pose is continuously an abnormal pose is greater than a duration threshold, it determines that teaching quality is poor; otherwise, teaching quality is normal.
When the scheme of this embodiment of the application is executed, the server obtains the student video stream and/or the teacher video stream during class, detects the faces in the student video stream and the teacher video stream, identifies the facial pose of each face, and assesses teaching quality according to the facial poses. On the one hand, this application can assess teaching quality in real time, so that the real-time feedback can serve as a reference for subsequent teaching and problems in the teaching process can be corrected promptly, improving the quality and efficiency of the entire teaching process. On the other hand, teaching quality is assessed through facial pose; as a significant biological characteristic, facial pose can be identified without contact and with a simple recognition algorithm, which improves the accuracy of the identification process.
Referring to Fig. 3, a schematic flowchart of a teaching quality assessment method provided by an embodiment of this application is shown. This embodiment is described with the assessment method applied to a server. The assessment method of teaching quality may include the following steps:
S301: perform model training on a facial pose sample set to obtain a facial pose recognition model.
The facial pose sample set includes multiple facial pose samples, and each facial pose sample carries a pose label. The sample set contains samples of a variety of different facial poses, and the number of samples of each pose may be equal or roughly equal, to improve the accuracy of model training.
In one embodiment, the sample types in the facial pose sample set include: no-face pose samples, frontal-face pose samples, side-face pose samples, occluded pose samples, and out-of-frame pose samples. A no-face pose sample indicates that no face exists in the video frame; a frontal-face pose sample indicates that a complete face exists in the video frame and the absolute value of the deflection angle of the face is less than an angle threshold; a side-face pose sample indicates that a complete face exists in the video frame and the absolute value of the deflection angle of the face is greater than the angle threshold; an occluded pose sample indicates that a complete face exists in the video frame but the face is blocked by an obstruction; an out-of-frame pose sample indicates that only part of a face exists in the video frame because the face exceeds the acquisition range of the camera. For example: the numbers of no-face, frontal-face, side-face, occluded, and out-of-frame pose samples in the facial pose sample set are 1000 each; when the number of each kind of sample is balanced, the facial pose recognition model can be trained faster.
In general, the facial pose recognition model may be a Gaussian mixture model, a neural network model, or a hidden Markov model.
S302: periodically obtain the first video stream collected by the first terminal device.
The first terminal device can collect the first video stream through a camera and send the collected first video stream to the server; starting from the class start time, the server periodically acquires the first video stream from the first terminal device. The server periodically acquires a first video stream of preset length; the acquisition period and the preset length may be set according to actual needs, and this application imposes no restriction.
S303: extract the key frames in the first video stream to form the first video frame set.
The key frames in the first video stream are extracted to form the first video frame set. A key frame carries complete image information and can be decompressed and decoded without relying on other video frames; the video frames in the first video frame set are the images obtained by decompressing and decoding the key frames.
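A schematic of key-frame selection: real frames would come from a video decoder, so here each frame is reduced to an (index, frame type) pair, with I frames treated as the key frames because they decode independently of other frames:

```python
def extract_key_frames(frames):
    """Keep only the key frames (I frames) from a decoded frame sequence.

    `frames` is a list of (frame_index, frame_type) pairs, a stand-in for
    the output of a real video decoder; returns the key-frame indices.
    """
    return [idx for idx, ftype in frames if ftype == "I"]
```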
S304: detect the first face in each video frame of the first video frame set.
Face detection scans a video frame and, if a face is found, returns the location information of the face; face detection is the important foundation of subsequent face analysis. For example: face detection must be performed first, and only then can face recognition be carried out. If the first face is not detected in a video frame, the facial pose of that video frame is determined to be the no-face pose; if the first face is detected in the video frame, the facial pose is identified further.
In some embodiments, a series of rules is constructed from existing prior knowledge of faces to describe the relationships between facial features. A face contains symmetric eyes, eyebrows, a nose, a mouth, and other features, and the relative positions or relative distances between these features are, to a certain extent, fixed. The knowledge-based method screens face candidates one by one against the constructed criteria, and finally achieves face detection.
In some embodiments, the feature-based method searches for invariants that remain stable under variable conditions such as different viewing angles, different poses, and different illumination, and uses them to find faces. For example: a face can be detected from one or more of edge features, texture features, and color features.
In some embodiments, a pre-stored face standard template or face feature template is used; when detecting a face, the image to be detected is compared with the face standard template to detect the face. The face standard template or face feature template needs to be configured in advance.
S305: the first face is detected.
The first face detected in the video frame may be a complete face or a partial face; the facial pose is subsequently identified according to the deflection angle, degree of occlusion, and degree of completeness of the first face.
S306: extract the feature information of the first face.
The feature information of the first face includes color feature information, texture feature information, and shape feature information, and the feature information can be represented by a multi-dimensional vector.
S307: input the feature information of the first face into the facial pose recognition model to obtain a facial pose recognition result.
The facial pose recognition result covers a variety of facial poses. The facial pose recognition result output by the facial pose recognition model is a score value within a preset range. For example: the preset range is between 0 and 1, and different facial poses are pre-configured with different value intervals; the value interval containing the score output by the model is determined, and the facial pose associated with that value interval is obtained.
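A sketch of the interval lookup; the interval boundaries and pose names are assumed for illustration, since the application only states that each pose is pre-configured with its own value interval within the preset range:

```python
# Assumed value intervals within the preset [0, 1] range, one per pose.
INTERVALS = [
    (0.0, 0.2, "no_face"),
    (0.2, 0.4, "out_of_frame"),
    (0.4, 0.6, "occluded"),
    (0.6, 0.8, "side_face"),
    (0.8, 1.0, "frontal_face"),
]

def score_to_pose(score: float) -> str:
    # Find the pre-configured value interval containing the model's score.
    for lo, hi, pose in INTERVALS:
        if lo <= score < hi or (hi == 1.0 and score == 1.0):
            return pose
    raise ValueError("score outside the preset range")
```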
S308: periodically obtain the second video stream collected by the second terminal device.
The second video stream is collected by the second terminal device through a camera; the second terminal device sends the collected second video stream to the server, and the server receives the second video stream from the second terminal device. The server periodically acquires the second video stream from the second terminal device, and the period for acquiring the first video stream is the same as the period for acquiring the second video stream.
For example, as shown in Fig. 4: when the server detects that the preset class start time has been reached, it creates a virtual classroom and adds the first terminal device and the second terminal device to the virtual classroom. The first terminal device starts its camera to collect the first video stream and sends the first video stream to the server; the second terminal device starts its camera to collect the second video stream and sends the second video stream to the server. The first video stream and the second video stream are real-time continuous media streams. The server acquires video streams of preset duration T1 at the same period T2; the durations of T1 and T2 may be set according to actual needs, and t0 is the class start time.
S309: extract the key frames in the second video stream to form the second video frame set.
To reduce the amount of computation, the server does not need to process every video frame in the collected second video stream; instead, it extracts the key frames in the second video stream to form the second video frame set. A key frame has a complete picture and can be decoded without relying on other frames; a key frame is usually an I frame. In general, each video frame in the second video frame set is an image obtained by decompressing and decoding a key frame.
S310: detect the second face in each video frame of the second video frame set.
Face detection determines whether a face exists in the video frame; if a face exists, the location information of the face is returned. If no face exists, the facial pose corresponding to that video frame is determined to be the no-face pose.
S311: the second face is detected.
S312: extract the feature information of the second face.
S313: input the feature information of the second face into the facial pose recognition model to obtain a facial pose recognition result.
For the detailed process of S309 to S313, refer to the description of S303 to S307; the details are not repeated here.
In one embodiment, the method by which the server identifies the facial pose may also be: calculate the similarity value between the video frame to be identified and a preset facial pose template; when the similarity value is greater than a similarity threshold, determine that the facial pose of the video frame to be identified is the facial pose associated with that facial pose template. The video frame to be identified is any video frame in the first video frame set or the second video frame set.
As shown in Fig. 5, a schematic diagram of the interface of the first terminal device or the second terminal device, the first terminal device is taken as an example. The first terminal device is provided with a camera 50. When the class start time arrives, the first terminal device and the second terminal device are added to the virtual classroom. The first video stream collected by the camera 50 of the first terminal device is displayed in a first window 51, and the second video stream collected by the camera of the second terminal device is sent to the first terminal device and displayed in a second window 52 of the first terminal device. The interface of the first terminal device further includes a chat window 53, a text input box 54, and a send button 55. The chat window displays the chat records between the user of the first terminal device and the user of the second terminal device; the text input box is used to input information such as text, pictures, videos, and emoticons; and the send button 55 is used to send the information in the text input box 54.
Figs. 6A to 6C are schematic diagrams of the facial poses. Fig. 6A is a schematic diagram of the frontal-face pose: the deflection angle of the face around the y-axis lies between -90° and +90°, and the frontal-face pose indicates that the absolute value of the deflection angle of the face around the y-axis is less than the angle threshold, for example, an angle threshold of 20°. Fig. 6B is a schematic diagram of the side-face pose, which indicates that the absolute value of the deflection angle of the face around the y-axis is greater than the angle threshold. Fig. 6C is a schematic diagram of the out-of-frame pose, which indicates that only part of the face exists in the video frame because part of the face is beyond the acquisition range of the camera.
S314: assess teaching quality.
The various facial poses are divided in advance into abnormal poses and normal poses. In one embodiment, the facial poses include: the frontal-face pose, the side-face pose, the occluded pose, the out-of-frame pose, and the no-face pose; the normal pose is the frontal-face pose, and the abnormal poses include the side-face pose, the occluded pose, the out-of-frame pose, and the no-face pose. The first duration for which the facial pose in the first video frame set is continuously an abnormal pose and the second duration for which the facial pose in the second video frame set is continuously an abnormal pose are calculated. When the first duration and/or the second duration is greater than a duration threshold, prompt information indicating that teaching quality is poor is generated; when the first duration and the second duration are both less than the duration threshold, prompt information indicating that teaching quality is normal is generated.
The number of video frames whose facial pose is an abnormal pose can be counted in the first video frame set and the second video frame set; since the duration of each video frame is known, the duration of the abnormal pose can be determined from the number of abnormal-pose video frames and the per-frame duration.
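Combining the two observations above — counting abnormal-pose frames and multiplying by the known per-frame duration — the continuous abnormal duration can be sketched as the longest consecutive run of abnormal poses. The pose names are the illustrative labels used earlier, not identifiers from this application:

```python
ABNORMAL = {"side_face", "occluded", "out_of_frame", "no_face"}

def longest_abnormal_run_s(poses, frame_duration_s):
    """Longest continuous run of abnormal poses, in seconds.

    `poses` is the per-frame pose sequence for one video frame set;
    `frame_duration_s` is the known duration of each frame.
    """
    best = cur = 0
    for p in poses:
        cur = cur + 1 if p in ABNORMAL else 0
        best = max(best, cur)
    return best * frame_duration_s
```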
In one embodiment, the server may send information indicating that teaching quality is normal to the first terminal device and the second terminal device, so as to feed the current teaching quality back to the teacher or student in real time.
In one embodiment, when the first duration is greater than the duration threshold, the server may send information indicating that teaching quality is poor to the first terminal device; when the second duration is greater than the duration threshold, the server may send information indicating that teaching quality is poor to the second terminal device, so as to feed the current teaching quality back to the users of the first terminal device and the second terminal device in time.
By implementing the embodiments of this application, the server obtains the student video stream and/or the teacher video stream during class, detects the faces in the student video stream and the teacher video stream, identifies the facial pose of each face, and assesses teaching quality according to the facial poses. On the one hand, this application can assess teaching quality in real time, so that the real-time feedback can serve as a reference for subsequent teaching and problems in the teaching process can be corrected promptly, improving the quality and efficiency of the entire teaching process. On the other hand, teaching quality is assessed through facial pose; as a significant biological characteristic, facial pose can be identified without contact and with a simple recognition algorithm, which improves the accuracy of the identification process.
The following are device embodiments of this application, which can be used to execute the method embodiments of this application. For details not disclosed in the device embodiments, refer to the method embodiments of this application.
Referring to Fig. 7, a schematic structural diagram of a teaching quality assessment device provided by an exemplary embodiment of this application is shown. The device, hereinafter referred to as device 7, can be implemented as all or part of a terminal through software, hardware, or a combination of both. Device 7 includes a video acquisition unit 701, a video extraction unit 702, a pose recognition unit 703, and a teaching evaluation unit 704.
The video acquisition unit 701 is configured to obtain the first video stream collected by the first terminal device and the second video stream collected by the second terminal device.
The video extraction unit 702 is configured to extract the video frames in the first video stream to obtain the first video frame set, and extract the video frames in the second video stream to obtain the second video frame set.
The pose recognition unit 703 is configured to identify the facial pose of each video frame in the first video frame set and identify the facial pose of each video frame in the second video frame set.
The teaching evaluation unit 704 is configured to assess teaching quality according to the facial pose of each video frame in the first video frame set and/or the facial pose of each video frame in the second video frame set.
Identifying the facial pose of each video frame in the first video frame set and identifying the facial pose of each video frame in the second video frame set includes:
calculating the similarity value between the video frame to be identified and a preset facial pose template, and, when the similarity value is greater than a similarity threshold, determining that the facial pose of the video frame to be identified is the facial pose associated with that facial pose template; the video frame to be identified is any video frame in the first video frame set or the second video frame set.
In one embodiment, the pose recognition unit 703 is configured to:
perform feature extraction on the video frame to be identified to obtain image features, where the video frame to be identified is any video frame in the first video frame set or the second video frame set; and
input the image features into a preset facial pose recognition model to obtain a facial pose recognition result.
In one embodiment, the facial pose includes the frontal-face pose, the side-face pose, the occluded pose, the no-face pose, or the out-of-frame pose. The frontal-face pose indicates that the absolute value of the deflection angle of the face is less than the angle threshold; the side-face pose indicates that the absolute value of the deflection angle of the face is greater than the angle threshold; the occluded pose indicates that the face is blocked by an obstruction; the out-of-frame pose indicates that part of the face is beyond the acquisition range; and the no-face pose indicates that no face exists in the video frame.
In one embodiment, the teaching evaluation unit 704 is configured to:
calculate the first duration for which the facial pose in the first video frame set is continuously an abnormal pose and the second duration for which the facial pose in the second video stream is continuously an abnormal pose, where an abnormal pose includes the side-face pose, the occluded pose, the no-face pose, or the out-of-frame pose;
generate prompt information indicating that teaching quality is poor when the first duration and/or the second duration is greater than the duration threshold; or
generate prompt information indicating that teaching quality is normal when the first duration and the second duration are both less than the duration threshold.
In one embodiment, the video extraction unit 702 is configured to:
extract the key frames in the first video stream to form the first video frame set, and extract the key frames in the second video stream to form the second video frame set.
In one embodiment, the video acquisition unit 701 is configured to:
periodically acquire the first video stream from the first terminal device and the second video stream from the second terminal device, where the lengths of the first video stream and the second video stream are a preset duration.
It should be noted that when the device 7 provided by the above embodiment executes the teaching quality assessment method, the division into the above functional modules is only used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the device embodiment provided above and the method embodiment of the teaching quality assessment method belong to the same concept; for the details of the implementation process, refer to the method embodiment, which is not repeated here.
The serial numbers of the above embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments.
The device 7 of this application obtains the student video stream and/or the teacher video stream during class, detects the faces in the student video stream and the teacher video stream, identifies the facial pose of each face, and assesses teaching quality according to the facial poses. On the one hand, this application can assess teaching quality in real time, so that the real-time feedback can serve as a reference for subsequent teaching and problems in the teaching process can be corrected promptly, improving the quality and efficiency of the entire teaching process. On the other hand, teaching quality is assessed through facial pose; as a significant biological characteristic, facial pose can be identified without contact and with a simple recognition algorithm, which improves the accuracy of the identification process.
An embodiment of this application also provides a computer storage medium. The computer storage medium can store multiple instructions, and the instructions are suitable for being loaded by a processor to execute the method steps of the embodiments shown in Fig. 2 to Fig. 6C; for the specific execution process, refer to the description of the embodiments shown in Fig. 2 to Fig. 6C, which is not repeated here.
This application also provides a computer program product. The computer program product stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the teaching quality assessment method described in each of the above embodiments.
Fig. 8 is a schematic structural diagram of a teaching quality assessment device provided by an embodiment of this application, hereinafter referred to as device 8. Device 8 can be integrated into the aforementioned server. As shown in Fig. 8, the device includes: a memory 802, a processor 801, an input device 803, an output device 804, and a communication interface.
The memory 802 may be an independent physical unit and may be connected to the processor 801, the input device 803, and the output device 804 through a bus. The memory 802, the processor 801, and the transceiver may also be integrated together and implemented by hardware, etc.
The memory 802 is used to store a program implementing the above method embodiment or the modules of the device embodiment, and the processor 801 calls the program to perform the operations of the above method embodiment.
The input device 803 includes but is not limited to a keyboard, a mouse, a touch panel, a camera, and a microphone; the output device includes but is not limited to a display screen.
The communication interface is used to send and receive various types of messages, and includes but is not limited to a wireless interface or a wired interface.
Optionally, when some or all of the distributed task scheduling method of the above embodiment is implemented by software, the device may also include only a processor. The memory for storing the program is located outside the device, and the processor is connected to the memory through a circuit or wire to read and execute the program stored in the memory.
The processor may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
The processor may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The memory may include volatile memory, such as random-access memory (RAM); the memory may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the above kinds of memory.
The processor 801 calls the program code in the memory 802 to execute the following steps:
obtain the first video stream collected by the first terminal device and the second video stream collected by the second terminal device;
extract the video frames in the first video stream to obtain the first video frame set and extract the video frames in the second video stream to obtain the second video frame set;
identify the facial pose of each video frame in the first video frame set and identify the facial pose of each video frame in the second video frame set;
assess teaching quality according to the facial pose of each video frame in the first video frame set and/or the facial pose of each video frame in the second video frame set.
In one embodiment, when the processor 801 identifies the facial pose of each video frame in the first video frame set and identifies the facial pose of each video frame in the second video frame set, it: calculates the similarity value between the video frame to be identified and a preset facial pose template, and, when the similarity value is greater than a similarity threshold, determines that the facial pose of the video frame to be identified is the facial pose associated with that facial pose template; the video frame to be identified is any video frame in the first video frame set or the second video frame set.
In one embodiment, when the processor 801 identifies the facial pose of each video frame in the first video frame set and identifies the facial pose of each video frame in the second video frame set, it: performs feature extraction on the video frame to be identified to obtain image features, where the video frame to be identified is any video frame in the first video frame set or the second video frame set; and inputs the image features into a preset facial pose recognition model to obtain a facial pose recognition result.
In one embodiment, the facial pose includes the frontal-face pose, the side-face pose, the occluded pose, the no-face pose, or the out-of-frame pose. The frontal-face pose indicates that the absolute value of the deflection angle of the face is less than the angle threshold; the side-face pose indicates that the absolute value of the deflection angle of the face is greater than the angle threshold; the occluded pose indicates that the face is blocked by an obstruction; the out-of-frame pose indicates that part of the face is beyond the acquisition range; and the no-face pose indicates that no face exists in the video frame.
In one embodiment, when the processor 801 assesses teaching quality according to the facial pose of the first video frame and/or the facial pose of the second video frame, it:
calculates the first duration for which the facial pose in the first video frame set is continuously an abnormal pose and the second duration for which the facial pose in the second video stream is continuously an abnormal pose, where an abnormal pose includes the side-face pose, the occluded pose, the no-face pose, or the out-of-frame pose;
generates prompt information indicating that teaching quality is poor when the first duration and/or the second duration is greater than the duration threshold; or
generates prompt information indicating that teaching quality is normal when the first duration and the second duration are both less than the duration threshold.
In one embodiment, when the processor 801 extracts the video frames in the first video stream to obtain the first video frame set and extracts the video frames in the second video stream to obtain the second video frame set, it: extracts the key frames in the first video stream to form the first video frame set, and extracts the key frames in the second video stream to form the second video frame set.
In one embodiment, when the processor 801 obtains the first video stream collected by the first terminal device and the second video stream collected by the second terminal device, it: periodically acquires the first video stream from the first terminal device and the second video stream from the second terminal device, where the lengths of the first video stream and the second video stream are a preset duration.
An embodiment of this application also provides a computer storage medium storing a computer program, and the computer program is used to execute the teaching quality assessment method provided by the above embodiments.
An embodiment of this application also provides a computer program product containing instructions; when it runs on a computer, the computer executes the teaching quality assessment method provided by the above embodiments.
Those skilled in the art will understand that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Claims (10)
1. A method for assessing teaching quality, characterized in that the method comprises:
obtaining a first video stream collected by a first terminal device and a second video stream collected by a second terminal device;
extracting video frames from the first video stream to obtain a first set of video frames, and extracting video frames from the second video stream to obtain a second set of video frames;
identifying the facial pose of each video frame in the first set of video frames, and identifying the facial pose of each video frame in the second set of video frames;
assessing teaching quality according to the facial pose of each video frame in the first set of video frames and/or the facial pose of each video frame in the second set of video frames.
2. The method according to claim 1, characterized in that identifying the facial pose of each video frame in the first set of video frames and identifying the facial pose of each video frame in the second set of video frames comprises:
calculating a similarity value between a video frame to be identified and a preset facial pose template, and, when the similarity value is greater than a similarity threshold, determining that the facial pose of the video frame to be identified is the facial pose associated with the facial pose template;
wherein the video frame to be identified is any video frame in the first set of video frames or the second set of video frames.
3. The method according to claim 1, characterized in that identifying the facial pose of each video frame in the first set of video frames and identifying the facial pose of each video frame in the second set of video frames comprises:
performing feature extraction on a video frame to be identified to obtain an image feature, wherein the video frame to be identified is any video frame in the first set of video frames or the second set of video frames;
inputting the image feature into a preset facial pose recognition model to obtain a facial pose recognition result.
4. The method according to any one of claims 1 to 3, characterized in that a facial pose comprises a frontal-face pose, a side-face pose, a blocked pose, a no-face pose, or an out-of-frame pose; the frontal-face pose indicates that the absolute value of the deflection angle of the face is less than an angle threshold; the side-face pose indicates that the absolute value of the deflection angle of the face is greater than the angle threshold; the blocked pose indicates that the face is blocked by an obstruction; the out-of-frame pose indicates that part of the face lies outside the acquisition range; and the no-face pose indicates that no face is present in the video frame.
5. The method according to claim 4, characterized in that assessing teaching quality according to the facial poses of the first video frames and/or the facial poses of the second video frames comprises:
calculating a first duration for which the facial pose in the first set of video frames remains an abnormal pose and a second duration for which the facial pose in the second video stream remains an abnormal pose, wherein an abnormal pose is the side-face pose, the blocked pose, the no-face pose, or the out-of-frame pose;
generating a prompt that teaching quality is poor when the first duration and/or the second duration is greater than a duration threshold; or
generating a prompt that teaching quality is normal when both the first duration and the second duration are less than the duration threshold.
6. The method according to any one of claims 1 to 5, characterized in that extracting video frames from the first video stream to obtain the first set of video frames and extracting video frames from the second video stream to obtain the second set of video frames comprises:
extracting the key frames of the first video stream to form the first set of video frames, and extracting the key frames of the second video stream to form the second set of video frames.
7. The method according to any one of claims 1 to 6, characterized in that obtaining the first video stream collected by the first terminal device and the second video stream collected by the second terminal device comprises:
periodically obtaining the first video stream from the first terminal device and the second video stream from the second terminal device, wherein the length of each of the first video stream and the second video stream is a preset duration.
8. A device for assessing teaching quality, characterized in that the device comprises:
a video obtaining unit, configured to obtain a first video stream collected by a first terminal device and a second video stream collected by a second terminal device;
a video extraction unit, configured to extract video frames from the first video stream to obtain a first set of video frames and extract video frames from the second video stream to obtain a second set of video frames;
a pose recognition unit, configured to identify the facial pose of each video frame in the first set of video frames and identify the facial pose of each video frame in the second set of video frames;
a teaching evaluation unit, configured to assess teaching quality according to the facial pose of each video frame in the first set of video frames and/or the facial pose of each video frame in the second set of video frames.
9. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the method steps of any one of claims 1 to 7.
10. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores a computer program, the computer program being adapted to be loaded by the processor to execute the method steps of any one of claims 1 to 7.
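As a toy illustration of the template matching in claim 2, the sketch below scores a feature vector against preset pose templates with cosine similarity. The vectors, labels, the 0.8 threshold, and the no-face fallback are invented for the example; the application does not fix a similarity measure:

```python
import math

# Hypothetical pose templates as tiny feature vectors (illustrative only).
TEMPLATES = {
    "frontal": [1.0, 0.0, 0.0],
    "side_face": [0.0, 1.0, 0.0],
    "blocked": [0.0, 0.0, 1.0],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify(feature, threshold=0.8):
    """Return the best-matching template label if its similarity exceeds
    the threshold; fall back to 'no_face' (an assumed default) otherwise."""
    label, best = max(((k, cosine(feature, v)) for k, v in TEMPLATES.items()),
                     key=lambda kv: kv[1])
    return label if best > threshold else "no_face"
```

In practice the "template" would be an image patch or landmark set and the comparison something like normalized cross-correlation, but the threshold-then-assign logic is the same as in claim 2.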
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910547359.3A CN110334620A (en) | 2019-06-24 | 2019-06-24 | Appraisal procedure, device, storage medium and the electronic equipment of quality of instruction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110334620A true CN110334620A (en) | 2019-10-15 |
Family
ID=68142625
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910547359.3A Pending CN110334620A (en) | 2019-06-24 | 2019-06-24 | Appraisal procedure, device, storage medium and the electronic equipment of quality of instruction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110334620A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110837790A (en) * | 2019-11-01 | 2020-02-25 | 广州云蝶科技有限公司 | Identification method |
CN110837790B (en) * | 2019-11-01 | 2022-03-18 | 广州云蝶科技有限公司 | Identification method |
CN112861591A (en) * | 2019-11-28 | 2021-05-28 | 京东方科技集团股份有限公司 | Interactive identification method, interactive identification system, computer equipment and storage medium |
CN112861591B (en) * | 2019-11-28 | 2025-02-25 | 京东方科技集团股份有限公司 | Interactive identification method, identification system, computer device and storage medium |
DE202022100887U1 (en) | 2022-02-16 | 2022-02-24 | Marggise Anusha Angel | System for improving online teaching and teaching evaluation using information and communication technology |
CN116757524A (en) * | 2023-05-08 | 2023-09-15 | 广东保伦电子股份有限公司 | Teacher teaching quality evaluation method and device |
CN116757524B (en) * | 2023-05-08 | 2024-02-06 | 广东保伦电子股份有限公司 | Teacher teaching quality evaluation method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104517102A (en) * | 2014-12-26 | 2015-04-15 | 华中师范大学 | Method and system for detecting classroom attention of students |
CN108108684A (en) * | 2017-12-15 | 2018-06-01 | 杭州电子科技大学 | Attention detection method incorporating gaze detection |
CN109359613A (en) * | 2018-10-29 | 2019-02-19 | 四川文轩教育科技有限公司 | Teaching process analysis method based on artificial intelligence |
CN109614934A (en) * | 2018-12-12 | 2019-04-12 | 易视腾科技股份有限公司 | Online teaching quality assessment parameter generation method and device |
Non-Patent Citations (4)
Title |
---|
孙元 et al., "Research on Classroom Teaching Effectiveness Based on Video Key-Frame Analysis", 《教育教学论坛》 (Education and Teaching Forum) * |
张鸿, "Multimedia Data Mining Based on Artificial Intelligence and Application Examples", 武汉大学出版社 (Wuhan University Press), 31 January 2018 * |
王昆翔 et al., "Intelligence Theory and Intelligent Technology for Police Use", 28 February 1998 * |
田景熙, "Introduction to the Internet of Things, 2nd Edition", 东南大学出版社 (Southeast University Press), 30 July 2017 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20191015 |