CN110209267A - Terminal, server, virtual scene adjustment method, and medium - Google Patents
Terminal, server, virtual scene adjustment method, and medium
- Publication number: CN110209267A
- Application number: CN201910335027.9A
- Authority
- CN
- China
- Prior art keywords
- virtual scene
- facial expression
- server
- display interface
- terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
This application discloses a terminal, a server, a virtual scene adjustment method, and a medium. The terminal includes: a first sending module that, in response to a captured facial expression, sends the facial expression to a server; a receiving module that receives a first virtual scene corresponding to the facial expression returned by the server; and an adjustment module that adjusts a second virtual scene on the display interface to the first virtual scene. The terminal provided by the embodiments of this application performs emotion judgment on the captured facial expression of a young child, an older child, or a user of another age group, obtains the corresponding virtual scene from the server, and displays it on the terminal's display interface. It can thereby interact with the user, combining education with entertainment and greatly stimulating the user's interest in learning.
Description
Technical field
The present invention relates generally to the field of educational technology, and in particular to a terminal, a server, a virtual scene adjustment method, and a medium.
Background technique
With the rapid development and wide application of science and technology, multimedia teaching has gradually evolved toward internet-based education: from projecting PPT courseware onto a screen and explaining teaching content with multimedia aids, to applying augmented reality and virtual reality to teaching scenarios.
In the field of education and training, augmented reality or virtual reality can be used to assist in guiding the cognitive learning of young children, but it is difficult to identify a child's cognitive difficulties and points of interest during the teaching process.
Summary of the invention
In view of the above drawbacks or deficiencies in the prior art, it is desirable to provide a terminal, a server, a virtual scene adjustment method, and a medium, in which the terminal performs emotion judgment on the captured facial expression of a young child, an older child, or a user of another age group, obtains the corresponding virtual scene from the server, and displays it on the terminal's display interface. The terminal can thereby interact with the user, combining education with entertainment and greatly stimulating the user's interest in learning.
In a first aspect, this application provides a terminal, the terminal including:
a first sending module, configured to send a captured facial expression to a server in response to the facial expression being captured;
a receiving module, configured to receive a first virtual scene corresponding to the facial expression returned by the server;
an adjustment module, configured to adjust a second virtual scene on a display interface to the first virtual scene.
In a second aspect, this application provides a server, characterized in that the server includes:
an identification module, configured to identify the feature type corresponding to a facial expression based on the received facial expression and a face model;
a second sending module, configured to extract a first virtual scene corresponding to the feature type and send the first virtual scene to a terminal.
In a third aspect, this application provides a virtual scene adjustment method applied to the terminal side, characterized in that the method includes:
in response to a captured facial expression, sending the facial expression to a server;
receiving a first virtual scene corresponding to the facial expression returned by the server;
adjusting a second virtual scene on a display interface to the first virtual scene.
In a fourth aspect, this application provides a virtual scene adjustment method applied to the server side, characterized in that the method includes:
identifying the feature type corresponding to a facial expression based on the received facial expression and a face model;
extracting a first virtual scene corresponding to the feature type, and sending the first virtual scene to a terminal.
In a fifth aspect, this application provides a computer-readable medium storing one or more programs executable by one or more processors to implement the steps of the virtual scene adjustment method described in any of the preceding claims.
To sum up, in the terminal, server, virtual scene adjustment method, and medium provided by the embodiments of this application, the terminal includes a first sending module for sending a captured facial expression to a server in response to capturing it; a receiving module for receiving a first virtual scene corresponding to the facial expression returned by the server; and an adjustment module for adjusting a second virtual scene on the display interface to the first virtual scene. The terminal provided by the embodiments of this application performs emotion judgment on the captured facial expression of a young child, an older child, or a user of another age group, obtains the corresponding virtual scene from the server, and displays it on the terminal's display interface. It can thereby interact with the user, combining education with entertainment and greatly stimulating the user's interest in learning.
Detailed description of the invention
Other features, objects, and advantages of this application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 shows a terminal provided by an embodiment of this application;
Fig. 2 shows another terminal provided by an embodiment of this application;
Fig. 3 shows a server provided by an embodiment of this application;
Fig. 4 shows a virtual scene adjustment method provided by an embodiment of this application;
Fig. 5 shows another virtual scene adjustment method provided by an embodiment of this application;
Fig. 6 is an information interaction diagram of a terminal and a server provided by an embodiment of this application;
Fig. 7 shows a computer system provided by an embodiment of this application.
Specific embodiment
This application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit that invention. It should also be noted that, for ease of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of this application and the features in those embodiments may be combined with one another. This application is described in detail below with reference to the drawings and in conjunction with the embodiments.
For ease of understanding and explanation, the terminal, server, and virtual scene adjustment method provided by the embodiments of this application are described in detail below through Figs. 1 to 6.
Referring to Fig. 1, which shows a terminal provided by an embodiment of this application, the terminal includes:
a first sending module 11, configured to send a captured facial expression to a server in response to the facial expression being captured.
For example, when a young child, an older child, or a user of another age group uses the terminal to study, a camera on the terminal can capture the user's facial expression. Different facial expressions correspond to different emotion types, such as happy, sad, or aggrieved.
a receiving module 12, configured to receive a first virtual scene corresponding to the facial expression returned by the server.
It should be noted that since different facial expressions correspond to different emotion types, each emotion type is in turn associated with a different first virtual scene. For example, the happy emotion type may be associated with a first virtual scene of the sun and sunflowers, while the sad emotion type may be associated with a first virtual scene of a rainy day.
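The association between emotion types and first virtual scenes can be sketched as a simple lookup table. This is a minimal illustration, not the patent's actual data model: the emotion labels and scene contents follow the examples in the text, while the table structure, the fallback scene, and the function name are assumptions.

```python
# Hypothetical emotion-to-scene association; "happy" and "sad" entries follow
# the examples in the text, the fallback "clear_sky" scene is invented here.
EMOTION_TO_SCENE = {
    "happy": {"scene": "sun_and_sunflowers", "elements": ["sun", "sunflower"]},
    "sad": {"scene": "rainy_day", "elements": ["rain", "clouds"]},
}

def first_scene_for(emotion: str) -> dict:
    """Return the first virtual scene associated with an emotion type."""
    # Fall back to a neutral default when the emotion type is unrecognized.
    return EMOTION_TO_SCENE.get(emotion, {"scene": "clear_sky", "elements": []})

print(first_scene_for("sad")["scene"])  # rainy_day
```

In a real system the table would presumably live on the server side as part of the scene library the second sending module extracts from.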
an adjustment module 13, configured to adjust a second virtual scene on the display interface to the first virtual scene.
For example, when the user's expression shows difficulty, the second virtual scene is a violent storm; later, when the user's expression turns happy, the second virtual scene of a violent storm is adjusted to the first virtual scene of the sun. The virtual scene thereby changes dynamically with the emotion of the user, allowing the terminal to interact with the user, combining education with entertainment and greatly stimulating the user's interest in learning.
It should be noted that the terminal involved in the embodiments of this application may include, but is not limited to, a personal computer (PC), a personal digital assistant (PDA), a tablet computer, a wireless handheld device, a mobile phone, and the like.
In other embodiments of this application, as shown in Fig. 2, the adjustment module 13 includes:
a determination unit 131, configured to determine the pixel on the display interface corresponding to an observation point, according to a first coordinate system of the observation point and a second coordinate system of the display interface.
Specifically, the determination unit 131 matches the facial expression against a face model, calculates the three-dimensional coordinates of the observation point in the first coordinate system, and determines the projected-point coordinates of those three-dimensional coordinates in the second coordinate system, based on the conversion relationship between the first coordinate system of the observation point and the second coordinate system of the display interface, and on the three-dimensional coordinates.
It should be noted that the observation point is the midpoint of the line connecting the two eyes in the facial expression, and the projected point is the pixel on the display interface corresponding to the observation point.
an adjustment unit 132, configured to adjust the second virtual scene to the first virtual scene through an animation effect, with the pixel as the center of a circle.
It should be noted that the animation effect may include, but is not limited to, shutter, split-screen, and dissolve effects; the embodiments of this application place no limitation on this.
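A circular reveal centered on that pixel is one way such an animation effect could work. The following sketch is illustrative only — the patent does not specify the animation; the linear easing, the function names, and the frame-based timing are all assumptions:

```python
import math

def wipe_radius(frame: int, total_frames: int, width: int, height: int) -> float:
    """Radius of the circular reveal at a given frame (linear easing).

    The maximum radius is the screen diagonal, so the circle eventually
    covers the whole display regardless of where the center pixel lies.
    """
    max_radius = math.hypot(width, height)
    return max_radius * frame / total_frames

def shows_first_scene(px, py, center, frame, total_frames, width, height):
    """True once pixel (px, py) already displays the first virtual scene."""
    cx, cy = center
    return math.hypot(px - cx, py - cy) <= wipe_radius(
        frame, total_frames, width, height
    )
```

At frame 0 only the center pixel has switched; by the last frame every pixel lies inside the circle, so the second virtual scene has been fully replaced by the first.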
The terminal provided by the embodiments of this application includes a first sending module for sending a captured facial expression to a server in response to capturing it; a receiving module for receiving a first virtual scene corresponding to the facial expression returned by the server; and an adjustment module for adjusting a second virtual scene on the display interface to the first virtual scene. The terminal performs emotion judgment on the captured facial expression of a young child, an older child, or a user of another age group, obtains the corresponding virtual scene from the server, and displays it on the terminal's display interface, so that it can interact with the user, combining education with entertainment and greatly stimulating the user's interest in learning.
Based on the preceding embodiments, an embodiment of this application provides a server. As shown in Fig. 3, the server 3 includes:
an identification module 31, configured to identify the feature type corresponding to a facial expression based on the received facial expression and a face model.
For example, the feature types corresponding to different facial expressions may be happy, sad, or aggrieved.
a second sending module 32, configured to extract a first virtual scene corresponding to the feature type and send the first virtual scene to a terminal.
In other embodiments of this application, the server 3 is a big data platform.
It should be noted that big data refers to huge data sets collected from many sources in diverse forms. Big data is characterized by the mining and processing of massive data, specifically: first, the data volume is huge, rising from the terabyte (TB) level to the petabyte (PB) level; second, the data types are diverse, including, for example, web logs, video, pictures, and geographic location information; third, the value density is low — taking video as an example, during continuous uninterrupted monitoring, the useful data may amount to only one or two seconds; fourth, the processing speed is fast, also known as the "one-second law", which is an essential difference from traditional data mining technology. The Internet of Things, cloud computing, the mobile internet, the Internet of Vehicles, mobile phones, tablet computers, and the various sensors spread across every corner of the earth are all sources or carriers of data.
The server provided by the embodiments of this application includes an identification module capable of identifying the feature type corresponding to a facial expression based on the received facial expression and a face model; a second sending module then extracts the first virtual scene corresponding to that feature type and sends the first virtual scene to the terminal. The terminal performs emotion judgment on the captured facial expression of a young child, an older child, or a user of another age group, obtains the corresponding virtual scene from the server, and displays it on the terminal's display interface, so that it can interact with the user, combining education with entertainment and greatly stimulating the user's interest in learning.
Based on the preceding embodiments, an embodiment of this application provides a virtual scene adjustment method, which can be applied to the terminal provided by the embodiments corresponding to Figs. 1-2. Referring to Fig. 4, the virtual scene adjustment method is applied to a terminal and includes the following steps:
S401: in response to a captured facial expression, send the facial expression to a server.
For example, when a young child, an older child, or a user of another age group uses the terminal to study, a camera on the terminal can capture the user's facial expression. Different facial expressions correspond to different emotion types, such as happy, sad, or aggrieved.
S402: receive a first virtual scene corresponding to the facial expression returned by the server.
It should be noted that since different facial expressions correspond to different emotion types, each emotion type is in turn associated with a different first virtual scene. For example, the happy emotion type may be associated with a first virtual scene of the sun and sunflowers, while the sad emotion type may be associated with a first virtual scene of a rainy day.
S403: adjust a second virtual scene on the display interface to the first virtual scene.
Specifically, in this embodiment of the application, the pixel on the display interface corresponding to the observation point is determined according to a first coordinate system of the observation point and a second coordinate system of the display interface; then, with that pixel as the center of a circle, the second virtual scene is adjusted to the first virtual scene through an animation effect.
In other embodiments of this application, determining the pixel on the display interface corresponding to the observation point according to the first coordinate system of the observation point and the second coordinate system of the display interface includes the following steps:
A: match the facial expression against a face model, and calculate the three-dimensional coordinates of the observation point in the first coordinate system.
Here, the observation point is the midpoint of the line connecting the two eyes in the facial expression.
B: determine the projected-point coordinates of the three-dimensional coordinates in the second coordinate system, based on the conversion relationship between the first coordinate system of the observation point and the second coordinate system of the display interface, and on the three-dimensional coordinates.
Here, the projected point is the pixel on the display interface corresponding to the observation point.
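Steps A and B above can be sketched as follows. The patent leaves the conversion relationship between the two coordinate systems unspecified, so a simple pinhole camera model stands in for it here; the function names, the focal-length parameter, and the example eye coordinates are all illustrative assumptions.

```python
def observation_point(left_eye, right_eye):
    """Step A: the observation point is the midpoint of the line connecting
    the two eyes, given here as 3-D coordinates in the first (camera)
    coordinate system after face-model matching."""
    return tuple((l + r) / 2.0 for l, r in zip(left_eye, right_eye))

def project_to_pixel(point3d, focal_px, screen_center):
    """Step B: map the 3-D observation point to its projected pixel in the
    second (display) coordinate system. A pinhole projection stands in for
    the patent's unspecified coordinate-system conversion relationship."""
    x, y, z = point3d
    u = screen_center[0] + focal_px * x / z
    v = screen_center[1] + focal_px * y / z
    return round(u), round(v)

# Eyes symmetric about the camera axis project onto the screen center.
p = observation_point((-0.03, 0.0, 0.5), (0.03, 0.0, 0.5))
print(project_to_pixel(p, 800, (640, 360)))  # (640, 360)
```

The resulting pixel is the circle center the adjustment step uses when animating the transition between the two virtual scenes.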
It should be noted that, for explanations of steps and content in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments; details are not repeated here.
In the virtual scene adjustment method provided by the embodiments of this application and applied to the terminal side, in response to a captured facial expression, the facial expression is sent to a server; then a first virtual scene corresponding to the facial expression returned by the server is received; and the second virtual scene on the display interface is adjusted to the first virtual scene. The terminal performs emotion judgment on the captured facial expression of a young child, an older child, or a user of another age group, obtains the corresponding virtual scene from the server, and displays it on the terminal's display interface, so that it can interact with the user, combining education with entertainment and greatly stimulating the user's interest in learning.
Based on the preceding embodiments, an embodiment of this application provides a virtual scene adjustment method, which can be applied to the server provided by the embodiment corresponding to Fig. 3. Referring to Fig. 5, the virtual scene adjustment method is applied to a server and includes the following steps:
S501: identify the feature type corresponding to a facial expression based on the received facial expression and a face model.
For example, the feature types corresponding to different facial expressions may be happy, sad, or aggrieved.
S502: extract a first virtual scene corresponding to the feature type, and send the first virtual scene to a terminal.
In other embodiments of this application, the server is a big data platform.
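The server-side flow (S501-S502) can be sketched as below. The patent does not describe how expressions are matched against the face model, so the classifier here is a deliberately toy stand-in: the one-dimensional "mouth_curve" feature, the distance-based matching, the scene library contents, and all names are assumptions.

```python
# Hypothetical scene library; the "happy"/"sad" scenes follow the text's
# examples, the "aggrieved" entry is invented for illustration.
SCENE_LIBRARY = {"happy": "sun", "sad": "rainy_day", "aggrieved": "rainbow"}

def identify_feature_type(expression_features: dict, face_model: dict) -> str:
    """S501: toy stand-in for face-model matching — picks the emotion label
    whose template value is nearest the received expression feature."""
    curve = expression_features["mouth_curve"]
    return min(face_model, key=lambda label: abs(face_model[label] - curve))

def handle_expression(expression_features: dict, face_model: dict) -> str:
    """S502: extract the first virtual scene for the identified feature type
    (sending it to the terminal is left out of this sketch)."""
    return SCENE_LIBRARY[identify_feature_type(expression_features, face_model)]

model = {"happy": 0.8, "sad": -0.6, "aggrieved": -0.2}
print(handle_expression({"mouth_curve": 0.7}, model))  # sun
```

A production server would replace the stub classifier with an actual trained model and return the scene over the network rather than as a return value.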
It should be noted that, for explanations of steps and content in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments; details are not repeated here.
In the virtual scene adjustment method provided by the embodiments of this application and applied to the server side, the feature type corresponding to a facial expression is identified based on the received facial expression and a face model; a first virtual scene corresponding to the feature type is then extracted and sent to the terminal. The terminal performs emotion judgment on the captured facial expression of a young child, an older child, or a user of another age group, obtains the corresponding virtual scene from the server, and displays it on the terminal's display interface, so that it can interact with the user, combining education with entertainment and greatly stimulating the user's interest in learning.
Based on the preceding embodiments, an embodiment of this application provides a virtual scene adjustment system, which includes the terminal of any of Figs. 1-2 and the server of Fig. 3. Referring to Fig. 6, which is an information interaction diagram of a terminal and a server provided by an embodiment of this application, the interaction includes the following steps:
S601: the terminal, in response to a captured facial expression, sends the facial expression to the server.
For example, when a young child, an older child, or a user of another age group uses the terminal to study, a camera on the terminal can capture the user's facial expression. Different facial expressions correspond to different emotion types, such as happy, sad, or aggrieved.
S602: the server identifies the feature type corresponding to the facial expression based on the received facial expression and a face model.
For example, the feature types corresponding to different facial expressions may be happy, sad, or aggrieved.
S603: the server extracts a first virtual scene corresponding to the feature type and sends the first virtual scene to the terminal.
S604: the terminal receives the first virtual scene corresponding to the facial expression returned by the server.
S605: the terminal adjusts the second virtual scene on the display interface to the first virtual scene.
It should be noted that, for explanations of steps and content in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments; details are not repeated here.
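The whole S601-S605 exchange can be simulated end to end with two toy classes. This is a sketch of the message sequence only — the class names, the in-process method call standing in for network transport, and the scene values are all assumptions:

```python
class Server:
    """Toy server: S602 identify the expression, S603 return the scene."""
    SCENES = {"happy": "sun", "sad": "rainy_day"}

    def handle(self, expression: str) -> str:
        return self.SCENES.get(expression, "neutral_sky")

class Terminal:
    """Toy terminal: S601 send, S604 receive, S605 adjust the display."""
    def __init__(self, server: Server):
        self.server = server
        self.display = "violent_storm"  # the second virtual scene

    def on_expression_captured(self, expression: str) -> None:
        first_scene = self.server.handle(expression)  # S601-S604
        self.display = first_scene                    # S605

terminal = Terminal(Server())
terminal.on_expression_captured("happy")
print(terminal.display)  # sun
```

The direct method call compresses the send/receive pair; in the system described, S601 and S603-S604 would travel over a network connection.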
The terminal provided by the embodiments of this application performs emotion judgment on the captured facial expression of a young child, an older child, or a user of another age group, obtains the corresponding virtual scene from the server, and displays it on the terminal's display interface, so that it can interact with the user, combining education with entertainment and greatly stimulating the user's interest in learning.
Based on the preceding embodiments, an embodiment of this application provides a computer system. As shown in Fig. 7, the computer system 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section into a random access memory (RAM) 703. Various programs and data needed for system operation are also stored in the RAM 703. The CPU 701, ROM 702, and RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT) or liquid crystal display (LCD), a loudspeaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read from it can be installed into the storage section 708 as needed.
In particular, according to the embodiments of this application, the processes described above with reference to the flowcharts of Figs. 4-6 may be implemented as computer software programs. For example, an embodiment of this application includes a computer program product comprising a computer program carried on a computer-readable medium; the computer program is executed by the CPU 701 to implement the following steps:
in response to a captured facial expression, sending the facial expression to a server;
receiving a first virtual scene corresponding to the facial expression returned by the server;
adjusting a second virtual scene on the display interface to the first virtual scene.
In such embodiments, the computer program may be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711.
It should be noted that the computer-readable medium shown in this application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted using any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of the virtual scene adjustment systems, methods, and computer program products according to the various embodiments of this application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or part of code, and that module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in a different order than indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in a block diagram or flowchart, and combinations of boxes in a block diagram or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware, and the described units or modules may also be provided in a processor. For example, a processor may be described as comprising a first sending module, a receiving module, and an adjustment module. The names of these units or modules do not, in certain cases, constitute a limitation on the units or modules themselves.
As another aspect, the present application also provides a computer-readable medium, which may be included in the terminal described in the above embodiments, or may exist separately without being assembled into that terminal. The computer-readable medium carries one or more programs which, when executed by the terminal, cause the terminal to implement the virtual scene adjustment method of the above embodiments. For example, the terminal may implement the steps shown in Figure 4: S401, in response to a captured facial expression, sending the facial expression to a server; S402, receiving a first virtual scene, corresponding to the facial expression, returned by the server; S403, adjusting a second virtual scene on a display interface to the first virtual scene. As another example, the terminal may implement the steps shown in Figures 5-6.
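As a non-authoritative illustration, steps S401-S403 can be sketched as the following terminal-side flow. Every name here (the module classes, the `send`/`receive_scene` transport calls, and the expression-to-scene mapping) is a hypothetical stand-in chosen for the sketch, not something specified by the application:

```python
# Illustrative sketch of steps S401-S403; all identifiers are hypothetical.

def on_facial_expression_captured(expression, server, display):
    # S401: in response to a captured facial expression, send it to the server.
    server.send(expression)
    # S402: receive the first virtual scene, corresponding to the facial
    # expression, returned by the server.
    first_scene = server.receive_scene()
    # S403: adjust the second virtual scene currently shown on the display
    # interface to the first virtual scene.
    display.adjust_scene(first_scene)
    return first_scene


class FakeServer:
    """Stand-in for the server: maps an expression to a scene (assumed mapping)."""
    SCENES = {"smile": "sunny_meadow", "frown": "rainy_street"}

    def send(self, expression):
        self._pending = expression

    def receive_scene(self):
        return self.SCENES.get(self._pending, "default_scene")


class FakeDisplay:
    """Stand-in for the display interface holding the currently shown scene."""
    def __init__(self):
        self.current_scene = "second_scene"

    def adjust_scene(self, scene):
        self.current_scene = scene
```

With these stand-ins, capturing a "smile" expression would replace the scene on the display with the scene the server associates with that expression, which is the terminal-side behavior Figure 4 describes.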
It should be noted that although several modules or units of the terminal performing the actions are mentioned in the detailed description above, this division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
In addition, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in that particular order, or that all of the steps shown must be performed to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
Through the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware.
The above description is only a preferred embodiment of the application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in this application is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the inventive concept, it also covers other technical solutions formed by any combination of the above technical features or their equivalents, for example solutions in which the above features are replaced by (but not limited to) technical features with similar functions disclosed in this application.
Claims (10)
1. A terminal, characterized in that the terminal comprises:
a first sending module, configured to send a captured facial expression to a server in response to the facial expression being captured;
a receiving module, configured to receive a first virtual scene, corresponding to the facial expression, returned by the server; and
an adjustment module, configured to adjust a second virtual scene on a display interface to the first virtual scene.
2. The terminal according to claim 1, characterized in that the adjustment module comprises:
a determination unit, configured to determine, according to a first coordinate system of an observation point and a second coordinate system of the display interface, a pixel corresponding to the observation point on the display interface; and
an adjustment unit, configured to adjust the second virtual scene to the first virtual scene through an animation effect centered on the pixel.
3. The terminal according to claim 2, characterized in that the determination unit is further configured to match the facial expression with a face model and calculate a three-dimensional coordinate of the observation point in the first coordinate system; and to determine, based on the three-dimensional coordinate and a transformation relation between the first coordinate system of the observation point and the second coordinate system of the display interface, a projection-point coordinate of the three-dimensional coordinate in the second coordinate system, the projection point being the pixel corresponding to the observation point on the display interface.
4. A server, characterized in that the server comprises:
an identification module, configured to identify, based on a received facial expression and a face model, a characteristic type corresponding to the facial expression; and
a second sending module, configured to extract a first virtual scene corresponding to the characteristic type and send the first virtual scene to a terminal.
5. The server according to claim 4, characterized in that the server is a big data platform.
6. A virtual scene adjustment method, applied to a terminal side, characterized in that the method comprises:
in response to a captured facial expression, sending the facial expression to a server;
receiving a first virtual scene, corresponding to the facial expression, returned by the server; and
adjusting a second virtual scene on a display interface to the first virtual scene.
7. The virtual scene adjustment method according to claim 6, characterized in that the adjusting the second virtual scene on the display interface to the first virtual scene comprises:
determining, according to a first coordinate system of an observation point and a second coordinate system of the display interface, a pixel corresponding to the observation point on the display interface; and
adjusting, through an animation effect centered on the pixel, the second virtual scene to the first virtual scene.
8. The virtual scene adjustment method according to claim 7, characterized in that the determining, according to the first coordinate system of the observation point and the second coordinate system of the display interface, the pixel corresponding to the observation point on the display interface comprises:
matching the facial expression with a face model, and calculating a three-dimensional coordinate of the observation point in the first coordinate system; and
determining, based on the three-dimensional coordinate and a transformation relation between the first coordinate system of the observation point and the second coordinate system of the display interface, a projection-point coordinate of the three-dimensional coordinate in the second coordinate system, the projection point being the pixel corresponding to the observation point on the display interface.
9. A virtual scene adjustment method, applied to a server side, characterized in that the method comprises:
identifying, based on a received facial expression and a face model, a characteristic type corresponding to the facial expression; and
extracting a first virtual scene corresponding to the characteristic type, and sending the first virtual scene to a terminal.
10. A computer-readable medium, characterized in that the computer-readable medium stores one or more programs executable by one or more processors to implement the steps of the virtual scene adjustment method according to any one of claims 6 to 9.
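Claims 3 and 8 determine the pixel by transforming the observation point's three-dimensional coordinate from its own (first) coordinate system into the display interface's (second) coordinate system and projecting it. The claims do not mandate any particular projection model; the sketch below assumes a rigid transform plus a pinhole-style projection, with the focal lengths and principal point as illustrative parameters:

```python
# Illustrative sketch of the projection in claims 3 and 8; the pinhole model
# and all parameter names are assumptions, not part of the claims.

def observation_point_to_pixel(point_3d, rotation, translation, fx, fy, cx, cy):
    """Map a 3D observation point (first coordinate system) to its
    projection-point pixel on the display interface (second coordinate system).

    The rigid transform (rotation, translation) plays the role of the claimed
    "transformation relation" between the two coordinate systems; (fx, fy) are
    assumed focal lengths in pixels and (cx, cy) an assumed principal point."""
    # Transform the point into the display interface's coordinate system.
    x, y, z = [
        sum(rotation[i][j] * point_3d[j] for j in range(3)) + translation[i]
        for i in range(3)
    ]
    if z <= 0:
        raise ValueError("observation point lies behind the display plane")
    # Pinhole projection onto the display interface.
    u = fx * x / z + cx
    v = fy * y / z + cy
    return round(u), round(v)


IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

For instance, with an identity transform, a point one unit in front of a display whose principal point is (640, 360) projects to exactly that pixel, which the adjustment unit of claim 2 could then use as the center of the animation effect.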
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910335027.9A CN110209267A (en) | 2019-04-24 | 2019-04-24 | Terminal, server and virtual scene method of adjustment, medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110209267A true CN110209267A (en) | 2019-09-06 |
Family
ID=67786248
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111639613A (en) * | 2020-06-04 | 2020-09-08 | 上海商汤智能科技有限公司 | Augmented reality AR special effect generation method and device and electronic equipment |
CN114237401A (en) * | 2021-12-28 | 2022-03-25 | 广州卓远虚拟现实科技有限公司 | A method and system for seamless linking of multiple virtual scenes |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6364770B1 (en) * | 1998-10-08 | 2002-04-02 | Konami Co., Ltd. | Image creating apparatus, displayed scene switching method for the image creating apparatus, computer-readable recording medium containing displayed scene switching program for the image creating apparatus, and video game machine |
KR20060129582A (en) * | 2005-06-07 | 2006-12-18 | 주식회사 헬스피아 | Mobile communication terminal and method for changing the configuration of the terminal according to the emotional state of the user |
US20090012846A1 (en) * | 2007-07-02 | 2009-01-08 | Borders Group, Inc. | Computerized book reviewing system |
CN104520854A (en) * | 2012-08-10 | 2015-04-15 | 微软公司 | Animation transitions and effects in a spreadsheet application |
CN105612478A (en) * | 2013-10-11 | 2016-05-25 | 微软技术许可有限责任公司 | User interface programmatic scaling |
CN108885555A (en) * | 2016-11-30 | 2018-11-23 | 微软技术许可有限责任公司 | Exchange method and device based on mood |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20190906 |