CN107085495B - Information display method, electronic equipment and storage medium - Google Patents
- Publication number
- CN107085495B (application CN201710369086.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- state
- client
- virtual image
- avatar
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
- H04L51/043—Real-time or near real-time messaging, e.g. instant messaging [IM] using or handling presence information
Abstract
The invention provides an information display method, an electronic device, and a storage medium for solving the problem that users' emotional expression is limited during information interaction. The method comprises: receiving an instruction to activate a first state of a first avatar; activating the first state of the first avatar; and, in response to the first state being activated, the first avatar presenting in a display interface at least one animation segment corresponding to the first state.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular to an information display method, an electronic device, and a storage medium.
Background
With the development of network technology and the wide adoption of intelligent devices, more and more users perform instant messaging through mobile terminals. As users' contact lists grow, they often need to communicate with multiple people on instant messaging software at the same time.
A common instant messaging practice is to present an avatar on the mobile terminal for interaction between users. In the prior art, based on the instant messaging function provided by an application, a second client logged into the application may send information to a first client logged into the same application. However, after user A of the second client sends a message to user B of the first client, if user B is offline or does not reply for the moment, user A can only see the avatar of user B standing in place or walking idly. It is therefore difficult for users to fully utilize the avatar to express information such as mood and emotion, which reduces communication quality and interest and results in a poor user experience.
Disclosure of Invention
The invention provides an information display method, an electronic device, and a storage medium to solve the prior-art problem that users' emotional expression is limited during information interaction.
The embodiment of the invention adopts the following technical scheme:
in a first aspect, the present technical solution provides an information displaying method, including:
triggering an instruction to activate a first state of the first avatar;
displaying the first avatar in response to the instruction and activating the first state of the first avatar;
the first avatar presents in a display interface at least one animation segment corresponding to the first state.
Optionally, if an instruction to cancel the first state is not received and/or an instruction to switch to another state is not received, the first avatar is continuously in the first state.
Optionally, the first avatar is controlled by a first client, and the instruction for triggering and activating the first state of the first avatar specifically includes at least one of the following:
receiving, at the first client, a trigger of a button that activates the first state of the first avatar;
and acquiring interaction information between the first client and at least one second client, judging whether the acquired interaction information meets a predetermined condition, and if so, triggering an instruction to activate the first state of the first avatar.
Optionally, the first avatar is controlled by a first client, the first client receives first information sent by at least one second client, and the first avatar displays at least one animation segment corresponding to the first state in the display interface.
Optionally, the first avatar is controlled by the first client; second information sent by the first client to the at least one second client is received and converted into target information according to an attribute of the second information, and the target information is displayed on a display interface of the at least one second client.
Optionally, the converting the second information into the target information according to the attribute of the second information specifically includes at least one of the following:
the second information is text information, the font and/or the color of the text information are/is converted according to a first preset rule, and the text information after the font and/or the color are converted according to the first preset rule is determined as the target information;
the second information is voice information, the tone of the voice information is changed according to a second preset rule, and the voice information with the tone changed according to the second preset rule is determined as the target information;
the second information is animation information, the animation information comprises the first virtual image, the emotion of the first virtual image is changed according to a third preset rule, and the animation information after the emotion of the first virtual image is changed according to the third preset rule is determined as the target information.
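The three conversion rules above can be sketched as a dispatch on the information's attribute. This is an illustrative sketch only, not the patent's implementation; the state names, rule values, and helper structure are all assumptions.

```python
# Hypothetical preset rules: how each kind of "second information" is converted
# into "target information" while a given state is active.
PRESET_RULES = {
    "angry": {
        "text": {"font": "bold", "color": "red"},       # first preset rule
        "voice": {"pitch_shift": -2},                   # second preset rule
        "animation": {"emotion": "angry"},              # third preset rule
    },
    "happy": {
        "text": {"font": "rounded", "color": "orange"},
        "voice": {"pitch_shift": 2},
        "animation": {"emotion": "happy"},
    },
}

def convert_to_target(info: dict, active_state: str) -> dict:
    """Convert second information into target information per its attribute."""
    rules = PRESET_RULES.get(active_state, {})
    kind = info["type"]                      # "text" | "voice" | "animation"
    target = dict(info)
    if kind == "text":
        target.update(rules.get("text", {}))        # change font and/or color
    elif kind == "voice":
        target.update(rules.get("voice", {}))       # change the tone/pitch
    elif kind == "animation":
        target.update(rules.get("animation", {}))   # change the avatar's emotion
    return target

msg = {"type": "text", "body": "hello"}
print(convert_to_target(msg, "angry"))
```

The second client would then render the returned target information instead of the raw message.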
Optionally, it is determined whether the at least one animation segment contains a facial expression animation and meets a fourth preset rule, and if yes, facial feature close-up processing is performed on the facial expression animation.
In a second aspect, the present technical solution further provides an electronic device, including: a display screen, a processor and a memory;
the display screen is used for displaying images;
the memory is used for storing programs;
the processor is configured to execute the program to perform operations comprising: triggering an instruction to activate a first state of the first avatar; displaying the first avatar in response to the instruction and activating the first state of the first avatar; the first avatar presents in a display interface at least one animation segment corresponding to the first state.
In a third aspect, the present technical solution further provides a storage medium for storing a program, where the program, when executed, causes a mobile device to:
receiving an instruction to activate a first state of a first avatar; activating the first state of the first avatar; in response to the first state being activated, the first avatar presenting in the display interface at least one animation segment corresponding to the first state; the first avatar is controlled by a first client in the display interface.
The invention has the following beneficial effects:
(1) According to the first state activated by the first client, the first avatar presents a related animation in the display interface to better express the current mood of the user of the first client. After the second client sends information to the first client, even if the user of the first client is offline or temporarily does not reply, the user of the second client can see the avatar of the first client's user display an animation segment related to the first state. Users can therefore make fuller use of avatars to express information such as mood and emotion, improving communication quality and interest.
(2) According to the first state activated by the first client, when the first client sends information to the second client, the information is converted into target information that better matches the first state before being displayed on the display interface of the second client. Communication between users thus becomes more vivid, and the same information can be presented in different ways according to the users' different states.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic flow chart of an information display method according to the present invention.
Fig. 2 is a schematic flow chart of another information display method provided by the present invention.
Fig. 3 is a schematic flow chart of another information display method provided by the present invention.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention.
It should be understood that the specific embodiments described herein merely illustrate the invention and do not limit it. It should further be noted that, for convenience of description, the drawings show only some, not all, of the relevant aspects of the present invention, in simplified form and not to precise scale, purely to assist in describing the embodiments clearly. The terms "left", "right", "up", and "down" used hereinafter follow the orientations of the drawings themselves and do not limit the structure of the invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the invention.
The information display method provided by the embodiment of the invention is suitable for electronic devices such as personal digital assistants (PDAs), smart phones, mobile phones, tablet computers, mobile Internet devices (MIDs), notebook computers, automotive electronic devices, digital cameras, multimedia players, game consoles, or other mobile devices. The invention is not limited in this regard.
For convenience of description, the following description will be made of an embodiment of the method taking a social APP running on a mobile phone as an execution subject of the method as an example. It is understood that the social APP of the mobile phone, which is the execution subject of the method, is only an exemplary illustration and should not be construed as a limitation on the method.
The information presentation method provided in the embodiment of the present invention is used in an interaction process between a first client and at least one second client. For ease of description, only the first client and one second client are taken as an example below; in other embodiments, two or more second clients may exist at the same time, for example in a group chat. The first avatar may be a 3D avatar with user features built from a real photograph of the user, or another cartoon avatar; the present invention is not limited in this respect.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of an information displaying method according to the present invention.
Step S101: triggering an instruction to activate a first state of the first avatar.
The first state may refer to an emotional state of the first avatar, such as "happy", "angry", or "sad"; alternatively, it may refer to a dynamic inner state of the first avatar, such as "grieving" or "suspicious", that reflects current inner activity. The present invention is not limited in this respect.
It should be noted that the first state is activated by the user through the client. It may be the true state of the user's mind, presented through the avatar, or a state that the user wants to present to other clients through the avatar but which is not the user's real mental state; the invention is not limited in this regard. The first states form a state group that the first avatar can display to other clients, and only one first state can be activated at a time.
In this embodiment of the present invention, the first avatar is controlled by a first client, and the instruction triggering activation of the first state of the first avatar may be generated as follows: a button that activates the first state of the first avatar is triggered at the first client, thereby triggering the instruction. Specifically, the trigger may be an operation performed on a preset button at the first client by an operation body, which may be part of the user's body, such as a finger, or a device such as a stylus. The trigger may also be an operation performed on the button through an input device such as a mouse or keyboard.
Alternatively, the instruction triggering activation of the first state of the first avatar may be: and acquiring interactive information between the first client and at least one second client, judging whether the acquired interactive information meets a preset condition or not according to the acquired interactive information, and if so, triggering an instruction for activating the first state of the first virtual image.
It should be noted that in a group chat, for example one with ten users, the client of one of the users may be designated the first client, making that user the first user; the client of each of the other nine users is then a second client. That is, one first client interacts with nine second clients. In the following, one first client and one second client are taken as an example.
When a first user, through the first client, interacts with a second user through a second client, the system stores the interaction information between the clients locally or on a server. In this embodiment of the application, the interaction information between the first client and the second client may be acquired in real time or non-real time; in the latter case, the information may be acquired periodically or aperiodically. The interaction information may be any information used or generated when the first client and the second client communicate, whether input by the first or second user or selected from options provided by the system. For example, the users may interact by inputting voice or text through their clients, or by selecting an animation or a picture, and so on. For a user and a client, the interaction information may be visual or non-visual: visual information includes dialog sentences, expressions, pictures, videos, audio, animations, and files; non-visual information includes instructions, signaling, and the like (e.g., request and reply information having a predetermined format).
Further, the method judges whether the acquired interaction information meets a predetermined condition and, if so, triggers an instruction to activate the first state of the first avatar. A corresponding predetermined condition may be set in advance for each first state, so that when the acquired interaction information satisfies any predetermined condition, the corresponding first state is activated for the first avatar. The predetermined condition may be: whether preset vocabulary appears in the interaction information, whether a preset animation appears in the interaction information, and/or whether preset vocabulary reaches a preset frequency, and so on. These conditions are only exemplary; in practice they may be configured according to service requirements, and the present invention is not limited thereto. For example, suppose the first state is "angry" and the predetermined condition is: if the word "angry" appears three or more times in text information sent by the first client to the second client, the "angry" state of the first avatar is activated. When the first client sends messages such as "I am really angry.", "You make me so angry.", and "I am very angry.", the predetermined condition is satisfied, and the instruction to activate the "angry" state of the first avatar is triggered.
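The keyword-frequency condition described above can be sketched in a few lines. This is a hedged sketch under assumed parameters (keyword and threshold are illustrative), not the patent's implementation.

```python
def check_trigger(messages, keyword="angry", threshold=3):
    """Return True when `keyword` occurs at least `threshold` times across
    the text messages the first client has sent."""
    count = sum(m.lower().count(keyword) for m in messages)
    return count >= threshold

sent = ["I am really angry.", "You make me so angry.", "I am very angry."]
if check_trigger(sent):
    print("trigger: activate 'angry' state of the first avatar")
```

A real system would evaluate one such condition per state in the state group and trigger the activation instruction for the first condition satisfied.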
Step S102: in response to the instruction, displaying the first avatar and activating the first state of the first avatar.
Based on the instruction in step S101, the first state of the first avatar is activated, and the first avatar is displayed, in the first state, in the display interface of the electronic device's display screen. Note that if the first avatar was not yet displayed in the display interface before step S101 was executed, then after step S101 the first avatar appears in the display interface in response to the triggered instruction and its first state is activated; the first avatar may appear by flying in, fading in, or in another form, which the present invention does not limit. If the first avatar was already displayed before step S101 was executed, the first avatar does not reappear; instead, its first state is activated directly.
It should be noted that, in other embodiments, there may be other client-controlled avatars in the display page besides the first avatar, and the present invention is not limited thereto.
Step S103: the first avatar presents in a display interface at least one animation segment corresponding to the first state.
In an embodiment of the invention, after the first state of the first avatar is activated, at least one animation segment corresponding to the first state is presented in the display page. For example, the first client activates the "angry" state of the first avatar through steps S101 to S102, after which animation A of the first avatar "throwing a cup" is displayed in the page; once animation A finishes, animation B of the first avatar "stamping its feet" is displayed according to a preset rule, or no further animation is played.
In this embodiment, it is determined whether the at least one animation segment includes a facial expression animation and whether a fourth preset rule is satisfied; if so, facial feature close-up processing is performed on the facial expression animation. Specifically, the facial expression animation may be identified by feature data labeled in advance, and the fourth preset rule may be a duration threshold for the facial expression animation, for example, exceeding 1 s. If the feature data of a facial expression animation is read and the fourth preset rule is met, facial feature close-up processing is performed on that animation; this may be done by executing a lens-switching command. In this way, emotion is conveyed more effectively through the avatar's facial expression, enabling better emotional communication between users.
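The close-up decision above reduces to a simple per-segment check. The segment fields, threshold value, and camera command below are hypothetical illustrations of the fourth preset rule, not the patent's actual data model.

```python
CLOSEUP_MIN_DURATION = 1.0  # seconds; assumed fourth preset rule

def maybe_closeup(segment: dict) -> str:
    """Return the camera command for an animation segment: a facial close-up
    when the segment is a facial-expression animation longer than the
    threshold, otherwise the default lens."""
    if segment.get("is_facial_expression") and segment["duration"] > CLOSEUP_MIN_DURATION:
        return "switch_lens:face_closeup"
    return "switch_lens:default"

seg = {"name": "grin", "is_facial_expression": True, "duration": 1.5}
print(maybe_closeup(seg))
```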
It should be noted that the animations displayed above are synthesized from animation data, which may be stored on a server or locally; the present invention is not limited thereto.
According to the information display method provided by this embodiment, the first state of the first avatar is activated and an animation corresponding to the first state is displayed in the display interface. Through interaction between users at their clients, users' moods, emotions, and mental states can thus be expressed more fully; interaction through the virtual world comes closer to the real world, and the user experience improves.
It is noted that the first state of the first avatar may apply only to the one or more second clients currently interacting, or to all clients, including clients that have not yet interacted. Specifically, suppose that while the first user interacts with the second user, information sent by the second user irritates the first user, who therefore activates the "angry" state of the first avatar through the first client with respect to the second user. If the first user also interacts with a third user through a third client, the activated "angry" state does not carry over; that is, when the first client interacts with the third client, a different state can be activated separately. When the activated first state applies to all clients, then, assuming the first user has activated the "angry" state, the first avatar is in the "angry" state regardless of whether it interacts with the second client, the third client, or any other client. This arrangement makes the states of users and clients more flexible: different states can be set for different clients, reflecting specificity in human-computer interaction, or the same state can be set for all clients to quickly and conveniently display the user's current overall state. Compared with the prior art, this offers better flexibility and a better user experience.
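The two scoping options described above (a state aimed at specific peers versus a state aimed at all clients) can be sketched as a per-peer map with a global fallback. Names and structure are illustrative assumptions, not the patent's design.

```python
class AvatarState:
    """Track the first avatar's state globally and per interacting peer."""

    def __init__(self, default="default"):
        self.global_state = default   # state shown to all clients
        self.per_peer = {}            # peer client id -> peer-specific state

    def activate(self, state, peer=None):
        if peer is None:
            self.global_state = state      # applies to every client
        else:
            self.per_peer[peer] = state    # applies only to this peer

    def state_for(self, peer):
        return self.per_peer.get(peer, self.global_state)

a = AvatarState()
a.activate("angry", peer="client_B")   # only client B sees "angry"
print(a.state_for("client_B"), a.state_for("client_C"))
```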
Optionally, each state in the state group corresponds to a state ID, and when a client activates a state, the resources corresponding to that state ID are read; the resources include the animation data and/or predetermined rules corresponding to the state. The resources may be stored locally and/or on a server. This embodiment is described taking storage both locally and on the server as an example; in other embodiments, the resources may be stored only on the server, which is read when a client needs them, or only locally, in which case the local client reads them freely and other clients send corresponding requests through the server. The invention is not limited thereto.
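The state-ID lookup with local-first, server-fallback storage described above can be sketched as follows. The storage layout, state IDs, and caching behavior are assumptions for illustration.

```python
# Hypothetical resource stores: a local cache and a server-side catalogue.
LOCAL = {
    "state_angry": {"animations": ["throw_cup", "stamp_feet"], "rules": {"loop": False}},
}
SERVER = {
    "state_angry": {"animations": ["throw_cup", "stamp_feet"], "rules": {"loop": False}},
    "state_happy": {"animations": ["jump", "wave"], "rules": {"loop": True}},
}

def load_state_resources(state_id: str) -> dict:
    """Resolve a state ID to its animation data and predetermined rules,
    preferring the local store and falling back to the server."""
    if state_id in LOCAL:
        return LOCAL[state_id]
    if state_id in SERVER:
        LOCAL[state_id] = SERVER[state_id]   # cache locally after fetching
        return SERVER[state_id]
    raise KeyError(f"unknown state id: {state_id}")

print(load_state_resources("state_happy")["animations"])
```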
In this embodiment, if no instruction to cancel the first state and/or no instruction to switch to another state is received, the first avatar remains in the first state. Suppose the first user, interacting with the second user through the second client, sets the first avatar to a "suspicious" state through the first client; if this state is never cancelled or switched, the avatar stays in that state with respect to the second user, and the system does not cancel it automatically. "Cancel" means the user no longer sets any special state, whereupon the avatar is in a default state. "Switch" means the user transitions from one first state to another, such as from a "happy" state to a "sad" state.
Example two
In an optional embodiment, on the basis of the embodiment shown in fig. 1, another implementation flowchart of the information presentation method provided in the embodiment of the present invention is shown in fig. 2, and may include:
step S201: triggering an instruction to activate a first state of the first avatar.
Step S202: in response to the instruction, displaying the first avatar and activating the first state of the first avatar.
Step S203: the first avatar presents in a display interface at least one animation segment corresponding to the first state.
For the specific implementation process of steps S201 to S203, reference may be made to the embodiment shown in fig. 1, which is not described herein again.
Step S204: and the first client receives first information sent by at least one second client, and the first virtual image displays at least one animation segment corresponding to the first state in the display interface.
In an embodiment of the invention, the first avatar is controlled by a first user through a first client. After the first user activates a first state of the first avatar through the first client, when the first client receives information sent by a second client, the first avatar displays at least one animation segment corresponding to the first state in the display interface. As noted earlier, in a group chat of, say, ten users, the client of one user may be designated the first client and that user the first user, while the client of each of the other nine users is a second client; that is, one first client interacts with nine second clients. In the following, one first client and one second client are taken as an example.
For example, user A activates the "angry" state of avatar a through client A', and user B sends a message to client A' through client B'. When client A' receives the message, avatar a displays an animation segment related to the "angry" state, such as "bristling with rage", in the display page. In an optional implementation of the present invention, each state in the state group corresponds to several related animations; the animation resources are stored on the server or in a local resource library, and according to a preset rule, animation segment resources are called from the library and combined into a segment of animation displayed in the display interface. The information may be text, voice, or animation information; the present invention is not limited thereto. When the selected state contains multiple animation segment resources, each kind of information may be set to correspond to the same or different animation segments, and one or more animation segments may be set for each kind of information. Continuing the example, suppose three animation segments are displayed for received animation information: after client B' sends a piece of animation information to client A', avatar a plays three animation segments of the "angry" state in succession in the display interface. The playing order of the three segments may be random or preset, which the present invention does not limit.
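The per-state, per-information-type segment selection and ordering described above can be sketched as a small lookup. The mapping and segment names below are illustrative assumptions only.

```python
import random

# Hypothetical mapping: state -> incoming information type -> candidate segments.
SEGMENTS = {
    "angry": {
        "text": ["bristle"],
        "voice": ["throw_cup"],
        "animation": ["bristle", "throw_cup", "stamp_feet"],
    },
}

def segments_to_play(state, info_type, order="preset"):
    """Pick the animation segment sequence for the incoming information type;
    the playing order may be the preset one or randomized."""
    segs = list(SEGMENTS.get(state, {}).get(info_type, []))
    if order == "random":
        random.shuffle(segs)
    return segs

print(segments_to_play("angry", "animation"))
```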
According to the information display method provided by the embodiment of the invention, the first state of the first avatar is activated, and the animation corresponding to the first state is displayed in the display interface. After the second client sends information to the first client, if the user of the first client is not online or temporarily does not reply, the user of the second client can still see the avatar of the user of the first client presenting the animation segments related to the first state. In this way, users can make fuller use of avatars to convey emotions, moods, and the like, improving the quality and interest of communication.
Example three
In an optional embodiment, on the basis of the embodiment shown in fig. 1, another implementation flowchart of the information presentation method provided in the embodiment of the present invention is shown in fig. 3, and may include:
step S301: triggering an instruction to activate a first state of the first avatar.
Step S302: in response to the instruction, displaying the first avatar and activating the first state of the first avatar.
Step S303: the first avatar presents in a display interface at least one animation segment corresponding to the first state.
The specific implementation of steps S301 to S303 may refer to steps S101 to S103 of the embodiment shown in fig. 1, and is not described here again.
Step S304: and receiving second information sent by the first client to at least one second client.
Optionally, in the information presentation method provided in the embodiment of the present invention, receiving the information sent by the first client to the second client may be applied to a scenario in which any client sends the same information to at least one other client. When the user is in a group chat, each of the other clients may be regarded as one of the at least one second client; this part is the same as in the above embodiment and is not described here again.
In this embodiment, the second information is the same as the interaction information in the embodiment shown in fig. 1, and may be visual information or non-visual information, which is not described herein again.
Step S305: and converting the second information into target information according to the attribute of the second information.
Specifically, when the second information is text information, the font and/or the color of the text information is converted according to a first preset rule, and the converted text information is determined as the target information. Using the above example, client A' interacts with client B', and client A' activates the "angry" state. Before that, both client A' and client B' have read the state ID of the state and have acquired the related resources corresponding to the ID. At this time, client A' sends a text message "What do you want?" to client B'. When the system reads that the attribute of the information is text information, then, according to the preset rule that the text font corresponding to the "angry" state is bold and its color is gray, the text information is converted into the corresponding target information: the text content of the target information is "What do you want?", the font is bold, and the font color is gray.
It should be noted that the foregoing is only an exemplary illustration; in other embodiments, the first preset rule may be set according to actual requirements, configuring a font and/or color of the text information suited to the first state. The first preset rule may include the font of the text information corresponding to the first state and/or the font color of the text information.
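A first preset rule of this kind can be sketched as a simple state-to-style mapping. The rule table and the dictionary representation of the target information are illustrative assumptions; only the "angry" → bold/gray pairing comes from the example above.

```python
# Hypothetical first preset rule: each state maps to a font style and color.
TEXT_STYLE_RULES = {
    "angry": {"font": "bold", "color": "gray"},
    "happy": {"font": "regular", "color": "orange"},
}

DEFAULT_STYLE = {"font": "regular", "color": "black"}

def to_target_text(text, state):
    """Convert text information into target information by attaching the
    font and color configured for the activated state."""
    style = TEXT_STYLE_RULES.get(state, DEFAULT_STYLE)
    return {"text": text, "font": style["font"], "color": style["color"]}
```

With the "angry" state active, `to_target_text("What do you want?", "angry")` yields the bold, gray target information described in the example.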
Specifically, when the second information is voice information, the tone of the voice information is changed according to a second preset rule, and the voice information whose tone has been changed is determined as the target information. Continuing with the above example, client A' interacts with client B', and client A' activates the "angry" state; client A' sends a piece of voice information to client B', and when the system reads that the attribute of the information is voice information, the tone of the second information is changed into "deep male" or "deep female" according to the preset rule. In the embodiment of the present invention, the voice information may be processed by audio processing software to extract the ap (aperiodicity), sp (spectral envelope), and f0 (fundamental frequency) features, which are input into a pre-trained neural network to obtain new ap, sp, and f0 features; these are then converted into new audio, which is the target information.
It should be noted that the above is only an exemplary illustration, and in other embodiments, a preset rule may be set according to actual requirements, and an appropriate tone color may be configured.
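At its simplest, the "deep male" / "deep female" effect can be approximated by lowering the f0 (fundamental frequency) contour extracted above. This sketch stands in for the pre-trained neural network of the embodiment; the scaling ratios and target names are assumptions, and unvoiced frames (f0 = 0) are left untouched as is conventional for f0 contours.

```python
import numpy as np

# Hypothetical second preset rule: how far to lower the pitch per target tone.
PITCH_RATIOS = {"deep_male": 0.6, "deep_female": 0.8}

def deepen_voice(f0, target="deep_male"):
    """Scale the f0 contour downward to give a 'deep' tone.

    Frames with f0 == 0 are unvoiced and must stay zero, otherwise
    resynthesis would voice silent regions.
    """
    ratio = PITCH_RATIOS[target]
    f0 = np.asarray(f0, dtype=float)
    return np.where(f0 > 0, f0 * ratio, 0.0)
```

The modified f0 would then be resynthesized together with the (possibly also modified) sp and ap features to produce the new audio, i.e., the target information.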
Specifically, when the second information is animation information that includes the first avatar, the emotion of the first avatar is changed according to a third preset rule, and the animation information in which the emotion of the first avatar has been changed is determined as the target information. Continuing with the above example, client A' interacts with client B', and client A' activates the "angry" state; client A' sends a piece of "playing basketball" animation information to client B'. The animation data is stored locally or on the server: if stored locally, it is read directly; if stored on the server, it is downloaded to the local device and then read. In the embodiment of the present invention, the animation data is stored locally. When client A' sends a "playing basketball" animation, the system first reads the "playing basketball" animation data, then performs a short-time Fourier transform on it to obtain the spectrum data R1, and extracts from the emotion database the spectrum data R2 of the emotion corresponding to the "angry" state ID; the spectrum feature of the animation segment after the emotion of the first avatar is changed is obtained as Re = R1 + R2. Re is then converted by the inverse short-time Fourier transform into the time-domain data of the animation segment after the emotion of the first avatar has been changed, and this time-domain data is determined as the target information.
It should be noted that the above is only an exemplary illustration, and in other embodiments, a preset rule may be set according to actual requirements, so as to configure a suitable emotion.
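The Re = R1 + R2 combination can be sketched as below. For brevity a single-frame FFT stands in for the short-time Fourier transform of the embodiment, and the function name and signal shapes are illustrative assumptions.

```python
import numpy as np

def apply_emotion_spectrum(animation_signal, emotion_spectrum):
    """Combine the animation's spectrum R1 with the emotion spectrum R2
    (Re = R1 + R2), then invert back to the time domain.

    `emotion_spectrum` (R2) plays the role of the data looked up in the
    emotion database by the "angry" state ID.
    """
    n = len(animation_signal)
    r1 = np.fft.rfft(animation_signal)  # R1: spectrum of the animation data
    re = r1 + emotion_spectrum          # Re = R1 + R2
    # Inverse transform gives the time-domain target information.
    return np.fft.irfft(re, n=n)
```

A full implementation would apply this frame by frame with overlapping windows (a true STFT/inverse-STFT pair) rather than over the whole signal at once.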
Step S306: and displaying the target information on a display interface of the at least one second client.
Step S305 may be performed locally at the first client that sends the information, by the server, or locally at the second client; the present invention is not limited in this regard. Specifically, when step S305 is executed locally at the first client that sends the information, the resulting target information is transmitted through the server to the at least one second client and displayed on its display interface; when step S305 is executed on the server, the resulting target information is transmitted directly to the at least one second client and displayed on its display interface; and when step S305 is executed locally at the second client that receives the information, the resulting target information is displayed directly on the display interface of the at least one second client.
According to the information display method provided by the embodiment of the invention, the first state of the first avatar is activated, and the animation corresponding to the first state is displayed in the display interface; when the first client sends information to the second client, the information is converted into target information that better matches the first state and is then displayed on the display interface of the second client. In this way, communication among users is more vivid, and the same information can be presented in different ways according to the different states of the users.
Example four
Corresponding to the method embodiment, an embodiment of the present invention further provides an electronic device, and a schematic structural diagram of the electronic device provided in the embodiment of the present invention is shown in fig. 4, and may include:
a display 41, a processor 42 and a memory 43; wherein,
the display screen 41 is used for displaying images; the display screen 41 may be a touch screen.
The memory 43 is used for storing programs; the memory 43 may be random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The processor 42 is configured to execute the stored program to perform the following operations: triggering an instruction to activate a first state of the first avatar; displaying the first avatar in response to the instruction and activating the first state of the first avatar; the first avatar presents in a display interface at least one animation segment corresponding to the first state.
In the embodiment of the present invention, the first state may refer to an emotional state of the first avatar, such as "happy", "angry", "sad", and the like; alternatively, it may refer to a dynamic inner state of the first avatar, such as "sadness", "suspicion", and other states showing current inner activity; the present invention is not limited in this regard.
It should be noted that the first state is activated by the user through the client; it may be the true mental state of the user presented through the avatar, or a state that the user wants to present to other clients through the avatar rather than the user's true mental state; the invention is not limited in this regard. The first states form a state group that the first avatar can display to other clients, and only one first state can be activated at a time.
In the embodiment of the present invention, the first avatar is controlled by a first client, and the instruction triggering activation of the first state of the first avatar may be: receiving a trigger of a button on the first client that activates the first state of the first avatar, thereby triggering the instruction to activate the first state of the first avatar; alternatively, the instruction triggering activation of the first state of the first avatar may be: acquiring interactive information between the first client and at least one second client, judging whether the acquired interactive information satisfies a predetermined condition, and if so, triggering the instruction to activate the first state of the first avatar. The specific content of the instruction triggering activation of the first state of the first avatar is the same as in the method embodiment and is not described here again.
It should be noted that, when the users have group chat, for example, ten user group chat, at this time, the client of one of the users may be determined as the first client, and then the user is the first user; in the other nine users, the client where each user is located is determined as a second client; that is to say that at this time one first client interacts with nine second clients. In the following, a first client and a second client are taken as an example for explanation.
Further, it is judged, according to the acquired interactive information, whether the interactive information satisfies a predetermined condition, and if so, the instruction to activate the first state of the first avatar is triggered. A corresponding predetermined condition may be set in advance for each first state, so that the corresponding first state is activated for the first avatar when the acquired interactive information is determined to satisfy any predetermined condition. The predetermined condition may be: whether a preset vocabulary appears in the interactive information, whether a preset animation appears in the interactive information, and/or whether a preset vocabulary appears in the interactive information with a preset frequency, and so on. The above predetermined conditions are only exemplary; in practice they may be configured according to service requirements, and the present invention is not limited in this regard. For example, the first state is the "angry" state and the predetermined condition is: if the word "angry" appears three or more times in the text information sent by the first client to the second client, the "angry" state of the first avatar is activated. When the first client sends information to the second client that includes "I am really angry.", "You make me so angry.", and "The baby is so angry.", the predetermined condition is satisfied, triggering the instruction to activate the "angry" state of the first avatar.
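The keyword-frequency condition above can be sketched as a simple predicate over the recently sent text messages. The function name, keyword, and threshold default are illustrative assumptions; only the "three or more occurrences of the word" rule comes from the example.

```python
def should_activate(messages, keyword="angry", threshold=3):
    """Return True when `keyword` appears `threshold` or more times
    across the text messages sent by the first client, i.e., when the
    predetermined condition for the 'angry' state is satisfied."""
    count = sum(msg.lower().count(keyword) for msg in messages)
    return count >= threshold

# Example messages from the text, after which activation should trigger.
msgs = ["I am really angry.", "You make me so angry.", "The baby is so angry."]
```

A real implementation would evaluate one such predicate per configured state and trigger the activation instruction for the first state whose condition is met.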
The first state of the first avatar is activated while the first avatar is displayed, in the first state, in a display interface of the display screen of the electronic device. The first avatar presents at least one animation segment corresponding to the first state in the display interface. For example, the first client activates the "angry" state of the first avatar, at which time animation A of the first avatar "dropping a cup" is presented in the display page; after animation A has been displayed, animation B of the first avatar "stomping" is presented in the page according to a preset rule, or no further animation is played.
It should be noted that, in other embodiments, there may be other client-controlled avatars in the display page besides the first avatar, and the present invention is not limited thereto.
According to the electronic device provided by the embodiment of the invention, the first state of the first avatar can be activated by executing a corresponding program, and the animation corresponding to the first state is displayed in a display interface. In this way, through the interaction between users at their clients, the emotions, moods, and mental states of the users can be better expressed, the process of interacting through the virtual world is brought closer to the real world, and the user experience is better.
In an alternative embodiment, processor 42 may also be configured to perform the following operations: and judging whether the at least one animation segment contains facial expression animation or not and whether the at least one animation segment meets a fourth preset rule or not, and if so, performing facial feature processing on the facial expression animation. The specific contents and beneficial effects are the same as those of the method, and are not described again here.
In an alternative embodiment, processor 42 may also be configured to perform the following operations: and receiving first information sent by at least one second client, wherein the first avatar displays at least one animation segment corresponding to the first state in the display interface. The specific contents and beneficial effects are the same as those of the method, and are not described again here.
In an alternative embodiment, processor 42 may also be configured to perform the following operations: receiving second information sent by the first client to at least one second client, converting the second information into target information according to the attribute of the second information, and displaying the target information on a display interface of the at least one second client. The specific contents and beneficial effects are the same as those of the method, and are not described again here.
Specifically, the second information is text information, the font and/or the color of the text information are/is converted according to a first preset rule, and the text information after the font and/or the color are converted according to the first preset rule is determined as the target information; the second information is voice information, the tone of the voice information is changed according to a second preset rule, and the voice information with the tone changed according to the second preset rule is determined as the target information; the second information is animation information, the animation information comprises the first virtual image, the emotion of the first virtual image is changed according to a third preset rule, and the animation information after the emotion of the first virtual image is changed according to the third preset rule is determined as the target information. The specific contents and beneficial effects are the same as those of the method, and are not described again here.
In an alternative embodiment, processor 42 may also be configured to perform the following operations: and if the command for canceling the first state and/or the command for switching to other states are not received, the first virtual image is continuously in the first state. The specific contents and beneficial effects are the same as those of the method, and are not described again here.
Example five
Embodiments of the present application also provide a storage medium for storing programs, including but not limited to disk storage, CD-ROM, optical storage, and the like.
The program, when executed, causes an electronic device to: receiving an instruction to activate a first state of a first avatar; activating the first state of the first avatar; in response to the first state being activated, the first avatar presenting in the display interface at least one animation segment corresponding to the first state; the first avatar is controlled by a first client in the display interface.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. Especially, for the second to fifth embodiments, since they are substantially similar to the first embodiment, the description is simple, and the relevant points can be referred to the partial description of the first embodiment. The above-described apparatus embodiments are merely illustrative, and the units and modules described as separate components may or may not be physically separate. In addition, some or all of the units and modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
1. An information display method, comprising:
triggering an instruction for activating a first state of a first avatar;
responding to the instruction, displaying the first virtual image, activating the first state of the first virtual image, and if an instruction for canceling the first state and/or an instruction for switching to other states are not received, the first virtual image is continuously in the first state;
the first virtual image displays at least one animation segment corresponding to the first state in a display interface, the first virtual image is controlled by a first client, second information sent by the first client to at least one second client is received, the second information is converted into target information according to the attribute of the second information and the first state of the first virtual image, and the target information is displayed on the display interface of the at least one second client;
the second information is text information, the font and/or the color of the text information are/is converted according to a first preset rule, and the text information after the font and/or the color are converted according to the first preset rule is determined as the target information;
the second information is voice information, the tone of the voice information is changed according to a second preset rule, and the voice information with the tone changed according to the second preset rule is determined as the target information;
the second information is animation information, the animation information comprises the first virtual image, the emotion of the first virtual image is changed according to a third preset rule, and the animation information after the emotion of the first virtual image is changed according to the third preset rule is determined as the target information.
2. The method of claim 1, wherein the first avatar is controlled by a first client, and wherein the triggering of the instruction to activate the first state of the first avatar includes at least one of:
receiving a first client-triggered button that activates the first state of the first avatar;
and acquiring interactive information between the first client and at least one second client, judging whether the acquired interactive information meets a preset condition or not according to the acquired interactive information, and if so, triggering an instruction for activating the first state of the first virtual image.
3. The method of claim 1, wherein the first avatar is controlled by a first client that receives first information sent by at least one second client, the first avatar presenting at least one animated segment in the display interface corresponding to the first state.
4. The method of claim 1, wherein the first avatar presents at least one animated segment in a display interface corresponding to the first state, including in particular:
and judging whether the at least one animation segment contains facial expression animation or not and whether the at least one animation segment meets a fourth preset rule or not, and if so, performing facial feature processing on the facial expression animation.
5. An electronic device, comprising: a display screen, a processor and a memory; wherein,
the display screen is used for displaying images;
the memory is used for storing programs;
the processor is configured to execute the program to perform operations comprising:
triggering an instruction for activating a first state of a first avatar;
responding to the instruction, displaying the first virtual image, activating the first state of the first virtual image, and if an instruction for canceling the first state and/or an instruction for switching to other states are not received, the first virtual image is continuously in the first state;
the first virtual image displays at least one animation segment corresponding to the first state in a display interface, the first virtual image is controlled by a first client, second information sent by the first client to at least one second client is received, the second information is converted into target information according to the attribute of the second information and the first state of the first virtual image, and the target information is displayed on the display interface of the at least one second client;
the second information is text information, the font and/or the color of the text information are/is converted according to a first preset rule, and the text information after the font and/or the color are converted according to the first preset rule is determined as the target information;
the second information is voice information, the tone of the voice information is changed according to a second preset rule, and the voice information with the tone changed according to the second preset rule is determined as the target information;
the second information is animation information, the animation information comprises the first virtual image, the emotion of the first virtual image is changed according to a third preset rule, and the animation information after the emotion of the first virtual image is changed according to the third preset rule is determined as the target information.
6. The electronic device of claim 5, wherein the processor is further configured to receive first information sent by at least one second client, and wherein the first avatar presents at least one animated segment in the display interface corresponding to the first state.
7. A storage medium storing a program, wherein the program, when executed, causes a mobile device to:
receiving an instruction to activate a first state of a first avatar;
activating the first state of the first avatar;
responding to the activated first state, and if an instruction for canceling the first state and/or an instruction for switching to other states are not received, the first virtual image is continuously in the first state;
the first virtual image displays at least one animation segment corresponding to the first state in a display interface, the first virtual image is controlled by a first client, second information sent by the first client to at least one second client is received, the second information is converted into target information according to the attribute of the second information and the first state of the first virtual image, and the target information is displayed on the display interface of the at least one second client;
the second information is text information, the font and/or the color of the text information are/is converted according to a first preset rule, and the text information after the font and/or the color are converted according to the first preset rule is determined as the target information;
the second information is voice information, the tone of the voice information is changed according to a second preset rule, and the voice information with the tone changed according to the second preset rule is determined as the target information;
the second information is animation information, the animation information comprises the first virtual image, the emotion of the first virtual image is changed according to a third preset rule, and the animation information after the emotion of the first virtual image is changed according to the third preset rule is determined as the target information; the first avatar is controlled by a first client in the display interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710369086.9A CN107085495B (en) | 2017-05-23 | 2017-05-23 | Information display method, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107085495A CN107085495A (en) | 2017-08-22 |
CN107085495B true CN107085495B (en) | 2020-02-07 |
Family
ID=59609198
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710369086.9A Active CN107085495B (en) | 2017-05-23 | 2017-05-23 | Information display method, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107085495B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108322832B (en) * | 2018-01-22 | 2022-05-17 | 阿里巴巴(中国)有限公司 | Comment method and device and electronic equipment |
CN109033423A (en) * | 2018-08-10 | 2018-12-18 | 北京搜狗科技发展有限公司 | Simultaneous interpretation caption presentation method and device, intelligent meeting method, apparatus and system |
CN109432773A (en) * | 2018-08-30 | 2019-03-08 | 百度在线网络技术(北京)有限公司 | Processing method, device, electronic equipment and the storage medium of scene of game |
CN111190744B (en) * | 2018-11-15 | 2023-08-22 | 青岛海信移动通信技术股份有限公司 | Virtual character control method and device and mobile terminal |
CN110941954B (en) * | 2019-12-04 | 2021-03-23 | 深圳追一科技有限公司 | Text broadcasting method and device, electronic equipment and storage medium |
CN111246225B (en) * | 2019-12-25 | 2022-02-08 | 北京达佳互联信息技术有限公司 | Information interaction method and device, electronic equipment and computer readable storage medium |
CN111488090A (en) * | 2020-04-13 | 2020-08-04 | 北京市商汤科技开发有限公司 | Interaction method, interaction device, interaction system, electronic equipment and storage medium |
CN112182194A (en) * | 2020-10-21 | 2021-01-05 | 南京创维信息技术研究院有限公司 | Method, system and readable storage medium for expressing emotional actions of television avatar |
CN114327205B (en) * | 2021-12-30 | 2024-06-21 | 广州繁星互娱信息科技有限公司 | Picture display method, storage medium and electronic device |
CN118689569A (en) * | 2023-03-21 | 2024-09-24 | 华为技术有限公司 | Method, device and electronic device for displaying virtual image |
CN117319758B (en) * | 2023-10-13 | 2024-03-12 | 南京霍巴信息科技有限公司 | Live broadcast method and live broadcast system based on cloud platform |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2043086A1 (en) * | 2006-06-30 | 2009-04-01 | Sharp Kabushiki Kaisha | Image data providing device, image display device, image display system, control method for image data providing device, control method for image display device, control program and recording medium |
CN101931621A (en) * | 2010-06-07 | 2010-12-29 | 上海那里网络科技有限公司 | Device and method for carrying out emotional communication in virtue of fictional character |
CN102571633A (en) * | 2012-01-09 | 2012-07-11 | 华为技术有限公司 | Method for demonstrating user state, demonstration terminal and server |
CN103797761A (en) * | 2013-08-22 | 2014-05-14 | 华为技术有限公司 | Communication method, client, and terminal |
Also Published As
Publication number | Publication date |
---|---|
CN107085495A (en) | 2017-08-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107085495B (en) | Information display method, electronic equipment and storage medium | |
US10210002B2 (en) | Method and apparatus of processing expression information in instant communication | |
CN107632706B (en) | Application data processing method and system of multi-modal virtual human | |
CN113691829B (en) | Virtual object interaction method, device, storage medium and computer program product | |
US11537279B2 (en) | System and method for enhancing an expression of a digital pictorial image | |
CN112035046B (en) | List information display method, device, electronic equipment and storage medium | |
CN110278140B (en) | Communication method and device | |
CN111464430B (en) | Dynamic expression display method, dynamic expression creation method and device | |
CN106774852B (en) | Message processing method and device based on virtual reality | |
CN110288703A (en) | Image processing method, device, equipment and storage medium | |
CN111124668A (en) | Memory release method, device, storage medium and terminal | |
JP2021005768A (en) | Computer program, information processing method and video distribution system | |
WO2023071556A1 (en) | Virtual image-based data processing method and apparatus, computer device, and storage medium | |
CN105744338B (en) | Video processing method and device |
WO2025098269A1 (en) | Video processing method and apparatus, and electronic device and storage medium | |
CN114390017B (en) | Session reminding method, device and equipment | |
CN107450905A (en) | Session interface rendering method and client |
JP2018156183A (en) | Bot control management program, method, device, and system | |
CN117033599A (en) | Digital content generation method and related equipment | |
CN104485122A (en) | Communication information export method and device and terminal equipment | |
CN105278833B (en) | Information processing method and terminal |
CN117319340A (en) | Voice message playing method, device, terminal and storage medium | |
CN105262676A (en) | Method and apparatus for transmitting message in instant messaging | |
US9384013B2 (en) | Launch surface control | |
WO2020235346A1 (en) | Computer program, server device, terminal device, system, and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | | Effective date of registration: 2019-02-21. Address after: 3F-A193, Area C, Innovation Building, Software Park, Torch High-tech Zone, Xiamen, Fujian 361000. Applicant after: Xiamen Black Mirror Technology Co., Ltd. Address before: 9th Floor, Maritime Building, 16 Haishan Road, Huli District, Xiamen City, Fujian Province, 361000. Applicant before: XIAMEN HUANSHI NETWORK TECHNOLOGY CO., LTD. |
GR01 | Patent grant | ||