CN112565806A - Virtual gift presenting method, device, computer equipment and medium - Google Patents
- Publication number
- CN112565806A (publication number); application CN202011403675.2A
- Authority
- CN
- China
- Prior art keywords
- face
- target
- virtual gift
- target object
- key point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N 21/2187 — Selective content distribution; servers for content distribution; source of audio or video content: live feed
- G06V 40/168 — Recognition of human faces in image or video data: feature extraction; face representation
- H04N 21/25875 — Management of end-user data involving end-user authentication
- H04N 21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N 21/4415 — Acquiring end-user identification using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
- H04N 21/4788 — Supplemental services communicating with other users, e.g. chatting
Abstract
The embodiments of this application disclose a virtual gift presenting method and apparatus, a computer device, and a medium, belonging to the technical field of live streaming. The method includes: displaying a live interface of a live room, where the live interface includes a plurality of object regions; in response to a selection operation on a target object region, acquiring a target object feature corresponding to the target object region; and in response to a virtual gift giving operation, initiating a virtual gift giving request for the target object corresponding to the target object region, where the request carries the target object feature. By making it possible to present a virtual gift to a specific anchor, the method increases the interactivity between the user and the anchors and improves the flexibility of their interaction.
Description
Technical Field
The embodiment of the application relates to the technical field of live broadcast, in particular to a virtual gift presenting method and device, computer equipment and a medium.
Background
During a live broadcast, a user can give a virtual gift to an anchor, and the virtual gift special effect corresponding to that gift is displayed in the live interface. For example, if the user presents a virtual gift "yacht" to the anchor, a special effect corresponding to the "yacht" is displayed in the live interface.
However, if multiple anchors broadcast through the same live room, that is, if the live interface of the room shows multiple anchors, how to present a virtual gift to one particular anchor becomes an urgent problem to be solved.
Disclosure of Invention
The embodiments of this application provide a virtual gift giving method and apparatus, a computer device, and a medium, which improve the flexibility of interaction between a user and an anchor. The technical scheme is as follows:
in one aspect, a virtual gift giving method is provided, the method comprising:
displaying a live interface of a live broadcast room, wherein the live interface comprises a plurality of object areas;
responding to the selection operation of a target object region, and acquiring a target object characteristic corresponding to the target object region;
and responding to the virtual gift giving operation, and initiating a virtual gift giving request to a target object corresponding to the target object area, wherein the virtual gift giving request carries the characteristics of the target object.
In one possible implementation manner, after the virtual gift giving request is initiated to the target object corresponding to the target object area in response to the virtual gift giving operation, the method further includes:
and displaying the virtual gift special effect corresponding to the virtual gift in the target object area.
In another possible implementation manner, the live interface includes a live broadcast picture, where the live broadcast picture includes the plurality of object regions, and the acquiring, in response to a selection operation on a target object region, a target object feature corresponding to the target object region includes:
responding to the trigger operation of the live broadcast picture, and determining a target position corresponding to the trigger operation;
identifying the live broadcast picture, and determining an object area comprising the target position;
and determining the object area as the target object area, and acquiring the target object characteristics corresponding to the target object area.
In another possible implementation manner, the initiating a virtual gift giving request in response to the virtual gift giving operation includes:
displaying a virtual gift giving interface on the upper layer of the live broadcast interface; or, switching from the live interface to the virtual gift giving interface;
the virtual gift giving request is initiated in response to a selection operation of any virtual gift in the virtual gift giving interface.
In another possible implementation manner, the virtual gift giving interface includes special effect thumbnails corresponding to the plurality of virtual gifts; the initiating the virtual gift presentation request in response to a selection operation of any virtual gift in the virtual gift presentation interface includes:
and responding to the selection operation of the special effect thumbnail corresponding to any virtual gift, and initiating the virtual gift giving request.
In another possible implementation manner, the virtual gift giving interface includes a giving control, and the initiating the virtual gift giving request in response to a selection operation of any virtual gift in the virtual gift giving interface includes:
setting the selected virtual gift to a selected state in response to a selection operation of the any virtual gift;
and initiating a gifting request of the virtual gift in the selected state in response to the triggering operation of the gifting control.
In another possible implementation manner, the acquiring, by responding to a selection operation on a target object region, a target object feature corresponding to the target object region includes:
responding to the selection operation of the target face region, and acquiring a plurality of face key points of the target face region, wherein the face key points comprise at least one of face edge points or face organ edge points of the target face region;
and determining the target face features of the target face area according to the positions of the face key points.
In another possible implementation manner, the determining, according to the positions of the plurality of face key points, a target face feature of the target face region includes at least one of:
determining a first face sub-feature of the target face region according to the abscissa of the plurality of face key points, wherein the first face sub-feature represents the transverse relative positions of the plurality of face key points;
and acquiring a second face sub-feature of the target face area according to the vertical coordinates of the face key points, wherein the second face sub-feature represents the longitudinal relative positions of the face key points.
In another possible implementation manner, the determining, according to the positions of the plurality of face key points, a target face feature of the target face region includes:
determining a first distance between a first face key point and a second distance between the first face key point and a third face key point according to the positions of the first face key point, the second face key point and the third face key point;
and determining a first ratio between the first distance and the second distance as the target face feature.
In another possible implementation manner, the determining the target face feature of the target face region according to the positions of the plurality of face key points includes:
selecting face key points positioned in a first face subregion or a second face subregion from the plurality of face key points;
and determining the target face features of the target face area according to the positions of the selected face key points.
In another possible implementation manner, the selecting, from the plurality of face key points, a face key point located in a first face sub-region or a second face sub-region includes:
selecting a first canthus key point, a second canthus key point and a face edge key point which is at the same height as a lower eyelid key point from the plurality of face key points; or,
and selecting a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the plurality of face key points.
In another possible implementation manner, the acquiring, by responding to a selection operation on a target object region, a target object feature corresponding to the target object region includes:
acquiring a face shape parameter of the target face region in response to a selection operation on the target face region, wherein the face shape parameter comprises at least one of an aspect ratio of face length to face width, a width ratio of forehead width to chin width, a mandible angle parameter, or a chin angle parameter;
determining the face shape parameter as the target face feature.
In another possible implementation manner, the acquiring the face shape parameter of the target face region in response to the selection operation on the target face region includes:
in response to a selection operation on the target face region, determining a first line segment corresponding to a first jaw key point and a second line segment corresponding to a second jaw key point and a third jaw key point according to the position of the first jaw key point, the position of the second jaw key point and the position of the third jaw key point, wherein the first jaw key point and the second jaw key point are located at the same height, and the third jaw key point is a vertex in a plurality of jaw key points;
and determining the mandible angle parameter according to the included angle between the first line segment and the second line segment.
In another possible implementation manner, the acquiring the face shape parameter of the target face region in response to the selection operation on the target face region includes:
responding to the selection operation of the target face area, and according to the position of a first chin key point, the position of a second chin key point and the position of a third chin key point, determining a third line segment corresponding to the first chin key point and the second chin key point and a fourth line segment corresponding to the second chin key point and the third chin key point, wherein the first chin key point and the second chin key point are located at the same height, and the third chin key point is a vertex of a plurality of chin key points;
and determining the chin angle parameter according to an included angle between the third line segment and the fourth line segment.
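For illustration, the following is a minimal Python sketch of the angle-parameter computation described above; it applies equally to the mandible angle (first and second line segments) and the chin angle (third and fourth line segments). The function name and the representation of keypoints as (x, y) tuples are assumptions of this sketch, not part of the embodiment.

```python
import math

def angle_parameter(p_same1, p_same2, p_vertex):
    # p_same1, p_same2: the two jaw/chin keypoints located at the same height;
    # p_vertex: the vertex among the jaw (or chin) keypoints.
    v1 = (p_same2[0] - p_same1[0], p_same2[1] - p_same1[1])  # first/third line segment
    v2 = (p_vertex[0] - p_same2[0], p_vertex[1] - p_same2[1])  # second/fourth line segment
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))  # included angle in degrees

# Illustrative values only: two jaw keypoints at the same height, plus the lowest jaw keypoint.
print(angle_parameter((100, 300), (220, 300), (160, 380)))
```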
In another possible implementation manner, the acquiring a target object feature corresponding to the target object region in response to the selection operation on the target object region includes:
responding to the selection operation of the target human body region, acquiring a first human body length of a first human body subregion and a second human body length of a second human body subregion in the target human body region, and determining a ratio of the first human body length to the second human body length as a target human body characteristic of the target human body region; or,
responding to the selection operation of the target human body area, acquiring the total human body length and the total human body width of the target human body area, and determining the ratio of the total human body length to the total human body width as the target human body characteristic of the target human body area; or,
in response to the selection operation of the target human body area, acquiring a clothing feature in the target human body area, and determining the clothing feature as the target human body feature of the target human body area.
In another possible implementation manner, a live view in the live view interface includes a plurality of view areas, and the obtaining a target object feature corresponding to a target object area in response to a selection operation on the target object area includes:
and responding to the selection operation of the target picture area, determining a target background area in the target picture area, and acquiring the target background feature of the target background area.
In another aspect, there is provided a virtual gift giving method, the method including:
receiving a virtual gift giving request, wherein the virtual gift giving request carries target object characteristics;
determining a plurality of object areas included in a live interface of a live broadcast room;
determining object features matched with the target object features in the object features corresponding to the object regions, and determining the object regions corresponding to the object features matched with the target object features as target object regions;
and displaying the virtual gift special effect corresponding to the virtual gift in the target object area.
In one possible implementation manner, the determining, as the target object region, an object feature that matches the target object feature among the object features corresponding to the plurality of object regions includes:
and respectively acquiring difference values between the object features of the plurality of object regions and the target object features, and determining the object region corresponding to the minimum difference value as the target object region.
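For illustration, the following is a minimal Python sketch of this matching step. Representing each object feature as a numeric vector, using the L1 distance as the "difference value", and the function and variable names are all assumptions of this sketch.

```python
def match_target_region(region_features, target_feature):
    # region_features: dict mapping a region id to that region's feature vector.
    # Returns the id of the region whose feature differs least from the target feature.
    def difference(feature):
        return sum(abs(a - b) for a, b in zip(feature, target_feature))
    return min(region_features, key=lambda rid: difference(region_features[rid]))

# Usage: the region with the minimum difference value becomes the target object region.
regions = {"face_1": [0.42, 1.31], "face_2": [0.55, 1.18]}
print(match_target_region(regions, [0.54, 1.20]))  # -> "face_2"
```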
In another possible implementation manner, the displaying, in the target object region, a virtual gift special effect corresponding to the virtual gift includes:
displaying the virtual gift special effect at a target site in the target object region.
In another possible implementation manner, the displaying a virtual gift special effect corresponding to the virtual gift in the target object area, where the number of the virtual gifts is carried in the virtual gift giving request, includes:
in response to the number of the virtual gifts being greater than a first reference number and less than a second reference number, displaying the number of the virtual gift special effects in the target object area in an overlapping manner; or,
in response to the number of the virtual gifts being greater than a third reference number, displaying text information corresponding to the virtual gift special effect and the virtual gift in the target object area, wherein the text information includes the number of the virtual gifts.
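For illustration, the following is a minimal Python sketch of this count-dependent display rule; the threshold values, function name, and return structure are assumptions of this sketch.

```python
def gift_effect_display(count, first_ref=1, second_ref=10, third_ref=10):
    # Between the first and second reference numbers: overlap `count` copies of the effect.
    if first_ref < count < second_ref:
        return {"mode": "stacked", "copies": count}
    # Above the third reference number: one effect plus text carrying the gift count.
    if count > third_ref:
        return {"mode": "effect_with_text", "text": f"x{count}"}
    # Remaining cases (e.g., a single gift) fall back to one plain effect.
    return {"mode": "single"}
```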
In another possible implementation manner, the determining the object regions included in the live interface of the live broadcast room includes:
and carrying out face recognition on the live broadcast picture to determine the plurality of face areas.
In another possible implementation manner, after the target face area displays a virtual gift special effect corresponding to the virtual gift, the method further includes:
and sending the live broadcast picture added with the virtual gift special effect to a live broadcast server, wherein the live broadcast server is used for releasing the live broadcast picture in the live broadcast room.
In another aspect, there is provided a virtual gift giving apparatus including:
the display module is used for displaying a live broadcast interface of a live broadcast room, and the live broadcast interface comprises a plurality of object areas;
the characteristic acquisition module is used for responding to the selection operation of the target object area and acquiring the target object characteristic corresponding to the target object area;
and the request initiating module is used for responding to the virtual gift giving operation and initiating a virtual gift giving request to the target object corresponding to the target object area, wherein the virtual gift giving request carries the characteristics of the target object.
In one possible implementation, the apparatus further includes:
and the display module is used for displaying the special effect of the virtual gift corresponding to the virtual gift in the target object area.
In one possible implementation manner, the live interface includes a live frame, the live frame includes the plurality of object regions, and the feature obtaining module includes:
the position determining unit is used for responding to the trigger operation of the live broadcast picture and determining a target position corresponding to the trigger operation;
the area determining unit is used for identifying the live broadcast picture and determining an object area comprising the target position;
and the feature acquisition unit is used for determining the object area as the target object area and acquiring the target object feature corresponding to the target object area.
In another possible implementation manner, the request initiation module is configured to:
displaying a virtual gift giving interface on the upper layer of the live broadcast interface; or, switching from the live interface to the virtual gift giving interface;
the virtual gift giving request is initiated in response to a selection operation of any virtual gift in the virtual gift giving interface.
In another possible implementation manner, the virtual gift giving interface includes special effect thumbnails corresponding to the plurality of virtual gifts; the request initiating module is configured to initiate the virtual gift giving request in response to a selection operation of the special effect thumbnail corresponding to the any virtual gift.
In another possible implementation manner, the virtual gift giving interface includes a giving control, and the request initiation module is configured to:
setting the selected virtual gift to a selected state in response to a selection operation of the any virtual gift;
and initiating a gifting request of the virtual gift in the selected state in response to the triggering operation of the gifting control.
In another possible implementation manner, the target object region includes a target face region, and the feature obtaining module includes:
a key point obtaining unit, configured to obtain, in response to a selection operation on the target face region, a plurality of face key points of the target face region, where the face key points include at least one of face edge points or face organ edge points of the target face region;
and the characteristic acquisition unit is used for determining the target face characteristics of the target face area according to the positions of the face key points.
In another possible implementation manner, the feature obtaining unit is configured to:
determining a first face sub-feature of the target face region according to the abscissa of the plurality of face key points, wherein the first face sub-feature represents the transverse relative positions of the plurality of face key points;
and acquiring a second face sub-feature of the target face area according to the vertical coordinates of the face key points, wherein the second face sub-feature represents the longitudinal relative positions of the face key points.
In another possible implementation manner, the feature obtaining unit is configured to:
determining a first distance between a first face key point and a second distance between the first face key point and a third face key point according to the positions of the first face key point, the second face key point and the third face key point;
and determining a first ratio between the first distance and the second distance as the target face feature.
In another possible implementation manner, the plurality of face key points include face edge points and face organ edge points, and the feature obtaining unit is configured to:
selecting face key points positioned in a first face subregion or a second face subregion from the plurality of face key points;
and determining the target face features of the target face area according to the positions of the selected face key points.
In another possible implementation manner, the feature obtaining unit is configured to:
selecting a first canthus key point, a second canthus key point and a face edge key point which is at the same height as a lower eyelid key point from the plurality of face key points; or,
and selecting a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the plurality of face key points.
In another possible implementation manner, the target object region includes a target face region, and the feature obtaining module includes:
a parameter acquisition unit configured to acquire a face shape parameter of the target face region in response to a selection operation on the target face region, the face shape parameter including at least one of an aspect ratio of face length to face width, a width ratio of forehead width to chin width, a mandible angle parameter, or a chin angle parameter;
and the feature acquisition unit is further configured to determine the face shape parameter as the target face feature.
In another possible implementation manner, the face shape parameter includes the mandible angle parameter, and the parameter acquisition unit is configured to:
in response to a selection operation on the target face region, determining a first line segment corresponding to a first jaw key point and a second line segment corresponding to a second jaw key point and a third jaw key point according to the position of the first jaw key point, the position of the second jaw key point and the position of the third jaw key point, wherein the first jaw key point and the second jaw key point are located at the same height, and the third jaw key point is a vertex in a plurality of jaw key points;
and determining the mandible angle parameter according to the included angle between the first line segment and the second line segment.
In another possible implementation manner, the face shape parameter includes the chin angle parameter, and the parameter acquisition unit is configured to:
responding to the selection operation of the target face area, and according to the position of a first chin key point, the position of a second chin key point and the position of a third chin key point, determining a third line segment corresponding to the first chin key point and the second chin key point and a fourth line segment corresponding to the second chin key point and the third chin key point, wherein the first chin key point and the second chin key point are located at the same height, and the third chin key point is a vertex of a plurality of chin key points;
and determining the chin angle parameter according to an included angle between the third line segment and the fourth line segment.
In another possible implementation manner, the target object region includes a target human body region, and the feature obtaining module includes:
a feature obtaining unit, configured to obtain, in response to a selection operation on the target human body region, a first human body length of a first human body subregion and a second human body length of a second human body subregion in the target human body region, and determine a ratio between the first human body length and the second human body length as a target human body feature of the target human body region; or,
the characteristic acquiring unit is further configured to acquire a total human body length and a total human body width of the target human body region in response to a selection operation on the target human body region, and determine a ratio between the total human body length and the total human body width as a target human body characteristic of the target human body region; or,
the feature obtaining unit is further configured to obtain a clothing feature in the target human body region in response to a selection operation on the target human body region, and determine the clothing feature as a target human body feature of the target human body region.
In another possible implementation manner, a live view in the live interface includes a plurality of view areas, and the feature obtaining module includes:
and the characteristic acquisition unit is also used for responding to the selection operation of the target picture area, determining a target background area in the target picture area and acquiring the target background characteristic of the target background area.
In another aspect, there is provided a virtual gift giving apparatus including:
the request receiving module is used for receiving a virtual gift giving request, and the virtual gift giving request carries target object characteristics;
the area determining module is used for determining a plurality of object areas included in a live interface of a live broadcast room;
the characteristic matching module is used for determining object characteristics matched with the target object characteristics in the object characteristics corresponding to the object areas and determining the object areas corresponding to the object characteristics matched with the target object characteristics as target object areas;
and the special effect display module is used for displaying the special effect of the virtual gift corresponding to the virtual gift in the target object area.
In a possible implementation manner, the feature matching module is configured to obtain difference values between object features of the multiple object regions and the target object feature, and determine an object region corresponding to a minimum difference value as the target object region.
In another possible implementation manner, the special effect display module is configured to display the virtual gift special effect at a target portion in the target object region.
In another possible implementation manner, the number of the virtual gifts is carried in the virtual gift giving request, and the special effect display module is configured to:
in response to the number of the virtual gifts being greater than a first reference number and less than a second reference number, displaying the number of the virtual gift special effects in the target object area in an overlapping manner; or,
in response to the number of the virtual gifts being greater than a third reference number, displaying text information corresponding to the virtual gift special effect and the virtual gift in the target object area, wherein the text information includes the number of the virtual gifts.
In another possible implementation manner, the object region includes a face region, the live broadcast interface includes a live broadcast picture, and the region determination module is configured to perform face recognition on the live broadcast picture and determine a plurality of face regions.
In another possible implementation manner, the apparatus further includes:
and the picture sending module is used for sending the live pictures added with the virtual gift special effects to a live broadcast server, and the live broadcast server is used for releasing the live broadcast pictures in the live broadcast room.
In another aspect, there is provided a computer apparatus comprising a processor and a memory, the memory having stored therein at least one program code, the at least one program code being loaded and executed by the processor to carry out the operations performed in the virtual gift giving method according to the above aspect.
In another aspect, there is provided a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to implement the operations performed in the virtual gift giving method according to the above aspect.
In another aspect, there is provided a computer program product or a computer program comprising computer program code stored in a computer readable storage medium, the computer program code being loaded and executed by a processor to implement the operations performed in the virtual gift giving method according to the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the method, the device, the computer equipment and the medium provided by the embodiment of the application can select the target face area to be presented with the virtual gift from the plurality of face areas, present the virtual gift to a specific anchor, and carry the target face characteristics in the presentation request, so that the virtual gift special effect can be conveniently displayed in the corresponding target face area according to the target face characteristics subsequently.
Drawings
To illustrate the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flow chart of a virtual gift giving method provided by an embodiment of the present application;
FIG. 3 is a flow chart of another virtual gift giving method provided by an embodiment of the present application;
FIG. 4 is a flow chart of another virtual gift giving method provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a face key point provided in an embodiment of the present application;
fig. 6 is a schematic diagram of another face key point provided in the embodiment of the present application;
fig. 7 is a schematic view of a human face provided in an embodiment of the present application;
fig. 8 is a schematic view of another human face provided in the embodiment of the present application;
FIG. 9 is a schematic view of another human face provided in the embodiments of the present application;
FIG. 10 is a schematic view of another human face provided in the embodiments of the present application;
FIG. 11 is a schematic structural diagram of a virtual gift-giving device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of another virtual gift-giving device provided in an embodiment of the present application;
FIG. 13 is a schematic structural diagram of another virtual gift-giving device provided in an embodiment of the present application;
FIG. 14 is a schematic structural diagram of another virtual gift-giving device provided in an embodiment of the present application;
fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
It will be understood that the terms "first", "second", and the like used herein describe various concepts but, unless otherwise specified, do not limit them; they serve only to distinguish one concept from another. For example, a first face keypoint may be referred to as a second face keypoint, and likewise a second face keypoint may be referred to as a first face keypoint, without departing from the scope of this application.
As used herein, "at least one" includes one, two, or more; "a plurality" includes two or more; "each" refers to every one of the corresponding plurality; and "any" refers to any one of the plurality. For example, if a plurality of virtual gifts includes 3 virtual gifts, "each" refers to every one of the 3 virtual gifts, and "any" refers to any one of the 3, which may be the first, the second, or the third.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of this application. Referring to fig. 1, the implementation environment includes at least one viewer terminal 101 (one is taken as an example in fig. 1), an anchor terminal 102, and a live broadcast server 103. The viewer terminal(s) 101 and the anchor terminal 102 are each connected to the live broadcast server 103 via a wireless or wired network.
A target application served by the live broadcast server 103 is installed on the viewer terminal 101 and the anchor terminal 102, through which they implement functions such as data transmission and message interaction. Optionally, the viewer terminal 101 and the anchor terminal 102 are computers, cell phones, tablets, or other terminals. Optionally, the target application is a target application in an operating system, or a target application provided by a third party. For example, the target application is a live application having a live function, a virtual gift giving function, and the like; the live application may also have other functions, such as a mic-linking function and a comment function. Optionally, the live broadcast server 103 is a single server, a server cluster composed of several servers, or a cloud computing service center.
The method provided by the embodiments of this application applies to the scenario of presenting a virtual gift to an anchor during a live broadcast. For example, if a viewer sees multiple anchors in the live broadcast picture and wants to present a virtual gift to one of them, the user can, with the method provided here, trigger the object region corresponding to that anchor and then select the virtual gift to give; the virtual gift special effect corresponding to the gift is displayed in that anchor's object region.
Fig. 2 is a flowchart of a virtual gift giving method provided by an embodiment of this application. The steps of this embodiment are executed by the viewer terminal. Referring to fig. 2, the method includes the following steps:
201. Display a live interface of a live room, where the live interface includes a plurality of object regions.
The live interface comprises a plurality of object areas, and the object areas refer to areas where the objects are located. Optionally, the object region includes a face region, a body region, a background region, or other regions. Optionally, the live interface includes a live screen, and the live screen includes the plurality of object regions.
202. In response to a selection operation on a target object region, acquire a target object feature corresponding to the target object region.
The selection operation on the target object region is a single-click, double-click, long-press, box-selection, or other operation performed on the target object region. The target object feature describes the target object; since different objects have different object features, different object regions can be distinguished based on these features.
In one possible implementation manner, in response to a trigger operation on the live broadcast picture, the viewer terminal determines the target position corresponding to the trigger operation, identifies the live broadcast picture, determines the object region that includes the target position, determines that object region as the target object region, and acquires the corresponding target object feature. The trigger operation includes a single-click, double-click, long-press, box-selection, or other operation.
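For illustration, the following is a minimal Python sketch of mapping the target position to the object region that contains it. Treating each recognized object region as an axis-aligned bounding box is an assumption of this sketch.

```python
def region_at(position, regions):
    # regions: dict mapping a region id to an (x, y, width, height) bounding box.
    px, py = position
    for region_id, (x, y, w, h) in regions.items():
        if x <= px <= x + w and y <= py <= y + h:
            return region_id
    return None  # no region contains the tap: treated as a misoperation

print(region_at((150, 200), {"face_1": (100, 100, 200, 260)}))  # -> "face_1"
```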
203. In response to a virtual gift giving operation, initiate a virtual gift giving request for the target object corresponding to the target object region.
The target object is a target face, a target human body, a target background, or the like, and the virtual gift giving request carries the target object feature.
When the viewer terminal initiates a virtual gift giving request, it sends the request to the live broadcast server, and the live broadcast server forwards the request to the anchor terminal. According to the received request, the anchor terminal displays the virtual gift special effect corresponding to the virtual gift in the target object region and sends the live broadcast picture with the special effect added, through the live broadcast server, to the viewer terminal; the viewer terminal then displays that picture, i.e., the virtual gift special effect appears in the target object region.
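For illustration, the following is a minimal sketch of what such a giving request might carry; the field names and values are assumptions of this sketch, not the actual message format of the embodiment.

```python
# Assembled on the viewer terminal, forwarded by the live broadcast server to the
# anchor terminal, which re-matches the region by the carried feature and renders
# the effect there.
gift_request = {
    "room_id": "room_123",                  # hypothetical identifiers
    "gift_id": "yacht",
    "gift_count": 1,
    "target_object_feature": [0.54, 1.20],  # carried so the region can be re-matched
}
```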
After the viewer terminal determines the target object feature of the target object region, it displays a plurality of virtual gifts for the user to choose from. The user performs a selection operation on a virtual gift to give the selected gift to the anchor, and the viewer terminal, in response to this virtual gift giving operation, initiates the giving request for that gift, i.e., sends the virtual gift giving request to the live broadcast server.
With the method provided by this embodiment, the target object region to which a virtual gift is to be presented can be selected from the plurality of object regions, so that the gift goes to a specific anchor; because the giving request carries the target object feature, the virtual gift special effect can subsequently be displayed in the corresponding target object region according to that feature.
Fig. 3 is a flowchart of another virtual gift giving method provided by an embodiment of this application. The steps of this embodiment are executed by the anchor terminal. Referring to fig. 3, the method includes the following steps:
301. Receive a virtual gift giving request, the request carrying a target object feature.
The anchor terminal determines a target object characteristic from the received virtual gift-presentation request, and subsequently determines an anchor to which the virtual gift is presented based on the target object characteristic.
302. Determine a plurality of object regions included in the live interface of the live room.
Compared with the live broadcast picture shown on the viewer terminal at the time of the trigger operation, the live broadcast picture on the anchor terminal may have changed, and the object regions may have moved from their previous positions; therefore the plurality of object regions in the anchor terminal's current live interface must be determined anew.
303. Among the object features corresponding to the plurality of object regions, determine the object feature that matches the target object feature, and determine the object region corresponding to that feature as the target object region.
The object features are matched against the target object feature to find, among the plurality of object features, the one with the highest similarity to the target object feature; the object region corresponding to that feature is determined as the target object region, and the anchor corresponding to that region is the anchor to whom the virtual gift is to be presented.
304. Display the virtual gift special effect corresponding to the virtual gift in the target object region.
After determining the target object region, the anchor terminal determines the given virtual gift according to the virtual gift information in the giving request, and renders the virtual gift special effect in the target object region so that the effect is displayed there and not in the other object regions of the live broadcast picture; the virtual gift is thereby given to the specific anchor.
With the method provided by this embodiment, the target object region corresponding to the target object feature carried in the giving request can be determined among the displayed object regions, so that the virtual gift special effect is displayed in that region and the virtual gift is presented to a specific anchor.
The display process of the virtual gift special effect is described below through the embodiment shown in fig. 4, taking a face region as the target object region as an example.
Fig. 4 is a flowchart of another virtual gift giving method provided by an embodiment of this application. The interacting parties of this embodiment are the viewer terminal, the live broadcast server, and the anchor terminal. Referring to fig. 4, the method includes the following steps:
401. The viewer terminal displays a live interface of the live room.
In one possible implementation manner, the live interface includes a live broadcast picture, and the live broadcast picture includes a plurality of face regions. Optionally, the live broadcast picture is divided into several regions, each displaying a different face; for example, in a multi-anchor mic-linking scenario, each region shows the corresponding anchor's live picture. Alternatively, the live broadcast picture is a single region in which several faces are all displayed; for example, several anchors broadcast in the same live room, and their images are captured through the same anchor terminal during the broadcast.
In one possible implementation, the viewer terminal is installed with a live application, through which a live interface of the live room is displayed.
402. In response to a selection operation on a target face region, the viewer terminal acquires the target face feature corresponding to the target face region.
In this embodiment, a virtual gift is given to an anchor through the viewer terminal while watching the live broadcast; when several anchors are shown, the user can select any one of them and give the virtual gift to the selected anchor.
In one possible implementation manner, the user performs a trigger operation on the live broadcast picture, and the viewer terminal, in response, determines the target position corresponding to the trigger operation. It then performs face recognition on the live broadcast picture, recognizes the plurality of face regions in it, and determines, among them, the face region that includes the target position; that face region is determined as the target face region, and the target face feature corresponding to it is acquired. If none of the face regions in the live broadcast picture includes the target position, the trigger operation is considered a misoperation by the user, and the subsequent virtual gift giving process is not executed.
The target position is the position at which the user's finger performs the trigger operation; the target face region is the region of the live broadcast picture where the face of the anchor to receive the virtual gift is located; and the target face feature characterizes that face.
In one possible implementation manner, the viewer terminal determines the face regions in at least the following two manners (see the sketch after the second manner):
the first method comprises the following steps: the audience terminal detects face key points in a live broadcast picture, and determines a face area based on a plurality of detected face key points. For example, face key point detection is performed on a live broadcast picture to obtain a plurality of face key points in the live broadcast picture, and a region where the plurality of face key points belonging to the same face are located is determined as a face region. Optionally, a face key point detection algorithm is adopted to detect face key points; or, detecting the face key points by adopting a face key point detection model; alternatively, the detection is performed in other ways.
Second: the viewer terminal calls a face recognition model to recognize the live broadcast picture and determines the face region where each face in the picture is located.
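For illustration, the following is a minimal Python sketch of the grouping step in the first manner above, assuming a detector that already returns one keypoint list per face (the detector itself is out of scope here).

```python
def face_regions(per_face_keypoints):
    # per_face_keypoints: list of keypoint lists, one list of (x, y) points per face.
    # Each face region is the bounding box covering all of that face's keypoints.
    boxes = []
    for points in per_face_keypoints:
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        boxes.append((min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)))
    return boxes
```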
In one possible implementation manner, after determining the selected target face region, the viewer terminal acquires a plurality of face keypoints of the target face region, and determines the target face feature of the region according to the positions of these keypoints. The face keypoints include at least one of the face edge points or facial organ edge points of the target face region; the number of acquired keypoints is 5, 21, 49, 68, 100, or the like, and this application does not limit the number of keypoints per face. For example, referring to fig. 5, the number of face keypoints is 68.
The viewer terminal determines the target face feature of the target face region according to the positions of the face keypoints in the following possible implementations:
in one possible implementation, since the shapes of different faces may be different, the different shapes of faces include: the shape of the face is different, the shape of the face organ is different, the relative position of the face organ is different, or the relative position of the face organ and the edge of the face is different, and the difference can be represented by the relative position among a plurality of face key points of the face. For example, the shapes of human faces are different, and the relative positions of a plurality of face edge points of the human faces are also different; the shapes of the facial organs are different, and the relative positions of the edge points of the facial organs are also different. The relative positions between the face key points are used to represent the target face features.
Optionally, determining a first face sub-feature of the target face region according to the abscissa of the plurality of face key points, wherein the first face sub-feature represents the transverse relative positions of the plurality of face key points; or acquiring a second face sub-feature of the target face region according to the vertical coordinates of the face key points, wherein the second face sub-feature represents the vertical relative positions of the face key points.
The target face feature includes at least one first face sub-feature or at least one second face sub-feature; the description here merely takes the case where both are included as an example. For instance, a first face sub-feature 1 of the target face region is obtained from the abscissas of face keypoints 1, 2, and 3, and a first face sub-feature 2 from the abscissas of face keypoints 4, 5, and 6; the first face feature then includes first face sub-feature 1 and first face sub-feature 2.
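For illustration, the following is a minimal Python sketch of the two sub-features, computed separately from the abscissas and from the ordinates of the keypoints. Normalizing by the coordinate span, so that the relative positions are comparable across frames and zoom levels, is an assumption of this sketch.

```python
def first_face_sub_feature(keypoints):
    # Lateral (horizontal) relative positions, from the abscissas.
    xs = [p[0] for p in keypoints]
    span = (max(xs) - min(xs)) or 1.0
    return [(x - min(xs)) / span for x in xs]

def second_face_sub_feature(keypoints):
    # Longitudinal (vertical) relative positions, from the ordinates.
    ys = [p[1] for p in keypoints]
    span = (max(ys) - min(ys)) or 1.0
    return [(y - min(ys)) / span for y in ys]
```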
In another possible implementation manner, the audience terminal determines, according to the positions of a first face key point, a second face key point and a third face key point, a first distance between the first face key point and the second face key point, and a second distance between the first face key point and the third face key point; a first ratio between the first distance and the second distance is determined as the target face feature. The first, second and third face key points are each any of the plurality of face key points.
It should be noted that the embodiment of the present application is described by taking as an example that the target face features are determined according to the positions of 3 face key points. In other embodiments, other numbers of face key points can be used, for example 4 or 6. The number of face key points is not limited in the embodiments of the present application.
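A minimal sketch of this distance-ratio feature, assuming 2D pixel coordinates for the key points; the positions used are illustrative:

```python
import math

def distance(p, q):
    """Euclidean distance between two key points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def first_ratio(kp1, kp2, kp3):
    """First distance: kp1 to kp2. Second distance: kp1 to kp3.
    Their ratio is taken as the target face feature."""
    return distance(kp1, kp2) / distance(kp1, kp3)

# Illustrative positions for three face key points.
print(first_ratio((0, 0), (3, 4), (6, 8)))  # 5 / 10 -> 0.5
```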
In another possible implementation manner, the angle of the face displayed in the live view may change, and the detected face key points may differ for faces at different angles; for example, if the face in the live view is the left side face of the anchor, the face key points of the right side face cannot be detected. At any angle, however, the face region includes an eye and the nose bridge: the left side face includes the left eye, the right side face includes the right eye, and both include the nose bridge. Since the nose bridge lies on the face midline, the face key points on the midline can be detected whether the left or the right side face is shown.
To reduce the influence of face angle changes on determining the target face features, the face key points of the region to which the eyes belong, or of the region to which the face midline belongs, are used. The audience terminal selects, from the plurality of face key points, the face key points located in a first face sub-region or a second face sub-region, and determines the target face features of the target face region according to the positions of the selected key points. The first face sub-region is the region to which the eyes belong, and the second face sub-region is the region to which the face midline belongs; for example, the first face sub-region includes the eyes and the area around the eyes, and the second face sub-region includes the region where the nose bridge is located.
In one possible implementation, the audience terminal selects, from the plurality of face key points, a first eye corner key point, a second eye corner key point, and a face edge key point at the same height as a lower eyelid key point. The first and second eye corner key points may belong to the same eye or to different eyes.
To acquire the target face features more accurately, optionally, the first eye corner key point and the second eye corner key point belong to the same eye, the two eye corner key points and the face edge key point are at the same height, and the face edge key point is on the same side as the eye corner key points. That is, if the eye corner key points belong to the left eye, the face edge key point is a left face edge point; if they belong to the right eye, the face edge key point is a right face edge point. For example, referring to fig. 6, if the first eye corner key point is C and the second is B, the face edge key point is A; or, if the first eye corner key point is D and the second is E, the face edge key point is F.
In one possible implementation manner, after selecting the first eye corner key point, the second eye corner key point and the face edge key point, the audience terminal determines the target face feature by the implementation above in which the ratio between two distances is the target face feature. Optionally, the first distance is the lateral distance between the face edge key point and the second eye corner key point, the second distance is the lateral distance between the second eye corner key point and the first eye corner key point, and the ratio of the first distance to the second distance is determined as the target face feature; or, the first distance is the lateral distance between the face edge key point and the first eye corner key point, the second distance is the lateral distance between the second eye corner key point and the first eye corner key point, and the ratio of the first distance to the second distance is determined as the target face feature.
For example, referring to fig. 6, if the selected face key points are A, B and C, r1 is determined as AB/BC and r1 is the target face feature, where AB is the lateral distance between the face edge key point A and the second eye corner key point B, and BC is the lateral distance between the second eye corner key point B and the first eye corner key point C. If the selected face key points are D, E and F, r1 is determined as EF/DE, where EF is the lateral distance between the face edge key point F and the second eye corner key point E, and DE is the lateral distance between the second eye corner key point E and the first eye corner key point D.
In another possible implementation, the audience terminal selects a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the plurality of face key points. For example, referring to fig. 6, the first nose bridge key point is G, the second is H, and the third is I.
In one possible implementation manner, after selecting the first, second and third nose bridge key points, the audience terminal determines the target face feature by the implementation above in which the ratio between two distances is the target face feature. Optionally, the first distance is the longitudinal distance between the first and second nose bridge key points, the second distance is the longitudinal distance between the second and third nose bridge key points, and the ratio of the first distance to the second distance is determined as the target face feature; or, the first distance is the longitudinal distance between the first and third nose bridge key points, the second distance is the longitudinal distance between the second and third nose bridge key points, and the ratio of the first distance to the second distance is determined as the target face feature.
For example, referring to fig. 6, if the selected face key points are G, H and I, r2 is determined to be GH/HI, and r2 is determined to be the target face feature. The GH is the longitudinal distance between the first nose bridge key point G and the second nose bridge key point H, the HI is the longitudinal distance between the second nose bridge key point H and the third nose bridge key point I, and the r2 is the ratio of the GH to the HI.
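Putting the two selections together, a sketch of r1 and r2 as defined above; the coordinates stand in for the points A, B, C and G, H, I of fig. 6 and are illustrative only:

```python
def lateral_ratio(a, b, c):
    """r1 = AB / BC with distances taken along the x axis: `a` is the face
    edge key point, `b` the second and `c` the first eye corner key point."""
    return abs(a[0] - b[0]) / abs(b[0] - c[0])

def longitudinal_ratio(g, h, i):
    """r2 = GH / HI with distances taken along the y axis: `g`, `h`, `i`
    are the three nose bridge key points."""
    return abs(g[1] - h[1]) / abs(h[1] - i[1])

# Illustrative coordinates only; real values come from key point detection.
A, B, C = (80, 200), (120, 200), (150, 200)
G, H, I = (200, 180), (200, 210), (200, 250)
r1 = lateral_ratio(A, B, C)       # 40 / 30
r2 = longitudinal_ratio(G, H, I)  # 30 / 40
```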
In another possible implementation manner, the audience terminal acquires face shape parameters of the target face region in response to the selection operation on the target face region, and determines the face shape parameters as the target face features. The face shape parameters include at least one of the aspect ratio of face length to face width, the width ratio of forehead width to chin width, a jaw angle parameter, or a chin angle parameter.
In one possible implementation manner, under the condition that the face shape parameters include the length-width ratio of the face length and the face width, the audience terminal identifies the target face area to obtain the face length and the face width of the target face area; and determining a second ratio between the face length and the face width as the target face characteristic. The face length refers to the longest length in the face region, and the face width refers to the widest width in the face region. For example, referring to fig. 7, if the face width is W and the face length is H, then r3 is H/W, and r3 is determined as the target face feature.
In another possible implementation manner, in the case that the face type parameter includes a width ratio of the forehead width to the chin width, the viewer terminal identifies the target face area to obtain the forehead width and the chin width of the target face area; and determining a third ratio between the forehead width and the chin width as the target human face characteristic. For example, referring to fig. 8, if the forehead width is L1 and the chin width is L2, r4 is L1/L2, and r4 is determined as the target face feature.
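Both face shape ratios above reduce to single divisions once the lengths are measured; a minimal sketch, with all measurements assumed to be in pixels and the example values illustrative:

```python
def aspect_ratio(face_length, face_width):
    """r3: longest length of the face region over its widest width (fig. 7)."""
    return face_length / face_width

def width_ratio(forehead_width, chin_width):
    """r4: forehead width over chin width (fig. 8)."""
    return forehead_width / chin_width

# Illustrative pixel measurements.
r3 = aspect_ratio(face_length=240, face_width=180)   # H / W
r4 = width_ratio(forehead_width=150, chin_width=90)  # L1 / L2
```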
In another possible implementation manner, in the case that the face shape parameters include a jaw angle parameter, the audience terminal determines, according to the positions of a first jaw key point, a second jaw key point and a third jaw key point, a first line segment corresponding to the first and second jaw key points and a second line segment corresponding to the second and third jaw key points, and determines the jaw angle parameter according to the included angle between the first line segment and the second line segment. The first and second jaw key points are located at the same height, and the third jaw key point is the vertex of the plurality of jaw key points, that is, the lowermost jaw key point. For example, referring to fig. 9, if the first jaw key point is A, the second jaw key point is B, the third jaw key point is C, the first line segment is AB and the second line segment is BC, then r5 = tan∠ABC, and the tangent value r5 of ∠ABC is determined as the target face feature; alternatively, the cosine or sine of ∠ABC, or the angle ∠ABC itself, is determined as the target face feature.
In another possible implementation manner, in the case that the face shape parameters include a chin angle parameter, the audience terminal determines, according to the positions of a first chin key point, a second chin key point and a third chin key point, a third line segment corresponding to the first and second chin key points and a fourth line segment corresponding to the second and third chin key points, and determines the chin angle parameter according to the included angle between the third line segment and the fourth line segment. The first and second chin key points are located at the same height, and the third chin key point is the vertex of the plurality of chin key points, that is, the lowermost chin key point. For example, referring to fig. 10, if the first chin key point is F, the second chin key point is G, the third chin key point is C, the third line segment is FG and the fourth line segment is CG, then r6 = tan∠FGC, and the tangent value r6 of ∠FGC is determined as the target face feature; alternatively, the cosine or sine of ∠FGC, or the angle ∠FGC itself, is determined as the target face feature.
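The jaw and chin angle parameters follow the same pattern: three key points define two line segments meeting at a shared vertex, and the included angle (or its tangent) is the feature. A sketch, with illustrative coordinates standing in for the key points of figs. 9 and 10:

```python
import math

def included_angle(p, vertex, q):
    """Angle at `vertex` between segments vertex-p and vertex-q, in radians."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

# Jaw angle parameter r5 = tan(angle ABC): A and B at the same height,
# C the lowermost jaw key point (illustrative coordinates for fig. 9).
A, B, C = (60, 300), (140, 300), (120, 380)
r5 = math.tan(included_angle(A, B, C))

# The chin angle parameter r6 = tan(angle FGC) of fig. 10 is computed the
# same way from the chin key points F, G and the vertex C.
```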
It should be noted that the target face features of the target face region determined by the audience terminal include one or more of the above target face features. For example, the target face features are r1, or the target face features are r1 and r2, or the target face features are r3, r4, r5, and r 6.
It should be noted that the above embodiment is described by taking as an example the audience terminal recognizing the live broadcast picture and acquiring the target face features. In another embodiment, the audience terminal sends the live broadcast picture at the time of the trigger operation to the live broadcast server; the live broadcast server recognizes the picture, acquires the target face features corresponding to the target face region, and sends them back to the audience terminal.
403. The audience terminal sends a virtual gift presentation request to the live broadcast server in response to the virtual gift presentation operation.
The virtual gift giving request carries the target face features, so that terminals in the live broadcast room can subsequently display the virtual gift special effect corresponding to the virtual gift in the target face region according to those features. Optionally, the request carries virtual gift information so that the anchor terminal can subsequently determine which virtual gift is given; optionally, it carries the number of virtual gifts so that the anchor terminal can determine how many are given.
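As a rough sketch of such a request, the following shows one way the carried fields might be assembled; the field names and the JSON encoding are illustrative assumptions, not a format defined by this application:

```python
import json

def build_gift_request(target_face_features, gift_id, gift_count):
    """Assemble a virtual gift giving request; the target face features let
    the receiving side locate the target face region later."""
    return json.dumps({
        "target_face_features": target_face_features,  # e.g. [r1, r2]
        "gift_id": gift_id,                            # which gift is given
        "gift_count": gift_count,                      # how many are given
    })

request_body = build_gift_request([1.33, 0.75], gift_id="yacht", gift_count=20)
```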
In one possible implementation, the audience terminal displays a virtual gift giving interface on the upper layer of the live broadcast interface; or, switching from the live interface to the virtual gift giving interface; the virtual gift giving request is initiated in response to a selection operation of any virtual gift in the virtual gift giving interface. For example, the gift-giving interface is a floating window popped up on the upper layer of the live broadcast interface, or the gift-giving interface is a new interface different from the live broadcast interface, or the virtual gift interface is an interface in another form, which is not limited in this application.
In one possible implementation manner, the virtual gift giving interface comprises a plurality of special effect thumbnails corresponding to the virtual gifts, and the audience terminal initiates a virtual gift giving request in response to the selection operation of the special effect thumbnail corresponding to any virtual gift. The special effect thumbnail is a thumbnail of a special effect of the virtual gift corresponding to the virtual gift, and the selection operation is performed on the special effect thumbnail to represent the selection of the corresponding virtual gift. Optionally, the virtual gift special effect is a dynamic image and the special effect thumbnail is a static image.
In addition, the virtual gift special effect is a character, a figure or other forms of special effects.
In one possible implementation manner, the virtual gift giving interface comprises a giving control, and the audience terminal responds to the selection operation of any virtual gift and sets the selected virtual gift to be in a selected state; and initiating a gifting request of the virtual gift in the selected state in response to the triggering operation of the gifting control. And the virtual gift in the selected state is the virtual gift to be presented. Alternatively, if the virtual gift giving interface displays a special effect thumbnail corresponding to the virtual gift, the viewer terminal sets the selected special effect thumbnail to a selected state in response to a selection operation of any one of the special effect thumbnails.
404. The live broadcast server transmits a virtual gift presentation request to the anchor terminal.
405. The anchor terminal receives the virtual gift-giving request.
The audience terminal sends the virtual gift giving request to the anchor terminal through the live broadcast server; the anchor terminal obtains the request, extracts the target face features from it, and subsequently determines, based on those features, the anchor to whom the virtual gift is given.
In one possible implementation, the virtual gift presentation request carries gift information, and the live broadcast server further issues the virtual gift to the anchor terminal.
406. The anchor terminal determines a plurality of face regions included in a live interface of a live broadcast room.
Since the live broadcast picture at the anchor terminal may have changed relative to the picture the audience terminal showed when the trigger operation was performed, the plurality of face regions in the anchor terminal's current live interface need to be determined.
In one possible implementation, the anchor terminal performs face recognition on the live broadcast picture to determine a plurality of face regions. The embodiment of determining a plurality of face regions by face recognition is similar to the embodiment of determining face regions by the audience terminal in step 402, and is not described herein again.
407. The anchor terminal determines face features matched with the target face features in the face features corresponding to the face regions, and determines the face regions corresponding to the face features matched with the target face features as target face regions.
Matching the face features with the target face features means determining, among the plurality of face features, the face features with the highest similarity to the target face features; the face region corresponding to those features is determined as the target face region, and the anchor corresponding to that region is taken as the anchor to whom the virtual gift is given.
In a possible implementation manner, difference values between the face features of the plurality of face regions and the target face features are respectively obtained, yielding a plurality of difference values, and the face region corresponding to the smallest difference value is determined as the target face region. The difference value may be an absolute difference, a squared difference, a standard deviation, or another numerical measure of difference. For example, if the target face features are r1 and r2, and the face features of one region are r1′ and r2′, the difference value is (r1−r1′)² + (r2−r2′)², or |r1−r1′| + |r2−r2′|.
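A minimal sketch of this matching step, using the sum of squared differences as the difference value; the other difference values named above plug into `difference_value` the same way:

```python
def difference_value(features, target):
    """Sum of squared differences, e.g. (r1-r1')**2 + (r2-r2')**2."""
    return sum((f - t) ** 2 for f, t in zip(features, target))

def match_target_region(features_by_region, target_features):
    """Return the region whose features have the smallest difference value
    with respect to the target face features."""
    return min(features_by_region,
               key=lambda r: difference_value(features_by_region[r],
                                              target_features))

regions = {"face_0": [1.30, 0.80], "face_1": [1.60, 0.60]}
print(match_target_region(regions, [1.33, 0.75]))  # -> face_0
```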
It should be noted that the implementation manner of determining the face features corresponding to each face region by the anchor terminal is similar to the implementation manner of determining the target face features of the target face region in step 402, and details are not repeated here.
408. And the anchor terminal displays the virtual gift special effect corresponding to the virtual gift in the target face area.
After determining the target face region, the anchor terminal determines the given virtual gift according to the virtual gift information in the giving request and renders the virtual gift special effect in the target face region, so that the special effect corresponding to the virtual gift is displayed there and not in the other face regions of the live broadcast picture.
In one possible implementation manner, the virtual gift giving request carries the number of virtual gifts, and the anchor terminal obtains the number of virtual gifts after receiving the virtual gift giving request and displays the special effect of the virtual gifts according to the number of the virtual gifts.
Optionally, the anchor terminal displays the number of virtual gift special effects in the target face area in an overlapping manner in response to the number of virtual gifts being greater than a first reference number and smaller than a second reference number, wherein the first reference number is smaller than the second reference number. For example, if the first reference number is 1, the second reference number is 10, the virtual gift to be presented is "ear", and the number carried in the virtual gift-presentation request is 5, 5 pairs of "ears" can be superimposed, and the superimposed "ears" are displayed. For another example, the virtual gift special effect corresponding to the presented virtual gift is "getting fat", and the target face is increased by 10% on the original basis every time a virtual gift is received, and if 5 virtual gifts are received, the target face is increased by 50% on the original basis.
Optionally, the anchor terminal displays the virtual gift special effect and text information corresponding to the virtual gift in the target face area in response to the number of virtual gifts being greater than the third reference number. The text information comprises the number of the virtual gifts, and the third reference number is not less than the second reference number. For example, if the third reference number is 10, the number carried in the virtual gift-presentation request is 20, and the presented virtual gift is "yacht", the "yacht" effect is displayed, and "20 x" is displayed above or in other surrounding areas of the "yacht" effect, indicating that 20 "yachts" are presented.
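The two display rules above amount to threshold checks on the carried gift count. A sketch, with the reference numbers as parameters; the fallback branch for counts outside both ranges is an assumption, since the description above does not specify it:

```python
def display_mode(count, first_ref=1, second_ref=10, third_ref=10):
    """Pick a display manner from the carried gift count; third_ref is not
    less than second_ref, per the description above."""
    if first_ref < count < second_ref:
        return "overlay"           # superimpose `count` copies of the effect
    if count > third_ref:
        return "effect_with_text"  # one effect plus text such as "20x"
    return "single"                # assumed fallback for remaining counts

print(display_mode(5))   # -> overlay
print(display_mode(20))  # -> effect_with_text
```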
In one possible implementation, the virtual gift special effect is displayed at a target part in the target face region, where the target part is the eyes, cheeks, forehead, or the like.
In one possible implementation, the virtual gift special effect is provided with a corresponding display duration, and the virtual gift special effect is not displayed after the display duration is reached.
It should be noted that this embodiment is described by taking as an example one audience terminal sending a virtual gift giving request. In another embodiment, multiple audience terminals in the live broadcast room can each send virtual gift giving requests to the anchor terminal in the above manner; if several requests arrive at the same time, the anchor terminal displays the corresponding virtual gift special effects according to the multiple requests. Optionally, if multiple audience terminals give the same virtual gift, the anchor terminal displays the special effect according to the total number of that gift and its display manner; if they give different virtual gifts, the anchor terminal displays the special effect of each gift, either simultaneously or sequentially in a preset display order.
409. And the anchor terminal sends the live broadcast picture added with the special effect of the virtual gift to a live broadcast server.
410. The live broadcast server releases live broadcast pictures in the live broadcast room.
The anchor terminal combines the live broadcast picture with the virtual gift special effect to obtain a live broadcast picture to which the special effect has been added, and sends it to the live broadcast server; the live broadcast server publishes the picture in the live broadcast room, and the audience terminals in the room display it, realizing the giving of a virtual gift to a specific anchor.
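A sketch of this anchor-terminal flow (steps 408 to 410); `render_effect` and `push_to_server` are placeholder callables for the renderer and the uplink to the live broadcast server, neither of which is specified by this application:

```python
from typing import Callable, Tuple

Region = Tuple[int, int, int, int]  # (left, top, right, bottom)

def composite_and_send(frame,
                       effect,
                       target_region: Region,
                       render_effect: Callable,
                       push_to_server: Callable):
    """Draw the virtual gift effect only inside the target region of the
    live frame, then send the combined frame to the live broadcast server,
    which publishes it in the live room."""
    combined = render_effect(frame, effect, target_region)
    push_to_server(combined)
    return combined
```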
With the method provided by the embodiments of the present application, the target face region to receive the virtual gift can be selected from the plurality of face regions, so that the gift is given to a specific anchor; and because the giving request carries the target face features, the virtual gift special effect can subsequently be displayed in the corresponding target face region according to those features.
Moreover, when acquiring the face features, ratios of distances between different face key points, or angles formed by different face key points, are determined as the face features, so that the corresponding face regions are represented accurately and the accuracy of the face features is improved. When such accurate target face features are used to match the corresponding target face region, the determined region can be ensured to be the face region to which the user intends to give the virtual gift.
In another embodiment, the target region is a human body region. Compared with the case where the target region is a face region, the differences are that the audience terminal acquires a target human body feature, the virtual gift giving request sent to the anchor terminal carries that feature, and the anchor terminal determines the matching human body feature according to the target human body feature and determines the corresponding human body region as the target human body region. The manner in which the audience terminal sends the giving request and the anchor terminal displays the corresponding virtual gift special effect is the same as in fig. 4 above.
The target human body features are acquired in at least one of the following ways; a sketch of the first two ways follows the third one below.
The first method: in response to the selection operation on the target human body region, the audience terminal obtains a first human body length of a first human body sub-region and a second human body length of a second human body sub-region in the target human body region, and determines the ratio of the first human body length to the second human body length as the target human body feature; this feature can represent the human body proportion. The first and second human body sub-regions are two different regions of the target human body; for example, the first human body sub-region is the waist region and the region above it, and the second human body sub-region is the region below the waist.
The second method: in response to the selection operation on the target human body region, the audience terminal acquires the total human body length and the total human body width of the target human body region, and determines the ratio of the total length to the total width as the target human body feature. The total human body length is the height of the target human body, and this feature can represent the body type.
The third method: in response to the selection operation on the target human body region, the audience terminal acquires the clothing features in the target human body region and determines them as the target human body features. The clothing features include the matching of the clothing, its color, ornaments, or other external features of the target human body.
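As referenced above, a minimal sketch of the first two methods; the pixel measurements are illustrative, and the clothing features of the third method are omitted since this application does not fix their representation:

```python
def body_proportion_feature(upper_length, lower_length):
    """First method: length of the first body sub-region (waist and above)
    over the length of the second body sub-region (below the waist)."""
    return upper_length / lower_length

def body_type_feature(total_length, total_width):
    """Second method: total body length (height) over total body width."""
    return total_length / total_width

# Illustrative pixel measurements from a recognized body region.
proportion = body_proportion_feature(upper_length=300, lower_length=400)
body_type = body_type_feature(total_length=700, total_width=180)
```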
In one possible implementation, the human body region includes the face region. If the audience terminal detects a selection operation on the face region, it acquires the target face features and performs the subsequent virtual gift special effect display process according to them; if it detects a selection operation on a part of the human body region other than the face region, it acquires the target human body features and performs the subsequent display process according to them.
In addition, the manner in which the anchor terminal determines the human body features corresponding to the plurality of human body regions is the same as the manner in which the target human body feature is determined.
In another embodiment, when the live broadcast picture in the live interface includes a plurality of picture regions, the target region is a background region. Compared with the case where the target region is a face region, the differences are that the audience terminal acquires a target background feature, the virtual gift giving request sent to the anchor terminal carries that feature, and the anchor terminal determines the matching background feature according to the target background feature and determines the corresponding background region as the target background region. The manner in which the audience terminal sends the giving request and the anchor terminal displays the corresponding virtual gift special effect is the same as in fig. 4.
The method for obtaining the target background features: in response to the selection operation on the target picture region, the audience terminal determines the target background region in the target picture region and acquires its target background features. The target background features are used to describe the background in the picture region.
In a possible implementation manner, when each anchor corresponds to one picture region, the audience terminal performs the selection operation on a target picture region to give the virtual gift to the corresponding anchor, and the anchor terminal displays the corresponding virtual gift special effect in the background region, the human body region, or the face region within that picture region.
The manner in which the anchor terminal identifies the background features corresponding to the plurality of background regions is the same as the manner in which the target background features are identified.
Fig. 11 is a schematic structural diagram of a virtual gift-giving device according to an embodiment of the present application. Referring to fig. 11, the apparatus includes:
a display module 1101, configured to display a live interface of a live broadcast room, where the live broadcast interface includes a plurality of object areas;
a feature obtaining module 1102, configured to obtain, in response to a selection operation on a target object region, a target object feature corresponding to the target object region;
the request initiating module 1103 is configured to initiate a virtual gift giving request to a target object corresponding to the target object area in response to the virtual gift giving operation, where the virtual gift giving request carries characteristics of the target object.
In one possible implementation, referring to fig. 12, the apparatus further includes:
the display module 1101 is further configured to display a virtual gift special effect corresponding to the virtual gift in the target object area.
In one possible implementation manner, the live interface includes a live screen, and the live screen includes a plurality of object regions, referring to fig. 12, the feature obtaining module 1102 includes:
a position determining unit 1112, configured to determine, in response to a trigger operation on the live view, a target position corresponding to the trigger operation;
a region determination unit 1122 for identifying a live view and determining an object region including a target position;
the feature acquiring unit 1132 is configured to determine the object region as a target object region, and acquire a target object feature corresponding to the target object region.
In another possible implementation manner, the request initiation module 1103 is configured to:
displaying a virtual gift giving interface on the upper layer of the live broadcast interface; or, switching from the live interface to the virtual gift giving interface;
the virtual gift giving request is initiated in response to a selection operation of any virtual gift in the virtual gift giving interface.
In another possible implementation manner, the virtual gift giving interface comprises special effect thumbnails corresponding to a plurality of virtual gifts; a request initiating module 1103, configured to initiate a virtual gift giving request in response to a selection operation of a special effect thumbnail corresponding to any virtual gift.
In another possible implementation, the virtual gift giving interface includes a giving control, and the request initiation module 1103 is configured to:
setting the selected virtual gift to a selected state in response to a selection operation of any virtual gift;
and initiating a gifting request of the virtual gift in the selected state in response to the triggering operation of the gifting control.
In another possible implementation manner, the target object region includes a target face region, and referring to fig. 12, the feature obtaining module 1102 includes:
a key point obtaining unit 1142, configured to obtain, in response to a selection operation on a target face region, a plurality of face key points of the target face region, where the face key points include at least one of face edge points or face organ edge points of the target face region;
the feature obtaining unit 1132 is configured to determine a target face feature of the target face region according to the positions of the plurality of face key points.
In another possible implementation manner, referring to fig. 12, the feature obtaining unit 1132 is configured to:
determining a first face sub-feature of the target face region according to the abscissa of the plurality of face key points, wherein the first face sub-feature represents the transverse relative positions of the plurality of face key points;
and acquiring a second face sub-feature of the target face area according to the vertical coordinates of the face key points, wherein the second face sub-feature represents the vertical relative positions of the face key points.
In another possible implementation manner, referring to fig. 12, the feature obtaining unit 1132 is configured to:
determining a first distance between the first face key point and the second face key point and a second distance between the first face key point and the third face key point according to the position of the first face key point, the position of the second face key point and the position of the third face key point;
and determining a first ratio between the first distance and the second distance as the target human face characteristic.
In another possible implementation manner, the plurality of face key points include face edge points and face organ edge points, see fig. 12, and the feature obtaining unit 1132 is configured to:
selecting face key points positioned in a first face sub-area or a second face sub-area from the plurality of face key points;
and determining the target face characteristics of the target face area according to the positions of the selected face key points.
In another possible implementation manner, referring to fig. 12, the feature obtaining unit 1132 is configured to:
select, from the plurality of face key points, a first eye corner key point, a second eye corner key point and a face edge key point at the same height as a lower eyelid key point; or,
and selecting a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the plurality of face key points.
In another possible implementation manner, the target object region includes a target face region, and referring to fig. 12, the feature obtaining module 1102 includes:
a parameter acquiring unit 1152, configured to acquire face shape parameters of the target face region in response to the selection operation on the target face region, the face shape parameters including at least one of the aspect ratio of face length to face width, the width ratio of forehead width to chin width, a jaw angle parameter, or a chin angle parameter;
the feature obtaining unit 1132 is further configured to determine the face parameters as the target face features.
In another possible implementation, the face shape parameters include a jaw angle parameter; referring to fig. 12, the parameter acquiring unit 1152 is configured to:
in response to the selection operation on the target face region, determine, according to the positions of a first jaw key point, a second jaw key point and a third jaw key point, a first line segment corresponding to the first and second jaw key points and a second line segment corresponding to the second and third jaw key points, wherein the first and second jaw key points are located at the same height, and the third jaw key point is the vertex of the plurality of jaw key points;
and determine the jaw angle parameter according to the included angle between the first line segment and the second line segment.
In another possible implementation, the face shape parameters include a chin angle parameter; referring to fig. 12, the parameter acquiring unit 1152 is configured to:
in response to the selection operation on the target face region, determine, according to the positions of a first chin key point, a second chin key point and a third chin key point, a third line segment corresponding to the first and second chin key points and a fourth line segment corresponding to the second and third chin key points, wherein the first and second chin key points are located at the same height, and the third chin key point is the vertex of the plurality of chin key points;
and determine the chin angle parameter according to the included angle between the third line segment and the fourth line segment.
In another possible implementation manner, the target object region includes a target human body region, and the feature obtaining module 1102 includes:
the feature obtaining unit 1132 is further configured to, in response to a selection operation on the target human body region, obtain a first human body length of a first human body subregion in the target human body region and a second human body length of a second human body subregion in the target human body region, and determine a ratio between the first human body length and the second human body length as a target human body feature of the target human body region; or,
the characteristic acquiring unit 1132 is further configured to, in response to a selection operation on the target human body region, acquire a total human body length and a total human body width of the target human body region, and determine a ratio between the total human body length and the total human body width as a target human body characteristic of the target human body region; or,
the feature obtaining unit 1132 is further configured to, in response to the selection operation on the target human body region, obtain a clothing feature in the target human body region, and determine the clothing feature as the target human body feature of the target human body region.
In another possible implementation manner, a live view in a live interface includes a plurality of view areas, and the feature obtaining module 1102 includes:
the feature obtaining unit 1132 is further configured to, in response to a selection operation on the target screen region, determine a target background region in the target screen region, and obtain a target background feature of the target background region.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
It should be noted that: the virtual gift giving device provided in the above embodiment is illustrated by only dividing the functional modules when displaying the special effect of the virtual gift, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above. In addition, the virtual gift giving device provided by the above embodiment and the virtual gift giving method embodiment belong to the same concept, and the specific implementation process thereof is described in the method embodiment and will not be described herein again.
Fig. 13 is a schematic structural view of another virtual gift-giving device according to an embodiment of the present application. Referring to fig. 13, the apparatus includes:
a request receiving module 1301, configured to receive a virtual gift giving request, where the virtual gift giving request carries target object characteristics;
an area determining module 1302, configured to determine a plurality of object areas included in a live interface of a live broadcast room;
a feature matching module 1303, configured to determine an object feature matching the target object feature from among object features corresponding to the multiple object regions, and determine an object region corresponding to the object feature matching the target object feature as a target object region;
and the special effect display module 1304 is used for displaying the special effect of the virtual gift corresponding to the virtual gift in the target object area.
In a possible implementation manner, the feature matching module 1303 is configured to obtain difference values between object features of a plurality of object regions and target object features, and determine an object region corresponding to a minimum difference value as a target object region.
In another possible implementation manner, the special effect display module 1304 is configured to display a special effect of the virtual gift at a target portion in the target object region.
In another possible implementation manner, the virtual gift giving request carries the number of virtual gifts, and the special effect display module 1304 is configured to:
in response to the number of the virtual gifts being larger than the first reference number and smaller than the second reference number, displaying the number of virtual gift special effects in the target object area in an overlapping manner; or,
and in response to the number of the virtual gifts being greater than the third reference number, displaying the virtual gift special effect and text information corresponding to the virtual gifts in the target object area, wherein the text information comprises the number of the virtual gifts.
In another possible implementation manner, the object region includes a face region, the live interface includes a live view, and the region determining module 1302 is configured to perform face recognition on the live view to determine a plurality of face regions.
In another possible implementation, referring to fig. 14, the apparatus further includes:
the frame sending module 1305 is configured to send the live frame to which the virtual gift special effect has been added to a live server, where the live server is configured to publish the live frame in a live room.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
It should be noted that: the virtual gift giving device provided in the above embodiment is illustrated by only dividing the functional modules when displaying the special effect of the virtual gift, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above. In addition, the virtual gift giving device provided by the above embodiment and the virtual gift giving method embodiment belong to the same concept, and the specific implementation process thereof is described in the method embodiment and will not be described herein again.
The embodiment of the present application further provides a computer device, which includes a processor and a memory, where the memory stores at least one program code, and the at least one program code is loaded and executed by the processor to implement the operations executed in the virtual gift giving method of the above embodiment.
In one possible implementation, the computer device is provided as a terminal. Fig. 15 is a schematic structural diagram of a terminal 1500 according to an embodiment of the present disclosure. The terminal 1500 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1500 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
The terminal 1500 includes: a processor 1501 and memory 1502.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1502 is used to store at least one program code for execution by the processor 1501 to implement the virtual gift giving method provided by the method embodiments herein.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1504, a display 1505, a camera assembly 1506, an audio circuit 1507, a positioning assembly 1508, and a power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 1504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to capture touch signals on or over the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1505 may be one, provided on the front panel of terminal 1500; in other embodiments, display 1505 may be at least two, each disposed on a different surface of terminal 1500 or in a folded design; in other embodiments, display 1505 may be a flexible display disposed on a curved surface or a folded surface of terminal 1500. Even further, the display 1505 may be configured in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1505 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1501 for processing or inputting the electric signals to the radio frequency circuit 1504 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
The positioning component 1508 is used to locate the current geographic position of the terminal 1500 for navigation or LBS (Location Based Service). The Positioning component 1508 may be a Positioning component based on the united states GPS (Global Positioning System), the chinese beidou System, the russian glonass Positioning System, or the european union galileo Positioning System.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the display screen 1505 to display the user interface in a landscape view or a portrait view based on the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 1512 can detect the body direction and the rotation angle of the terminal 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 cooperate to collect the 3D motion of the user on the terminal 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side frame of terminal 1500 and/or underneath display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, the holding signal of the user to the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at a lower layer of the display screen 1505, the processor 1501 controls the operability control on the UI interface in accordance with the pressure operation of the user on the display screen 1505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1514 is configured to capture a fingerprint of the user, and the processor 1501 identifies the user based on the fingerprint captured by the fingerprint sensor 1514, or the fingerprint sensor 1514 identifies the user based on the captured fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical key or vendor Logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect ambient light intensity. In one embodiment, processor 1501 may control the brightness of display screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the display screen 1505 is adjusted down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also called a distance sensor, is provided on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front surface of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that this distance gradually decreases, the processor 1501 controls the display 1505 to switch from the bright-screen state to the screen-off state; when the proximity sensor 1516 detects that the distance gradually increases, the processor 1501 controls the display 1505 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the structure shown in Fig. 15 does not constitute a limitation of the terminal 1500, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
In another possible implementation, the computer device is provided as a server. Fig. 16 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1600 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1601 and one or more memories 1602, where the memory 1602 stores at least one program code that is loaded and executed by the processor 1601 to implement the methods provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described here again.
The present application also provides a computer-readable storage medium, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to implement the operations performed in the virtual gift giving method of the above embodiments.
An embodiment of the present application also provides a computer program product or computer program, which includes computer program code stored in a computer-readable storage medium; the computer program code is loaded and executed by a processor to implement the operations performed in the virtual gift giving method of the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only an alternative embodiment of the present application and is not intended to limit the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (24)
1. A virtual gift giving method, characterized in that the method comprises:
displaying a live interface of a live broadcast room, wherein the live interface comprises a plurality of object regions;
in response to a selection operation on a target object region, acquiring a target object feature corresponding to the target object region;
in response to a virtual gift giving operation, initiating a virtual gift giving request to a target object corresponding to the target object region, wherein the virtual gift giving request carries the target object feature.
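For illustration only (this sketch is not part of the claims), a sender-side implementation of claim 1 might package the selected region's feature into the gift request as follows; every field and function name here is a hypothetical placeholder:

```python
import json

def build_gift_request(room_id, gift_id, target_object_feature):
    """Assemble a virtual gift giving request that carries the target
    object feature, so the receiving side can relocate the region."""
    return json.dumps({
        "room_id": room_id,
        "gift_id": gift_id,
        "target_object_feature": target_object_feature,  # e.g. a distance ratio
    })

# Example: the viewer tapped a face whose computed feature value is 0.62
request_body = build_gift_request(room_id="12345", gift_id="rose",
                                  target_object_feature=0.62)
```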
2. The method of claim 1, wherein after initiating the virtual gift giving request to the target object corresponding to the target object region in response to the virtual gift giving operation, the method further comprises:
displaying a virtual gift special effect corresponding to the virtual gift in the target object region.
3. The method according to claim 1, wherein the live interface includes a live broadcast picture, the live broadcast picture includes the plurality of object regions, and the acquiring a target object feature corresponding to the target object region in response to the selection operation on the target object region comprises:
in response to a trigger operation on the live broadcast picture, determining a target position corresponding to the trigger operation;
performing recognition on the live broadcast picture, and determining an object region that includes the target position;
determining the object region as the target object region, and acquiring the target object feature corresponding to the target object region.
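An illustrative sketch of the hit-testing step in claim 3, assuming each recognized object region exposes an axis-aligned bounding box (an assumption of this example, not a requirement of the claim):

```python
def find_target_region(tap_x, tap_y, object_regions):
    """Return the recognized object region containing the tap position,
    or None. Each region is a dict with an (x, y, width, height) bbox."""
    for region in object_regions:
        x, y, w, h = region["bbox"]
        if x <= tap_x <= x + w and y <= tap_y <= y + h:
            return region
    return None
```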
4. The method of claim 1, wherein the initiating a virtual gift giving request in response to the virtual gift giving operation comprises:
displaying a virtual gift giving interface over the live interface, or switching from the live interface to the virtual gift giving interface;
initiating the virtual gift giving request in response to a selection operation on any virtual gift in the virtual gift giving interface.
5. The method according to claim 1, wherein the target object region includes a target face region, and the acquiring a target object feature corresponding to the target object region in response to the selection operation on the target object region comprises:
in response to the selection operation on the target face region, acquiring a plurality of face key points of the target face region, wherein the face key points include at least one of face edge points or facial organ edge points of the target face region;
determining the target face features of the target face region according to the positions of the plurality of face key points.
6. The method of claim 5, wherein the determining the target face features of the target face region according to the positions of the plurality of face key points comprises at least one of:
determining a first face sub-feature of the target face region according to the abscissas of the plurality of face key points, wherein the first face sub-feature represents the horizontal relative positions of the plurality of face key points;
acquiring a second face sub-feature of the target face region according to the ordinates of the plurality of face key points, wherein the second face sub-feature represents the vertical relative positions of the plurality of face key points.
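One plausible, purely illustrative reading of claim 6 normalizes the key point coordinates by the face bounding box, so the sub-features encode relative rather than absolute positions:

```python
def relative_position_features(keypoints):
    """Compute the first (horizontal) and second (vertical) face
    sub-features from a list of (x, y) key point coordinates."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    x0, y0 = min(xs), min(ys)
    width = (max(xs) - x0) or 1.0    # guard against division by zero
    height = (max(ys) - y0) or 1.0
    first_sub_feature = [(x - x0) / width for x in xs]     # horizontal
    second_sub_feature = [(y - y0) / height for y in ys]   # vertical
    return first_sub_feature, second_sub_feature
```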
7. The method of claim 5, wherein the determining the target face features of the target face region according to the positions of the plurality of face key points comprises:
determining, according to the positions of a first face key point, a second face key point, and a third face key point, a first distance between the first face key point and the second face key point, and a second distance between the first face key point and the third face key point;
determining a first ratio of the first distance to the second distance as the target face feature.
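Claim 7 reduces to plane geometry; a minimal sketch follows. The choice of eye corners and chin as the three key points in the usage example is an assumption for illustration. Being a ratio, the feature is scale-invariant, which matters because sender and receiver may render the live picture at different resolutions:

```python
import math

def distance_ratio_feature(kp1, kp2, kp3):
    """Target face feature per claim 7: the ratio of the first distance
    (kp1 to kp2) to the second distance (kp1 to kp3)."""
    first_distance = math.hypot(kp1[0] - kp2[0], kp1[1] - kp2[1])
    second_distance = math.hypot(kp1[0] - kp3[0], kp1[1] - kp3[1])
    return first_distance / second_distance

# e.g. inter-ocular distance over eye-to-chin distance
feature = distance_ratio_feature((120, 80), (180, 82), (150, 200))
```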
8. The method of claim 5, wherein the plurality of face key points comprise face edge points and facial organ edge points, and wherein determining the target face feature of the target face region according to the positions of the plurality of face key points comprises:
selecting, from the plurality of face key points, the face key points located in a first face sub-region or a second face sub-region;
determining the target face features of the target face region according to the positions of the selected face key points.
9. The method of claim 8, wherein the selecting, from the plurality of face key points, the face key points located in a first face sub-region or a second face sub-region comprises:
selecting, from the plurality of face key points, a first canthus key point, a second canthus key point, and a face edge key point at the same height as a lower eyelid key point; or,
selecting, from the plurality of face key points, a first nose bridge key point, a second nose bridge key point, and a third nose bridge key point.
10. The method according to claim 1, wherein the target object region includes a target face region, and the acquiring a target object feature corresponding to the target object region in response to the selection operation on the target object region comprises:
acquiring a face shape parameter of the target face region in response to the selection operation on the target face region, wherein the face shape parameter comprises at least one of an aspect ratio of face length to face width, a width ratio of forehead width to chin width, a mandible angle parameter, or a chin angle parameter;
determining the face shape parameter as the target face feature.
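A sketch of the two ratio-valued parameters of claim 10; all lengths are assumed to be pixel distances derived from the face key points, and the two angle parameters are illustrated after claims 11 and 12 below:

```python
def face_shape_parameters(face_length, face_width, forehead_width, chin_width):
    """Return the aspect ratio and width ratio of claim 10."""
    aspect_ratio = face_length / face_width      # face length : face width
    width_ratio = forehead_width / chin_width    # forehead width : chin width
    return aspect_ratio, width_ratio
```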
11. The method of claim 10, wherein the face shape parameter includes the mandible angle parameter, and wherein the acquiring a face shape parameter of the target face region in response to the selection operation on the target face region comprises:
in response to the selection operation on the target face region, determining, according to the positions of a first jaw key point, a second jaw key point, and a third jaw key point, a first line segment connecting the first jaw key point and the second jaw key point, and a second line segment connecting the second jaw key point and the third jaw key point, wherein the first jaw key point and the second jaw key point are located at the same height, and the third jaw key point is a vertex among a plurality of jaw key points;
determining the mandible angle parameter according to the included angle between the first line segment and the second line segment.
12. The method of claim 10, wherein the face shape parameter includes the chin angle parameter, and wherein the acquiring a face shape parameter of the target face region in response to the selection operation on the target face region comprises:
in response to the selection operation on the target face region, determining, according to the positions of a first chin key point, a second chin key point, and a third chin key point, a third line segment connecting the first chin key point and the second chin key point, and a fourth line segment connecting the second chin key point and the third chin key point, wherein the first chin key point and the second chin key point are located at the same height, and the third chin key point is a vertex among a plurality of chin key points;
determining the chin angle parameter according to the included angle between the third line segment and the fourth line segment.
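Claims 11 and 12 both measure the included angle at a shared key point; a single helper covers both, with only the supplied key points (jaw versus chin) differing. This is an illustrative sketch, not the patented implementation:

```python
import math

def included_angle_deg(shared, p_a, p_b):
    """Included angle, in degrees, at `shared` between the segments
    shared->p_a and shared->p_b."""
    v1 = (p_a[0] - shared[0], p_a[1] - shared[1])
    v2 = (p_b[0] - shared[0], p_b[1] - shared[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    cos_angle = max(-1.0, min(1.0, dot / norm))  # clamp for float safety
    return math.degrees(math.acos(cos_angle))

# Claim 11 reading: the two segments share the second jaw key point, so
# the mandible angle is the angle at that point (coordinates hypothetical).
mandible_angle = included_angle_deg((100, 150), (60, 150), (120, 210))
```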
13. The method according to claim 1, wherein the target object region comprises a target human body region, and the acquiring a target object feature corresponding to the target object region in response to the selection operation on the target object region comprises:
in response to the selection operation on the target human body region, acquiring a first human body length of a first human body sub-region and a second human body length of a second human body sub-region in the target human body region, and determining the ratio of the first human body length to the second human body length as the target human body feature of the target human body region; or,
in response to the selection operation on the target human body region, acquiring the total human body length and the total human body width of the target human body region, and determining the ratio of the total human body length to the total human body width as the target human body feature of the target human body region; or,
in response to the selection operation on the target human body region, acquiring a clothing feature in the target human body region, and determining the clothing feature as the target human body feature of the target human body region.
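A minimal sketch of the first two alternatives of claim 13; treating the two sub-regions as, say, torso and legs is an assumption made only for this example, since the claim leaves the sub-regions open:

```python
def body_proportion_features(first_len, second_len, total_len, total_width):
    """Return the sub-region length ratio (first alternative) and the
    overall length-to-width ratio (second alternative) of claim 13."""
    sub_region_ratio = first_len / second_len   # e.g. torso : legs
    aspect_ratio = total_len / total_width
    return sub_region_ratio, aspect_ratio
```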
14. The method according to claim 1, wherein a live broadcast picture in the live interface includes a plurality of picture areas, and the acquiring a target object feature corresponding to the target object region in response to the selection operation on the target object region comprises:
in response to a selection operation on a target picture area, determining a target background region in the target picture area, and acquiring a target background feature of the target background region.
15. A virtual gift giving method, characterized in that the method comprises:
receiving a virtual gift giving request, wherein the virtual gift giving request carries a target object feature;
determining a plurality of object areas included in a live interface of a live broadcast room;
determining, among the object features corresponding to the plurality of object regions, an object feature that matches the target object feature, and determining the object region corresponding to the matched object feature as a target object region;
displaying a virtual gift special effect corresponding to the virtual gift in the target object region.
16. The method according to claim 15, wherein the determining, among the object features corresponding to the plurality of object regions, an object feature that matches the target object feature, and determining the object region corresponding to the matched object feature as the target object region, comprises:
acquiring difference values between the object features of the plurality of object regions and the target object feature respectively, and determining the object region corresponding to the minimum difference value as the target object region.
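Claim 16 is a nearest-neighbor match on the carried feature; an illustrative sketch, assuming each region carries a single scalar feature value:

```python
def match_target_region(object_regions, target_feature):
    """Pick the region whose feature is numerically closest to the
    feature carried in the gift request (minimum difference value)."""
    return min(object_regions,
               key=lambda r: abs(r["feature"] - target_feature))

regions = [{"bbox": (0, 0, 100, 120), "feature": 0.58},
           {"bbox": (150, 10, 90, 110), "feature": 0.63}]
target = match_target_region(regions, target_feature=0.62)  # second region
```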
17. The method of claim 15, wherein the displaying the virtual gift special effect corresponding to the virtual gift in the target object region comprises:
displaying the virtual gift special effect at a target part in the target object region.
18. The method of claim 15, wherein the virtual gift giving request carries the number of virtual gifts, and wherein the displaying the virtual gift special effect corresponding to the virtual gift in the target object region comprises:
in response to the number of virtual gifts being greater than a first reference number and less than a second reference number, displaying that number of virtual gift special effects in the target object region in an overlapping manner; or,
in response to the number of virtual gifts being greater than a third reference number, displaying the virtual gift special effect together with text information corresponding to the virtual gift in the target object region, wherein the text information includes the number of virtual gifts.
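An illustrative decision function for claim 18; the reference numbers are hypothetical thresholds, since the claim leaves their values (and the relation between the second and third reference numbers) open:

```python
def choose_effect_display(gift_count, first_ref=1, second_ref=10, third_ref=10):
    """Select a display mode for the virtual gift special effect from
    the number of gifts carried in the request."""
    if first_ref < gift_count < second_ref:
        return {"mode": "stacked", "copies": gift_count}  # overlapped effects
    if gift_count > third_ref:
        return {"mode": "single_with_text", "text": f"x{gift_count}"}
    return {"mode": "single"}
```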
19. The method of claim 15, wherein the object regions comprise face regions, the live interface comprises a live broadcast picture, and the determining a plurality of object regions included in the live interface of the live broadcast room comprises:
performing face recognition on the live broadcast picture to determine a plurality of face regions.
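The claim does not prescribe a particular detector; as a stand-in for whatever recognizer an implementation uses, OpenCV's stock Haar cascade can produce the face regions (shown purely for illustration):

```python
import cv2  # OpenCV

def detect_face_regions(frame_bgr):
    """Run face detection on one live-broadcast frame and return the
    face bounding boxes as (x, y, w, h) tuples."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```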
20. The method of claim 15, wherein after displaying the virtual gift special effect corresponding to the virtual gift in the target object region, the method further comprises:
sending the live broadcast picture with the added virtual gift special effect to a live broadcast server, wherein the live broadcast server is configured to publish the live broadcast picture in the live broadcast room.
21. A virtual gift giving apparatus, comprising:
a display module, configured to display a live interface of a live broadcast room, the live interface comprising a plurality of object regions;
a feature acquisition module, configured to acquire, in response to a selection operation on a target object region, a target object feature corresponding to the target object region;
a request initiating module, configured to initiate a virtual gift giving request in response to a virtual gift giving operation, wherein the virtual gift giving request carries the target object feature.
22. A virtual gift giving apparatus, comprising:
a request receiving module, configured to receive a virtual gift giving request, wherein the virtual gift giving request carries a target object feature;
a region determining module, configured to determine a plurality of object regions included in a live interface of a live broadcast room;
a feature matching module, configured to determine, among the object features corresponding to the plurality of object regions, an object feature that matches the target object feature, and to determine the object region corresponding to the matched object feature as a target object region;
a special effect display module, configured to display a virtual gift special effect corresponding to the virtual gift in the target object region.
23. A computer device comprising a processor and a memory, the memory storing at least one program code, the at least one program code being loaded and executed by the processor to implement the operations performed in the virtual gift giving method of any one of claims 1 to 14, or the operations performed in the virtual gift giving method of any one of claims 15 to 20.
24. A computer-readable storage medium having stored therein at least one program code, the at least one program code being loaded and executed by a processor to implement the operations performed in the virtual gift giving method of any one of claims 1 to 14, or the operations performed in the virtual gift giving method of any one of claims 15 to 20.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011403675.2A CN112565806B (en) | 2020-12-02 | 2020-12-02 | Virtual gift giving method, device, computer equipment and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011403675.2A CN112565806B (en) | 2020-12-02 | 2020-12-02 | Virtual gift giving method, device, computer equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112565806A true CN112565806A (en) | 2021-03-26 |
CN112565806B CN112565806B (en) | 2023-08-29 |
Family
ID=75048455
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011403675.2A Active CN112565806B (en) | 2020-12-02 | 2020-12-02 | Virtual gift giving method, device, computer equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112565806B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112933598A (en) * | 2021-04-06 | 2021-06-11 | 腾讯科技(深圳)有限公司 | Interaction method, device, equipment and storage medium based on virtual gift |
CN114268808A (en) * | 2021-12-29 | 2022-04-01 | 广州方硅信息技术有限公司 | Live broadcast interactive information pushing method, system, device, equipment and storage medium |
CN114449305A (en) * | 2022-01-29 | 2022-05-06 | 上海哔哩哔哩科技有限公司 | Gift animation playing method and device in live broadcast room |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295476A (en) * | 2015-05-29 | 2017-01-04 | 腾讯科技(深圳)有限公司 | Face key point localization method and device |
CN108391153A (en) * | 2018-01-29 | 2018-08-10 | 北京潘达互娱科技有限公司 | Virtual present display methods, device and electronic equipment |
CN109889858A (en) * | 2019-02-15 | 2019-06-14 | 广州酷狗计算机科技有限公司 | Information processing method, device and the computer readable storage medium of virtual objects |
CN110493630A (en) * | 2019-09-11 | 2019-11-22 | 广州华多网络科技有限公司 | The treating method and apparatus of virtual present special efficacy, live broadcast system |
WO2020021319A1 (en) * | 2018-07-27 | 2020-01-30 | Yogesh Chunilal Rathod | Augmented reality scanning of real world object or enter into geofence to display virtual objects and displaying real world activities in virtual world having corresponding real world geography |
CN111147877A (en) * | 2019-12-27 | 2020-05-12 | 广州华多网络科技有限公司 | Virtual gift presenting method, device, equipment and storage medium |
CN111246232A (en) * | 2020-01-17 | 2020-06-05 | 广州华多网络科技有限公司 | Live broadcast interaction method and device, electronic equipment and storage medium |
CN111444860A (en) * | 2020-03-30 | 2020-07-24 | 东华大学 | Expression recognition method and system |
Also Published As
Publication number | Publication date |
---|---|
CN112565806B (en) | 2023-08-29 |
Similar Documents
Publication | Title | Publication Date
---|---|---
CN108595239B (en) | Picture processing method, device, terminal and computer readable storage medium | |
CN108965922B (en) | Video cover generation method and device and storage medium | |
CN111464749B (en) | Method, device, equipment and storage medium for image synthesis | |
CN108710525A (en) | Map methods of exhibiting, device, equipment and storage medium in virtual scene | |
CN109862412B (en) | Method and device for video co-shooting and storage medium | |
CN112667835B (en) | Works processing method, device, electronic device and storage medium | |
CN109947338B (en) | Image switching display method and device, electronic equipment and storage medium | |
CN111753784A (en) | Video special effect processing method and device, terminal and storage medium | |
CN109302632B (en) | Method, device, terminal and storage medium for acquiring live video picture | |
CN110839174A (en) | Image processing method and device, computer equipment and storage medium | |
CN111083526B (en) | Video transition method and device, computer equipment and storage medium | |
CN109634688B (en) | Session interface display method, device, terminal and storage medium | |
CN109886208B (en) | Object detection method and device, computer equipment and storage medium | |
CN112565806B (en) | Virtual gift giving method, device, computer equipment and medium | |
CN110941375A (en) | Method and device for locally amplifying image and storage medium | |
CN112419143A (en) | Image processing method, special effect parameter setting method, device, equipment and medium | |
CN112381729B (en) | Image processing method, device, terminal and storage medium | |
CN112396076A (en) | License plate image generation method and device and computer storage medium | |
CN113592874B (en) | Image display method, device and computer equipment | |
CN111083513B (en) | Live broadcast picture processing method and device, terminal and computer readable storage medium | |
CN109660876B (en) | Method and device for displaying list | |
CN111586279B (en) | Method, device and equipment for determining shooting state and storage medium | |
CN112637624B (en) | Live stream processing method, device, equipment and storage medium | |
CN111369434B (en) | Method, device, equipment and storage medium for generating spliced video covers | |
CN109819308B (en) | Virtual resource acquisition method, device, terminal, server and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |