CN109035180A - Video broadcasting method, device, equipment and storage medium - Google Patents
Video broadcasting method, device, equipment and storage medium
- Publication number
- CN109035180A CN109035180A CN201811131394.9A CN201811131394A CN109035180A CN 109035180 A CN109035180 A CN 109035180A CN 201811131394 A CN201811131394 A CN 201811131394A CN 109035180 A CN109035180 A CN 109035180A
- Authority
- CN
- China
- Prior art keywords
- video
- user
- beautification
- preference
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T5/00—Image enhancement or restoration
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
          - G06T2207/10016—Video; Image sequence
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
        - H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
          - H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
            - H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
              - H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
            - H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
              - H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
This application discloses a video playing method, apparatus, device, and storage medium, belonging to the field of multimedia technology. The application provides a scheme that plays video in combination with a viewer user's beautification preference. A video is beautified according to the viewer user's beautification preference parameter and the beautified video is then played, ensuring that the beautification parameters of the video match the viewer user's beautification preference and thereby improving the accuracy of the beautification parameters. Beautifying the video according to these more accurate parameters improves the playing effect of the video and ensures that the played video matches the viewer user's tastes. Moreover, the same video can be beautified according to different beautification preference parameters on the terminals of different viewer users, fully accounting for each viewer's personal beautification preference, meeting viewer users' customization needs, and improving the flexibility of video playing.
Description
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a video playing method, apparatus, device, and storage medium.
Background
With the development of multimedia technology and the diversification of terminal functions, a terminal can play video in a variety of scenarios. For example, live video can be played in a live room through a live-streaming application, films and television dramas can be played through a video-on-demand application, and short videos can be played through a short-video application.
Currently, video can be beautified at the recording user's terminal. Taking a live-streaming scenario as an example, the recording user may be an anchor. During recording, the anchor can set beautification parameters on the terminal; the anchor's terminal then beautifies the video according to those parameters and sends the beautified video to a server. The server sends the beautified video to the terminals of viewer users, which receive the beautified video and play it.
However, the beautification parameters set by the recording user are of poor accuracy, so the playing effect of the video is poor.
Disclosure of Invention
The embodiment of the application provides a video playing method, a video playing device, video playing equipment and a storage medium, and can solve the problem of poor video playing effect in the related art. The technical scheme is as follows:
in a first aspect, a video playing method is provided, where the method includes:
acquiring beautification preference parameters of a first user;
when a first video recorded by a second user is received, beautifying the first video according to the beautifying preference parameter to obtain a second video, wherein the second user is any user except the first user;
and playing the second video.
Optionally, the acquiring of the beautification preference parameter includes:
acquiring beautification preference parameters according to image characteristics of at least one third video and historical watching records of the first user on the at least one third video; or,
inputting the user portrait data of the first user into a beautification preference prediction model and outputting the beautification preference parameters, wherein the beautification preference prediction model is used for predicting the beautification preference parameters of the user according to the user portrait data of the user; or,
acquiring beautification preference parameters according to the face preference information input by the first user; or,
acquiring the beautification preference parameter according to the video background preference information input by the first user; or,
the beautification parameters set by the second user are used as the beautification preference parameters; or,
receiving the beautification preference parameter input by the first user.
Optionally, the obtaining the beautification preference parameter according to the image feature of the at least one third video and the historical viewing record of the first user on the at least one third video includes:
acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the watching duration of the at least one third video; or,
and acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the interactive behavior data of the at least one third video.
Optionally, the training process of the beautification preference prediction model includes:
acquiring beautification preference parameters of at least one sample user according to image characteristics of the at least one sample video and historical watching records of the at least one sample user on the at least one sample video;
and training to obtain the beautifying preference prediction model based on the user portrait data of the at least one sample user and the beautifying preference parameters of the at least one sample user.
Optionally, the beautifying processing the first video according to the beautifying preference parameter to obtain a second video includes at least one of the following steps:
beautifying the background image of the first video according to the background beautifying preference parameter to obtain the second video;
and performing beauty treatment on the face image of the first video according to the beauty preference parameter to obtain the second video.
Optionally, after the playing of the second video, the method further includes:
and updating the beautification preference parameter according to the image characteristics of the second video and the watching record of the first user on the second video.
In a second aspect, a video playing apparatus is provided, the apparatus comprising:
the acquisition module is used for acquiring beautification preference parameters of a first user;
the beautifying module is used for beautifying the first video recorded by a second user according to the beautifying preference parameter to obtain a second video, wherein the second user is any user except the first user;
and the playing module is used for playing the second video.
Optionally, the acquiring of the beautification preference parameter includes:
acquiring beautification preference parameters according to image characteristics of at least one third video and historical watching records of the first user on the at least one third video; or,
inputting the user portrait data of the first user into a beautification preference prediction model and outputting the beautification preference parameters, wherein the beautification preference prediction model is used for predicting the beautification preference parameters of the user according to the user portrait data of the user; or,
acquiring beautification preference parameters according to the face preference information input by the first user; or,
acquiring the beautification preference parameter according to the video background preference information input by the first user; or,
the beautification parameters set by the second user are used as the beautification preference parameters; or,
receiving the beautification preference parameter input by the first user.
Optionally, the acquiring of the beautification preference parameter includes:
acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the watching duration of the at least one third video; or,
and acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the interactive behavior data of the at least one third video.
Optionally, the training process of the beautification preference prediction model includes:
acquiring beautification preference parameters of at least one sample user according to image characteristics of the at least one sample video and historical watching records of the at least one sample user on the at least one sample video;
and training to obtain the beautifying preference prediction model based on the user portrait data of the at least one sample user and the beautifying preference parameters of the at least one sample user.
Optionally, the beautification module comprises at least one of:
the background beautification submodule is used for beautifying the background image of the first video according to the background beautification preference parameter to obtain the second video;
and the beautifying sub-module is used for performing beautifying processing on the face image of the first video according to the beautifying preference parameter to obtain the second video.
Optionally, the apparatus further comprises:
and the updating module is used for updating the beautification preference parameter according to the image characteristics of the second video and the watching record of the first user on the second video.
In a third aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the video playing method according to the first aspect or any one of the optional manners of the first aspect.
In a fourth aspect, there is provided a computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the video playing method according to the first aspect or any one of the optional manners of the first aspect.
The method and apparatus provided by the embodiments of the application provide a scheme for playing video in combination with the beautification preferences of viewer users. A video is beautified according to a viewer user's beautification preference parameter and the beautified video is then played, so the beautification parameters applied to the video match the viewer user's beautification preference, which improves the accuracy of the beautification parameters. Beautifying the video according to these more accurate parameters improves the playing effect of the video and ensures that the played video matches the viewer user's tastes. Moreover, the same video can be beautified according to different beautification preference parameters on the terminals of different viewer users, so each viewer's personal beautification preference is fully taken into account, viewers' customization needs are met, and the flexibility of video playing is improved. In particular, when the method is applied to a live-streaming scenario, the terminals of the different viewer users in a live room can each beautify the live video according to the local user's beautification preference parameter, so the beautification effects seen by different viewers of the same live room can differ. The live video watched by each viewer thus fits that viewer's personal beautification preference, which helps keep every viewer in the live room watching and markedly improves the live room's viewer retention rate.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a diagram of an implementation environment architecture provided by an embodiment of the present application;
fig. 2 is a flowchart of a video playing method provided in an embodiment of the present application;
fig. 3 is a flowchart of a video playing method provided in an embodiment of the present application;
fig. 4 is a schematic logic architecture diagram of a video playing method according to an embodiment of the present application;
fig. 5 is a schematic logic architecture diagram of a video playing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a block diagram of an implementation environment in which the following method embodiments may be applied. The implementation environment includes a viewer user's terminal, a recorder's terminal, and at least one server. The viewer user's terminal may be provided as the first terminal in the embodiments described below. The viewer user's terminal, the recorder's terminal and the at least one server may be interconnected by a network. In the video playing process, the terminal of the recorder can record the video, the recorded video is sent to the server, the server can send the video to the terminal of the audience user, and the terminal of the audience user can receive the video and play the video.
Fig. 2 is a flowchart of a video playing method provided in an embodiment of the present application, and referring to fig. 2, the method includes:
201. and acquiring beautification preference parameters of the first user.
202. And when a first video recorded by a second user is received, beautifying the first video according to the beautifying preference parameter to obtain a second video, wherein the second user is any user except the first user.
203. And playing the second video.
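The three steps above can be sketched end to end as follows. This is a minimal illustrative sketch, not the patent's implementation: the function names, the preference-store shape, and the representation of a frame as a list of luminance values are all assumptions.

```python
# Hypothetical sketch of the three-step playback flow (steps 201-203).
# All names and data shapes are illustrative assumptions.

def acquire_beautification_preference(first_user_id, preference_store):
    """Step 201: look up the viewer's stored beautification preference.
    Falls back to neutral parameters when no preference is known yet."""
    return preference_store.get(first_user_id, {"brightness": 0.0, "face_thinning": 0.0})

def beautify(frame, params):
    """Step 202: apply the preference parameters to one video frame.
    Here a 'frame' is just a list of pixel luminance values in [0, 1]."""
    return [min(1.0, max(0.0, p + params["brightness"])) for p in frame]

def play(frames):
    """Step 203: stand-in for handing frames to a renderer."""
    return len(frames)  # number of frames "played"

store = {"alice": {"brightness": 0.2, "face_thinning": 0.5}}
params = acquire_beautification_preference("alice", store)
first_video = [[0.1, 0.9], [0.5, 0.95]]           # video recorded by the second user
second_video = [beautify(f, params) for f in first_video]
played = play(second_video)
```

Note that the whole flow runs on the viewer's terminal: the first video arrives unmodified, and the beautification happens locally against the local user's preference.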
The method provided by the embodiments of the application provides a scheme for playing video in combination with the beautification preferences of viewer users. A video is beautified according to a viewer user's beautification preference parameter and the beautified video is then played, so the beautification parameters applied to the video match the viewer user's beautification preference, which improves the accuracy of the beautification parameters. Beautifying the video according to these more accurate parameters improves the playing effect of the video and ensures that the played video matches the viewer user's tastes. Moreover, the same video can be beautified according to different beautification preference parameters on the terminals of different viewer users, so each viewer's personal beautification preference is fully taken into account, viewers' customization needs are met, and the flexibility of video playing is improved. In particular, when the method is applied to a live-streaming scenario, the terminals of the different viewer users in a live room can each beautify the live video according to the local user's beautification preference parameter, so the beautification effects seen by different viewers of the same live room can differ. The live video watched by each viewer thus fits that viewer's personal beautification preference, which helps keep every viewer in the live room watching and markedly improves the live room's viewer retention rate.
Optionally, the acquiring of the beautification preference parameter includes:
acquiring the beautification preference parameter according to the image characteristics of at least one third video and the historical watching record of the first user on the at least one third video; or,
inputting the user portrait data of the first user into a beautification preference prediction model and outputting the beautification preference parameters, wherein the beautification preference prediction model is used for predicting the beautification preference parameters of the user according to the user portrait data of the user; or,
acquiring beautification preference parameters according to the face preference information input by the first user; or,
acquiring the beautifying preference parameter according to the video background preference information input by the first user; or,
the beautification parameters set by the second user are used as the beautification preference parameters; or,
the beautification preference parameter input by the first user is received.
Optionally, the obtaining the beautification preference parameter according to the image feature of the at least one third video and the historical viewing record of the first user on the at least one third video includes:
acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the watching duration of the at least one third video; or,
and acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the interactive behavior data of the at least one third video.
Optionally, the training process of the beautification preference prediction model includes:
acquiring beautification preference parameters of at least one sample user according to image characteristics of the at least one sample video and historical watching records of the at least one sample user on the at least one sample video;
and training to obtain the beautifying preference prediction model based on the user portrait data of the at least one sample user and the beautifying preference parameters of the at least one sample user.
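The two training steps above can be sketched as follows. This is a hedged stand-in, not the patent's method: the duration-weighted label derivation and the trivial per-portrait-group "model" substitute for whatever feature extraction and learning algorithm a real implementation would use, and all names are illustrative.

```python
# Sketch of the training process: (1) derive each sample user's beautification
# preference from watch history, (2) fit a model from user-portrait data to
# those preferences. The "model" here is just a per-group mean.

def preference_from_history(video_features, watch_seconds):
    """Duration-weighted average of one scalar image feature per video."""
    total = sum(watch_seconds)
    return sum(f * w for f, w in zip(video_features, watch_seconds)) / total

def train_model(portrait_groups, preferences):
    """Average the derived preference within each portrait group."""
    buckets = {}
    for group, pref in zip(portrait_groups, preferences):
        buckets.setdefault(group, []).append(pref)
    return {g: sum(v) / len(v) for g, v in buckets.items()}

def predict(model, portrait_group):
    """Predict a beautification preference from user-portrait data."""
    return model[portrait_group]

# Two sample users: per-video whitening strength, and seconds watched per video.
prefs = [
    preference_from_history([0.2, 0.8], [30, 90]),   # sample user A
    preference_from_history([0.4, 0.6], [60, 60]),   # sample user B
]
model = train_model(["young", "young"], prefs)
predicted = predict(model, "young")
```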
Optionally, the beautifying processing the first video according to the beautifying preference parameter to obtain a second video includes at least one of the following steps:
beautifying the background image of the first video according to the background beautifying preference parameter to obtain a second video;
and performing beauty treatment on the face image of the first video according to the beauty preference parameter to obtain the second video.
Optionally, after the playing of the second video, the method further includes:
and updating the beautification preference parameter according to the image characteristics of the second video and the watching record of the first user on the second video.
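The text does not specify an update rule. One plausible sketch is an exponential-moving-average blend of the current preference toward the features of the just-watched second video, weighted by how much of the video was actually watched; both the rule and the parameter names are assumptions.

```python
# Illustrative (not patent-specified) preference update after playback:
# blend the stored preference toward the observed video's features, with a
# fully-watched video pulling harder than one abandoned early.

def update_preference(current, observed_features, watched_fraction, rate=0.3):
    """EMA-style update; `rate` caps how strongly one fully-watched
    video can move the stored preference."""
    alpha = rate * watched_fraction
    return {k: (1 - alpha) * current[k] + alpha * observed_features[k]
            for k in current}

pref = {"whitening": 0.5}
# The viewer watched the whole second video, whose whitening level was 0.9.
pref = update_preference(pref, {"whitening": 0.9}, watched_fraction=1.0)
```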
Fig. 3 is a flowchart of a video playing method according to an embodiment of the present application. The execution subject of this embodiment is a terminal. Referring to fig. 3, the method includes:
301. the terminal obtains beautifying preference parameters of the first user.
The first user may be any user who wants to view the video. For example, as applied to a live scene, the first user may be any audience user of a live room. As another example, applied to a scene in which a short video is played, the first user may be any user who wants to watch the short video. As another example, in a scenario where a movie is played, the first user may be any user who wants to watch the movie.
The terminal may be a terminal used by the first user. The terminal can be a mobile phone, a personal computer, a notebook computer, a wearable device and the like. Alternatively, the terminal may install a client of a video application, and may play a video through the client of the video application. Wherein the video application may comprise at least one of a client of a live application, a client of a short video application, and a client of a movie application.
The beautification preference parameter is used to indicate the first user's preference for how a video is beautified. In terms of type, the beautification preference parameter may include at least one of a background beautification preference parameter and a beauty preference parameter. In terms of composition, the beautification preference parameter may include at least one beautification attribute and at least one beautification parameter. After any video is beautified with the beautification preference parameter, the beautified video can be guaranteed to conform to the first user's personal beautification preference, meeting the first user's needs in terms of beautification. In an implementation, the beautification preference parameter may be an image processing parameter used to process at least one image frame of the video.
The background beautification preference parameter is used to indicate the first user's preference for beautification of a video's background image. For example, if a user prefers a live room with warm tones, the background beautification preference parameter may include warm tones. Optionally, the beautification attributes in the background beautification preference parameter may include at least one of the hue, brightness, contrast, exposure, white balance, and tone scale of the background image. Accordingly, the beautification parameters in the background beautification preference parameter may include at least one of a hue parameter, brightness parameter, contrast parameter, exposure parameter, white balance parameter, and tone scale parameter for the background image.
The beauty preference parameter is used to indicate the first user's preference for beautification of a video's face images. For example, if a user prefers face images with white skin and a sharp chin, the beauty preference parameter may include a high skin-brightening parameter and a high face-thinning parameter. Optionally, the beautification attributes in the beauty preference parameter may include at least one of the face shape, skin color, facial features, hair style, and makeup of the face image. Accordingly, the beauty parameters in the beauty preference parameter may include at least one of a face shape parameter, skin color parameter, facial features parameter, hair style parameter, and makeup parameter.
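One way the parameter structure described above could be modelled is a pair of nested records, one per parameter type. The attribute names below are drawn from the attributes listed in the text, but the layout itself is an illustrative assumption, not the patent's data format.

```python
from dataclasses import dataclass, field

# Hypothetical model of a beautification preference parameter: a background
# part and a beauty (face) part, each holding per-attribute strengths.

@dataclass
class BackgroundBeautification:
    hue: float = 0.0
    brightness: float = 0.0
    contrast: float = 0.0
    exposure: float = 0.0
    white_balance: float = 0.0
    tone_scale: float = 0.0

@dataclass
class BeautyPreference:
    face_shape: float = 0.0
    skin_color: float = 0.0
    facial_features: float = 0.0
    hair_style: float = 0.0
    makeup: float = 0.0

@dataclass
class BeautificationPreference:
    background: BackgroundBeautification = field(default_factory=BackgroundBeautification)
    beauty: BeautyPreference = field(default_factory=BeautyPreference)

pref = BeautificationPreference()
pref.beauty.skin_color = 0.8   # e.g. a viewer who prefers whiter skin tones
```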
Optionally, the acquiring manner of the beautification preference parameter may include any one or more of the following manners, i.e., one to seven.
Mode one: obtaining the beautification preference parameter according to the image features of at least one third video and the first user's historical viewing records of the at least one third video.
The third video is a video that the first user has viewed historically. For example, the third video may be a video of a live room that the first user has entered. The image features of the third video may include at least one of the beauty parameters of the third video, the background image features of the third video, and the face image features of the third video. The historical viewing record of the third video can be generated from the terminal's history of playing the third video.
The historical viewing record of the third video includes at least one of the viewing duration of the third video and interactive behavior data for the third video. Interactive behaviors may include at least one of chatting, sending gifts, sending bullet-screen comments, and commenting; the interactive behavior data may include at least one of the number of interactive behaviors, the frequency of interactive behaviors, and the monetary amount involved in interactive behaviors.
Alternatively, the first mode may include at least the following mode (1.1) and mode (1.2):
Mode (1.1): obtaining the beautification preference parameter according to the image features of the at least one third video and the viewing duration of the at least one third video.
Optionally, for the historical viewing record of any third video, the record may be parsed to obtain the start time point and end time point carried in it. The time difference between the start time point and the end time point can then be taken as the first user's viewing duration for that third video.
For example, when the method is applied to a live-streaming scenario, the third video may be a live video. The historical viewing record of the live video can be parsed to obtain the time point at which the first user entered the live room and the time point at which the first user left it; the difference between these two time points gives the first user's dwell time in the live room, which can be taken as the first user's viewing duration for that live room's video.
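A minimal sketch of turning one historical viewing record into a viewing duration, as described above. The record format (a dict with ISO-formatted enter/leave timestamps) is a hypothetical assumption.

```python
from datetime import datetime

# Hypothetical viewing record: when the viewer entered and left a live room.

def viewing_duration_seconds(record):
    """Parse one historical viewing record into seconds watched."""
    start = datetime.fromisoformat(record["enter_time"])
    end = datetime.fromisoformat(record["leave_time"])
    return (end - start).total_seconds()

record = {"enter_time": "2018-09-27T20:00:00", "leave_time": "2018-09-27T20:45:30"}
duration = viewing_duration_seconds(record)  # dwell time in the live room
```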
Optionally, the manner (1.1) may include any one or a combination of the following manners (1.1.1) to (1.1.4):
Mode (1.1.1): the terminal performs weighted summation on the image features of the at least one third video according to the viewing duration of the at least one third video to obtain the beautification preference parameter of the first user.
Specifically, the terminal may obtain a weight for each of the at least one third video according to its viewing duration, and perform weighted summation on the image features of the at least one third video according to those weights to obtain the beautification preference parameter of the first user. Illustratively, the beauty parameters of the at least one third video may be weighted and summed, and the result used as the beautification preference parameter of the first user; likewise, the facial image features of the at least one third video may be weighted and summed, and the result used as the beautification preference parameter of the first user.
Alternatively, the weight of each third video may be positively correlated with the viewing duration of the corresponding third video. For example, the viewing duration of the third video may be taken as the weight of the third video. For another example, a mapping relationship between the viewing duration and the weight may be established, and the mapping relationship between the viewing duration and the weight may be queried according to the viewing duration of the third video to obtain the weight mapped to the viewing duration of the third video.
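Mode (1.1.1) can be sketched minimally as follows, treating each video's image features as a numeric vector and using the viewing duration (in seconds) directly as the weight source; the feature dimensions and all numeric values are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch of mode (1.1.1): weights are normalized viewing
# durations, positively correlated with how long each video was watched.

def beautification_preference(features, durations):
    """Weighted sum of per-video image-feature vectors; weight of each
    video is its viewing duration divided by the total duration."""
    total = sum(durations)
    dim = len(features[0])
    pref = [0.0] * dim
    for feat, dur in zip(features, durations):
        w = dur / total  # weight positively correlated with duration
        for i in range(dim):
            pref[i] += w * feat[i]
    return pref

# Three watched videos; assumed features = [brightness, face-thinning,
# eye-enlarging] parameter values observed on each video.
features = [[0.8, 0.2, 0.5], [0.4, 0.6, 0.3], [0.6, 0.4, 0.4]]
durations = [1200, 300, 500]  # seconds watched per video
print(beautification_preference(features, durations))
```

The longest-watched video (1200 s) dominates the result, matching the positive correlation between viewing duration and weight described above.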
Mode (1.1.2): the terminal selects, from the at least one third video, a fourth video with the longest viewing duration, and takes the image features of the fourth video as the beautification preference parameter of the first user.
Illustratively, the at least one third video may be sorted by viewing duration. According to the sorting result, the fourth video with the longest viewing duration can be selected from the at least one third video. The fourth video is the video the first user watched for the longest time and can therefore be regarded as the video the first user likes best, so its image features can be used as the beautification preference parameter of the first user.
Mode (1.1.3): the terminal selects, from the at least one third video, at least one fifth video whose viewing duration exceeds a viewing duration threshold, and obtains the average of the image features of the at least one fifth video as the beautification preference parameter of the first user.
For each third video in at least one third video, the terminal may determine whether the watching duration of the third video exceeds a watching duration threshold according to the watching duration of the third video, and select the third video as a fifth video when the watching duration of the third video exceeds the watching duration threshold. For the selected at least one fifth video, an average value of the image features of the at least one fifth video may be obtained according to the image features of each fifth video, and the average value is used as the beautification preference parameter of the first user.
The viewing duration threshold may be set according to experience or service requirements, and the viewing duration threshold may be stored in the terminal in advance.
Mode (1.1.4): the terminal selects, from the at least one third video, at least one fifth video whose viewing duration exceeds the viewing duration threshold, and performs weighted summation on the image features of the at least one fifth video according to their viewing durations to obtain the beautification preference parameter of the first user.
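Modes (1.1.2) and (1.1.3) can be sketched as follows; the `videos` structure (feature vector paired with viewing duration), the threshold value and the feature numbers are invented for illustration:

```python
# Hedged sketch of modes (1.1.2)-(1.1.3). Each entry in `videos` is a
# hypothetical (image_feature_vector, viewing_duration_seconds) pair.

def longest_watched_feature(videos):
    # Mode (1.1.2): take the features of the single longest-watched video.
    feat, _ = max(videos, key=lambda v: v[1])
    return feat

def mean_feature_over_threshold(videos, threshold):
    # Mode (1.1.3): average the features of videos whose viewing
    # duration exceeds the viewing duration threshold.
    selected = [f for f, d in videos if d > threshold]
    dim = len(selected[0])
    return [sum(f[i] for f in selected) / len(selected) for i in range(dim)]

videos = [([0.8, 0.2], 1200), ([0.4, 0.6], 300), ([0.6, 0.4], 500)]
print(longest_watched_feature(videos))
print(mean_feature_over_threshold(videos, 400))
```

Mode (1.1.4) simply combines the two ideas: filter by the threshold first, then apply the duration-weighted summation of mode (1.1.1) to the surviving videos.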
Mode (1.2): the beautification preference parameter is obtained according to the image features of the at least one third video and the interactive behavior data of the at least one third video.
Optionally, the liveness of the at least one third video may be obtained from its interactive behavior data, where the liveness indicates how frequently interactive behavior is triggered on the corresponding third video. Optionally, for each of the at least one third video, the historical viewing record of the third video may be parsed to obtain the number of times the first user triggered interactive behavior on that video, and the liveness of the third video may be derived from that count.
For example, the number of times the first user triggers interactive behavior on the third video may be taken directly as the liveness of the third video. For another example, a correspondence between the number of interactive behaviors and the liveness may be established, and the correspondence may be queried according to the number of interactive behaviors to obtain the liveness of the third video.
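The count-to-liveness correspondence just described can be read as a small lookup table; the level boundaries and liveness values here are invented for the example:

```python
# Illustrative correspondence between interaction count and liveness.
# Each entry is (minimum interaction count, liveness score) - invented.
ACTIVITY_LEVELS = [(0, 0.1), (5, 0.5), (20, 1.0)]

def liveness(interaction_count):
    # Return the liveness of the highest level whose minimum count
    # the interaction count reaches.
    level = ACTIVITY_LEVELS[0][1]
    for min_count, act in ACTIVITY_LEVELS:
        if interaction_count >= min_count:
            level = act
    return level

print(liveness(3), liveness(12), liveness(40))
```

With the liveness in hand, modes (1.2.1) through (1.2.4) reuse the same weighted-sum, maximum-selection and threshold-filtering machinery as modes (1.1.1) through (1.1.4), with liveness in place of viewing duration.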
Optionally, the manner (1.2) may include at least any one or a combination of the following manners (1.2.1) to (1.2.4):
Mode (1.2.1): the terminal performs weighted summation on the image features of the at least one third video according to the liveness of the at least one third video to obtain the beautification preference parameter of the first user.
Specifically, the terminal may obtain the weight of the at least one third video according to the liveness of the at least one third video, and may perform weighted summation on the image features of the at least one third video according to the weight of the at least one third video to obtain the beautification preference parameter of the first user.
Optionally, the weight of each third video is positively correlated with the liveness of the corresponding third video. For example, the liveness of the third video may be taken as the weight of the third video. For another example, a mapping relationship between liveness and weight may be established, and the mapping relationship may be queried according to the liveness of the third video to obtain the weight mapped to that liveness.
Taking a live broadcast scene as an example, the more gifts the first user sends in a certain live broadcast room, the larger the weight of the third video of that live broadcast room, and the larger the proportion of its image features in the beautification preference parameter of the first user.
Mode (1.2.2): the terminal selects, from the at least one third video, a sixth video with the highest liveness, and takes the image features of the sixth video as the beautification preference parameter of the first user.
Illustratively, the at least one third video may be sorted by liveness. According to the sorting result, the sixth video with the highest liveness is selected from the at least one third video. The sixth video is the video on which the first user triggered interactive behavior most frequently and can be regarded as the first user's favorite video, so its image features can be used as the beautification preference parameter of the first user.
Mode (1.2.3): the terminal selects, from the at least one third video, at least one seventh video whose liveness exceeds a liveness threshold, and obtains the average of the image features of the at least one seventh video as the beautification preference parameter of the first user.
Mode (1.2.4): the terminal selects, from the at least one third video, at least one seventh video whose liveness exceeds the liveness threshold, and performs weighted summation on the image features of the at least one seventh video according to their liveness to obtain the beautification preference parameter of the first user.
Optionally, during the history playing process, the terminal may record the image characteristics of the third video and the history watching record of the third video. The mode of recording the image features of the third video may include any one or a combination of the following recording modes (1) to (3).
The recording mode (1) records the beauty parameters of the live broadcast room as the image characteristics of the third video corresponding to the live broadcast room.
The recording method (2) records the background image feature of the third video as the image feature of the third video. Taking the live broadcast service as an example, the hue, brightness, and the like of the picture in the live broadcast room can be recorded as the image features of the video corresponding to the live broadcast room.
And the recording mode (3) records the facial image characteristics of the third video as the image characteristics of the third video. Taking the live broadcast service as an example, the skin color, hair style, face shape, etc. of the anchor user can be recorded as the image characteristics of the video corresponding to the live broadcast room.
Optionally, referring to fig. 4, the first mode may be implemented by the modules in fig. 4. A series of beautification attributes and beautification parameters may be predefined. For each anchor user, the background image features of the anchor user's live broadcast room and the face image features of the anchor user may be recorded, along with the stay duration and liveness of each audience user in each live broadcast room. A user portrait of the audience user may then be sketched from these records to predict the beautification preference parameter of the audience user. The beautification module can beautify the live video of the live broadcast room according to the beautification preference parameter of the audience user and present the result to the audience user.
Mode two: input the user portrait data of the first user into a beautification preference prediction model, which outputs the beautification preference parameter.
The user portrait data may be attributes of the user, and may include age, gender, occupation, location, educational background, and the like.
The beautification preference prediction model is used for predicting beautification preference parameters of any user according to user portrait data of the user. Optionally, the beautification preference prediction model may be stored in the server, and the server may call the beautification preference prediction model, input the user portrait data of the first user into the beautification preference prediction model, obtain the beautification preference parameter of the first user output by the beautification preference prediction model, and send the beautification preference parameter to the terminal. Optionally, the beautification preference prediction model may also be pre-stored in the terminal, and the terminal may call the beautification preference prediction model, and input the user portrait data of the first user into the beautification preference prediction model to obtain the beautification preference parameter of the first user output by the beautification preference prediction model.
Alternatively, the beautification preference prediction model may be a relationship matrix indicating a relationship between user representation data of the user and beautification preference parameters of the user. For example, the relationship matrix may indicate that a first user having attribute a prefers live video with beauty parameter B.
Alternatively, each row in the relationship matrix may correspond to one kind of user portrait data, each column to a beautification attribute, and each element to a beautification parameter. Illustratively, after the terminal inputs the user portrait data of the first user into the relationship matrix, the row matching that user portrait data can be looked up, and the elements in that row taken as the beautification preference parameter of the first user.
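The row-lookup reading of the relationship matrix might be sketched as follows; the portrait keys, attribute names and parameter values are all hypothetical, chosen only to show rows, columns and elements in the roles described above:

```python
# Sketch of the relationship-matrix form of the prediction model:
# rows = user-portrait profiles, columns = beautification attributes,
# elements = preferred parameter values. All values are invented.

ATTRIBUTES = ["brightness", "face_thinning", "eye_enlarge"]
RELATION_MATRIX = {
    ("18-25", "F"): [0.8, 0.6, 0.7],
    ("26-35", "M"): [0.4, 0.2, 0.3],
}

def predict_preference(portrait):
    # Look up the row for this user portrait and pair each element
    # in the row with its beautification attribute (column).
    row = RELATION_MATRIX[portrait]
    return dict(zip(ATTRIBUTES, row))

print(predict_preference(("18-25", "F")))
```

The terminal-side or server-side lookup described in the text reduces to exactly this kind of keyed row access.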
Optionally, the training process of the beautification preference prediction model may include the following steps one to two:
Step one: obtain the beautification preference parameters of at least one sample user according to the image features of at least one sample video and the historical viewing records of the at least one sample user on the at least one sample video.
A large number of sample users may be counted in advance. For each sample user, at least one sample video watched by the sample user can be obtained, and the beautification preference parameter of the sample user is obtained according to the image features of the at least one sample video and the historical viewing records of the sample user on the at least one sample video. The specific steps for obtaining the beautification preference parameter of a sample user are the same as those described in the first mode, and are not repeated here.
Step two: train the beautification preference prediction model based on the user portrait data of the at least one sample user and the beautification preference parameters of the at least one sample user.
For example, an initial beautification preference prediction model may be obtained, and model training may be performed on it based on the user portrait data of the at least one sample user and the beautification preference parameters of the at least one sample user. During training, the parameters of the initial model may be continuously adjusted according to the deviation between the beautification preference parameters of the sample users and those output by the model. When the deviation falls below a preset threshold, training ends and the beautification preference prediction model is output.
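The adjust-until-the-deviation-drops-below-a-threshold loop described above could look like the following toy linear model. The learning rate, threshold, portrait encoding and samples are assumptions for illustration, not the patent's actual training procedure:

```python
# Toy training loop: a linear model mapping a portrait feature vector
# to one beautification parameter, adjusted until the largest deviation
# from the sample labels is below a preset threshold.

def train(samples, lr=0.1, threshold=1e-3, max_iters=10000):
    """samples: list of (portrait_vector, target_parameter) pairs."""
    dim = len(samples[0][0])
    weights = [0.0] * dim
    for _ in range(max_iters):
        max_dev = 0.0
        for x, target in samples:
            pred = sum(w * xi for w, xi in zip(weights, x))
            dev = pred - target
            max_dev = max(max_dev, abs(dev))
            # adjust model parameters against the deviation
            for i in range(dim):
                weights[i] -= lr * dev * x[i]
        if max_dev < threshold:  # deviation below preset threshold: stop
            break
    return weights

# Two hypothetical sample users with one-hot portrait encodings.
samples = [([1.0, 0.0], 0.8), ([0.0, 1.0], 0.3)]
w = train(samples)
print([round(v, 2) for v in w])
```

The learned weights converge toward each sample user's labeled beautification preference, which is the stopping behavior the training description calls for.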
Alternatively, referring to fig. 5, the second mode may be implemented by the modules in fig. 5. A series of beautification attributes and beautification preference parameters can be predefined, and for each anchor user, the background image features of the anchor user's live broadcast room and the face image features of the anchor user are recorded. The stay duration and liveness of the sample users in each live broadcast room can also be recorded, and the beautification preference parameters of a large number of sample users can be predicted from these records. The relationship matrix between beautification preference and user portrait data can then be trained from the beautification preference parameters and the user portrait data of the sample users. According to the relationship matrix, the beautification module inputs the acquired user portrait data of an audience user into the matrix and outputs the beautification preference parameter of that audience user, so as to beautify the live video of the live broadcast room.
Optionally, the beautification preference prediction model may be encapsulated as a mapping relationship between user portrait data and beautification preference parameters, the mapping relationship including at least one piece of user portrait data and the beautification preference parameter corresponding to each. The mapping relationship can then be queried according to the user portrait data of the first user to obtain the corresponding beautification preference parameter.
Using a live scene as an example, the beautification preference prediction model may be encapsulated as a mapping relationship between user portrait data for each viewer user and beautification preference parameters for each viewer user. The server can inquire the beautification preference parameter of each audience user according to the user portrait data of each audience user and send the corresponding beautification preference parameter to the terminal of each audience user. Of course, the mapping relationship between the user portrait data and the beautification preference parameter can also be directly sent to the terminal of the audience user, and the beautification preference parameter of the home terminal user can be inquired by the terminal of the audience user according to the user portrait data of the home terminal user.
Mode four: obtain the beautification preference parameter according to the face preference information input by the first user.
The face preference information is used to indicate the first user's preference for faces. The beautification preference parameter is used to beautify any video into a video matching the face preference information.
For example, the face preference information may include white skin, a pointed chin, and large eyes. Correspondingly, the beautification preference parameter may include a higher brightness parameter, a higher face-thinning parameter, and a large-eye parameter. Through the higher brightness, the face image in the video can be beautified into a face image with white skin; through the higher face-thinning parameter, into a face image with a pointed chin; and through the large-eye parameter, into a face image with large eyes. In this way, whenever the user watches any video on the terminal, the user sees a face image with white skin, a pointed chin, and large eyes, ensuring that video playback matches the user's face preference.
Optionally, a correspondence between the face preference information and the beautification preference parameter may be established, and the correspondence may be queried according to the face preference information input by the first user to obtain the corresponding beautification preference parameter. The correspondence may include at least one type of face preference information and the at least one corresponding beautification preference parameter. For example, the correspondence between the face preference information and the beautification preference parameter may be as shown in table 1 below.
TABLE 1
Face preference information | Beautification preference parameters |
White skin | High brightness |
Pointed chin | High face thinning parameter |
Honey skin | Low brightness |
Round face | Low face thinning parameter |
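Table 1 can be encoded directly as a correspondence lookup; the attribute identifiers (`brightness`, `face_thinning`) and the "high"/"low" labels are illustrative stand-ins for whatever concrete parameter values an implementation would use:

```python
# Direct encoding of the Table 1 correspondence between face preference
# information and beautification preference parameters (values invented).
PREFERENCE_TABLE = {
    "white skin": ("brightness", "high"),
    "pointed chin": ("face_thinning", "high"),
    "honey skin": ("brightness", "low"),
    "round face": ("face_thinning", "low"),
}

def beautification_params(face_preferences):
    # Query the correspondence for each piece of face preference info;
    # unknown preferences are simply skipped.
    return {PREFERENCE_TABLE[p][0]: PREFERENCE_TABLE[p][1]
            for p in face_preferences if p in PREFERENCE_TABLE}

print(beautification_params(["white skin", "pointed chin"]))
```

A viewer entering a questionnaire answer of "white skin" and "pointed chin" would thus map onto the high-brightness, high-face-thinning parameter set used in the example scenario below.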
In an exemplary scenario, assuming that a certain audience user prefers anchors with white skin, a pointed chin and large eyes, the beautification preference parameter of the audience user includes a higher brightness, a higher face-thinning parameter and a large-eye parameter. When the audience user enters any live broadcast room to watch a live broadcast, beauty processing can be performed on the anchor's face image so that an anchor with white skin, a pointed chin and large eyes is presented, satisfying the audience user's taste in anchors.
Regarding the manner of obtaining the face preference information of the first user, optionally, the terminal may display a face preference input interface. The first user can trigger an input operation on the face preference input interface to input face preference information. The terminal may receive the face preference information input on the face preference input interface according to the detected input operation. The face preference input interface can be provided as a questionnaire interface, and can also be provided as other interfaces with face preference information input, and the presentation form of the face preference input interface is not limited in this embodiment.
Optionally, receiving the face preference information input by the first user through the face preference input interface may be only one example, and may also receive the face preference information input by the first user through other manners, for example, receiving the face preference information input by the first user through voice, and the like.
Mode five: obtain the beautification preference parameter according to the video background preference information input by the first user.
The video background preference information is used to indicate a preference of the first user for the video background image. The beautification preference parameter is used for beautification processing of the video background image of any video so as to match the video background image of the video with the video background preference information.
For example, the video background preference information may include a warm tone, and the beautification preference parameter obtained from the warm tone may include a color balance parameter; through the color balance parameter, the video background image in the video can be beautified into a warm-toned background image.
As to the manner of obtaining the video context preference information of the first user, the terminal may optionally display a video context preference input interface. The first user can trigger an input operation on the video background preference input interface to input video background preference information. The terminal may receive video background preference information input on the video background preference input interface according to the detected input operation. The video background preference input interface may be provided as a questionnaire interface, or may be provided as other interfaces with input video background preference information.
Optionally, receiving the video background preference information input by the first user through the video background preference input interface may be only one example, and may also receive the video background preference information input by the first user through other manners, for example, receiving the video background preference information input by the first user through voice, and the like.
Mode six: obtain the beautification parameter set by the second user as the beautification preference parameter.
The second user is the recorder of the first video. For example, in a live scene, the second user may be an anchor user. As another example, in a short video playback scenario, the second user may be a shooting user of the short video. The second user can set initial beautification parameters for the first video, the terminal of the second user can send the first video and the beautification parameters to the server, and the server can receive the first video and the beautification parameters and send the first video and the beautification parameters to the terminal of the first user.
Mode seven: receive the beautification preference parameter input by the first user.
The beautification preference parameter can be directly input on the terminal by the first user himself. Optionally, the terminal may display a beautification preference parameter configuration interface. The first user can trigger an input operation on the beautifying preference parameter configuration interface to input beautifying preference parameters. The terminal may receive beautification preference parameters input on the beautification preference parameter configuration interface according to the detected input operation. The beautification preference parameter configuration interface can be provided as a questionnaire interface, and can also be provided as other interfaces with input of beautification preference parameters.
Optionally, the receiving of the beautification preference parameter input by the first user through the beautification preference parameter configuration interface may be only an example, and the beautification preference parameter input by the first user may also be received through other manners, for example, the beautification preference parameter input by the first user through voice is received, and the like.
Alternatively, the beautification preference parameter may be obtained differently depending on whether the first user is a new user or an existing user. Specifically, when the first user is an existing user, the above first mode may be performed; when the first user is a new user, the second and/or sixth mode may be performed.
The first point to be noted is that the execution subject of the first to seventh modes may be a terminal or a server, and the embodiment does not limit the device for determining the beautification preference parameter. If the beautifying preference parameters are obtained by the server through the first mode to the seventh mode, the server can send the beautifying preference parameters to the terminal, and the terminal can receive the beautifying preference parameters, so that the beautifying preference parameters are obtained.
The second point to be noted is that any of modes one to seven may be combined to obtain the beautification preference parameter. For example, when mode one and mode two are combined, the parameter obtained in mode one (from the image features of the at least one third video and the first user's historical viewing records of those videos) may be taken as a first candidate beautification preference parameter, and the parameter output by the beautification preference prediction model for the first user's portrait data may be taken as a second candidate beautification preference parameter. The final beautification preference parameter is then obtained from the two candidates, for example by weighted summation of the first and second candidate beautification preference parameters.
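The weighted combination of two candidate beautification preference parameters can be sketched as follows; the candidate vectors and the 0.5/0.5 combination weights are assumptions for the example:

```python
# Combining mode one and mode two: each candidate is a vector of
# beautification parameters; combination weights are illustrative.

def combine(candidate_a, candidate_b, w_a=0.5, w_b=0.5):
    # Weighted sum of the two candidate beautification preference vectors.
    return [w_a * a + w_b * b for a, b in zip(candidate_a, candidate_b)]

history_based = [0.8, 0.4]  # hypothetical result of mode one
model_based = [0.6, 0.2]    # hypothetical result of mode two
print(combine(history_based, model_based))
```

Different deployments could bias the weights toward the history-based candidate for existing users and toward the model-based candidate for users with little viewing history, consistent with the new-user/existing-user split noted above.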
302. The terminal receives the first video recorded by the second user.
The first video is a video to be played. In terms of content, the first video may include at least one of a live video, a short video and a movie video. In terms of data format, the first video may be at least one of stream data and a static file.
Regarding the process of receiving the first video, optionally, the terminal of the second user may record a video to obtain the first video, and send the first video to the server. The server may receive the first video, send the first video to the terminal of the first user, and the terminal of the first user may receive the first video.
Taking the application to a live scene as an example, the terminal of the anchor user can record a video to obtain a first video, and push the first video to the server in real time. The server may receive a first video of a terminal of an anchor user. The terminal of the first user may pull the first video from the server in real time, thereby receiving the first video.
Alternatively, the first video may be sent to the terminal of the first user by another device other than the server, for example, different terminals playing the first video may be connected based on a peer-to-peer (P2P) network, and the terminal may receive the first video sent by other terminals through the P2P network. Of course, the terminal of the second user may also directly send the first video to the terminal of the first user, and the terminal of the first user may receive the first video. The transmission mode for receiving the first video is not limited in this embodiment.
303. And the terminal beautifies the first video according to the beautification preference parameter to obtain a second video.
The second video is obtained after the first video is beautified according to the beautification preference parameter. For example, the second video may be a beautified processed live video. As another example, the second video may be a beautification-processed short video.
The manner of beautification processing of the first video may include any one or a combination of the following manners one and two.
Mode one: the terminal beautifies the background image of the first video according to the background beautification preference parameter to obtain the second video.
Taking the background beautification preference parameter as an example of the tone parameter, the terminal may adjust the tone of the background image of the first video according to the tone parameter to obtain the second video with the adjusted tone. Taking the background beautification preference parameter as an example of the brightness parameter, the terminal adjusts the brightness of the background image of the first video according to the brightness parameter to obtain the brightness-adjusted second video.
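A toy illustration of the brightness adjustment just described, with the background image reduced to a grid of gray levels; the scale-and-clamp scheme is an assumption standing in for a real image-processing pipeline:

```python
# Mode one sketch: adjust the brightness of a background image according
# to a brightness parameter. Pixels are 0-255 gray levels (invented data).

def adjust_brightness(image, brightness):
    """Scale each pixel by the brightness parameter, clamped to 255."""
    return [[min(255, round(px * brightness)) for px in row]
            for row in image]

frame_background = [[100, 150], [200, 250]]
print(adjust_brightness(frame_background, 1.2))
```

A hue (tone) adjustment would follow the same per-pixel pattern, operating on color channels instead of a single gray value.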
Mode two: the terminal performs beauty processing on the face image of the first video according to the facial beautification preference parameter to obtain the second video.
The manner of performing the beauty processing on the face image of the first video may include any one or a combination of the following manners (2.1) to (2.4).
And (2.1) the terminal performs buffing processing on the face image of the first video according to the buffing parameters to obtain a second video. The peeling parameter can be used for adjusting the definition of the face image, for example, the peeling parameter can be used for adjusting the definition of the face image from high to low, and the effect of beautifying the face image by peeling is achieved.
And (2.2) the terminal performs whitening processing on the face image of the first video according to the whitening parameters to obtain a second video. The whitening parameters can be used for adjusting the brightness of the face image, for example, the whitening parameters can be used for adjusting the brightness of the face image from dark to bright, so that the face beautifying effect of whitening the face image is achieved.
And (2.3) the terminal carries out filter processing on the face image of the first video according to the filter parameters to obtain a second video. Wherein the filter parameters may include at least one filter.
And (2.4) the terminal performs shape-beautifying processing on the face image of the first video according to the shape-beautifying parameter to obtain a second video. The shape-beautifying parameter is used for adjusting the shape of the facial features of the face image; for example, it may include at least one of a face-thinning parameter, an eye-enlarging parameter and a nose-slimming parameter.
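Modes (2.1) and (2.2) can be combined into one small pipeline as sketched below. This is an assumption-laden toy: skin-smoothing is approximated by blending toward a 3×3 box blur (lowering sharpness), and whitening by a brightness gain; all names are hypothetical, and real implementations would use edge-preserving filters on detected face regions.

```python
import numpy as np

def box_blur3(img):
    """3x3 mean blur with edge padding (a stand-in for skin-smoothing)."""
    p = np.pad(img.astype(np.float32), ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w = img.shape[:2]
    acc = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return acc / 9.0

def beautify_face(face, smooth_strength=0.5, whiten_gain=1.0):
    """Apply skin-smoothing and whitening preference parameters.

    smooth_strength in [0, 1] blends toward the blurred image (lower sharpness);
    whiten_gain > 1 raises the brightness of the face image.
    """
    smoothed = (1 - smooth_strength) * face.astype(np.float32) \
        + smooth_strength * box_blur3(face)
    whitened = smoothed * whiten_gain
    return np.clip(whitened, 0, 255).astype(np.uint8)

# A flat 100-valued "face" is unchanged by blurring, then brightened by 20%.
face = np.full((4, 4, 3), 100, dtype=np.uint8)
result = beautify_face(face, smooth_strength=0.5, whiten_gain=1.2)
```

Filter processing (2.3) and shape beautification (2.4) would slot in as further stages of the same pipeline (color lookup tables and face-landmark warping respectively), which are beyond this sketch.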
304. And the terminal plays the second video.
Optionally, when the method is applied to a live broadcast scene, the first video may be a video stream: each image frame of the video stream may be beautified in sequence, and the beautified image frames may be displayed in the order in which they appear in the video, thereby achieving the effect of playing the beautified second video.
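The per-frame flow just described reduces to a simple loop over the stream. In this sketch, `frame_source`, `beautify` and `display` are hypothetical stand-ins for the live stream, the preference-driven beautification step, and the renderer.

```python
def play_beautified_stream(frame_source, beautify, display):
    """Beautify each frame of a video stream in arrival order, then display it.

    frame_source: iterable yielding image frames (e.g., a live video stream).
    beautify: function applying the viewer's beautification preference parameters.
    display: sink that renders one beautified frame.
    """
    for frame in frame_source:       # frames are processed in stream order
        display(beautify(frame))     # beautify, then show immediately

# Example with toy "frames" (integers) and a doubling "beautification".
shown = []
play_beautified_stream([1, 2, 3], beautify=lambda f: f * 2, display=shown.append)
```

Because each frame is beautified and displayed before the next is consumed, the viewer only ever sees beautified frames, matching the behavior described for the live broadcast scene.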
In this embodiment, the second video is obtained by beautifying the first video according to the beautification parameters corresponding to the beautification preference parameters of the first user, so that when the terminal plays the second video, the second video is ensured to conform to the beautification preference of the first user.
Illustratively, the terminal can display a live broadcast room interface and play, on that interface, a live broadcast video beautified according to the beautification preference parameters of the first user. The live broadcast video presented to the first user therefore caters to the first user's personal preferences, which helps attract the first user to stay in the live broadcast room.
Furthermore, for any live broadcast room, beautifying preference parameters of different users can be different, and when different users enter the live broadcast room, beautifying effects of watched live broadcast videos can be different, so that the user-defined requirements of all users are met. In an exemplary scene, a user A prefers a higher whitening parameter, a user B prefers a higher face thinning parameter, and when the user A enters a live broadcast room of a anchor C to watch a live broadcast video, a terminal of the user A takes the higher whitening parameter as a beautifying parameter to whiten a face image of the anchor C; when the user B enters the live broadcast room of the anchor C to watch the live broadcast video, the terminal of the user B takes the higher face thinning parameter as the beautifying parameter to perform face thinning processing on the face image of the anchor C.
305. And the terminal updates the beautification preference parameters of the first user.
Optionally, the terminal may update the beautification preference parameter of the first user according to the video playing process, so as to ensure that the beautification preference parameter of the first user remains up to date and changes along with changes in the beautification preference of the first user. Specifically, the process of updating the beautification preference parameter of the first user may include the following steps one and two.
Step one, generating a watching record of the first user to the second video.
During the playing of the second video, the terminal may record the watching duration and the interactive behavior of the first user on the second video, and generate the watching record of the first user on the second video from that watching duration and interactive behavior record.
And step two, updating the beautification preference parameter of the first user according to the image characteristics of the second video and the watching record of the first user on the second video.
Optionally, the terminal may re-acquire the beautification preference parameter of the first user according to the image features of the second video, the viewing record of the first user on the second video, the image features of the at least one third video, and the historical viewing record of the first user on the at least one third video. The terminal can then replace the previously stored beautification preference parameter of the first user with the re-acquired one, thereby updating the beautification preference parameter of the first user.
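One plausible realization of this re-acquisition step — an assumption, since the patent does not fix a concrete algorithm — is to weight the beautification-related image features of each watched video by watch duration plus an interaction bonus, and take the weighted mean as the new preference vector. All names below are hypothetical.

```python
import numpy as np

def update_preference(image_features, watch_durations, interaction_counts,
                      interaction_weight=30.0):
    """Re-estimate a user's beautification preference parameters.

    image_features: (n_videos, n_params) array of beautification-related
        features of watched videos (e.g., brightness level, smoothing level).
    watch_durations: seconds watched per video.
    interaction_counts: likes/comments/gifts per video.
    Videos watched longer, or interacted with more, contribute more weight.
    """
    w = np.asarray(watch_durations, dtype=np.float64) \
        + interaction_weight * np.asarray(interaction_counts, dtype=np.float64)
    w /= w.sum()  # normalize weights to sum to 1
    return w @ np.asarray(image_features, dtype=np.float64)

# Two watched videos: the second was watched longer and interacted with,
# so the estimate leans toward its feature values.
features = [[0.2, 0.8], [0.6, 0.4]]
pref = update_preference(features, watch_durations=[10, 30], interaction_counts=[0, 2])
```

Running this update after every playback (or every live-room visit, as the next paragraph suggests) keeps the stored preference tracking the user's recent behavior.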
Optionally, the terminal updating the beautification preference parameter of the first user is only an example; alternatively, the server may update the beautification preference parameter of the first user and send the updated parameter to the terminal. Optionally, the updating of the beautification preference parameter may be performed continuously. For example, each time a video is played, the beautification preference parameter of the first user may be updated according to the image features of the video played this time and the viewing record of the first user on that video. Taking a live broadcast scene as an example, the beautification preference parameter of the first user may be updated each time the first user enters a live broadcast room to watch a live broadcast.
Optionally, when the terminal of the second user uploads the video, it may also perform beautification processing on the video and send the beautified video to the server, and the server may forward the beautified video to the terminal of the first user. In this case, the first video received by the terminal of the first user is a video already beautified by the terminal of the second user; after beautifying the first video according to the beautification preference parameter of the first user, the terminal of the first user obtains a twice-beautified video and plays it, so that the video presented to the first user has been beautified by both the recording end and the playing end. This embodiment does not limit whether the terminal of the second user beautifies the video.
Optionally, when receiving the video sent by the terminal of the second user, the server may also perform beautification processing on the video and send the beautified video to the terminal of the first user. In this case, the first video received by the terminal of the first user is a video already beautified by the server; after beautifying the first video according to the beautification parameter corresponding to the beautification preference parameter of the first user, the terminal of the first user obtains a twice-beautified video and plays it, so that the video presented to the first user has been beautified by both the server and the playing end. This embodiment does not limit whether the server performs beautification processing on the video.
The method provided by the embodiment of the application provides a scheme for playing a video in combination with the beautification preference of an audience user. The video is beautified according to the beautification preference parameters of the audience user, and the beautified video is then played, so that the beautification parameters of the video fit the beautification preference of the audience user. This improves the accuracy of the beautification parameters, and beautifying the video with more accurate parameters improves the playing effect and ensures that the played video matches the preferences of the audience user. Moreover, the same video can be beautified on the terminals of different audience users according to different beautification preference parameters, so that the individual beautification preference of each audience user is fully considered, the custom requirements of audience users are met, and the flexibility of video playing is improved. In particular, when the method is applied to a live broadcast scene, the terminals of different audience users entering a live broadcast room can each beautify the live broadcast video according to the beautification preference parameters of the local user, so that the beautification effects of the live broadcast videos watched by different audience users in the same live broadcast room can differ. The live broadcast video watched by each audience user thus fits that user's personal beautification preference, attracting each audience user to stay in the live broadcast room and watch the live broadcast, which greatly improves the retention rate of the live broadcast room.
Fig. 6 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present application. Referring to fig. 6, the apparatus includes:
an obtaining module 601, configured to obtain a beautification preference parameter of a first user;
a beautification module 602, configured to, when a first video recorded by a second user is received, beautify the first video according to the beautification preference parameter to obtain a second video, where the second user is any user other than the first user;
a playing module 603, configured to play the second video.
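The three modules of fig. 6 can be wired together as in the following skeleton. The module names mirror the description; the callables passed in are hypothetical stand-ins for the real obtaining, beautification and playing logic.

```python
class VideoPlayingApparatus:
    """Skeleton of the apparatus in fig. 6: obtain -> beautify -> play."""

    def __init__(self, obtaining_module, beautification_module, playing_module):
        self.obtain = obtaining_module          # returns a user's preference params
        self.beautify = beautification_module   # (video, params) -> beautified video
        self.play = playing_module              # renders the beautified video

    def on_video_received(self, first_video, first_user):
        params = self.obtain(first_user)                   # module 601
        second_video = self.beautify(first_video, params)  # module 602
        self.play(second_video)                            # module 603
        return second_video

# Toy wiring: string "videos" and a list standing in for the display.
played = []
app = VideoPlayingApparatus(
    obtaining_module=lambda user: {"whiten_gain": 1.2},
    beautification_module=lambda video, params: f"{video}+beautified",
    playing_module=played.append,
)
result = app.on_video_received("raw_stream", first_user="user_a")
```

Keeping the three responsibilities behind separate callables matches the later remark that the function distribution may be completed by different functional modules as needed.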
The device provided by the embodiment of the application provides a scheme for playing a video in combination with the beautification preference of an audience user. The video is beautified according to the beautification preference parameters of the audience user, and the beautified video is then played, so that the beautification parameters of the video fit the beautification preference of the audience user. This improves the accuracy of the beautification parameters, and beautifying the video with more accurate parameters improves the playing effect and ensures that the played video matches the preferences of the audience user. Moreover, the same video can be beautified on the terminals of different audience users according to different beautification preference parameters, so that the individual beautification preference of each audience user is fully considered, the custom requirements of audience users are met, and the flexibility of video playing is improved. In particular, when the device is applied to a live broadcast scene, the terminals of different audience users entering a live broadcast room can each beautify the live broadcast video according to the beautification preference parameters of the local user, so that the beautification effects of the live broadcast videos watched by different audience users in the same live broadcast room can differ. The live broadcast video watched by each audience user thus fits that user's personal beautification preference, attracting each audience user to stay in the live broadcast room and watch the live broadcast, which greatly improves the retention rate of the live broadcast room.
Optionally, the acquiring of the beautification preference parameter includes:
acquiring the beautification preference parameter according to the image characteristics of at least one third video and the historical watching record of the first user on the at least one third video; or,
inputting the user portrait data of the first user into a beautification preference prediction model and outputting the beautification preference parameters, wherein the beautification preference prediction model is used for predicting the beautification preference parameters of the user according to the user portrait data of the user; or,
acquiring beautification preference parameters according to the face preference information input by the first user; or,
acquiring the beautifying preference parameter according to the video background preference information input by the first user; or,
the beautification parameters set by the second user are used as the beautification preference parameters; or,
the beautification preference parameter input by the first user is received.
Optionally, the acquiring of the beautification preference parameter includes:
acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the watching duration of the at least one third video; or,
and acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the interactive behavior data of the at least one third video.
Optionally, the training process of the beautification preference prediction model includes:
acquiring beautification preference parameters of at least one sample user according to image characteristics of at least one sample video and historical watching records of the at least one sample user on the at least one sample video;
and training to obtain the beautifying preference prediction model based on the user portrait data of the at least one sample user and the beautifying preference parameters of the at least one sample user.
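The training step above can be sketched as follows. The patent does not specify the model family, so a linear least-squares map from user-portrait features to preference parameters is used here purely as an assumption; any regressor could take its place.

```python
import numpy as np

def train_preference_model(portraits, preferences):
    """Fit a linear model: user-portrait features -> beautification preferences.

    portraits: (n_users, n_features) user portrait data of the sample users.
    preferences: (n_users, n_params) beautification preference parameters,
        e.g., those derived from each sample user's historical viewing records.
    Returns a weight matrix that includes a bias row.
    """
    X = np.hstack([portraits, np.ones((len(portraits), 1))])  # append bias term
    W, *_ = np.linalg.lstsq(X, preferences, rcond=None)
    return W

def predict_preference(W, portrait):
    """Output the predicted beautification preference parameters for one user."""
    return np.append(portrait, 1.0) @ W

# Toy data where preference = 2 * feature + 1 is recovered exactly.
portraits = np.array([[0.0], [1.0], [2.0]])
preferences = np.array([[1.0], [3.0], [5.0]])
W = train_preference_model(portraits, preferences)
```

At inference time, the terminal or server would feed the first user's portrait data into `predict_preference` to obtain the beautification preference parameters, as in the second acquisition option listed above.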
Optionally, the beautification module 602 includes at least one of:
the background beautification submodule is used for beautifying the background image of the first video according to the background beautification preference parameter to obtain the second video;
and the beauty sub-module is used for performing beauty treatment on the face image of the first video according to the beauty preference parameter to obtain the second video.
Optionally, the apparatus further comprises:
and the updating module is used for updating the beautification preference parameter according to the image characteristics of the second video and the watching record of the first user on the second video.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the video playing apparatus provided in the foregoing embodiment, when playing a video, only the division of the functional modules is illustrated, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the terminal is divided into different functional modules to complete all or part of the functions described above. In addition, the video playing apparatus and the video playing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 7 shows a block diagram of a terminal 700 according to an exemplary embodiment of the present application. The terminal 700 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so on.
In general, terminal 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement a video playback method provided by method embodiments herein.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, touch screen display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over the surface of the display screen 705. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 705 may be one, providing the front panel of the terminal 700; in other embodiments, the display 705 can be at least two, respectively disposed on different surfaces of the terminal 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 700. Even more, the display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display 705 may be made of LCD (liquid crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic position of the terminal 700 to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 709 is provided to supply power to various components of terminal 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the touch screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the terminal 700 by the user. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of terminal 700 and/or an underlying layer of touch display 705. When the pressure sensor 713 is disposed on a side frame of the terminal 700, a user's grip signal on the terminal 700 may be detected, and the processor 701 performs right-left hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the touch display 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal 700. When a physical button or a vendor Logo is provided on the terminal 700, the fingerprint sensor 714 may be integrated with the physical button or the vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front surface of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually decreases, the processor 701 controls the touch display 705 to switch from the bright-screen state to the screen-off state; when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually increases, the processor 701 controls the touch display 705 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 is not intended to be limiting of terminal 700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor of a terminal to perform the video playing method in the above embodiments is also provided. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and so on.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (10)
1. A video playback method, the method comprising:
acquiring beautification preference parameters of a first user;
when a first video recorded by a second user is received, beautifying the first video according to the beautifying preference parameter to obtain a second video, wherein the second user is any user except the first user;
and playing the second video.
2. The method of claim 1, wherein the obtaining of the beautification preference parameter comprises:
acquiring beautification preference parameters according to image characteristics of at least one third video and historical watching records of the first user on the at least one third video; or,
inputting the user portrait data of the first user into a beautification preference prediction model and outputting the beautification preference parameters, wherein the beautification preference prediction model is used for predicting the beautification preference parameters of the user according to the user portrait data of the user; or,
acquiring beautification preference parameters according to the face preference information input by the first user; or,
acquiring the beautification preference parameter according to the video background preference information input by the first user; or,
the beautification parameters set by the second user are used as the beautification preference parameters; or,
receiving the beautification preference parameter input by the first user.
3. The method of claim 2, wherein obtaining the beautification preference parameter according to the image characteristics of the at least one third video and the historical viewing record of the at least one third video by the first user comprises:
acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the watching duration of the at least one third video; or,
and acquiring the beautification preference parameter according to the image characteristics of the at least one third video and the interactive behavior data of the at least one third video.
4. The method of claim 2, wherein the training process of the beautification preference prediction model comprises:
acquiring beautification preference parameters of at least one sample user according to image characteristics of at least one sample video and historical watching records of the at least one sample user on the at least one sample video;
and training to obtain the beautifying preference prediction model based on the user portrait data of the at least one sample user and the beautifying preference parameters of the at least one sample user.
5. The method of claim 1, wherein beautifying the first video according to the beautification preference parameter to obtain the second video comprises at least one of:
beautifying a background image of the first video according to a background beautification preference parameter to obtain the second video; and
performing facial beautification on a face image of the first video according to a facial beauty preference parameter to obtain the second video.
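Claim 5 applies the two kinds of preference parameters to different regions of each frame. A minimal per-pixel sketch, assuming a face mask from a separate face-detection step and two illustrative operations (whitening for the face region, brightening for the background); real implementations would use smoothing, filtering, and GPU shaders:

```python
# Hypothetical sketch of claim 5: apply stored preference parameters to a
# decoded RGB frame before playback. 'whitening' blends face pixels toward
# white by the facial beauty preference parameter; 'bg_brightness' scales
# background pixels by the background beautification preference parameter.
# The mask, parameter names, and operations are illustrative assumptions.

def beautify_frame(frame, face_mask, whitening=0.3, bg_brightness=1.1):
    out = []
    for row, mrow in zip(frame, face_mask):
        new_row = []
        for (r, g, b), is_face in zip(row, mrow):
            if is_face:
                # blend toward white (255) by the whitening strength
                px = tuple(int(c + (255 - c) * whitening) for c in (r, g, b))
            else:
                # brighten the background region, clamped to 255
                px = tuple(min(255, int(c * bg_brightness)) for c in (r, g, b))
            new_row.append(px)
        out.append(new_row)
    return out

# One-row frame: first pixel is face, second is background.
out = beautify_frame([[(100, 100, 100), (200, 0, 0)]], [[True, False]],
                     whitening=0.5, bg_brightness=1.1)
```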
6. The method of claim 1, wherein after playing the second video, the method further comprises:
updating the beautification preference parameter according to the image characteristics of the second video and the viewing record of the first user for the second video.
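The update in claim 6 can be read as an online refinement: each playback nudges the stored preference toward the characteristics of the video just watched, weighted by how much of it the user viewed. A minimal exponential-moving-average sketch (the update rule and rates are assumptions, not specified by the patent):

```python
# Hypothetical sketch of claim 6: after playback, move the stored preference
# vector toward the watched video's beautification characteristics, scaled
# by watch ratio so a quickly abandoned video barely shifts the estimate.

def update_preference(current, video_features, watch_ratio, base_rate=0.2):
    rate = base_rate * watch_ratio
    return [(1 - rate) * c + rate * f for c, f in zip(current, video_features)]

# Full watch of a video with features [1.0, 0.0] pulls a [0.5, 0.5] prior
# toward those features at the base rate.
updated = update_preference([0.5, 0.5], [1.0, 0.0], watch_ratio=1.0)
```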
7. A video playback apparatus, comprising:
an acquisition module, configured to acquire a beautification preference parameter of a first user;
a beautification module, configured to beautify a first video recorded by a second user according to the beautification preference parameter to obtain a second video, wherein the second user is any user other than the first user; and
a playing module, configured to play the second video.
8. The apparatus of claim 7, wherein the process of acquiring the beautification preference parameter comprises:
acquiring the beautification preference parameter according to image characteristics of at least one third video and a historical viewing record of the first user for the at least one third video; or
inputting user portrait data of the first user into a beautification preference prediction model to output the beautification preference parameter, wherein the beautification preference prediction model is used for predicting a beautification preference parameter of a user according to the user portrait data of that user; or
acquiring the beautification preference parameter according to face preference information input by the first user; or
acquiring the beautification preference parameter according to video background preference information input by the first user; or
using beautification parameters set by the second user as the beautification preference parameter; or
receiving the beautification preference parameter input by the first user.
9. A computer device, comprising a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to perform the operations performed by the video playback method of any one of claims 1 to 6.
10. A computer-readable storage medium storing at least one instruction that is loaded and executed by a processor to perform the operations performed by the video playback method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811131394.9A CN109035180A (en) | 2018-09-27 | 2018-09-27 | Video broadcasting method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109035180A true CN109035180A (en) | 2018-12-18 |
Family
ID=64620640
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811131394.9A Pending CN109035180A (en) | 2018-09-27 | 2018-09-27 | Video broadcasting method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109035180A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110072116A (en) * | 2019-05-06 | 2019-07-30 | 广州虎牙信息科技有限公司 | Virtual newscaster's recommended method, device and direct broadcast server |
CN110933468A (en) * | 2019-10-17 | 2020-03-27 | 宇龙计算机通信科技(深圳)有限公司 | Playing method, playing device, electronic equipment and medium |
CN112287260A (en) * | 2020-10-20 | 2021-01-29 | 维沃移动通信有限公司 | Content output method and device and electronic equipment |
CN112598605A (en) * | 2021-03-08 | 2021-04-02 | 江苏龙虎网信息科技股份有限公司 | Photo cloud transmission live-broadcast picture-repairing system based on face recognition |
CN112785488A (en) * | 2019-11-11 | 2021-05-11 | 宇龙计算机通信科技(深圳)有限公司 | Image processing method and device, storage medium and terminal |
CN112883211A (en) * | 2021-02-10 | 2021-06-01 | 维沃移动通信有限公司 | File sharing method and device, electronic equipment and medium |
CN113542873A (en) * | 2021-09-15 | 2021-10-22 | 杭州网易云音乐科技有限公司 | Data processing method and device, storage medium and electronic equipment |
WO2024051535A1 (en) * | 2022-09-06 | 2024-03-14 | 北京字跳网络技术有限公司 | Method and apparatus for processing live-streaming image frame, and device, readable storage medium and product |
WO2024183694A1 (en) * | 2023-03-07 | 2024-09-12 | 北京字跳网络技术有限公司 | Image processing method and apparatus, and device, computer-readable storage medium and product |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140214489A1 (en) * | 2013-01-30 | 2014-07-31 | SocialGlimpz, Inc. | Methods and systems for facilitating visual feedback and analysis |
CN107347082A (en) * | 2016-05-04 | 2017-11-14 | 阿里巴巴集团控股有限公司 | The implementation method and device of video effect |
CN107371057A (en) * | 2017-06-16 | 2017-11-21 | 武汉斗鱼网络科技有限公司 | A kind of method and apparatus that U.S. face effect is set |
CN108040285A (en) * | 2017-11-15 | 2018-05-15 | 上海掌门科技有限公司 | Net cast picture adjusting method, computer equipment and storage medium |
CN108470362A (en) * | 2018-01-29 | 2018-08-31 | 北京奇虎科技有限公司 | A kind of method and apparatus for realizing video toning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109035180A (en) | Video broadcasting method, device, equipment and storage medium | |
CN109167950B (en) | Video recording method, video playing method, device, equipment and storage medium | |
CN109302538B (en) | Music playing method, device, terminal and storage medium | |
CN109600678B (en) | Information display method, device and system, server, terminal and storage medium | |
CN110865754B (en) | Information display method and device and terminal | |
CN109729372B (en) | Live broadcast room switching method, device, terminal, server and storage medium | |
CN108401124B (en) | Video recording method and device | |
CN109040297B (en) | User portrait generation method and device | |
CN111079012A (en) | Live broadcast room recommendation method and device, storage medium and terminal | |
CN110572711B (en) | Video cover generation method and device, computer equipment and storage medium | |
CN112533017B (en) | Live broadcast method, device, terminal and storage medium | |
CN110533585B (en) | Image face changing method, device, system, equipment and storage medium | |
CN108965757B (en) | Video recording method, device, terminal and storage medium | |
CN111355974A (en) | Method, apparatus, system, device and storage medium for virtual gift giving processing | |
CN110163066B (en) | Multimedia data recommendation method, device and storage medium | |
CN111445901B (en) | Audio data acquisition method and device, electronic equipment and storage medium | |
CN111586431B (en) | Method, device and equipment for live broadcast processing and storage medium | |
CN111836069A (en) | Virtual gift presenting method, device, terminal, server and storage medium | |
CN111432245B (en) | Multimedia information playing control method, device, equipment and storage medium | |
CN110418152B (en) | Method and device for carrying out live broadcast prompt | |
CN108848394A (en) | Net cast method, apparatus, terminal and storage medium | |
CN113395566B (en) | Video playing method and device, electronic equipment and computer readable storage medium | |
CN111787407B (en) | Interactive video playing method and device, computer equipment and storage medium | |
CN110808021B (en) | Audio playing method, device, terminal and storage medium | |
CN111276122A (en) | Audio generation method and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181218 |