
CN114519112B - Multimedia object prediction method, device, equipment, medium and program product - Google Patents

Multimedia object prediction method, device, equipment, medium and program product

Info

Publication number
CN114519112B
CN114519112B (application CN202210107935.4A)
Authority
CN
China
Prior art keywords
user behavior
data
multimedia object
sub
target multimedia
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210107935.4A
Other languages
Chinese (zh)
Other versions
CN114519112A
Inventor
盛心怡
宛言
高梓尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210107935.4A priority Critical patent/CN114519112B/en
Publication of CN114519112A publication Critical patent/CN114519112A/en
Application granted granted Critical
Publication of CN114519112B publication Critical patent/CN114519112B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/435Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Tourism & Hospitality (AREA)
  • Development Economics (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a method, apparatus, device, medium and program product for predicting a multimedia object. The method comprises the following steps: obtaining first user behavior ratio data of a target multimedia object in a first time period and second user behavior ratio data of the target multimedia object in a second time period, wherein the second time period is before the first time period; determining a first acceleration of the user behavior ratio of the target multimedia object according to the first user behavior ratio data and the second user behavior ratio data; and predicting the probability that the target multimedia object belongs to a preset type according to the first acceleration.

Description

Method, apparatus, device, medium and program product for predicting a multimedia object
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to a method, an apparatus, a device, a medium, and a program product for predicting a multimedia object.
Background
Compared with text and pictures, multimedia objects such as video and audio are more expressive. Video, for example, combines auditory and visual information; it is rich in content, highly expressive and intuitive, and is therefore popular on various social media platforms. As the pace of life quickens, the fragmented way of consuming information through short videos has gradually gained favor: more and more people are willing to share short videos they shot themselves on a short video platform, and to share short videos they find interesting or otherwise worth watching with others. For multimedia object platforms (such as video or audio platforms), predicting how a huge number of multimedia objects will perform is of great importance, for example for saving bandwidth of the content distribution network.
Disclosure of Invention
The present disclosure provides a technical solution for predicting a multimedia object.
According to an aspect of the present disclosure, there is provided a method for predicting a multimedia object, including:
obtaining first user behavior ratio data of a target multimedia object in a first time period and second user behavior ratio data of the target multimedia object in a second time period, wherein the second time period is before the first time period;
determining a first acceleration of the user behavior ratio of the target multimedia object according to the first user behavior ratio data and the second user behavior ratio data;
and predicting the probability that the target multimedia object belongs to a preset type according to the first acceleration.
By obtaining the first user behavior ratio data of the target multimedia object in the first time period and the second user behavior ratio data of the target multimedia object in the second time period, determining the first acceleration of the user behavior ratio of the target multimedia object according to the first user behavior ratio data and the second user behavior ratio data, and predicting the probability that the target multimedia object belongs to the preset type according to the first acceleration, the probability that the target multimedia object belongs to the preset type can be predicted accurately based on the acceleration of the user behavior ratio of the target multimedia object. Future performance of the target multimedia object can thus be predicted from user behavior data that has already been generated, with low delay, which helps the multimedia object platform distribute the multimedia object more effectively and helps save bandwidth of the content distribution network.
In one possible implementation, the first time period includes at least two first sub-time periods, the first user behavior ratio data includes at least two items of first sub-user behavior ratio data in one-to-one correspondence with the at least two first sub-time periods, the second time period includes at least two second sub-time periods, and the second user behavior ratio data includes at least two items of second sub-user behavior ratio data in one-to-one correspondence with the at least two second sub-time periods;
the obtaining the first user behavior ratio data of the target multimedia object in the first time period and the second user behavior ratio data of the target multimedia object in the second time period includes:
determining the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data;
and determining the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data.
In this implementation, the first user behavior ratio data is determined according to at least two items of first sub-user behavior ratio data in one-to-one correspondence with at least two first sub-time periods, the second user behavior ratio data is determined according to at least two items of second sub-user behavior ratio data in one-to-one correspondence with at least two second sub-time periods, and the first acceleration is determined based on the first user behavior ratio data and the second user behavior ratio data so determined. A more stable first acceleration can thereby be obtained, which allows a more stable type prediction for the target multimedia object.
In one possible implementation,
the determining the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data includes: determining the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data and at least two first weights in one-to-one correspondence with the at least two items of first sub-user behavior ratio data;
the determining the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data includes: determining the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data and at least two second weights in one-to-one correspondence with the at least two items of second sub-user behavior ratio data.
In this implementation, the first user behavior ratio data is determined according to the at least two items of first sub-user behavior ratio data and the at least two corresponding first weights, the second user behavior ratio data is determined according to the at least two items of second sub-user behavior ratio data and the at least two corresponding second weights, and the first acceleration is determined based on the first user behavior ratio data and the second user behavior ratio data so determined, so that the probability that the target multimedia object belongs to the preset type can be predicted more accurately.
In one possible implementation, the first weight corresponding to any one of the at least two items of first sub-user behavior ratio data is inversely correlated with the interval duration between the first sub-time period corresponding to that first sub-user behavior ratio data and the current time, and is an exponential smoothing coefficient corresponding to that first sub-time period.
By using the first weights of this implementation to determine the first user behavior ratio data, and determining the first acceleration based on the first user behavior ratio data so determined, the accuracy of type prediction for the target multimedia object can be further improved.
In one possible implementation, the second weight corresponding to any one of the at least two items of second sub-user behavior ratio data is positively correlated with the interval duration between the second sub-time period corresponding to that second sub-user behavior ratio data and the current time.
By using the second weights of this implementation to determine the second user behavior ratio data, and determining the first acceleration based on the second user behavior ratio data so determined, the accuracy of type prediction for the target multimedia object can be further improved.
In one possible implementation, the predicting, according to the first acceleration, the probability that the target multimedia object belongs to a preset type includes:
inputting the first acceleration into a pre-trained neural network, and predicting, via the neural network, the probability that the target multimedia object belongs to the preset type.
In this implementation, the first acceleration is input into the pre-trained neural network and the probability that the target multimedia object belongs to the preset type is predicted via the neural network, so that both the accuracy and the speed of predicting this probability can be improved.
In one possible implementation, before the inputting the first acceleration into the pre-trained neural network, the method further includes:
acquiring a training object set, wherein the training object set includes a plurality of training objects, the training objects are multimedia objects, and each training object has annotation data indicating whether the training object belongs to the preset type;
and training the neural network with the training object set.
In this implementation, a training object set is acquired, where the training object set includes a plurality of training objects, the training objects are multimedia objects, and each training object has annotation data indicating whether it belongs to the preset type; the neural network is then trained with the training object set, so that the neural network can learn the ability to predict the probability that a multimedia object belongs to the preset type.
According to an aspect of the present disclosure, there is provided an apparatus for predicting a multimedia object, including:
an obtaining module, configured to obtain first user behavior ratio data of a target multimedia object in a first time period and second user behavior ratio data of the target multimedia object in a second time period, where the second time period is before the first time period;
a determining module, configured to determine a first acceleration of the user behavior ratio of the target multimedia object according to the first user behavior ratio data and the second user behavior ratio data;
and a prediction module, configured to predict, according to the first acceleration, the probability that the target multimedia object belongs to a preset type.
In one possible implementation, the first time period includes at least two first sub-time periods, the first user behavior ratio data includes at least two items of first sub-user behavior ratio data in one-to-one correspondence with the at least two first sub-time periods, the second time period includes at least two second sub-time periods, and the second user behavior ratio data includes at least two items of second sub-user behavior ratio data in one-to-one correspondence with the at least two second sub-time periods;
the obtaining module is configured to:
determine the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data;
and determine the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data.
In one possible implementation, the obtaining module is configured to:
determine the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data and at least two first weights in one-to-one correspondence with the at least two items of first sub-user behavior ratio data;
and determine the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data and at least two second weights in one-to-one correspondence with the at least two items of second sub-user behavior ratio data.
In one possible implementation, the first weight corresponding to any one of the at least two items of first sub-user behavior ratio data is inversely correlated with the interval duration between the first sub-time period corresponding to that first sub-user behavior ratio data and the current time, and is an exponential smoothing coefficient corresponding to that first sub-time period.
In one possible implementation, the second weight corresponding to any one of the at least two items of second sub-user behavior ratio data is positively correlated with the interval duration between the second sub-time period corresponding to that second sub-user behavior ratio data and the current time.
In one possible implementation, the prediction module is configured to:
input the first acceleration into a pre-trained neural network, and predict, via the neural network, the probability that the target multimedia object belongs to the preset type.
In one possible implementation, the apparatus further includes:
an acquisition module, configured to acquire a training object set, where the training object set includes a plurality of training objects, the training objects are multimedia objects, and each training object has annotation data indicating whether the training object belongs to the preset type;
and a training module, configured to train the neural network with the training object set.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to an aspect of the present disclosure, there is provided a computer program product including computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which, when run on an electronic device, causes a processor in the electronic device to perform the above method.
In the embodiments of the present disclosure, by obtaining the first user behavior ratio data of the target multimedia object in the first time period and the second user behavior ratio data of the target multimedia object in the second time period, determining the first acceleration of the user behavior ratio of the target multimedia object according to the first user behavior ratio data and the second user behavior ratio data, and predicting the probability that the target multimedia object belongs to the preset type according to the first acceleration, the probability that the target multimedia object belongs to the preset type can be predicted accurately based on the acceleration of the user behavior ratio of the target multimedia object. Future performance of the target multimedia object can thus be predicted from user behavior data that has already been generated, with low delay, which helps the multimedia object platform distribute the multimedia object more effectively and helps save bandwidth of the content distribution network.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of a method for predicting a multimedia object provided by an embodiment of the present disclosure.
Fig. 2 illustrates a comparison diagram of acceleration determined according to a prediction method of a multimedia object provided in accordance with an embodiment of the present disclosure and acceleration determined according to equation 12 in the related art.
Fig. 3 illustrates a comparison of acceleration determined by a prediction method of a multimedia object according to an embodiment of the present disclosure with user behavior duty cycle data of the multimedia object.
Fig. 4 shows a block diagram of a prediction apparatus for a multimedia object provided by an embodiment of the present disclosure.
Fig. 5 shows a block diagram of another electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In the related art, the type of a multimedia object is usually determined based on posterior data of the multimedia object. For example, after a multimedia object is released, one waits for a period of time (e.g., 3-7 days), counts the daily play amount of the multimedia object during that period, and determines the peak daily play amount of the multimedia object in that period. The ratio of that peak to the total play amount of all multimedia objects on the multimedia object platform is then calculated, and the preset ratio interval into which the ratio falls determines which type the multimedia object belongs to. For example, if the ratio is greater than 0.5%, the multimedia object is determined to belong to class S; if the ratio is greater than 0.1% and less than or equal to 0.5%, it is determined to belong to class A; and so on.
Taking short video as an example, in the related art, after a certain short video is released, one waits 3-7 days, counts the daily play amount of the short video during that period, and determines the peak daily play amount of the short video in that period. The ratio of that peak to the total play amount of all short videos on the short video platform is calculated, and the preset ratio interval into which the ratio falls determines which type the short video belongs to.
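As a hedged illustration of this related-art posterior approach (the 0.5% and 0.1% thresholds and the class names come from the example above; everything else, including the function name, is an assumption), the classification might look like:

```python
def classify_posterior(peak_daily_plays: int, platform_total_plays: int) -> str:
    """Related-art style classification from 3-7 days of posterior data."""
    ratio = peak_daily_plays / platform_total_plays
    if ratio > 0.005:    # greater than 0.5%
        return "S"
    if ratio > 0.001:    # greater than 0.1% and less than or equal to 0.5%
        return "A"
    return "other"       # further classes would follow the same pattern
```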
In this way, because the data used for determining which type the multimedia object belongs to is posterior data, it is often necessary to wait 3-7 days or more, so the delay is high. Moreover, this approach often requires manual recalibration of the ratio intervals at regular intervals (e.g., every month), which is cumbersome and less accurate.
To solve technical problems such as the above, embodiments of the present disclosure provide a method, apparatus, device, medium, and program product for predicting a multimedia object. By obtaining first user behavior ratio data of a target multimedia object in a first time period and second user behavior ratio data of the target multimedia object in a second time period, determining a first acceleration of the user behavior ratio of the target multimedia object according to the first user behavior ratio data and the second user behavior ratio data, and predicting the probability that the target multimedia object belongs to a preset type according to the first acceleration, the probability that the target multimedia object belongs to the preset type can be predicted accurately based on the acceleration of the user behavior ratio of the target multimedia object, and the future performance of the target multimedia object can be predicted from user behavior data that has already been generated, with low delay. This helps the multimedia object platform distribute the multimedia object more effectively and thus helps save bandwidth of the content distribution network.
The following describes in detail a method for predicting a multimedia object according to an embodiment of the present disclosure with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a method for predicting a multimedia object provided by an embodiment of the present disclosure. In one possible implementation, the method for predicting the multimedia object may be performed by a terminal device, a server, or another electronic device. The terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the method for predicting the multimedia object may be implemented by a processor invoking computer readable instructions stored in a memory. As shown in Fig. 1, the method for predicting the multimedia object includes steps S11 to S13.
In step S11, first user behavior ratio data of a target multimedia object in a first time period and second user behavior ratio data of the target multimedia object in a second time period are obtained, wherein the second time period is before the first time period.
In step S12, a first acceleration of the user behavior ratio of the target multimedia object is determined based on the first user behavior ratio data and the second user behavior ratio data.
In step S13, the probability that the target multimedia object belongs to a preset type is predicted according to the first acceleration.
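As a rough, non-authoritative sketch of how steps S11 to S13 might fit together (the function names and the model interface are assumptions, not part of the original disclosure):

```python
from typing import Callable

def predict_preset_type_probability(
    v1: float,                        # S11: first user behavior ratio data (first time period)
    v2: float,                        # S11: second user behavior ratio data (second time period)
    model: Callable[[float], float],  # e.g. a wrapper around a pre-trained neural network
) -> float:
    """Steps S12-S13: acceleration of the user behavior ratio -> probability."""
    a = v1 - v2                       # S12: first acceleration (see Equation 1 below)
    return model(a)                   # S13: probability that the object is of the preset type

# Illustrative use with a placeholder model that simply squashes the acceleration.
import math
prob = predict_preset_type_probability(0.004, 0.002, lambda a: 1 / (1 + math.exp(-1000 * a)))
```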
In the disclosed embodiments, the target multimedia object may be any multimedia object that needs to be predicted. The multimedia object may be video, audio, etc., and the video may be short video or long video. For example, each video on the video platform may be used as a target multimedia object, respectively, so as to predict the probability that each video on the video platform belongs to a preset type. For another example, each video on the video platform that satisfies the preset condition may be respectively used as the target multimedia object, so as to predict the probability that each video on the video platform that satisfies the preset condition belongs to the preset type. The preset condition may include at least one of a preset release time condition, a preset definition condition, a preset duration condition, and the like.
The first time period may represent a certain time period prior to the current time. The first time period may or may not include the current time. The time span of the first time period may be a first preset duration. For example, the first preset time period may be 6 hours, 12 hours, 18 hours, and so on.
The second time period may represent a certain time period before the first time period. The second time period may or may not be continuous with the first time period. The time span of the second time period may be a second preset duration. For example, the second preset duration may be 6 hours, 12 hours, 18 hours, and so on. The second preset duration may be equal to or unequal to the first preset duration.
In the disclosed embodiments, the user behavior may represent a behavior of the user with respect to the multimedia object, and the user behavior data may be data generated based on the behavior of the user with respect to the multimedia object. For example, the user behavior may include at least one of play, praise, share, forward, comment, collection, search, etc., and accordingly, the user behavior data may include at least one of play data, praise data, share data, forward data, comment data, collection data, search data, etc. For example, the user behavior data may include at least one of play volume, praise volume, share volume, forward volume, comment volume, collection volume, search volume, and the like.
The user behavior ratio data of the target multimedia object may represent ratio data of the user behavior data of the target multimedia object to the same type of user behavior data of a target multimedia object set. For example, the play amount ratio data of the target multimedia object may represent the ratio of the play amount of the target multimedia object to the play amount of the target multimedia object set, and the praise amount ratio data of the target multimedia object may represent the ratio of the praise amount of the target multimedia object to the praise amount of the target multimedia object set.
The target multimedia object set may represent the multimedia object set used for predicting the probability that the target multimedia object belongs to the preset type. The number of multimedia objects in the target multimedia object set may be two or more. The target multimedia object set may or may not include the target multimedia object. For example, the target multimedia object set may include all videos on the video platform. As another example, the target multimedia object set may include all videos on the video platform that meet a preset condition. As another example, the target multimedia object set may be generated from a plurality of videos selected by the user. In this example, the user may select a plurality of videos as the target multimedia object set, so that the user-selected videos serve as a comparison reference for predicting the probability that the target video belongs to the preset type.
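A minimal sketch of computing user behavior ratio data for one behavior type (play amount), assuming per-object play counts for a given time period are available as a dictionary (the data layout and function name are assumptions):

```python
def user_behavior_ratio(target_id: str, plays_in_period: dict) -> float:
    """Play amount ratio data: the target object's plays divided by the plays
    of the whole target multimedia object set in the same period (the set may
    or may not include the target object, as noted above)."""
    total_plays = sum(plays_in_period.values())
    if total_plays == 0:
        return 0.0
    return plays_in_period.get(target_id, 0) / total_plays

# Illustrative use: three objects in the target multimedia object set.
ratio = user_behavior_ratio("video_42", {"video_42": 120, "video_7": 880, "video_9": 1000})
```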
The first user behavior ratio data may represent the user behavior ratio data of the target multimedia object in the first time period. The first user behavior ratio data may include one or more kinds of user behavior ratio data of the target multimedia object in the first time period. In one possible implementation, the first user behavior ratio data may include at least one of play amount ratio data, praise amount ratio data, share amount ratio data, forward amount ratio data, comment amount ratio data, collection amount ratio data, search amount ratio data, and the like of the target multimedia object in the first time period.
The second user behavior ratio data may represent the user behavior ratio data of the target multimedia object in the second time period. The second user behavior ratio data may include one or more kinds of user behavior ratio data of the target multimedia object in the second time period. In one possible implementation, the second user behavior ratio data may include at least one of play amount ratio data, praise amount ratio data, share amount ratio data, forward amount ratio data, comment amount ratio data, collection amount ratio data, search amount ratio data, and the like of the target multimedia object in the second time period.
In the embodiments of the present disclosure, the data types of the first user behavior ratio data and the second user behavior ratio data may be the same. For example, the first user behavior ratio data may include the play amount ratio data of the target multimedia object in the first time period, and the second user behavior ratio data may include the play amount ratio data of the target multimedia object in the second time period. As another example, the first user behavior ratio data may include the play amount ratio data and the praise amount ratio data of the target multimedia object in the first time period, and the second user behavior ratio data may include the play amount ratio data and the praise amount ratio data of the target multimedia object in the second time period. Of course, in some possible implementations, the data types of the first user behavior ratio data and the second user behavior ratio data may also be different.
In the embodiments of the present disclosure, the first acceleration may represent the acceleration of the user behavior ratio of the target multimedia object. The first acceleration may be determined from difference information between the first user behavior ratio data and the second user behavior ratio data.
In one example, the first acceleration a(t) may be determined using Equation 1:
a(t) = v1(t) - v2(t)    (1)
where v1(t) represents the first user behavior ratio data and v2(t) represents the second user behavior ratio data.
In another example, the difference between the first user behavior ratio data and the second user behavior ratio data may be calculated, and the ratio of the difference to a third preset duration may be determined as the first acceleration.
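For concreteness, a small sketch of Equation 1 and of the alternative that divides the difference by a third preset duration (the concrete values and the parameter name are illustrative assumptions):

```python
def first_acceleration(v1: float, v2: float, third_preset_duration_hours: float = 0.0) -> float:
    """Equation 1: a(t) = v1(t) - v2(t). If a third preset duration is supplied,
    the difference is divided by it, as in the alternative example above."""
    diff = v1 - v2
    return diff / third_preset_duration_hours if third_preset_duration_hours else diff

a_t = first_acceleration(0.004, 0.002)           # plain difference: 0.002
a_t_rate = first_acceleration(0.004, 0.002, 6)   # per-hour rate over an assumed 6-hour span
```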
In the embodiments of the present disclosure, the first time period may include one or more first sub-time periods, the first user behavior ratio data may include one or more items of first sub-user behavior ratio data, and the first sub-user behavior ratio data is in one-to-one correspondence with the first sub-time periods, each item of first sub-user behavior ratio data representing the user behavior ratio data of the target multimedia object in the corresponding first sub-time period. A first sub-time period is a sub-period of the first time period, i.e., the first sub-time period belongs to the first time period. When there are two or more first sub-time periods, no two first sub-time periods overlap. When there are two or more first sub-time periods, the durations of the first sub-time periods may be the same or different. For example, the duration of each first sub-time period may be 1 hour, or half an hour, and so on.
The second time period may include one or more second sub-time periods, the second user behavior ratio data may include one or more items of second sub-user behavior ratio data, and the second sub-user behavior ratio data is in one-to-one correspondence with the second sub-time periods, each item of second sub-user behavior ratio data representing the user behavior ratio data of the target multimedia object in the corresponding second sub-time period. A second sub-time period is a sub-period of the second time period, i.e., the second sub-time period belongs to the second time period. When there are two or more second sub-time periods, no two second sub-time periods overlap. When there are two or more second sub-time periods, the durations of the second sub-time periods may be the same or different. For example, the duration of each second sub-time period may be 1 hour, or half an hour, and so on.
The duration of a first sub-time period may be the same as or different from the duration of a second sub-time period. For example, the durations of the first sub-time periods and of the second sub-time periods may all be 1 hour.
In one possible implementation, the first time period includes at least two first sub-time periods, the first user behavior ratio data includes at least two items of first sub-user behavior ratio data in one-to-one correspondence with the at least two first sub-time periods, the second time period includes at least two second sub-time periods, and the second user behavior ratio data includes at least two items of second sub-user behavior ratio data in one-to-one correspondence with the at least two second sub-time periods; the obtaining the first user behavior ratio data of the target multimedia object in the first time period and the second user behavior ratio data of the target multimedia object in the second time period includes: determining the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data; and determining the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data.
In one example, the first time period includes 6 first sub-time periods, the first user behavior ratio data includes 6 items of first sub-user behavior ratio data, the second time period includes 6 second sub-time periods, and the second user behavior ratio data includes 6 items of second sub-user behavior ratio data, where a first sub-time period can be denoted as t-i, the corresponding first sub-user behavior ratio data as vp(t-i), a second sub-time period as t-11+i, and the corresponding second sub-user behavior ratio data as vp(t-11+i), with 0 ≤ i ≤ 5. In this example, the duration of each first sub-time period and each second sub-time period is 1 hour.
In this implementation, the first user behavior ratio data is determined according to the at least two items of first sub-user behavior ratio data in one-to-one correspondence with the at least two first sub-time periods, the second user behavior ratio data is determined according to the at least two items of second sub-user behavior ratio data in one-to-one correspondence with the at least two second sub-time periods, and the first acceleration is determined based on the first user behavior ratio data and the second user behavior ratio data so determined, so that a more stable first acceleration can be obtained, which allows a more stable type prediction for the target multimedia object.
As an example of this implementation, the determining the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data includes: determining the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data and at least two first weights in one-to-one correspondence with the at least two items of first sub-user behavior ratio data; and the determining the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data includes: determining the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data and at least two second weights in one-to-one correspondence with the at least two items of second sub-user behavior ratio data.
In this example, the first weight may represent the weight corresponding to an item of first sub-user behavior ratio data, and the second weight may represent the weight corresponding to an item of second sub-user behavior ratio data. In one example, the first weight corresponding to the first sub-user behavior ratio data vp(t-i) (i.e., the first weight corresponding to the first sub-time period t-i) may be denoted as α_{t-i}, and the second weight corresponding to the second sub-user behavior ratio data vp(t-11+i) (i.e., the second weight corresponding to the second sub-time period t-11+i) may be denoted as α_{t-11+i}.
In this example, the first user behavior ratio data is determined according to the at least two items of first sub-user behavior ratio data and the at least two corresponding first weights, the second user behavior ratio data is determined according to the at least two items of second sub-user behavior ratio data and the at least two corresponding second weights, and the first acceleration is determined based on the first user behavior ratio data and the second user behavior ratio data so determined, so that the probability that the target multimedia object belongs to the preset type can be predicted more accurately.
In one example, the first weight corresponding to any one of the at least two items of first sub-user behavior ratio data is inversely correlated with the interval duration between the first sub-time period corresponding to that first sub-user behavior ratio data and the current time, and is an exponential smoothing coefficient corresponding to that first sub-time period. In this example, the first weight corresponding to a first sub-time period is inversely correlated with the interval duration between that first sub-time period and the current time; that is, the smaller the interval between the first sub-time period and the current time, the larger the corresponding first weight, and the larger the interval, the smaller the corresponding first weight. In this example, the exponential smoothing coefficient may be parameterized by a coefficient j: the larger j is, the closer the first weights corresponding to different first sub-time periods are to each other. For example, j = 1. By using the first weights of this example to determine the first user behavior ratio data, and determining the first acceleration based on the first user behavior ratio data so determined, the accuracy of type prediction for the target multimedia object can be further improved.
In one example, the first user behavior ratio data v1(t) may be determined using Equation 2.
In another example, the first weight may be something other than an exponential smoothing coefficient corresponding to the first sub-time period, as long as it is inversely correlated with the interval duration between the first sub-time period and the current time.
In one example, the second weight corresponding to any one of the at least two items of second sub-user behavior ratio data is positively correlated with the interval duration between the second sub-time period corresponding to that second sub-user behavior ratio data and the current time. In this example, the second weight corresponding to a second sub-time period is positively correlated with the interval duration between that second sub-time period and the current time; that is, the smaller the interval between the second sub-time period and the current time, the smaller the corresponding second weight, and the larger the interval, the larger the corresponding second weight. By using the second weights of this example to determine the second user behavior ratio data, and determining the first acceleration based on the second user behavior ratio data so determined, the accuracy of type prediction for the target multimedia object can be further improved.
In one example, the second user behavior ratio data v2(t) may be determined using Equation 3.
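Equations 2 and 3 themselves are not reproduced in the text above, so the following is only a hedged sketch of a normalized weighted combination consistent with the surrounding description; the exponential-smoothing weight form 0.5**(i/j) and the illustrative ratio values are assumptions:

```python
def weighted_ratio(sub_ratios, weights):
    """Weighted combination of sub-user-behavior ratio data (assumed form)."""
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, sub_ratios)) / total if total else 0.0

j = 1  # larger j -> the weights for different sub-time periods become closer

# vp(t-i), i = 0..5: the six most recent hourly ratio values (illustrative)
vp_recent = [0.0040, 0.0038, 0.0035, 0.0030, 0.0028, 0.0025]
# vp(t-11+i), i = 0..5: the six hourly ratio values of the second period, oldest first (illustrative)
vp_older = [0.0020, 0.0019, 0.0021, 0.0022, 0.0023, 0.0024]

# First weights shrink as the interval to the current time grows (index 0 is the most recent hour).
first_weights = [0.5 ** (i / j) for i in range(6)]
# Second weights grow with the sub-period's distance from the current time,
# so the largest weight goes to the oldest hour t-11 (list index 0).
second_weights = [0.5 ** (i / j) for i in range(6)]

v1_t = weighted_ratio(vp_recent, first_weights)
v2_t = weighted_ratio(vp_older, second_weights)
a_t = v1_t - v2_t  # Equation 1
```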
As another example of this implementation, the determining the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data includes: determining the average of the at least two items of first sub-user behavior ratio data as the first user behavior ratio data of the target multimedia object in the first time period; and the determining the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data includes: determining the average of the at least two items of second sub-user behavior ratio data as the second user behavior ratio data of the target multimedia object in the second time period.
As another example of this implementation, the determining the first user behavior ratio data of the target multimedia object in the first time period according to the at least two items of first sub-user behavior ratio data includes: determining the median of the at least two items of first sub-user behavior ratio data as the first user behavior ratio data of the target multimedia object in the first time period; and the determining the second user behavior ratio data of the target multimedia object in the second time period according to the at least two items of second sub-user behavior ratio data includes: determining the median of the at least two items of second sub-user behavior ratio data as the second user behavior ratio data of the target multimedia object in the second time period.
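The average and median alternatives above map directly onto Python's statistics module; a tiny illustration (the ratio values are made up):

```python
import statistics

first_sub_ratios = [0.0040, 0.0038, 0.0035, 0.0030, 0.0028, 0.0025]  # illustrative
v1_mean = statistics.mean(first_sub_ratios)      # average-based first user behavior ratio data
v1_median = statistics.median(first_sub_ratios)  # median-based alternative
```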
In an embodiment of the present disclosure, the probability that the target multimedia object belongs to the preset type is predicted at least according to the first acceleration. That is, the probability that the target multimedia object belongs to the preset type may be predicted only from the first acceleration, or the probability that the target multimedia object belongs to the preset type may be predicted together with the first acceleration and other information.
Wherein the preset type may be any type capable of representing a future performance of the multimedia object. For example, the preset type may be any type capable of representing future hotness, future play amount, etc. of the multimedia object. The number of the preset types can be one or more than two. For example, the preset type may include a burst type. As another example, the preset types may include burst type and non-burst type. As another example, the preset types may include an S-burst, a B-burst, and a C-burst. As another example, the preset types may include a high-quality class and a non-high-quality class.
In one possible implementation, the predicting, according to the first acceleration, the probability that the target multimedia object belongs to a preset type includes: inputting the first acceleration into a pre-trained neural network, and predicting, via the neural network, the probability that the target multimedia object belongs to the preset type. In this implementation, the neural network may be a DNN (Deep Neural Network), for example a feedforward DNN. Of course, the neural network may also be a network such as an MLP (Multi-Layer Perceptron), and is not limited thereto. In this implementation, the first acceleration is input into the pre-trained neural network and the probability that the target multimedia object belongs to the preset type is predicted via the neural network, so that both the accuracy and the speed of predicting this probability can be improved.
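A minimal sketch of this prediction step, assuming a small feedforward DNN implemented with PyTorch; the architecture, layer sizes, class name, and feature set are not specified in the disclosure and are assumptions here:

```python
import torch
import torch.nn as nn

class PresetTypePredictor(nn.Module):
    """Small feedforward DNN whose input includes at least the first acceleration."""
    def __init__(self, in_features: int = 1, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # probability y in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = PresetTypePredictor()
first_acceleration = torch.tensor([[0.0015]])  # illustrative value
y = model(first_acceleration).item()           # probability of the preset type
```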
As an example of this implementation, before the inputting the first acceleration into the pre-trained neural network, the method further includes: acquiring a training object set, where the training object set includes a plurality of training objects, the training objects are multimedia objects, and each training object has annotation data indicating whether the training object belongs to the preset type; and training the neural network with the training object set. In one example, the neural network may be trained using user behavior data from the multimedia platform within a fourth preset duration. For example, the fourth preset duration may be 6 months.
In this example, the training object set may include a plurality of training objects. For each training object, third user behavior ratio data of the training object in a sixth time period and fourth user behavior ratio data of the training object in a seventh time period may be acquired, where the seventh time period is before the sixth time period. A second acceleration of the user behavior ratio of the training object may be determined based on the third user behavior ratio data and the fourth user behavior ratio data. The second acceleration may be input into the neural network, and the probability that the training object belongs to the preset type is predicted via the neural network. The neural network is then trained according to the ground-truth type of the training object and the predicted probability that the training object belongs to the preset type. During training of the neural network, the input data of the neural network includes, but is not limited to, the second acceleration.
In this example, by acquiring a training object set, where the training object set includes a plurality of training objects, the training objects are multimedia objects, and each training object has annotation data indicating whether it belongs to the preset type, and by training the neural network with the training object set, the neural network can learn the ability to predict the probability that a multimedia object belongs to the preset type.
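A hedged sketch of this training procedure, reusing the PresetTypePredictor class from the previous sketch; the second accelerations, labels, loss function, and optimizer settings are all illustrative assumptions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Second accelerations of training objects and their annotation data
# (1.0 = belongs to the preset type, 0.0 = does not); illustrative values.
second_accelerations = torch.tensor([[0.0020], [-0.0010], [0.0008], [0.0000]])
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])
loader = DataLoader(TensorDataset(second_accelerations, labels), batch_size=2, shuffle=True)

model = PresetTypePredictor()                  # defined in the previous sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.BCELoss()                 # binary "belongs to preset type" target

for epoch in range(10):
    for x, y_true in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y_true)
        loss.backward()
        optimizer.step()
```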
After the neural network training is completed, the neural network can be deployed in a real production environment to predict, in real time, the probability that a multimedia object belongs to the preset type, so that bursting (viral) multimedia objects can be captured in real time.
Of course, in other possible implementations, the first acceleration may be processed without a neural network. For example, the first acceleration may be processed by a pre-designed function to obtain the probability that the target multimedia object belongs to the preset type.
In one example, the probability that the target multimedia object belongs to the preset type may be denoted as y, where y ∈ [0, 1].
In one possible implementation, the predicting, according to the first acceleration, the probability that the target multimedia object belongs to a preset type includes: predicting the probability that the target multimedia object belongs to the preset type according to the first acceleration together with at least one of: first change trend information of the user behavior for the target multimedia object in a third time period; second change trend information of the user behavior for the target multimedia object set in the third time period; first difference information between the user behavior for the target multimedia object and the user behavior for the target multimedia object set; second difference information between the change trend of the user behavior for the target multimedia object and the change trend of the user behavior for the target multimedia object set; and third difference information between the user behavior data of the target multimedia object in a fourth time period and its user behavior data in a fifth time period. In this implementation, by combining at least one of the first change trend information, the second change trend information, the first difference information, the second difference information, and the third difference information, the accuracy of predicting the probability that the target multimedia object belongs to the preset type can be further improved.
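If the optional signals above are used together with the first acceleration, the network input simply becomes a longer feature vector; a hedged sketch (how each signal is encoded as a single number is an assumption, as is the function name):

```python
def build_feature_vector(first_acceleration: float,
                         first_trend: float = 0.0,       # first change trend information
                         second_trend: float = 0.0,      # second change trend information
                         first_difference: float = 0.0,  # behavior difference vs. the set
                         second_difference: float = 0.0, # trend difference vs. the set
                         third_difference: float = 0.0   # fourth vs. fifth period difference
                         ) -> list:
    """Concatenate the first acceleration with any of the optional signals."""
    return [first_acceleration, first_trend, second_trend,
            first_difference, second_difference, third_difference]
```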
Wherein the third time period may represent a certain time period before the current time. The third time period may or may not include the current time. The time span of the third time period may be a fifth preset time period. For example, the fifth preset time period may be 24 hours, 12 hours, 48 hours, and so on.
The first trend information may be determined from first user behavior data of the target multimedia object over a third period of time, and the second trend information may be determined from second user behavior data of the target multimedia object set over the third period of time.
Wherein the first user behavior data may represent user behavior data of the target multimedia object over a third period of time. The first user behavior data may include one or more than two kinds of user behavior data. In one example, the first user behavior data may include at least one of play data, praise data, share data, forward data, comment data, collection data, etc. of the target multimedia object over the third time period.
The second user behavior data may represent user behavior data of the target set of multimedia objects over a third period of time. The second user behavior data may include one or more than two kinds of user behavior data. In one example, the second user behavior data may include at least one of play data, praise data, share data, forward data, comment data, collection data, etc. of the target multimedia object set over the third period of time.
In this implementation, the data types of the first user behavior data and the second user behavior data may be the same. For example, the first user behavior data may include play data of the target multimedia object during a third time period, and the second user behavior data may include play data of the target multimedia object set during the third time period. As another example, the first user behavior data may include play data and praise data of the target multimedia object during the third period of time, and the second user behavior data may include play data and praise data of the target multimedia object set during the third period of time. Of course, in some possible implementations, the data types of the first user behavior data and the second user behavior data may also be different.
In this implementation, the first trend information may represent trend information of a user behavior for the target multimedia object over a third period of time. The second trend information may represent trend information of user behavior for the target multimedia object set over a third period of time. The change trend information may be any information capable of showing a change trend. For example, the change trend information may be any information capable of showing an upward trend, a steady trend, a downward trend, an upward speed, a downward speed, or the like.
In one example, the third time period includes a third sub-time period and at least one fourth sub-time period, the fourth sub-time period preceding the third sub-time period; the first user behavior data includes: first sub-user behavior data of the target multimedia object in a third sub-period and at least one second sub-user behavior data of the target multimedia object in at least one fourth sub-period; the second user behavior data includes: third sub-user behavior data of the target multimedia object set in a third sub-period, and at least one fourth sub-user behavior data of the target multimedia object set in at least one fourth sub-period.
Wherein the number of the fourth sub-time periods is one or more than two. The third sub-period and the at least one fourth sub-period both belong to the third period, the third sub-period does not overlap any one of the at least one fourth sub-period, and in the case where the number of fourth sub-periods is two or more, any two of the fourth sub-periods do not overlap. The duration of the third sub-period may be the same as or different from that of each fourth sub-period. For example, the duration of each of the third sub-period and each of the fourth sub-periods is 1 hour, or the duration of each of the third sub-period and each of the fourth sub-periods is half an hour, and so on.
In one example, the number of fourth sub-periods is more than two, the duration of each of the third sub-period and the fourth sub-period is 1 hour, the third sub-period may be denoted as the T-th hour, and the at least one fourth sub-period may be denoted as the T-D-th to T-1-th hours, respectively, wherein D is an integer greater than or equal to 2. For example, D equals 24.
The first sub-user behavior data may represent user behavior data of the target multimedia object within the third sub-period, and the second sub-user behavior data may represent user behavior data of the target multimedia object within a fourth sub-period. The second sub-user behavior data correspond to the fourth sub-time periods one-to-one. For example, if D is equal to 24, the first user behavior data includes 1 item of first sub-user behavior data and 24 items of second sub-user behavior data, and the 24 items of second sub-user behavior data correspond to the (T-24)-th to (T-1)-th hours one-to-one; that is, the 24 items of second sub-user behavior data include: user behavior data of the target multimedia object in the (T-24)-th hour, user behavior data of the target multimedia object in the (T-23)-th hour, ..., and user behavior data of the target multimedia object in the (T-1)-th hour. For example, if the user behavior data includes a play amount, the 24 items of second sub-user behavior data may include: the play amount of the target multimedia object in the (T-24)-th hour, the play amount of the target multimedia object in the (T-23)-th hour, ..., and the play amount of the target multimedia object in the (T-1)-th hour.
The third sub-user behavior data may represent user behavior data of the target multimedia object set within the third sub-period, and the fourth sub-user behavior data may represent user behavior data of the target multimedia object set within a fourth sub-period. The fourth sub-user behavior data correspond to the fourth sub-time periods one-to-one. For example, if D is equal to 24, the second user behavior data includes 1 item of third sub-user behavior data and 24 items of fourth sub-user behavior data, and the 24 items of fourth sub-user behavior data correspond to the (T-24)-th to (T-1)-th hours one-to-one; that is, the 24 items of fourth sub-user behavior data include: user behavior data of the target multimedia object set in the (T-24)-th hour, user behavior data of the target multimedia object set in the (T-23)-th hour, ..., and user behavior data of the target multimedia object set in the (T-1)-th hour. For example, if the user behavior data includes a play amount, the 24 items of fourth sub-user behavior data may include: the play amount of the target multimedia object set in the (T-24)-th hour, the play amount of the target multimedia object set in the (T-23)-th hour, ..., and the play amount of the target multimedia object set in the (T-1)-th hour.
In this example, based on the first sub-user behavior data of the target multimedia object in the third sub-period and the at least one second sub-user behavior data of the target multimedia object in the at least one fourth sub-period, the first trend information of the user behavior of the target multimedia object in the third period can be more accurately determined, and based on the third sub-user behavior data of the target multimedia object set in the third sub-period and the at least one fourth sub-user behavior data of the target multimedia object set in the at least one fourth sub-period, the second trend information of the user behavior of the target multimedia object set in the third period can be more accurately determined, so that the probability that the target multimedia object belongs to the preset type can be more accurately predicted.
In one example, determining first trend information of user behavior for a target multimedia object over a third period of time from the first user behavior data includes: determining fourth difference information between the first sub-user behavior data and at least one second sub-user behavior data; determining first variation trend information of the user behavior aiming at the target multimedia object in a third time period according to the fourth difference information; determining second variation trend information of the user behavior of the target multimedia object set in a third time period according to the second user behavior data, wherein the second variation trend information comprises: determining fifth difference information between the third sub-user behavior data and at least one fourth sub-user behavior data; and determining second change trend information of the user behavior of the target multimedia object set in a third time period according to the fifth difference information.
In this example, the fourth difference information may represent difference information between the first sub-user behavior data and the at least one second sub-user behavior data. The fourth difference information can be calculated by calculating a ratio of the first sub-user behavior data to at least one second sub-user behavior data, calculating a difference value, calculating a ratio after taking the logarithm, and the like.
For example, the fourth difference information X_1 may be determined using equation 4:
Where n represents the play amount of the target multimedia object, n_T represents the play amount of the target multimedia object at the T-th hour, and n_m represents the play amount of the target multimedia object at the m-th hour.
As another example, the fourth difference information X_1 may be determined using equation 5:
As another example, the fourth difference information X_1 may be determined using equation 6:
After the fourth difference information is determined, the fourth difference information may be taken as the first variation trend information. Or the fourth difference information may be subjected to preset processing to obtain the first variation trend information. For example, the fourth difference information may be multiplied by a preset coefficient to obtain the first variation trend information.
In this example, the fifth difference information may represent difference information between the third sub-user behavior data and at least one item of fourth sub-user behavior data. The fifth difference information can be calculated by calculating the ratio of the third sub-user behavior data to at least one fourth sub-user behavior data, calculating the difference value, calculating the ratio after taking the logarithm, and the like.
For example, the fifth difference information X_2 may be determined using equation 7:
Where N represents the play amount of the target multimedia object set, N_T represents the play amount of the target multimedia object set at the T-th hour, and N_m represents the play amount of the target multimedia object set at the m-th hour.
As another example, the fifth difference information X_2 may be determined using equation 8:
As another example, the fifth difference information X_2 may be determined using equation 9:
After the fifth difference information is determined, the fifth difference information may be regarded as second variation trend information. Or the fifth difference information may be subjected to preset processing to obtain the second variation trend information. For example, the fifth difference information may be multiplied by a preset coefficient to obtain the second variation trend information.
In the above example, by determining fourth difference information between the first sub-user behavior data and at least one item of second sub-user behavior data, determining first variation trend information of the user behavior for the target multimedia object within a third time period according to the fourth difference information, determining fifth difference information between the third sub-user behavior data and at least one item of fourth sub-user behavior data, and determining second variation trend information of the user behavior for the target multimedia object set within the third time period according to the fifth difference information, the first variation trend information and the second variation trend information can be determined more accurately, and thus the probability that the target multimedia object belongs to the preset type can be predicted more accurately.
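For illustration, the sketch below shows one way the first change trend information X_1 and the second change trend information X_2 could be computed as ratios, in line with the ratio option described above. Because the bodies of equations 4 to 9 are not reproduced in this text, the exact form used here (the latest hourly play amount divided by the mean of the preceding D hourly play amounts) and all numeric values are assumptions.

    from typing import Sequence

    def trend_ratio(latest: float, previous: Sequence[float]) -> float:
        # Ratio of the latest hourly play amount to the mean of the preceding D hours.
        baseline = sum(previous) / len(previous)
        return latest / baseline if baseline else 0.0

    # Hypothetical hourly play amounts for hours T-D .. T (here D = 4).
    target_plays = [900.0, 950.0, 1100.0, 1200.0, 1800.0]    # target multimedia object
    pool_plays = [5.0e6, 5.1e6, 5.05e6, 5.15e6, 5.2e6]       # target multimedia object set

    x1 = trend_ratio(target_plays[-1], target_plays[:-1])    # first change trend information
    x2 = trend_ratio(pool_plays[-1], pool_plays[:-1])        # second change trend information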
In this implementation manner, the second change trend information may be used as a comparison reference, and the probability that the target multimedia object belongs to the preset type may be predicted according to the first change trend information and the second change trend information.
In this implementation, the first difference information may represent difference information between user behavior for the target multimedia object and user behavior for the target multimedia object set. The first difference information can be calculated by calculating a ratio of the first user behavior data to the second user behavior data, calculating a difference value, calculating a ratio after taking the logarithm, and the like.
In one example, first difference information between the user behavior for the target multimedia object and the user behavior for the target multimedia object set may be determined from a ratio of the first sub-user behavior data in the first user behavior data to the third sub-user behavior data in the second user behavior data. For example, the first difference information X_3 may be determined using equation 10:
Where n_T represents the play amount of the target multimedia object at the T-th hour, and N_T represents the play amount of the target multimedia object set at the T-th hour.
In another example, in addition to the first sub-user behavior data and the third sub-user behavior data, at least one second sub-user behavior data and/or at least one fourth sub-user behavior data may be considered in determining the first difference information.
According to the first user behavior data and the second user behavior data, first difference information between the user behaviors aiming at the target multimedia object and the user behaviors aiming at the target multimedia object set is determined, and the probability that the target multimedia object belongs to a preset type is predicted by combining the first difference information, so that the accuracy of predicting the probability that the target multimedia object belongs to the preset type can be further improved.
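The following sketch illustrates the ratio form of the first difference information X_3 described above; equation 10 itself is not reproduced here, and the numeric values are hypothetical.

    def first_difference(n_t: float, big_n_t: float) -> float:
        # Share of the set's hour-T play amount contributed by the target object (ratio form).
        return n_t / big_n_t if big_n_t else 0.0

    x3 = first_difference(1800.0, 5.2e6)    # hypothetical hour-T play amounts n_T and N_T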
In this implementation, the second difference information may represent difference information between a user behavior for the target multimedia object and a trend of change in the user behavior for the target multimedia object set. The second difference information can be calculated by calculating the ratio of the first change trend information to the second change trend information, calculating the difference value and the like.
For example, the second difference information X_4 may be determined using equation 11:
Wherein X_1 represents the first change trend information, and X_2 represents the second change trend information.
According to the first change trend information of the user behaviors aiming at the target multimedia objects and the second change trend information of the user behaviors aiming at the target multimedia object sets, the second difference information is determined, and the probability that the target multimedia objects belong to the preset types is predicted by combining the second difference information, so that the accuracy of predicting the probability that the target multimedia objects belong to the preset types can be further improved.
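A corresponding sketch for the second difference information X_4, assuming the ratio option named above (equation 11 itself is not reproduced here):

    def second_difference(x1: float, x2: float) -> float:
        # Ratio of the two change trend values; a value greater than 1 suggests the target
        # object's user behavior is trending up faster than that of the object set.
        return x1 / x2 if x2 else 0.0

    x4 = second_difference(1.63, 1.02)    # hypothetical X_1 and X_2 values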
In this implementation, the third difference information may represent difference information between user behavior data of the target multimedia object in a fourth period of time and a fifth period of time, wherein the fifth period of time is a period of time preceding the fourth period of time. The time spans of the fourth time period and the fifth time period may be the same or different. For example, the time spans of the fourth time period and the fifth time period are each 1 hour, or are each half an hour, and so on. In one example, the fourth time period may represent the latest time period and the fifth time period may represent the last time period of the fourth time period. For example, the fourth period of time may be the t-th hour and the fifth period of time may be the t-1 th hour.
As an example of this implementation, the probability that the target multimedia object belongs to the preset type may be predicted based on the first variation trend information, the second variation trend information, the third difference information, and the first acceleration. In one example, the first variation trend information, the second variation trend information, the third difference information, and the first acceleration may be input into a pre-trained neural network via which a probability that the target multimedia object belongs to a preset type is predicted. The first change trend information, the second change trend information, the third difference information and the first acceleration are processed through the pre-trained neural network to obtain the probability that the target multimedia object belongs to the preset type, so that the accuracy and the speed of predicting the probability that the target multimedia object belongs to the preset type can be improved. In another example, the first change trend information, the second change trend information, the third difference information, and the first acceleration may be processed by using a pre-designed function, so as to obtain a probability that the target multimedia object belongs to a preset type.
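As an illustrative sketch of the neural-network example, the snippet below feeds the four features into a small multilayer perceptron. The architecture, layer sizes, class name, and feature values are assumptions made purely for illustration, since the disclosure does not specify the network structure.

    import torch
    import torch.nn as nn

    class PresetTypeNet(nn.Module):
        # Small MLP mapping the input features to a probability of the preset type.
        def __init__(self, num_features: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(num_features, 16), nn.ReLU(),
                nn.Linear(16, 1), nn.Sigmoid(),
            )

        def forward(self, features: torch.Tensor) -> torch.Tensor:
            return self.net(features)

    model = PresetTypeNet()    # in practice, pre-trained weights would be loaded via load_state_dict
    # Hypothetical values: [first trend info, second trend info, third difference, first acceleration]
    features = torch.tensor([[1.63, 1.02, 1.35, 0.004]], dtype=torch.float32)
    probability = model(features).item()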
In one example, before inputting the first trend information, the second trend information, the third difference information, and the first acceleration into the pre-trained neural network, the method further comprises: acquiring a training object set, wherein the training object set comprises a plurality of training objects, the training objects are multimedia objects, and the training objects are provided with annotation data, and the annotation data are used for indicating whether the training objects belong to the preset type or not; and training the neural network by adopting the training object set.
In this example, the training object set may include a plurality of training objects, for each of which user behavior data from R-D hours to R hours may be acquired separately. For example, for each training object, the play amount of the training object from the R-D hour to the R hour may be acquired separately.
For any training object in the training object set, third change trend information of the user behavior for the training object from the (R-D)-th hour to the R-th hour can be determined according to the user behavior data of the training object in the R-th hour and the user behavior data from the (R-D)-th hour to the (R-1)-th hour; and fourth change trend information of the user behavior for the training object set from the (R-D)-th hour to the R-th hour can be determined according to the user behavior data of the training object set in the R-th hour and the user behavior data from the (R-D)-th hour to the (R-1)-th hour. For example, the third change trend information may be determined according to the play amount of the training object in the R-th hour and its play amount from the (R-D)-th hour to the (R-1)-th hour, and the fourth change trend information may be determined according to the play amount of the training object set in the R-th hour and its play amount from the (R-D)-th hour to the (R-1)-th hour.
According to the user behavior data of the training object in the R-th hour and the user behavior data of the training object in the (R-1)-th hour, sixth difference information between the user behavior data of the training object in the R-th hour and the user behavior data of the training object in the (R-1)-th hour can be obtained.
Third user behavior duty cycle data of the training object in a sixth time period may be obtained, and fourth user behavior duty cycle data of the training object in a seventh time period, wherein the seventh time period precedes the sixth time period. A second acceleration of the user behavior duty cycle of the training object may be determined based on the third user behavior duty cycle data and the fourth user behavior duty cycle data.
Third variation trend information, fourth variation trend information, sixth difference information, and second acceleration may be input to the neural network, and a probability that the training object belongs to a preset type may be predicted via the neural network. And training the neural network according to the true value of the type of the training object and the probability that the training object belongs to the preset type.
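A training sketch consistent with the description above, reusing the small network from the earlier sketch; the loss function, optimizer, and number of epochs are illustrative assumptions rather than choices fixed by the disclosure.

    import torch
    import torch.nn as nn

    def train_model(model: nn.Module, features: torch.Tensor, labels: torch.Tensor,
                    epochs: int = 50, lr: float = 1e-3) -> None:
        # features: one row per training object (trend information, difference information,
        # second acceleration); labels: 1.0 if the annotation data indicates the training
        # object belongs to the preset type, otherwise 0.0.
        criterion = nn.BCELoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            optimizer.zero_grad()
            predictions = model(features).squeeze(1)
            loss = criterion(predictions, labels)
            loss.backward()
            optimizer.step()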
As another example of the implementation, the probability that the target multimedia object belongs to the preset type may be predicted based on the first variation trend information, the second variation trend information, the first difference information, the second difference information, the third difference information, and the first acceleration. In one example, the first variation trend information, the second variation trend information, the first difference information, the second difference information, the third difference information, and the first acceleration may be input into a pre-trained neural network via which a probability that the target multimedia object belongs to a preset type is predicted. The first change trend information, the second change trend information, the first difference information, the second difference information, the third difference information and the first acceleration are processed through the pre-trained neural network, so that the probability that the target multimedia object belongs to the preset type is obtained, and the accuracy and the speed for predicting the probability that the target multimedia object belongs to the preset type can be improved. In another example, the first change trend information, the second change trend information, the first difference information, the second difference information, the third difference information, and the first acceleration may be processed by using a pre-designed function, so as to obtain a probability that the target multimedia object belongs to the preset type.
In one example, before inputting the first trend information, the second trend information, the first difference information, the second difference information, the third difference information, and the first acceleration into the pre-trained neural network, the method further comprises: acquiring a training object set, wherein the training object set comprises a plurality of training objects, the training objects are multimedia objects, and the training objects are provided with annotation data, and the annotation data are used for indicating whether the training objects belong to the preset type or not; and training the neural network by adopting the training object set.
In this example, the training object set may include a plurality of training objects, for each of which user behavior data from R-D hours to R hours may be acquired separately. For example, for each training object, the play amount of the training object from the R-D hour to the R hour may be acquired separately.
For any training object in the training object set, third change trend information of the user behavior for the training object from the (R-D)-th hour to the R-th hour can be determined according to the user behavior data of the training object in the R-th hour and the user behavior data from the (R-D)-th hour to the (R-1)-th hour; and fourth change trend information of the user behavior for the training object set from the (R-D)-th hour to the R-th hour can be determined according to the user behavior data of the training object set in the R-th hour and the user behavior data from the (R-D)-th hour to the (R-1)-th hour. For example, the third change trend information may be determined according to the play amount of the training object in the R-th hour and its play amount from the (R-D)-th hour to the (R-1)-th hour, and the fourth change trend information may be determined according to the play amount of the training object set in the R-th hour and its play amount from the (R-D)-th hour to the (R-1)-th hour.
Seventh difference information between the user behavior for the training object and the user behavior for the training object set may be determined according to the user behavior data of the training object in the R-th hour and from the (R-D)-th hour to the (R-1)-th hour, and the user behavior data of the training object set in the R-th hour and from the (R-D)-th hour to the (R-1)-th hour. For example, the seventh difference information may be determined according to the user behavior data of the training object in the R-th hour and the user behavior data of the training object set in the R-th hour. For example, the seventh difference information may be determined according to the play amount of the training object in the R-th hour and the play amount of the training object set in the R-th hour; for instance, a ratio of the play amount of the training object in the R-th hour to the play amount of the training object set in the R-th hour may be used as the seventh difference information.
Eighth difference information between the user behavior for the training object and the trend of the user behavior for the training object set may be determined based on the third trend information and the fourth trend information. For example, a ratio of the third variation trend information to the fourth variation trend information may be used as the eighth difference information.
According to the user behavior data of the training object in the R-th hour and the user behavior data of the training object in the (R-1)-th hour, sixth difference information between the user behavior data of the training object in the R-th hour and the user behavior data of the training object in the (R-1)-th hour can be obtained.
Third user behavior duty cycle data of the training object in a sixth time period may be obtained, and fourth user behavior duty cycle data of the training object in a seventh time period, wherein the seventh time period precedes the sixth time period. A second acceleration of the user behavior duty cycle of the training object may be determined based on the third user behavior duty cycle data and the fourth user behavior duty cycle data.
Third variation trend information, fourth variation trend information, seventh difference information, eighth difference information, sixth difference information, and second acceleration may be input to the neural network, and a probability that the training object belongs to a preset type may be predicted via the neural network. And training the neural network according to the true value of the type of the training object and the probability that the training object belongs to the preset type.
In a possible implementation manner, after the predicting the probability that the target multimedia object belongs to a preset type, the method further includes: and distributing the bandwidth corresponding to the target multimedia object according to the probability that the target multimedia object belongs to the preset type.
As an example of this implementation, the preset type includes class S; the allocating the bandwidth corresponding to the target multimedia object according to the probability that the target multimedia object belongs to the preset type includes: allocating the bandwidth corresponding to the target multimedia object according to the probability that the target multimedia object belongs to class S, wherein the bandwidth corresponding to the target multimedia object is positively correlated with the probability that the target multimedia object belongs to class S. That is, the greater the probability that the target multimedia object belongs to class S, the greater the bandwidth corresponding to the target multimedia object; the smaller the probability that the target multimedia object belongs to class S, the smaller the bandwidth corresponding to the target multimedia object.
As another example of this implementation, the preset type includes class C; the allocating the bandwidth corresponding to the target multimedia object according to the probability that the target multimedia object belongs to the preset type includes: allocating the bandwidth corresponding to the target multimedia object according to the probability that the target multimedia object belongs to class C, wherein the bandwidth corresponding to the target multimedia object is inversely related to the probability that the target multimedia object belongs to class C. That is, the greater the probability that the target multimedia object belongs to class C, the smaller the bandwidth corresponding to the target multimedia object; the smaller the probability that the target multimedia object belongs to class C, the larger the bandwidth corresponding to the target multimedia object.
As another example of this implementation, the number of preset types is greater than or equal to 2; the allocating the bandwidth corresponding to the target multimedia object according to the probability that the target multimedia object belongs to the preset type includes: determining the preset type corresponding to the maximum probability among the probabilities that the target multimedia object belongs to the preset types as the type to which the target multimedia object belongs; and allocating the bandwidth corresponding to the target multimedia object according to the type to which the target multimedia object belongs. For example, the preset types include class S, class A, class B, and class C; the probabilities that the target multimedia object belongs to the preset types include 4 probabilities, which are respectively: the probability that the target multimedia object belongs to class S, the probability that the target multimedia object belongs to class A, the probability that the target multimedia object belongs to class B, and the probability that the target multimedia object belongs to class C. If the probability that the target multimedia object belongs to class S is the largest among the 4 probabilities, it may be determined that the type to which the target multimedia object belongs is class S; if the probability that the target multimedia object belongs to class A is the largest among the 4 probabilities, it may be determined that the type to which the target multimedia object belongs is class A; if the probability that the target multimedia object belongs to class B is the largest among the 4 probabilities, it may be determined that the type to which the target multimedia object belongs is class B; if the probability that the target multimedia object belongs to class C is the largest among the 4 probabilities, it may be determined that the type to which the target multimedia object belongs is class C. In this example, the bandwidth corresponding to class S is greater than the bandwidth corresponding to class A, the bandwidth corresponding to class A is greater than the bandwidth corresponding to class B, and the bandwidth corresponding to class B is greater than the bandwidth corresponding to class C. For example, if multimedia object V_1 belongs to class S and multimedia object V_2 belongs to class A, then the bandwidth allocated to multimedia object V_1 is greater than the bandwidth allocated to multimedia object V_2.
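An illustrative sketch of the class-based bandwidth allocation follows; the concrete bandwidth values are hypothetical, and only the ordering S > A > B > C follows from the example above.

    from typing import Dict

    # Hypothetical bandwidth values; only the ordering S > A > B > C follows from the text.
    CLASS_BANDWIDTH_MBPS = {"S": 100, "A": 50, "B": 20, "C": 5}

    def allocate_bandwidth(class_probabilities: Dict[str, float]) -> int:
        # Pick the preset type with the highest predicted probability and map it to a bandwidth.
        predicted_class = max(class_probabilities, key=class_probabilities.get)
        return CLASS_BANDWIDTH_MBPS[predicted_class]

    bandwidth = allocate_bandwidth({"S": 0.55, "A": 0.25, "B": 0.15, "C": 0.05})    # -> 100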
This implementation facilitates more efficient distribution of multimedia objects by the multimedia object platform, thereby helping to save the bandwidth of the content distribution network.
The prediction method for a multimedia object provided by the embodiments of the present disclosure is described below through a specific application scenario. In this application scenario, the prediction method for a multimedia object may be performed by a server corresponding to a short video platform. In this application scenario, the preset type is the burst video, and the server can predict, for each short video on the short video platform that meets a preset condition, the probability that the short video belongs to the burst video.
For any target short video, the server can acquire the play amount of the target short video from the (t-11)-th hour to the t-th hour and the total play amount of all the short videos on the short video platform from the (t-11)-th hour to the t-th hour. According to the play amount of the target short video from the (t-11)-th hour to the t-th hour and the total play amount, the play amount duty ratio of the target short video from the (t-11)-th hour to the t-th hour can be determined. According to the play amount duty ratio of the target short video from the (t-5)-th hour to the t-th hour, the first play amount duty ratio v_1(t) of the target short video can be determined using equation 2; according to the play amount duty ratio of the target short video from the (t-11)-th hour to the (t-6)-th hour, the second play amount duty ratio v_2(t) of the target short video can be determined using equation 3. According to the first play amount duty ratio v_1(t) and the second play amount duty ratio v_2(t) of the target short video, the first acceleration of the target short video may be determined using equation 1.
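The sketch below illustrates this acceleration computation. Because equations 1 to 3 are not reproduced in this excerpt, the plain averaging over each 6-hour window and the simple difference between the two window values are assumptions, as are the numeric ratios.

    from typing import Sequence

    def play_ratio_acceleration(hourly_ratios: Sequence[float]) -> float:
        # hourly_ratios: play amount duty ratios of the target short video for hours t-11 .. t.
        assert len(hourly_ratios) == 12
        v2 = sum(hourly_ratios[:6]) / 6.0     # hours t-11 .. t-6 (second play amount duty ratio)
        v1 = sum(hourly_ratios[6:]) / 6.0     # hours t-5 .. t (first play amount duty ratio)
        return v1 - v2                        # first acceleration

    accel = play_ratio_acceleration([0.0010, 0.0010, 0.0012, 0.0011, 0.0013, 0.0012,
                                     0.0015, 0.0018, 0.0022, 0.0027, 0.0033, 0.0040])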
For the short video platform, X_2 may be determined from the play amount of each short video on the short video platform from the (T-D)-th hour to the T-th hour using equation 7 above. For a target short video, X_1 can be calculated using equation 4 and X_4 can be calculated using equation 11. For short videos with X_4 greater than 1, X_3 may be calculated using equation 10.
According to the play amount of the target short video in the t-th hour and the play amount of the target short video in the (t-1)-th hour, the third difference information can be calculated.
The first acceleration, X_1, X_2, X_3, X_4, and the third difference information are input into a pre-trained neural network, so that the probability that the target short video belongs to the burst video can be obtained.
The server may also obtain a burst video set based on M short videos having the highest probability of belonging to the burst video, where M is greater than 1, or may obtain the burst video set based on short videos having a probability of belonging to the burst video greater than or equal to a preset probability.
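Both selection strategies can be sketched as follows; the function name, parameter values, and probabilities are hypothetical.

    from typing import Dict, List, Optional

    def burst_video_set(probabilities: Dict[str, float], m: int = 100,
                        threshold: Optional[float] = None) -> List[str]:
        # Either keep every short video whose burst probability reaches the threshold,
        # or keep the M short videos with the highest burst probabilities.
        if threshold is not None:
            return [vid for vid, p in probabilities.items() if p >= threshold]
        return sorted(probabilities, key=probabilities.get, reverse=True)[:m]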
In this application scenario, the short video platform can discover and capture high-quality short videos from a large number of short videos in a timely manner and allocate larger bandwidth to these high-quality short videos, so that the activity of the short video platform can be improved while the bandwidth of the content distribution network is saved. In addition, in this application scenario, the probability that each short video on the short video platform belongs to the burst video can be dynamically updated every hour at hour-level granularity, which provides strong timeliness.
Fig. 2 illustrates a comparison diagram of acceleration determined according to a prediction method of a multimedia object provided in accordance with an embodiment of the present disclosure and acceleration determined according to equation 12 in the related art.
A'(t) = v(t) - v(t-1)   (equation 12)
In fig. 2, a solid line represents a curve of acceleration determined according to a prediction method of a multimedia object provided according to an embodiment of the present disclosure, and a dotted line represents a curve of acceleration determined according to equation 12. As can be seen from fig. 2, the acceleration determined by the prediction method of the multimedia object provided according to the embodiment of the present disclosure is smoother and more significantly captures the change of the user behavior duty ratio of the multimedia object, compared to the acceleration determined according to equation 12.
Fig. 3 illustrates a comparison of acceleration determined by a prediction method of a multimedia object according to an embodiment of the present disclosure with user behavior duty cycle data of the multimedia object. In fig. 3, a solid line represents a curve of acceleration determined by a prediction method of a multimedia object provided according to an embodiment of the present disclosure, and a dotted line represents a curve of user behavior duty data of the same multimedia object. The ordinate on the left side of fig. 3 is the ordinate corresponding to the solid line, and the ordinate on the right side of fig. 3 is the ordinate corresponding to the broken line. As can be seen from fig. 3, the acceleration determined by the prediction method of the multimedia object according to the embodiment of the present disclosure can significantly capture the change of the user behavior ratio of the multimedia object compared to the user behavior ratio data of the multimedia object.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle and logic; due to space limitations, such combinations are not described in detail in the present disclosure. It will be appreciated by those skilled in the art that, in the above-described methods of the embodiments, the specific order in which the steps are executed should be determined by their functions and possible inherent logic.
In addition, the present disclosure further provides a prediction apparatus for a multimedia object, an electronic device, a computer readable storage medium, and a program, any of which may be used to implement any one of the prediction methods for a multimedia object provided in the present disclosure; for the corresponding technical solutions and technical effects, reference may be made to the corresponding descriptions in the method section, which are not repeated here.
Fig. 4 shows a block diagram of a prediction apparatus for a multimedia object provided by an embodiment of the present disclosure. As shown in fig. 4, the prediction apparatus of a multimedia object includes:
An obtaining module 41, configured to obtain first user behavior duty cycle data of a target multimedia object in a first period of time, and second user behavior duty cycle data of the target multimedia object in a second period of time, where the second period of time is before the first period of time;
A determining module 42, configured to determine a first acceleration of the user behavior duty cycle of the target multimedia object according to the first user behavior duty cycle data and the second user behavior duty cycle data;
And the prediction module 43 is configured to predict a probability that the target multimedia object belongs to a preset type according to the first acceleration.
In one possible implementation manner, the first time period includes at least two first sub-time periods, the first user behavior duty cycle data includes at least two items of first sub-user behavior duty cycle data corresponding to the at least two first sub-time periods one to one, the second time period includes at least two second sub-time periods, and the second user behavior duty cycle data includes at least two items of second sub-user behavior duty cycle data corresponding to the at least two second sub-time periods one to one;
the obtaining module 41 is configured to:
Determining first user behavior duty ratio data of the target multimedia object in a first time period according to the at least two first sub-user behavior duty ratio data;
And determining second user behavior duty ratio data of the target multimedia object in a second time period according to the at least two second sub-user behavior duty ratio data.
In a possible implementation, the obtaining module 41 is configured to
Determining first user behavior ratio data of the target multimedia object in a first time period according to the at least two first sub-user behavior ratio data and at least two first weights corresponding to the at least two first sub-user behavior ratio data one-to-one;
And determining second user behavior ratio data of the target multimedia object in a second time period according to the at least two second sub-user behavior ratio data and at least two second weights corresponding to the at least two second sub-user behavior ratio data one-to-one.
In one possible implementation manner, the first weight corresponding to any one of the at least two items of first sub-user behavior duty ratio data is inversely related to the interval duration between the first sub-time period corresponding to that first sub-user behavior duty ratio data and the current time, and is an exponential smoothing coefficient corresponding to the first sub-time period.
In one possible implementation manner, the second weight corresponding to any one of the at least two second sub-user behavior duty cycle data is positively correlated with the interval duration between the second sub-time period corresponding to the second sub-user behavior duty cycle data and the current time.
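An illustrative sketch of the exponentially smoothed aggregation implied by the first-weight rule follows; the value of the smoothing coefficient alpha and the example ratios are assumptions not fixed by the disclosure.

    from typing import Sequence

    def exponentially_smoothed(sub_ratios: Sequence[float], alpha: float = 0.5) -> float:
        # sub_ratios are ordered from the oldest to the newest sub-time period; newer
        # sub-periods receive larger weights, matching the first-weight rule above.
        weights = [alpha * (1 - alpha) ** k for k in range(len(sub_ratios) - 1, -1, -1)]
        return sum(w * r for w, r in zip(weights, sub_ratios)) / sum(weights)

    first_duty_cycle_data = exponentially_smoothed([0.0012, 0.0015, 0.0022])    # hypothetical ratios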
In one possible implementation, the prediction module 43 is configured to:
Inputting the first acceleration into a pre-trained neural network, and predicting the probability that the target multimedia object belongs to a preset type through the neural network.
In one possible implementation, the apparatus further includes:
An acquisition module, configured to acquire a training object set, wherein the training object set comprises a plurality of training objects, the training objects are multimedia objects, and the training objects are provided with annotation data, the annotation data being used for indicating whether the training objects belong to the preset type;
And the training module is used for training the neural network by adopting the training object set.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementation and technical effects of the functions or modules may refer to the descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. Wherein the computer readable storage medium may be a non-volatile computer readable storage medium or may be a volatile computer readable storage medium.
The disclosed embodiments also propose a computer program comprising computer readable code which, when run in an electronic device, causes a processor in the electronic device to carry out the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, causes a processor in the electronic device to perform the above method.
The embodiment of the disclosure also provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 5 shows a block diagram of another electronic device 1900 provided by an embodiment of the disclosure. Referring to FIG. 5, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as the Microsoft Server operating system (Windows Server TM), the apple Inc. promoted graphical user interface-based operating system (Mac OS X TM), the multi-user, multi-process computer operating system (Unix TM), the free and open source Unix-like operating system (Linux TM), the open source Unix-like operating system (FreeBSD TM), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static Random Access Memory (SRAM), portable compact disk read-only memory (CD-ROM), digital Versatile Disks (DVD), memory sticks, floppy disks, mechanical coding devices, punch cards or in-groove structures such as punch cards or grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as SMALLTALK, C ++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the various embodiments tends to emphasize the differences between the embodiments; for the parts that are the same as or similar to one another, reference may be made between the embodiments, and these parts are not repeated herein for the sake of brevity.
If the technical solution of the embodiments of the present disclosure involves personal information, the product applying the technical solution of the embodiments of the present disclosure clearly informs users of the personal information processing rules and obtains the individuals' independent consent before processing the personal information. If the technical solution of the embodiments of the present disclosure involves sensitive personal information, the product applying the technical solution of the embodiments of the present disclosure obtains the individuals' separate consent before processing the sensitive personal information and, at the same time, meets the requirement of "explicit consent". For example, a clear and conspicuous sign is set at a personal information acquisition device such as a camera to inform that the personal information acquisition range has been entered and that personal information will be acquired; if an individual voluntarily enters the acquisition range, it is deemed that the individual consents to the acquisition of his or her personal information. Alternatively, on a device for processing personal information, in the case that the personal information processing rules are notified by means of obvious signs or information, personal authorization is obtained through pop-up information, by requesting the individual to upload his or her personal information, or the like. The personal information processing rules may include information such as the personal information processor, the purpose of the personal information processing, the processing manner, and the types of personal information to be processed.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. A method of predicting a multimedia object, comprising:
Obtaining first user behavior duty cycle data of a target multimedia object in a first time period and second user behavior duty cycle data of the target multimedia object in a second time period, wherein the second time period is before the first time period; the user behavior ratio data of the target multimedia object represents ratio data of the user behavior data of the target multimedia object to the user behavior data of the same type of the target multimedia object set; the target multimedia object set represents a multimedia object set for predicting a probability that the target multimedia object belongs to a preset type; the first user behavior proportion data represent user behavior proportion data of the target multimedia object in the first time period, and the second user behavior proportion data represent user behavior proportion data of the target multimedia object in the second time period;
determining a first acceleration of the user behavior duty cycle of the target multimedia object according to the first user behavior duty cycle data and the second user behavior duty cycle data;
And predicting the probability that the target multimedia object belongs to a preset type according to the first acceleration.
2. The method of claim 1, wherein the first time period comprises at least two first sub-time periods, the first user behavior duty cycle data comprises at least two first sub-user behavior duty cycle data in one-to-one correspondence with the at least two first sub-time periods, the second time period comprises at least two second sub-time periods, and the second user behavior duty cycle data comprises at least two second sub-user behavior duty cycle data in one-to-one correspondence with the at least two second sub-time periods;
The obtaining the first user behavior duty ratio data of the target multimedia object in the first time period and the second user behavior duty ratio data of the target multimedia object in the second time period comprises the following steps:
Determining first user behavior duty ratio data of the target multimedia object in a first time period according to the at least two first sub-user behavior duty ratio data;
And determining second user behavior duty ratio data of the target multimedia object in a second time period according to the at least two second sub-user behavior duty ratio data.
3. The method according to claim 2, wherein
the determining the first user behavior proportion data of the target multimedia object in the first time period according to the at least two first sub-user behavior proportion data comprises: determining the first user behavior proportion data of the target multimedia object in the first time period according to the at least two first sub-user behavior proportion data and at least two first weights in one-to-one correspondence with the at least two first sub-user behavior proportion data;
the determining the second user behavior proportion data of the target multimedia object in the second time period according to the at least two second sub-user behavior proportion data comprises: determining the second user behavior proportion data of the target multimedia object in the second time period according to the at least two second sub-user behavior proportion data and at least two second weights in one-to-one correspondence with the at least two second sub-user behavior proportion data.
4. The method according to claim 3, wherein the first weight corresponding to any one of the at least two first sub-user behavior proportion data is negatively correlated with the interval duration between the first sub-time period corresponding to that first sub-user behavior proportion data and the current time, and is an exponential smoothing coefficient corresponding to the first sub-time period.
5. The method according to claim 3 or 4, wherein the second weight corresponding to any one of the at least two second sub-user behavior proportion data is positively correlated with the interval duration between the second sub-time period corresponding to that second sub-user behavior proportion data and the current time.
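Claims 3 to 5 constrain the weights rather than prescribe a formula: the first weights shrink as the sub-period moves away from the current time and act as exponential smoothing coefficients, while the second weights grow with that interval. One hypothetical way to generate such weights (the smoothing constant alpha and the normalization are assumptions):

```python
def smoothing_weights(num_sub_periods: int, alpha: float = 0.5,
                      recent_heavier: bool = True) -> list[float]:
    """Exponential-smoothing style weights for sub-periods ordered from the
    most recent (index 0) to the oldest. With recent_heavier=True the weight
    falls as the interval to the current time grows (first weights, claim 4);
    with recent_heavier=False it rises with that interval (second weights, claim 5)."""
    raw = [alpha * (1.0 - alpha) ** i for i in range(num_sub_periods)]
    if not recent_heavier:
        raw.reverse()
    total = sum(raw)
    return [w / total for w in raw]  # normalized so the weights sum to 1


def weighted_proportion(sub_proportions: list[float], weights: list[float]) -> float:
    """Period-level proportion as the weighted sum of the sub-period proportions."""
    return sum(p * w for p, w in zip(sub_proportions, weights))


first_subs = [0.030, 0.028, 0.025]                  # most recent sub-period first
first_weights = smoothing_weights(len(first_subs))  # heavier on recent data
first_period_proportion = weighted_proportion(first_subs, first_weights)
```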
6. The method according to any one of claims 1 to 4, wherein the predicting, according to the first acceleration, the probability that the target multimedia object belongs to the preset type comprises:
inputting the first acceleration into a pre-trained neural network, and predicting, through the neural network, the probability that the target multimedia object belongs to the preset type.
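A small PyTorch sketch of the prediction step in claim 6. The claim only requires a pre-trained neural network; the layer sizes, the single-feature input, and the sigmoid output are editorial assumptions:

```python
import torch
from torch import nn

# Deliberately small network: one scalar feature (the first acceleration) in,
# one probability out. The architecture is an assumption for illustration.
model = nn.Sequential(
    nn.Linear(1, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
    nn.Sigmoid(),   # output read as P(target object belongs to the preset type)
)

def predict_probability(first_acceleration: float) -> float:
    """Feed the first acceleration into the (pre-trained) network."""
    with torch.no_grad():
        x = torch.tensor([[first_acceleration]], dtype=torch.float32)
        return model(x).item()
```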
7. The method of claim 6, wherein, before the inputting the first acceleration into the pre-trained neural network, the method further comprises:
acquiring a training object set, wherein the training object set comprises a plurality of training objects, each training object is a multimedia object and carries annotation data, and the annotation data indicate whether the training object belongs to the preset type;
and training the neural network with the training object set.
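Claim 7's training step, sketched against the same hypothetical network: each training object supplies its first acceleration as the feature and its annotation data as the binary label. The optimizer, loss function, and epoch count are assumptions:

```python
import torch
from torch import nn

def train(model: nn.Module, accelerations: list[float], labels: list[float],
          epochs: int = 200, lr: float = 1e-2) -> None:
    """Fit the network on a training object set whose annotation data indicate
    whether each training object belongs to the preset type (1.0) or not (0.0)."""
    x = torch.tensor(accelerations, dtype=torch.float32).unsqueeze(1)
    y = torch.tensor(labels, dtype=torch.float32).unsqueeze(1)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()                      # binary label: preset type or not
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```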
8. A multimedia object prediction device, comprising:
an obtaining module, configured to obtain first user behavior proportion data of a target multimedia object in a first time period and second user behavior proportion data of the target multimedia object in a second time period, wherein the second time period precedes the first time period; the user behavior proportion data of the target multimedia object represent the ratio of the user behavior data of the target multimedia object to user behavior data of the same type for a target multimedia object set; the target multimedia object set represents the multimedia object set used for predicting the probability that the target multimedia object belongs to a preset type; the first user behavior proportion data represent the user behavior proportion data of the target multimedia object in the first time period, and the second user behavior proportion data represent the user behavior proportion data of the target multimedia object in the second time period;
a determining module, configured to determine a first acceleration of the user behavior proportion of the target multimedia object according to the first user behavior proportion data and the second user behavior proportion data;
and a prediction module, configured to predict, according to the first acceleration, the probability that the target multimedia object belongs to the preset type.
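The device of claim 8 splits the method across three modules. The sketch below is one hypothetical wiring of an obtaining module, a determining module, and a prediction module as plain callables; the names and the placeholder predictor are illustrative only:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class MultimediaObjectPredictor:
    """Hypothetical wiring of claim 8's three modules as plain callables."""
    obtain: Callable[[str], Tuple[float, float]]   # object id -> (first, second) proportion
    predict: Callable[[float], float]              # first acceleration -> probability

    def run(self, object_id: str) -> float:
        first_prop, second_prop = self.obtain(object_id)    # obtaining module
        acceleration = first_prop - second_prop             # determining module
        return self.predict(acceleration)                   # prediction module


# Example with stubbed-in proportions and a trivial predictor.
predictor = MultimediaObjectPredictor(
    obtain=lambda _id: (0.030, 0.010),
    predict=lambda a: min(1.0, max(0.0, 0.5 + 10.0 * a)),   # placeholder, not the trained network
)
probability = predictor.run("video-123")
```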
9. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1 to 7.
11. A computer program product comprising computer-readable code, or a non-transitory computer-readable storage medium carrying computer-readable code, which, when run in an electronic device, causes a processor in the electronic device to perform the method of any one of claims 1 to 7.
CN202210107935.4A 2022-01-28 2022-01-28 Multimedia object prediction method, device, equipment, medium and program product Active CN114519112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210107935.4A CN114519112B (en) 2022-01-28 2022-01-28 Multimedia object prediction method, device, equipment, medium and program product

Publications (2)

Publication Number Publication Date
CN114519112A (en) 2022-05-20
CN114519112B (en) 2024-11-22

Family

ID=81597333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210107935.4A Active CN114519112B (en) 2022-01-28 2022-01-28 Multimedia object prediction method, device, equipment, medium and program product

Country Status (1)

Country Link
CN (1) CN114519112B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164481A (en) * 2011-12-16 2013-06-19 盛乐信息技术(上海)有限公司 Recommendation method and system of video with largest rising trend
CN109255676A (en) * 2018-08-14 2019-01-22 平安科技(深圳)有限公司 Method of Commodity Recommendation, device, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5193624B2 (en) * 2008-02-19 2013-05-08 ルネサスエレクトロニクス株式会社 Data processor
CN109450999A (en) * 2018-10-26 2019-03-08 北京亿幕信息技术有限公司 A kind of cloud cuts account data analysis method and system
CN110442790B (en) * 2019-08-07 2024-05-10 深圳市雅阅科技有限公司 Method, device, server and storage medium for recommending multimedia data
JP2021064283A (en) * 2019-10-16 2021-04-22 富士通株式会社 Storage control device and program
CN112541745B (en) * 2020-12-22 2024-04-09 平安银行股份有限公司 User behavior data analysis method and device, electronic equipment and readable storage medium
CN113590948B (en) * 2021-07-28 2024-03-26 咪咕数字传媒有限公司 Information recommendation method, device, equipment and computer storage medium
CN113806568B (en) * 2021-08-10 2023-11-03 中国人民大学 Multimedia resource recommendation method, device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230822

Address after: Room 101, 8th Floor, Building 12, Yard 16, West Erqi Road, Haidian District, Beijing, 100085

Applicant after: Beijing Dajia Internet Information Technology Co.,Ltd.

Address before: 100000 C302, third floor, building 12, Zhongguancun Software Park, Haidian District, Beijing

Applicant before: Beijing Zhuoyue Lexiang Network Technology Co.,Ltd.

GR01 Patent grant