CN112182391A - User portrait drawing method and device - Google Patents
- Publication number: CN112182391A (application CN202011060473.2A)
- Authority
- CN
- China
- Prior art keywords
- portrait
- label
- data
- user
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/9535—Search customisation based on user profiles and personalisation
- G06F16/951—Indexing; Web crawling techniques
- G06F16/955—Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
- G06F18/24155—Bayesian classification
Abstract
The present application discloses a user portrait method and apparatus. The method comprises: constructing a label rule base in advance, where the label rule base comprises a plurality of portrait labels, each portrait label corresponds to different portrait label values, and each portrait label value corresponds to a prior probability; acquiring multi-scene data of a user to be profiled; matching the multi-scene data of the user to be profiled against the portrait labels of the label rule base to obtain target data; and calculating the optimal portrait label value of each portrait label with a naive Bayes algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched by the target data, and generating a user portrait result from the optimal portrait label values. The apparatus comprises: a preprocessing unit, a data acquisition unit, a data matching unit and a user portrait unit.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a user portrait method and apparatus.
Background
A user portrait, i.e., the labeling of user information, abstracts an overall view of a user by collecting and analyzing data on key information such as the user's static attributes, social attributes and behavior attributes; it is a basic means of supporting big-data applications such as personalized recommendation.
A user portrait is an effective tool for sketching target users and for connecting user appeals with design directions, and is currently widely used in many fields. For example, user portraits can be used to understand users' needs and hobbies in order to optimize a product, or to find matching audiences for advertisement placement.
At present, user portrait methods are mainly based on the behavior of a user visiting a website, and the portrait is built from the user's access information over a period of time. The acquired user data thus comes from few sources and is not rich, so the resulting user portrait is neither truthful nor comprehensive.
Disclosure of Invention
The embodiments of the present application provide a user portrait method and apparatus that improve the accuracy of user portraits.
The embodiment of the application adopts the following technical scheme:
In a first aspect, an embodiment of the present application provides a user portrait method, in which a label rule base is constructed in advance; the label rule base comprises a plurality of portrait labels, each portrait label corresponds to different portrait label values, and each portrait label value corresponds to a prior probability. The method comprises: acquiring multi-scene data of a user to be profiled; matching the multi-scene data of the user to be profiled against the portrait labels of the label rule base to obtain target data; and calculating the optimal portrait label value of each portrait label with a naive Bayes algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched by the target data, and generating a user portrait result from the optimal portrait label values.
In a second aspect, an embodiment of the present application further provides a user portrait apparatus, comprising: a preprocessing unit, configured to construct a label rule base in advance, where the label rule base comprises a plurality of portrait labels, each portrait label corresponds to different portrait label values, and each portrait label value corresponds to a prior probability; a data acquisition unit, configured to acquire multi-scene data of a user to be profiled; a data matching unit, configured to match the multi-scene data of the user to be profiled against the portrait labels of the label rule base to obtain target data; and a user portrait unit, configured to calculate the optimal portrait label value of each portrait label with a naive Bayes algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched by the target data, and to generate a user portrait result from the optimal portrait label values.
In a third aspect, an embodiment of the present application further provides an electronic device, comprising: a processor; and a memory arranged to store computer-executable instructions, the memory further storing a pre-built label rule base comprising a plurality of portrait labels, each portrait label corresponding to different portrait label values and each portrait label value corresponding to a prior probability. The executable instructions, when executed, cause the processor to: acquire multi-scene data of a user to be profiled; match the multi-scene data of the user to be profiled against the portrait labels of the label rule base to obtain target data; and calculate the optimal portrait label value of each portrait label with a naive Bayes algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched by the target data, and generate a user portrait result from the optimal portrait label values.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing one or more programs that, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to: acquire multi-scene data of a user to be profiled; match the multi-scene data of the user to be profiled against the portrait labels of the label rule base to obtain target data; and calculate the optimal portrait label value of each portrait label with a naive Bayes algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched by the target data, and generate a user portrait result from the optimal portrait label values.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects:
On the one hand, the portrait labels in the label rule base are constructed with prior probabilities; when a user is profiled, the prior probabilities of the portrait label values in the label rule base serve as the priors of a naive Bayes algorithm for calculating the optimal portrait label value of each portrait label of the user to be profiled. The robustness of the naive Bayes algorithm limits the influence of individual user differences on portrait precision and improves the accuracy of the user portrait. On the other hand, multi-scene data of the user is acquired and matched against the portrait labels in the label rule base, so that the target data corresponding to each matched portrait label is multi-dimensional data across multiple scenes; this expresses user attributes more accurately and further improves the accuracy of the user portrait.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow diagram of a user portrayal method shown in an embodiment of the present application;
FIG. 2 is a flow chart of multi-dimensional user profiling using the label rule base according to an embodiment of the present application;
FIG. 3 is a block diagram of a user portrait apparatus shown in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a user portrayal method shown in an embodiment of the present application, the method shown in FIG. 1 including the steps of:
and S100, a label rule base is constructed in advance, the label rule base comprises a plurality of portrait labels, each portrait label corresponds to different portrait label values, and each portrait label value corresponds to a prior probability.
The label rule base in this embodiment is dynamic and supports operations such as adding, deleting and modifying portrait labels.
Step S110: multi-scene data of the user to be profiled is acquired.
In this step, a deep packet inspection (DPI) tool and/or a crawler tool may be used to obtain one or more of the internet behavior data, location data, communication data, device attribute data and real-name basic data of the user to be profiled within a set time. For example, DPI is used to capture, from the source address of the user to be profiled, the user's internet behavior data, location data and communication data within one week; after the captured data is normalized to a uniform format, the result is the multi-scene data of the user to be profiled, which describes the user in multiple dimensions.
Step S120: the multi-scene data of the user to be profiled is matched against the portrait labels of the label rule base to obtain target data.
Step S130: according to the prior probability of each portrait label value corresponding to each portrait label matched by the target data, the optimal portrait label value of each portrait label is calculated with a naive Bayes algorithm, and a user portrait result is generated from the optimal portrait label values.
As shown in fig. 1, on the one hand, the portrait labels in the label rule base are constructed with prior probabilities; when a user is profiled, these priors are used as the priors of the naive Bayes algorithm to calculate the optimal portrait label value of each portrait label of the user to be profiled, and the robustness of the algorithm limits the influence of individual user differences on portrait precision, improving accuracy. On the other hand, multi-scene data of the user is acquired and matched against the portrait labels in the label rule base, so that the target data corresponding to each matched portrait label is multi-dimensional data across multiple scenes, which expresses user attributes more accurately and further improves the accuracy of the user portrait.
The label rule base is the basis of the user portrait. Its construction is explained in detail in the following embodiments, before the user portrait method based on the label rule base is introduced.
In one embodiment, the tag rule base is constructed by:
First, multi-scene portrait sample data is acquired, including but not limited to internet behavior data, location data, communication data, device attribute data and real-name basic data.
In this embodiment, sample user data may be acquired; for example, data that uniquely identifies a user, such as a mobile phone number or an identity card number, serves as sample user data, and the multi-scene data of the sample user is matched on that basis. For instance, a crawler tool matches the user's communication data, device data, social data and shopping data across scenes against the user's mobile phone number, and the matched multi-scene data is used as portrait sample data.
Then, the label rules and scene flags of the label rule base are extracted from the multi-scene portrait sample data, where a label rule indicates the manner of acquiring portrait sample data and a scene flag indicates the specified scene from which that data is acquired.
Since this embodiment extracts the scene flags from the multi-scene portrait sample data, the specified scene indicated by a scene flag must be contained in that sample data. Taking scenes that generate internet behavior data as an example, the specified scenes indicated by the corresponding scene flags include, but are not limited to, e-commerce platforms, social platforms and multimedia platforms.
When the label rule base is constructed, a label rule indicates the manner of acquiring portrait sample data and a scene flag indicates the specified scene of that sample data; when the target data of the user to be profiled is acquired, the label rule indicates the manner of acquiring the user's multi-scene data and the scene flag indicates the data of the user's specified scene.
The data acquisition manner indicated by a label rule includes, but is not limited to, fuzzy matching and exact matching. Fuzzy matching selects, from the portrait sample data, specified-scene data containing the keyword field indicated by the label rule; exact matching selects the scene data whose specified scene is exactly the name field of the label rule.
For example, an education portrait label may read: bachelor degree and above - <Xx Postgraduate-Exam APP> - APP; bachelor degree and above - *postgraduate exam* - APP. Here "bachelor degree and above" is the portrait label value, "<Xx Postgraduate-Exam APP>" and "*postgraduate exam*" are two label rules, and "APP" is the specified scene of the portrait label. The label with rule "<Xx Postgraduate-Exam APP>" is understood as: extract the scene data of the Xx Postgraduate-Exam APP from the portrait sample data as the data matching this label, so the corresponding rule is an exact matching rule. The label with rule "*postgraduate exam*" is understood as: extract the scene data of all postgraduate-exam APPs from the portrait sample data as the data matching this label, so the corresponding rule is a fuzzy matching rule.
It can be understood that, to better distinguish the data matching manners specified by label rules, this embodiment sets an identifier for each matching manner, for example "< >" for exact matching and "*" for fuzzy matching.
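The two matching manners above can be sketched as follows. This is a minimal illustration, assuming a rule string wrapped in "< >" means exact matching on the scene name and one wrapped in "*" means fuzzy keyword matching; the record format and function names are not from the patent.

```python
# Hypothetical sketch of the two matching modes a label rule may specify:
# "<name>"  -> exact matching: the scene field must equal the name
# "*kw*"    -> fuzzy matching: the scene field must contain the keyword
def rule_matches(rule: str, scene_field: str) -> bool:
    if rule.startswith("<") and rule.endswith(">"):
        return scene_field == rule[1:-1]      # exact matching on the full name
    if rule.startswith("*") and rule.endswith("*"):
        return rule[1:-1] in scene_field      # fuzzy matching on a keyword
    return False

def match_target_data(records, rules):
    # A record becomes target data if its scene field satisfies any rule of the label.
    return [r for r in records if any(rule_matches(rule, r["scene"]) for rule in rules)]
```

Applied to the education example above, the exact rule selects only the named APP while the fuzzy rule selects every postgraduate-exam APP.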
Then, according to the multi-scene portrait sample data, the probability of each portrait label value under each related label rule and scene flag is counted; this probability is the prior probability of the portrait label value. The purpose of generating the priors is to supply them to the subsequent naive Bayes algorithm, which performs the user portrait.
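One simple way to count such a prior is relative frequency over the labeled sample data. The patent only states that a probability is counted per rule and scene, so the counting scheme below is an assumption:

```python
from collections import Counter

def estimate_priors(sample_values):
    """Estimate P(label value) as the relative frequency of each value
    observed in the portrait sample data for one portrait label."""
    counts = Counter(sample_values)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}
```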
It can be seen that a portrait label in this embodiment may correspond to multiple label rules, and a label rule may correspond to multiple scene flags; that is, portrait labels and label rules are in a many-to-many relationship, as are label rules and scene flags.
In this embodiment, the multi-scene portrait sample data mainly has two sources. One part is data fed back from industry marketing results, which has been verified to be very accurate. The other part is extracted from multi-scene historical data: the portrait sample data of this part undergoes full-scene analysis, the intersection of the multi-scene data under the corresponding label rule is calculated, and the intersection is the portrait sample data of the corresponding portrait label value under that rule. For example, to obtain sample data for the "gender-female" portrait label, one can search for the intersection of the multiple scenes in which women's clothing, cosmetics, and mother-and-baby products are browsed many times; the resulting intersection data is the multi-scene portrait sample data for the "gender-female" label value.
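The intersection step can be sketched as a set intersection over the per-scene user sets; this is an illustrative reading of the text, not a fixed interface from the patent:

```python
# Sketch: a user counts as a "gender-female" sample only when the user
# appears in every relevant scene (e.g. women's clothing, cosmetics,
# mother-and-baby), i.e. in the intersection of the per-scene user sets.
def intersect_scene_users(*scene_user_sets):
    return set.intersection(*map(set, scene_user_sets))
```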
The accuracy of the portrait sample data determines the accuracy of the prior probability of each portrait label value under the corresponding label rule; only when the portrait sample data is accurate can the label rules counted from it, and the priors corresponding to those rules, be accurate.
Finally, portrait labels with prior probabilities are generated in the label rule base from the portrait label value, label rule, scene flag and prior probability; a generated portrait label reads: portrait label value - label rule with matching-manner identifier - scene flag - prior probability.
To enrich the label rule base from multiple sources, in one embodiment a portrait label value can also be formulated from business requirements, such as preference requirements; a portrait label without a prior probability is then generated in the label rule base from the portrait label value, label rule and scene flag, and reads: portrait label value - label rule with matching-manner identifier - scene flag.
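The two entry layouts above (with and without a prior) can be modeled with one record type where the prior is optional. Field names here are illustrative, not from the patent:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical layout of one entry in the label rule base, following the
# field order described above.
@dataclass
class PortraitLabelEntry:
    label: str                     # portrait label, e.g. "education"
    label_value: str               # portrait label value, e.g. "bachelor degree and above"
    rule: str                      # label rule with matching-manner identifier
    scene: str                     # scene flag, e.g. "APP"
    prior: Optional[float] = None  # prior probability; None for business-defined labels
```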
The constructed label rule base therefore contains portrait labels with and without prior probabilities. After the portrait labels are generated, whether the label rules in the base are reasonable needs to be checked regularly. For example, the active-user ratio corresponding to each label rule is counted; when the ratio for a rule exceeds a check threshold, early-warning information is generated, the flagged rule is verified according to that information, and invalid label rules are deleted from the label rule base.
The active-user ratio corresponding to a label rule can be understood as: for the portrait sample data acquired by that rule, the ratio of the number of users who generated that part of the sample data to the total number of users who generated all the portrait sample data.
In one example, the label rules in the base are checked at a predetermined frequency (for example, monthly): the active-user ratio of a label rule of a portrait label is counted, and if it exceeds 50%, warning information is issued for that rule. For example, the monthly-active ratio is counted for a rule whose field value is a certain e-commerce platform with the exact-matching identifier "< >": the monthly active users of the rule are compared with the total monthly active users, which can be understood as the total number of internet users in a month. Suppose there are 1.4 billion active users in a month in total and the e-commerce platform shows 700 million, a ratio of 50%; the number for this platform is too large, the parsing of the platform's underlying data is probably wrong, and the real figure is not that high. At this point, early-warning information can be issued; the underlying URL (Uniform Resource Locator) is checked according to the warning, and whether the label rule is valid is then determined.
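The periodic check can be sketched as follows; the 50% threshold is the example value given in the text, and the input format is an assumption:

```python
# Sketch of the periodic rule sanity check: flag rules whose active-user
# ratio exceeds the check threshold, so they can be manually verified.
def check_label_rules(active_users_per_rule, total_active_users, threshold=0.5):
    """active_users_per_rule: {rule: monthly active users matched by the rule}.
    Returns the rules that trigger early-warning information."""
    return [rule for rule, n in active_users_per_rule.items()
            if n / total_active_users > threshold]
```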
Through the above steps, this embodiment constructs a label rule base for user portraits. The label rule base is established per scene and stores portrait label data in a standardized format. A portrait label mainly comprises the fields portrait label value, label rule, scene flag and prior probability. Portrait labels include, but are not limited to, gender, age, city and education; the portrait label values are determined by the label, for example the gender label values are "male" and "female"; the label rules specify how data is fetched across scenes; the scene flags distinguish which part of a scene a rule fetches from; and the prior probability is the data later supplied to naive Bayes.
After the label rule base is constructed, data processing is performed with the label rules on the multi-scene data of the user to be profiled, including data matching, data denoising, data summarization and ranking calculation. That is: the target data is denoised by filtering out data whose user-attribute metric is below a preset threshold; the denoised data is summarized per user attribute to obtain summarized data for each attribute; and the user to be profiled is profiled according to the portrait labels corresponding to the summarized data.
The processes of data matching, data denoising, data summarization, calculation sorting, and the like are specifically described with reference to fig. 2.
(1) Data matching process
Referring to fig. 2, the multi-scene data of the user to be profiled is matched against the portrait labels of the label rule base to obtain target data. Target data is acquired in two matching manners. One is the exact matching indicated by the label rule in a portrait label: the multi-scene data is matched exactly through the field value of the rule (here the field value is a specific name and the identifier is the exact-matching identifier "< >") and the specified scene field of the scene flag, and the matched data is the target data. The other is the fuzzy matching indicated by the label rule: the multi-scene data is matched through the field value of the rule (here the field value is a keyword and the identifier is the fuzzy-matching identifier "*") and the specified scene field of the scene flag, and whatever matches becomes target data.
(2) Data de-noising processing
The target data is denoised: data whose user-attribute metric is below a preset threshold is filtered out, leaving the denoised data. User-attribute metrics include, but are not limited to, click amount, visit days and data categories. The click amount is the sum of the user's clicks on a platform. The visit days are the days the platform was visited within a period, deduplicated, i.e. multiple visits in one day count as 1 day. The data category is the category of the access data a user generates on a platform; assuming the target data matches a preference portrait label whose value is "Mercedes-Benz", the data may fall into two categories, one related to the "Mercedes C-Class" and the other to the "Mercedes E-Class", so the category count of the target data is 2. Denoising mainly filters out inaccurate target data, for example data with a click amount of 1 within the specified period: such data is very likely a misoperation and cannot accurately reflect the user's portrait label.
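The filtering step can be sketched as a threshold filter. The click threshold of 2 reflects the "click amount of 1 is probably a misoperation" example above; the other threshold and the record fields are assumptions:

```python
# Sketch: drop records whose user-attribute metrics fall below preset
# thresholds; what survives is the denoised data.
def denoise(records, min_clicks=2, min_visit_days=1):
    return [r for r in records
            if r["clicks"] >= min_clicks and r["visit_days"] >= min_visit_days]
```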
(3) Data summarization
The denoised data is summarized per user attribute to obtain summarized data for each attribute; for example, the denoised data within a period is summarized by click amount, visit days, visit time and data category. The visit time is the time of the user's visits to a platform; summarizing by visit time includes, but is not limited to, sorting the denoised data chronologically by the visit time of each record, for example from the time closest to the present to the next closest, and so on.
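A minimal aggregation sketch, assuming records carry a label value, a click count and an ISO date string (so `max()` yields the latest date); field names are illustrative:

```python
from collections import defaultdict

# Sketch: aggregate denoised records per portrait label value, summing
# clicks, counting deduplicated visit days and keeping the latest visit date.
def summarize(records):
    agg = defaultdict(lambda: {"clicks": 0, "days": set(), "last_visit": ""})
    for r in records:
        entry = agg[r["label_value"]]
        entry["clicks"] += r["clicks"]
        entry["days"].add(r["date"])          # several visits in one day count once
        entry["last_visit"] = max(entry["last_visit"], r["date"])
    return {v: {"clicks": e["clicks"], "visit_days": len(e["days"]),
                "last_visit": e["last_visit"]} for v, e in agg.items()}
```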
(4) Calculating rank
When the user to be profiled is profiled according to the portrait labels corresponding to the summarized data: if the portrait label has a prior probability, the optimal portrait label value is calculated with the prior corresponding to the summarized data and the naive Bayes algorithm; if the portrait label has no prior probability, the optimal portrait label value is determined from the user-attribute metrics of the summarized data; and a user portrait table of the user is generated from the optimal value of each portrait label.
In this embodiment, each tag rule in the tag rule base is independent, so for summarized data that has a prior probability, once that data has been summarized under the tag rules, the optimal portrait label value can be calculated with naive Bayes. Taking the gender portrait label as an example, its portrait label values are "male" and "female". Given the various internet behavior data of user A, predicting whether user A is male or female only requires calculating the probabilities P(gender = male | internet behaviors) and P(gender = female | internet behaviors) and comparing them; the gender with the larger probability is the predicted optimal portrait label value for user A.
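The gender comparison can be sketched as follows; the prior and conditional probability numbers below are invented for illustration only — in the patent they would come from the statistics in the tag rule base:

```python
# Hypothetical naive Bayes comparison for the gender label. Because the
# tag rules are independent, the unnormalized posterior is the prior
# times the product of per-behavior likelihoods.
def posterior(prior, likelihoods):
    """Unnormalized P(gender | behaviors) = P(gender) * prod P(behavior | gender)."""
    p = prior
    for lk in likelihoods:
        p *= lk
    return p

# Assumed probabilities for two observed behaviors of user A.
p_male = posterior(0.5, [0.8, 0.6])    # P(male) * P(b1|male) * P(b2|male)
p_female = posterior(0.5, [0.3, 0.4])  # P(female) * P(b1|female) * P(b2|female)
best = "male" if p_male > p_female else "female"
```

Only the comparison matters, so the shared normalizing constant P(behaviors) can be dropped, which is why unnormalized posteriors suffice here.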
For summarized data without a prior probability, the naive Bayes algorithm cannot be used to calculate the optimal portrait label value; this embodiment instead uses a multidimensional sorting method to obtain the optimal portrait label value of the portrait label matched by the summarized data.
For example, the summarized data are first sorted in descending order by number of access days, then number of data categories, then click volume, and the portrait label value ranked first is selected; the data are then sorted in descending order by number of data categories, then number of access days, then click volume, and another first-ranked portrait label value is selected. If the two portrait label values are the same, that value is determined to be the optimal portrait label value of the portrait label; if they differ, the portrait label value with the most recent access time is selected as the optimal portrait label value.
Assume the summarized data match portrait label A, which has no prior probability and has two portrait label values, a1 and a2. A first round of descending sorting is performed by number of access days, then data category, then click volume. If, on number of access days, the value for a1 exceeds the value for a2, a1 is the first-round result. If the numbers of access days for a1 and a2 are equal, a1 and a2 are compared by number of data categories: if unequal, the larger one is the first-round result; if equal, the comparison continues with click volume until a first-round result is obtained. A second round of descending sorting is then performed by data category, then number of access days, then click volume, yielding a second-round result. If both rounds yield portrait label value a1, the optimal portrait label value of the portrait label corresponding to the summarized data is determined to be a1.
If the first-round and second-round results differ, one being portrait label value a1 and the other a2, the portrait label value corresponding to the most recently generated user data may be used as the optimal portrait label value for the portrait label corresponding to the summarized data.
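The two-round sort with its tie-breaks could be sketched like this; the attribute names and the summary structure are assumptions carried over for illustration:

```python
# Hypothetical two-round multidimensional sort. Lexicographic tuple
# comparison gives exactly the "compare next attribute on a tie" behavior
# described above.
def best_label_value(summary):
    """summary maps label value -> {'days', 'cats', 'clicks', 'latest'}."""
    def top(order):
        return max(summary, key=lambda v: tuple(summary[v][a] for a in order))
    round1 = top(("days", "cats", "clicks"))   # access days, categories, clicks
    round2 = top(("cats", "days", "clicks"))   # categories, access days, clicks
    if round1 == round2:
        return round1
    # Rounds disagree: pick the value with the most recent access time.
    return max((round1, round2), key=lambda v: summary[v]["latest"])

summary = {
    "a1": {"days": 6, "cats": 1, "clicks": 9, "latest": "2020-09-28"},
    "a2": {"days": 5, "cats": 3, "clicks": 4, "latest": "2020-09-30"},
}
winner = best_label_value(summary)
```

In this example round 1 picks a1 (more access days) while round 2 picks a2 (more categories), so the recency tie-break decides; ISO-format date strings compare correctly as plain strings.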
The above shows one specific way, when the portrait label corresponding to the summarized data has no prior probability, to determine the optimal portrait label value of that label from the user attribute metric values corresponding to the summarized data. Of course, this step may be implemented in other ways; the embodiment of this application is not limited thereto.
By the above method, this embodiment can calculate the optimal label value of each portrait label of the user to be profiled. After the optimal portrait label values are obtained, contradiction verification is performed on the portrait labels of the user, and labels whose optimal values contradict each other are accepted or rejected to obtain contradiction-verified portrait labels. For example, if the child-age label value derived from the user's multi-scene data is 0-6 while the user-age label value derived from the same data is 18, the two label values are contradictory, and the two portrait labels can be accepted or rejected based on application requirements.
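One possible contradiction rule for the child-age/user-age example could be sketched as below. The patent does not define the rule itself, so the minimum-parent-age cutoff here is entirely hypothetical:

```python
# Hypothetical contradiction check: flag a conflict if the user would have
# been younger than some minimum parenting age when the oldest possible
# child in the range was born. The cutoff of 14 is an invented assumption.
def contradictory(child_age_range, user_age, min_parent_age=14):
    """True if the child-age label conflicts with the user-age label."""
    _lo, hi = child_age_range
    return user_age - hi < min_parent_age

conflict = contradictory((0, 6), 18)  # the 0-6 child vs. 18-year-old case
```

Under this assumed rule, an 18-year-old with a child up to age 6 is flagged (they would have been 12 at the birth), matching the example in the text, while a 30-year-old with the same child range is not.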
A user portrait table is then generated from the contradiction-verified portrait labels and their optimal portrait label values to describe the user in multiple dimensions. That is, all portrait labels of the user can be displayed in one row of data, finally producing a wide table that is provided to the relevant personnel. Portrait labels include, but are not limited to, gender, age, marital status, presence of children, child's age, education level, home province, home city, car ownership, car preference, and music preference.
Fig. 3 is a block diagram of a user portrait apparatus according to an embodiment of the present application, and as shown in fig. 3, the apparatus 300 includes:
the preprocessing unit 310 is configured to pre-construct a tag rule base, where the tag rule base includes a plurality of portrait tags, each portrait tag corresponds to a different portrait tag value, and each portrait tag value corresponds to a prior probability;
a data obtaining unit 320, configured to obtain multi-scene data of a user to be imaged;
the data matching unit 330 is configured to match multi-scene data of the user to be imaged with the image tag of the tag rule base to obtain target data;
and the user portrait unit 340 is configured to calculate an optimal portrait label value of each portrait label by using a naive bayesian algorithm according to a prior probability of each portrait label value corresponding to each portrait label matched with the target data, and generate a user portrait result according to the optimal portrait label value.
In some embodiments, the preprocessing unit 310 is configured to obtain portrait sample data of multiple scenes and extract the tag rules and scene marks of the tag rule base from that sample data, where a tag rule indicates how data of a specified scene is obtained and a scene mark indicates the specified scene; to count, from the portrait sample data, the probability value of each portrait label value of each portrait label under the relevant tag rules and scene marks, this probability value being the prior probability of the portrait label value; and to generate the portrait labels with prior probabilities in the tag rule base from the portrait label values, tag rules, scene marks, and prior probabilities.
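A minimal sketch of how such prior probabilities might be counted from portrait sample data — the field name `gender` and the sample records are assumed examples, not from the patent:

```python
# Hypothetical prior-probability counting: the empirical frequency of each
# portrait label value in the sample data serves as its prior.
from collections import Counter

def label_priors(samples, label="gender"):
    """Empirical P(label value), used as the prior in the naive Bayes step."""
    counts = Counter(s[label] for s in samples)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

samples = [{"gender": "male"}] * 3 + [{"gender": "female"}]
priors = label_priors(samples)
```

In a real rule base these counts would be conditioned on the relevant tag rule and scene mark rather than taken globally, as the statistics step above describes.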
In some embodiments, the preprocessing unit 310 is further configured to formulate a portrait label value for the portrait label according to the business requirement; and generating the portrait label without prior probability in the label rule base according to the portrait label value, the label rule and the scene mark.
In some embodiments, the apparatus 300 further includes an early warning unit, configured to count an active user amount ratio corresponding to each tag rule in the tag rule base after generating the portrait tags in the tag rule base; when the proportion of the active user amount corresponding to the label rule exceeds a detection threshold value, generating early warning information; and checking whether the label rule with the active user quantity ratio exceeding the detection threshold value is effective or not according to the early warning information, and deleting the invalid label rule from the label rule base.
In some embodiments, user representation unit 340 includes a denoising module, a summarizing module, and a calculating module;
the denoising module is used for denoising the target data and filtering out data with the user attribute metric value smaller than a preset threshold value to obtain denoised data;
the summarizing module is used for summarizing the denoised data according to the user attributes to obtain summarized data corresponding to each user attribute;
and the calculation module is used for performing user portrait on the user to be portrait according to the portrait label corresponding to the summarized data.
In some embodiments, the calculation module is further configured to calculate, when the portrait label corresponding to the summarized data has a prior probability, an optimal portrait label value of the portrait label of the user to be pictured by using the prior probability corresponding to the summarized data and a naive bayesian algorithm; when the portrait label corresponding to the summarized data does not have the prior probability, determining the optimal portrait label value of the portrait label of the user to be pictured according to the user attribute metric value corresponding to the summarized data; and generating a user portrait table of the user to be pictured according to the optimal portrait label value of each portrait label.
In some embodiments, the calculation module is further configured to, after obtaining the optimal portrait label value of each portrait label, perform contradiction verification on each portrait label of the user to be portrait, and perform rejection processing on the portrait labels with the inconsistent optimal portrait label values to obtain the portrait labels subjected to the contradiction verification; and generating a user portrait table to perform multi-dimensional description on the user to be portrait according to the portrait label subjected to the contradictory verification and the optimal portrait label value thereof.
In some embodiments, the data obtaining unit 320 is configured to obtain, by using a deep packet inspection tool and/or a crawler tool, one or more of internet behavior data, location data, communication data, device attribute data, and real-name system basic data of a user to be depicted within a set time; and carrying out format unification treatment on the obtained data to obtain multi-scene data of the user to be imaged.
It can be understood that the user representation apparatus can implement the steps of the user representation method provided in the foregoing embodiments, and the related explanations regarding the user representation method are applicable to the user representation apparatus, and are not repeated herein.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 4, at the hardware level the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include internal memory, such as random-access memory (RAM), and may further include non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 4, but that does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the memory and runs the computer program to form the user portrait apparatus on a logical level. The processor is used for executing the program stored in the memory and is specifically used for executing the following operations:
acquiring multi-scene data of a user to be imaged;
matching multi-scene data of a user to be imaged with an image label of a label rule base to obtain target data;
and calculating the optimal portrait label value of each portrait label by using a naive Bayes algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched with the target data, and generating a user portrait result according to the optimal portrait label value.
The method performed by the user-portrait apparatus as disclosed in the embodiment of FIG. 1 of the present application may be implemented in or by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, and registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware.
The electronic device may also perform the method performed by the user representation apparatus in fig. 1, and implement the functions of the user representation apparatus in the embodiment shown in fig. 1, which are not described herein again.
Embodiments of the present application further provide a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which, when executed by an electronic device including a plurality of application programs, enable the electronic device to perform the method performed by the user portrait apparatus in the embodiment shown in fig. 1, and are specifically configured to perform:
acquiring multi-scene data of a user to be imaged;
matching multi-scene data of a user to be imaged with an image label of a label rule base to obtain target data;
and calculating the optimal portrait label value of each portrait label by using a naive Bayes algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched with the target data, and generating a user portrait result according to the optimal portrait label value.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer-readable medium, such as random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (10)
1. A user portrait method is characterized in that a label rule base is constructed in advance, the label rule base comprises a plurality of portrait labels, each portrait label corresponds to a different portrait label value, and each portrait label value corresponds to a priori probability, and the method comprises the following steps:
acquiring multi-scene data of a user to be imaged;
matching multi-scene data of a user to be imaged with an image label of a label rule base to obtain target data;
and calculating the optimal portrait label value of each portrait label by using a naive Bayes algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched with the target data, and generating a user portrait result according to the optimal portrait label value.
2. The method of claim 1, wherein the method of building a tag rule base comprises:
acquiring multi-scene portrait sample data, and extracting various tag rules and scene marks of a tag rule base according to the multi-scene portrait sample data, wherein the tag rules are used for indicating a mode of acquiring portrait sample data, and the scene marks are used for indicating to acquire portrait sample data of a specified scene;
counting the probability value of the portrait label value of each portrait label under each relevant label rule and relevant scene mark according to the portrait sample data, wherein the probability value is the prior probability of the portrait label value;
and generating the portrait label with the prior probability in the label rule base according to the portrait label value, the label rule, the scene mark and the prior probability.
3. The method of claim 2, wherein the method of building a tag rule base further comprises:
formulating an image label value of the image label according to business requirements;
and generating the portrait label without prior probability in the label rule base according to the portrait label value, the label rule and the scene mark.
4. The method of claim 2 or 3, after generating the portrait tags in the tag rule base, further comprising:
counting the active user quantity ratio corresponding to each label rule in the label rule base;
when the proportion of the active user amount corresponding to the label rule exceeds a detection threshold value, generating early warning information;
and checking whether the label rule with the active user quantity ratio exceeding the detection threshold value is effective or not according to the early warning information, and deleting the invalid label rule from the label rule base.
5. The method of claim 3, wherein computing an optimal portrait label value for each portrait label using a naive Bayes algorithm based on a prior probability of each portrait label value corresponding to each portrait label matching the target data comprises:
denoising the target data, and filtering out data with the user attribute metric value smaller than a preset threshold value to obtain denoised data;
summarizing the denoised data according to the user attributes to obtain summarized data corresponding to each user attribute;
and performing user portrait on the user to be portrait according to the portrait label corresponding to the summarized data.
6. The method of claim 5, wherein user-profiling a user to be profiled according to a profile tag corresponding to the summary data comprises:
when the portrait label corresponding to the summarized data has the prior probability, calculating the optimal portrait label value of the portrait label of the user to be pictured by utilizing the prior probability corresponding to the summarized data and a naive Bayes algorithm;
when the portrait label corresponding to the summarized data does not have the prior probability, determining the optimal portrait label value of the portrait label of the user to be pictured according to the user attribute metric value corresponding to the summarized data;
and generating a user portrait table of the user to be pictured according to the optimal portrait label value of each portrait label.
7. The method of claim 6, after obtaining the optimal portrait label values for each portrait label, further comprising:
carrying out contradiction verification on each portrait label of a user to be portrait, and performing rejection processing on the portrait labels with the contradictory optimal portrait label values to obtain the portrait labels subjected to the contradiction verification;
and generating a user portrait table to perform multi-dimensional description on the user to be portrait according to the portrait label subjected to the contradictory verification and the optimal portrait label value thereof.
8. The method of claim 1, wherein obtaining multi-scene data of a user to be rendered comprises:
acquiring one or more of internet behavior data, position data, communication data, equipment attribute data and real-name system basic data of the user to be represented within a set time by using a deep packet inspection tool and/or a crawler tool;
and carrying out format unification treatment on the obtained data to obtain multi-scene data of the user to be imaged.
9. A user-portrait apparatus, comprising:
the system comprises a preprocessing unit, a label rule base and a display unit, wherein the preprocessing unit is used for constructing the label rule base in advance, the label rule base comprises a plurality of portrait labels, each portrait label corresponds to different portrait label values, and each portrait label value corresponds to a priori probability;
the data acquisition unit is used for acquiring multi-scene data of a user to be imaged;
the data matching unit is used for matching the multi-scene data of the user to be imaged with the image label of the label rule base to obtain target data;
and the user portrait unit is used for calculating the optimal portrait label value of each portrait label by using a naive Bayesian algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched with the target data, and generating a user portrait result according to the optimal portrait label value.
10. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions, the memory further storing a pre-built tag rule base comprising a plurality of portrait tags, each portrait tag corresponding to a different portrait tag value, each portrait tag value corresponding to a prior probability, the executable instructions when executed cause the processor to:
acquiring multi-scene data of a user to be imaged;
matching multi-scene data of a user to be imaged with an image label of a label rule base to obtain target data;
and calculating the optimal portrait label value of each portrait label by using a naive Bayes algorithm according to the prior probability of each portrait label value corresponding to each portrait label matched with the target data, and generating a user portrait result according to the optimal portrait label value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011060473.2A CN112182391A (en) | 2020-09-30 | 2020-09-30 | User portrait drawing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011060473.2A CN112182391A (en) | 2020-09-30 | 2020-09-30 | User portrait drawing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112182391A true CN112182391A (en) | 2021-01-05 |
Family
ID=73946295
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011060473.2A Pending CN112182391A (en) | 2020-09-30 | 2020-09-30 | User portrait drawing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112182391A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112765146A (en) * | 2021-01-26 | 2021-05-07 | 四川新网银行股份有限公司 | Method for monitoring data quality of user portrait label |
CN112883269A (en) * | 2021-02-26 | 2021-06-01 | 上海连尚网络科技有限公司 | Method and equipment for adjusting label data information |
CN113313344A (en) * | 2021-04-13 | 2021-08-27 | 武汉烽火众智数字技术有限责任公司 | Label system construction method and system fusing multiple modes |
CN113901084A (en) * | 2021-09-17 | 2022-01-07 | 作业帮教育科技(北京)有限公司 | User portrait real-time generation device and method and electronic equipment |
EP4123479A3 (en) * | 2021-12-30 | 2023-05-17 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method and apparatus for denoising click data, electronic device and storage medium |
CN117407809A (en) * | 2023-12-01 | 2024-01-16 | 广东铭太信息科技有限公司 | Audit management system and method based on multi-user portrait fusion |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105893407A (en) * | 2015-11-12 | 2016-08-24 | 乐视云计算有限公司 | Individual user portraying method and system |
CN106354519A (en) * | 2016-09-30 | 2017-01-25 | 乐视控股(北京)有限公司 | Method and device for generating label for user portrait |
WO2017157146A1 (en) * | 2016-03-15 | 2017-09-21 | 平安科技(深圳)有限公司 | User portrait-based personalized recommendation method and apparatus, server, and storage medium |
CN109903097A (en) * | 2019-03-05 | 2019-06-18 | 云南电网有限责任公司信息中心 | A kind of user draws a portrait construction method and user draws a portrait construction device |
CN111210326A (en) * | 2019-12-27 | 2020-05-29 | 大象慧云信息技术有限公司 | Method and system for constructing user portrait |
- 2020-09-30: CN202011060473.2A patent application filed in CN; published as CN112182391A, legal status Pending
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112765146A (en) * | 2021-01-26 | 2021-05-07 | 四川新网银行股份有限公司 | Method for monitoring data quality of user portrait label |
CN112765146B (en) * | 2021-01-26 | 2022-10-21 | 四川新网银行股份有限公司 | Method for monitoring data quality of user portrait label |
CN112883269A (en) * | 2021-02-26 | 2021-06-01 | 上海连尚网络科技有限公司 | Method and equipment for adjusting label data information |
CN112883269B (en) * | 2021-02-26 | 2024-05-31 | 上海连尚网络科技有限公司 | A method and device for adjusting label data information |
CN113313344A (en) * | 2021-04-13 | 2021-08-27 | 武汉烽火众智数字技术有限责任公司 | Label system construction method and system fusing multiple modes |
CN113901084A (en) * | 2021-09-17 | 2022-01-07 | 作业帮教育科技(北京)有限公司 | User portrait real-time generation device and method and electronic equipment |
EP4123479A3 (en) * | 2021-12-30 | 2023-05-17 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method and apparatus for denoising click data, electronic device and storage medium |
US12174824B2 (en) | 2021-12-30 | 2024-12-24 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method for denoising click data, electronic device and storage medium |
CN117407809A (en) * | 2023-12-01 | 2024-01-16 | 广东铭太信息科技有限公司 | Audit management system and method based on multi-user portrait fusion |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112182391A (en) | User portrait drawing method and device | |
CN109359244B (en) | Personalized information recommendation method and device | |
CN108121737B (en) | Method, device and system for generating business object attribute identifier | |
CN107341220B (en) | Multi-source data fusion method and device | |
US20140040371A1 (en) | Systems and methods for identifying geographic locations of social media content collected over social networks | |
WO2019169978A1 (en) | Resource recommendation method and device | |
CN108550046B (en) | Resource and marketing recommendation method and device and electronic equipment | |
US20170214646A1 (en) | Systems and methods for providing social media location information | |
CN106874335B (en) | Behavior data processing method and device and server | |
CN111177568B (en) | Object pushing method based on multi-source data, electronic device and storage medium | |
CN111782946A (en) | Book friend recommendation method, calculation device and computer storage medium | |
CN114581207A (en) | Commodity image big data accurate pushing method and system for E-commerce platform | |
CN113836128A (en) | Abnormal data identification method, system, equipment and storage medium | |
CN111339409A (en) | Map display method and system | |
WO2018033052A1 (en) | Method and system for evaluating user portrait data | |
CN105989066A (en) | Information processing method and device | |
CN111340575A (en) | Resource pushing method and device and electronic equipment | |
CN112685618A (en) | User feature identification method and device, computing equipment and computer storage medium | |
CN112860722A (en) | Data checking method and device, electronic equipment and readable storage medium | |
JPWO2019234827A1 (en) | Information processing device, judgment method, and program | |
CN109144999B (en) | Data positioning method, device, storage medium and program product | |
CN108830298B (en) | Method and device for determining user feature tag | |
CN113297471A (en) | Method and device for generating data object label and searching data object and electronic equipment | |
CN112597772B (en) | A method for determining hotspot information, computer equipment and device | |
CN110489640A (en) | Content recommendation method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20210105 |