
CN114155034B - A method for evaluating user recognition of advertisements based on feature recognition

Info

Publication number
CN114155034B
CN114155034B (application CN202111481967.2A)
Authority
CN
China
Prior art keywords
user
advertisement
recognition
data
playing
Prior art date
Legal status
Active
Application number
CN202111481967.2A
Other languages
Chinese (zh)
Other versions
CN114155034A (en)
Inventor
苏娟
吴育怀
汪功林
陈孝君
梁雨菲
Current Assignee
Anhui Grapefruit Cool Media Information Technology Co ltd
Original Assignee
Anhui Grapefruit Cool Media Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Grapefruit Cool Media Information Technology Co ltd
Priority to CN202111481967.2A
Publication of CN114155034A
Application granted
Publication of CN114155034B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0269 Targeted advertisements based on user profile or attribute
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/215 Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0265 Vehicular advertisement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861 Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/025 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Business, Economics & Management (AREA)
  • Signal Processing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Transfer Between Computers (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of big data processing, and particularly relates to a method for evaluating user acceptance of advertisements based on feature recognition. The method comprises the following steps. S1: acquire the feature data of the currently played advertisement, including the playing duration T of each advertisement and the keyword data set associated with each advertisement. S2: acquire each user's feedback data on the advertisement playing, including the voice stream data generated by all users in the advertisement delivery area during playing, the video stream data of all users in the area, and any instruction issued by one or more users in the area requesting to switch the currently played advertisement, and judge whether such an instruction has been received. S3: calculate each user's acceptance evaluation value for the current advertisement from the different feedback information. The invention solves the problem that existing advertisement delivery equipment cannot obtain offline users' feedback on and evaluation of advertisement delivery.

Description

User acceptance evaluation method for advertisement based on feature recognition
This application is a divisional application of "An intelligent media management system based on a VOC vehicle-owner big data platform", application number CN202110687605.2, filed 2021/06/21.
Technical Field
The invention belongs to the field of big data processing, and particularly relates to a method for evaluating user acceptance of advertisements based on feature recognition.
Background
In scenes such as elevators, shops, garages and subway stations, large numbers of advertisement delivery devices are installed to play advertisements. These devices cycle through built-in advertising videos. A conventional advertisement delivery device can only play the advertisements on its server in a fixed order; it cannot adjust the playing order or the played content for different users. If the played content needs to be replaced, a device manager must manually switch or update it, either remotely or on site.
For online advertisement delivery, an advertiser can analyze and judge a user's needs from the user's captured web-browsing behavior or usage and search records in various apps, predict the user's preferences or needs, and push targeted advertisements to the user precisely on that basis. For offline delivery, by contrast, users' evaluations of an advertisement are difficult to judge, and advertisers can only decide by experience to deliver an advertisement in a particular area. In fact, most advertisers do not even attend to user feedback when delivering offline advertisements; they simply push the same advertising content densely and with full coverage across different places, "carpet-bombing" style, hoping for a better advertising effect. This way of advertising is inefficient and wasteful. The root of the dilemma is that the prior art offers no method that can effectively evaluate users' acceptance of advertisements; if that problem were solved, the delivery effect of different advertisements could be analyzed effectively.
Disclosure of Invention
In order to solve the problems that existing advertisement delivery equipment delivers advertisements inefficiently and cannot obtain offline users' feedback on and evaluation of the delivered advertisements, the invention provides a method for evaluating user acceptance of advertisements based on feature recognition.
The invention is realized by adopting the following technical scheme:
a method for evaluating the acceptance of a user to an advertisement based on feature recognition comprises the following steps:
s1, acquiring characteristic data of an advertisement currently played:
And acquiring the playing time length T of each advertisement to be played and the keyword data set associated with each advertisement.
S2, obtaining feedback data of each user on advertisement playing, wherein the feedback data comprises the following steps:
The method comprises the steps of obtaining voice stream data generated by all users in an advertisement putting area during advertisement playing, monitoring video stream data of all users in the advertisement putting area and instructions which are sent by one or more users in the advertisement putting area and require switching of advertisements which are currently played, judging whether the instructions which require switching of the advertisements which are currently played are received, if yes, assigning 1 to a feature quantity SW reflecting the instructions, and otherwise assigning 0 to the SW.
S3, calculating each user's acceptance evaluation value for the current advertisement, which comprises the following steps:
S31, performing voice recognition on the voice stream data, extracting keywords matching the feature data in the keyword data set, and counting their number N1.
S32, performing video action recognition on the video stream data, extracting gesture actions representing user feedback on the currently played advertisement, and counting their number N2.
S33, performing video action recognition on the video stream data, extracting characteristic actions reflecting changes in each user's eye attention position, and calculating from them each user's attention duration t_n for the currently played advertisement, where n is the user number of the current user.
S34, sampling the frame-separated images of the video stream data at the sampling frequency, performing image recognition on the sampled frames, extracting each user's facial expression, classifying it as like, ignore or dislike, counting the number of each of the three expression classes per user, and calculating each class's proportion of that user's total sample size.
S35, acquiring the value of SW.
S36, calculating each user's acceptance evaluation value E_n for the current advertisement from the quantities above, where: n is the user number of the current user; E_n is the evaluation value of user n for the currently played advertisement, with E_n ≥ 0, a larger E_n reflecting higher user acceptance of the currently played media; t_n/T is the attention concentration of user n on the currently played advertisement; k1, k2, k3 and k4 are the influence factors of voice feedback, gesture feedback, expression feedback and attention concentration on the overall acceptance evaluation result; m1 is the score of a single keyword in the voice feedback, m2 the score of a single gesture action in the gesture feedback, and m3 the score of attention concentration; a, b and c are the scores of the like, ignore and dislike expressions; and p1,n, p2,n and p3,n are the proportions of user n's like, ignore and dislike expressions, respectively, in the total frame-sampled images.
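The publication renders the step-S36 formula only as an image. A plausible reconstruction from the variable definitions above, assuming the switching flag SW simply zeroes the score when a switch request was received, is:

```latex
E_n = (1 - SW)\left[ k_1 m_1 N_1 + k_2 m_2 N_2
      + k_3\left(a\,p_{1,n} + b\,p_{2,n} + c\,p_{3,n}\right)
      + k_4 m_3 \tfrac{t_n}{T} \right]
```

Under this reading E_n ≥ 0 whenever the channel scores are non-negative; the granted claims may weight or combine the terms differently.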
As a further improvement of the present invention, in step S1 the keyword settings are completed before each advertisement is delivered, and the feature data in the keyword data set associated with each advertisement at least include the following (a minimal data sketch follows the list):
(1) Keywords reflecting the product promoted by the advertisement;
(2) Keywords reflecting the target customer group at which the advertisement is aimed;
(3) Keywords reflecting the advertisement's spokesperson or character image;
(4) High-frequency or distinctive keywords in the advertisement copy;
(5) The duration classification of the advertisement;
(6) The style classification of the advertisement.
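For concreteness, a minimal sketch of such a keyword data set; every field name and sample value here is hypothetical, not taken from the patent:

```python
# Hypothetical keyword data set for one advertisement, mirroring items (1)-(6).
ad_keyword_dataset = {
    "ad_id": "AD-0001",                                      # illustrative identifier
    "play_duration_T": 30,                                   # seconds
    "product_keywords": ["sedan", "family car"],             # (1) promoted product
    "target_group_keywords": ["parents", "commuters"],       # (2) target customer group
    "character_keywords": ["spokesperson X", "mascot"],      # (3) spokesperson / character image
    "high_freq_keywords": ["safety", "zero down payment"],   # (4) high-frequency / special words
    "duration_class": "medium",                              # (5) duration classification
    "style_class": "family-warm",                            # (6) style classification
}
```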
As a further improvement of the invention, in step S2 the user may issue the instruction to switch the currently played advertisement by key input, voice interaction or gesture interaction. Voice interaction is realized by recognizing a voice keyword, uttered by the user, that requests switching of the currently played advertisement; gesture interaction is realized by recognizing a characteristic gesture, made by the user, that requests switching; key input means the switching instruction is entered directly by the user through keys.
As a further improvement of the invention, the voice keywords are recognized from the real-time voice stream data by a speech recognition algorithm, the characteristic gestures are recognized from the real-time video stream data by a video action recognition algorithm, and the key input instruction is obtained through a physical switching-key module installed at the advertisement playing site.
As a further improvement of the present invention, in step S2 the user feedback data on advertisement playing includes the following:
(1) The user's change of expression while viewing the advertisement, classified as like, ignore or dislike;
(2) The user's direct discussion of the advertisement, i.e., whether it relates to the content of the keyword dataset corresponding to the advertisement;
(3) Gesture actions made by the user while watching the advertisement;
(4) The duration for which the user attentively watches a given advertisement;
(5) Whether the user requests switching of the currently playing advertisement.
As a further improvement of the present invention, the duration for which a user attentively watches a given advertisement is calculated from four observed durations, where t_n is the attention duration of user n for the currently played advertisement, t1n the direct-view duration, t2n the eye-closing duration, t3n the head-down duration, and t4n the turned-away duration of user n during the current advertisement playing.
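The formula itself appears only as an image in the publication. One reconstruction consistent with the later description (the durations judged to be non-attention are removed from the playing time T, and the result is averaged with the direct-view duration) would be:

```latex
t_n = \frac{t_{1n} + \left(T - t_{2n} - t_{3n} - t_{4n}\right)}{2}
```

This is an assumption; the granted formula may combine the four durations differently.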
As a further improvement of the present invention, in step S32 the gesture actions by which a user gives feedback on the currently played advertisement include nodding, clapping, pointing at the advertisement playing interface, and head-raising or head-turning actions that switch the head from a non-direct-view state to a direct-view state during advertisement playing.
As a further improvement of the invention, the voice stream data is acquired by a sound pickup installed at the advertisement playing device.
As a further improvement of the invention, the video stream data is acquired by a video monitoring device installed around the advertising playback device.
As a further improvement of the present invention, in step S34, expression recognition is performed using a neural network algorithm trained with a large number of samples.
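A minimal sketch of the frame-separated sampling and expression tally of step S34; `detect_faces` and `classify_expression` are hypothetical stand-ins for the face detector and the trained expression network:

```python
from collections import Counter

EXPRESSIONS = ("like", "ignore", "dislike")

def expression_proportions(frames, detect_faces, classify_expression, sample_every=10):
    """Sample every `sample_every`-th frame, classify each user's facial
    expression, and return per-user proportions (p1, p2, p3) of the three
    classes over that user's sampled frames."""
    counts = {}  # user_id -> Counter of expression labels
    for i, frame in enumerate(frames):
        if i % sample_every:
            continue                      # frame-separated sampling
        for user_id, face in detect_faces(frame).items():
            counts.setdefault(user_id, Counter())[classify_expression(face)] += 1
    return {
        user_id: tuple(c[e] / max(sum(c.values()), 1) for e in EXPRESSIONS)
        for user_id, c in counts.items()
    }
```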
The technical scheme provided by the invention has the following beneficial effects:
The method for evaluating user acceptance of advertisements based on feature recognition obtains the various types of feedback a user gives while an advertisement is playing, recognizes those feedback behaviors with front-line technologies such as voice recognition, image recognition and video action recognition, and quantifies them, thereby providing a unified standard across the delivered advertisements and an acceptance evaluation value of real reference value.
The acceptance evaluation value calculated by the invention considers all the types of feedback behavior a user can exhibit while viewing an advertisement. Based on recognition of these feature types, an accurate acceptance evaluation of each advertisement by each user is obtained, which can further serve as an evaluation criterion for advertisement delivery effectiveness.
Drawings
FIG. 1 is a schematic diagram of module connection of an intelligent media management system based on a VOC vehicle owner big data platform provided in embodiment 1 of the present invention;
FIG. 2 is a flowchart of a method for precisely delivering advertisements based on user portraits in embodiment 1 of the present invention;
FIG. 3 is a logic block diagram of a process for acquiring a user tag of a current user in embodiment 1 of the present invention;
FIG. 4 is a logic block diagram of a target image dataset acquisition process for a current user group in accordance with embodiment 1 of the present invention;
FIG. 5 is a flowchart of a method for creating an advertisement analysis database according to embodiment 2 of the present invention;
FIG. 6 is a chart showing the classification of feature data contained in an identity tag in an advertisement analysis database according to embodiment 2 of the present invention;
FIG. 7 is a diagram showing the type distinction of feature data contained in a user image dataset according to embodiment 2 of the present invention;
FIG. 8 is a block diagram of a system for creating an advertisement analysis database according to embodiment 3 of the present invention;
FIG. 9 is a flowchart of a method for evaluating the acceptance of advertisements by users based on feature recognition in embodiment 4 of the present invention;
FIG. 10 is a flowchart of a method for analyzing user demand in time in a business turn scenario in embodiment 5 of the present invention;
FIG. 11 is a flowchart of a matching method of user requirements and advertisement content in embodiment 6 of the present invention;
FIG. 12 is a schematic block diagram of the garage giant-screen MAX intelligent terminal with intelligent voice interaction function provided in embodiment 7 of the present invention;
FIG. 13 is a type-distinction chart of the switching instructions adopted by the man-machine interaction module in the garage giant-screen MAX intelligent terminal with intelligent voice interaction function according to embodiment 7 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
The embodiment provides an intelligent media management system based on a VOC vehicle-owner big data platform, which is used for acquiring the matching degree between the current users and the advertisements to be delivered and adjusting the advertisement playing sequence table accordingly. In this embodiment, as shown in fig. 1, the intelligent media management system includes a keyword extraction module, a historical user information query module, a user type classification module, a user tag establishment module, an identity feature recognition module, a target portrait dataset establishment module, and an advertisement play sequence table adjustment module.
The keyword extraction module is used for extracting keyword data sets associated with each advertisement in the advertisement playing sequence table, and characteristic data in the keyword data sets are preset keywords related to the content of the advertisement.
In this embodiment, the feature data in the keyword data set of each advertisement at least includes, as in embodiment 1:
(1) Keywords reflecting advertised promotional products.
(2) Reflecting keywords of the target customer group for which the advertisement is directed.
(3) Keywords reflecting the speaker of the advertisement or the character image of the advertisement.
(4) High frequency or special keywords in the advertisement word.
(5) The duration of the advertisement is classified.
(6) And classifying the styles of the advertisements.
The historical user information query module is used for querying the user portrait data set of each historical user from an advertisement analysis database to acquire the various feature data about each historical user in that data set. The advertisement analysis database is stored on the VOC vehicle-owner cloud big data platform and is the database created in embodiment 2. It contains the user portrait data sets of the collected historical users; each user portrait data set comprises the historical user's facial feature data and user tags, and the user tags comprise an identity tag, a liked tag and a disliked tag. The identity tag stores feature data reflecting the user's identity features, including gender, age range, wearing style and other features, where "other features" means identifiable features other than gender, age range and wearing style that are useful for distinguishing the user's identity. The liked tag stores feature data of objects the user likes; the disliked tag stores feature data of objects the user dislikes. The identity feature recognition module classifies age into 0-10, 10-20, 20-30, 30-50, 50-70 or over 70 years old, and the wearing styles include leisure, business, sports, children's or elderly wear. The other features of the identity tag cover whether the user wears glasses, wears a hat, is losing hair, wears lipstick, wears high-heeled shoes, keeps a beard, or wears a wristwatch; if such a feature is present, feature data reflecting it is added to the other features, otherwise nothing is added.
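For concreteness, the user portrait data set described above can be pictured as the following minimal structure; all field names and sample values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class UserPortrait:
    """Hypothetical shape of one entry in the advertisement analysis database."""
    user_id: str                                          # dedicated user number
    facial_features: list = field(default_factory=list)  # face embedding / template
    identity_tag: dict = field(default_factory=dict)     # gender, age range, wearing style, other features
    liked_tag: set = field(default_factory=set)          # features of objects the user likes
    disliked_tag: set = field(default_factory=set)       # features of objects the user dislikes

example = UserPortrait(
    user_id="U-000123",
    identity_tag={
        "gender": "female",
        "age_range": "20-30",
        "wearing_style": "business",
        "other_features": ["wears glasses", "wears high-heeled shoes"],
    },
    liked_tag={"cosmetics", "clothing"},
    disliked_tag={"shaver"},
)
```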
The user type classification module is used for extracting facial features of all target users in an advertisement putting area, and then comparing the extracted facial features with facial features of all historical users in the advertisement analysis database to distinguish whether the current user is a historical user or a newly added user.
In this embodiment, the data sources of the user type classification module and the identity feature recognition module are the multi-angle monitoring video stream data of the advertisement delivery area. The user type classification module comprises a facial feature extraction unit, a facial feature comparison unit and a user type classification unit. The facial feature comparison unit acquires all facial features extracted by the facial feature extraction unit and all historical users' facial features queried by the historical user information query module, and compares the facial features of all users. The user type classification unit classifies every user appearing in the video stream data as a historical user or a newly added user according to the comparison results.
The user label establishing module is used for establishing an empty user label for each newly added user, wherein the established user label comprises an identity label, a liked label and an disliked label, and the user label establishing module is also used for adding a special user number in the identity label of each newly added user.
The identity feature recognition module is used for extracting the identity features of the newly added user and adding the extracted identity features into the corresponding identity tags of the newly added user.
The target portrait data set building module is used for:
(1) Setting a historical user proportion critical value q0, and calculating the duty ratio q of the current user identified as the historical user in the advertisement putting area in the current user group.
(2) Judging the size relation between q and q0, and making the following decision according to the judging result:
(i) When q ≥ q0, extracting the feature data in the preference labels of all identified historical users and, after de-duplication, using it as the target portrait dataset of the current user group;
(ii) When q < q0, extracting the feature data in the preference labels of all identified historical users, then sequentially calculating the coincidence degree Dc1 between the content of each newly added user's identity label and the content of each historical user's identity label in the advertisement analysis database (a plausible form of Dc1 is given below), extracting the feature data in the preference label of the historical user whose identity label has the largest Dc1 with that newly added user's, merging the two parts of feature data and, after de-duplication, using the result as the target portrait dataset of the current user group.
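The Dc1 formula is likewise shown only as an image. A common overlap measure consistent with "coincidence degree of the contents of two identity labels", assuming each label is treated as a set of feature items, is:

```latex
Dc_1 = \frac{\left| A_{new} \cap A_{hist} \right|}{\left| A_{new} \cup A_{hist} \right|}
```

where A_new and A_hist are the feature-item sets of the newly added user's and the historical user's identity labels; the actual granted formula may normalize differently.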
The advertisement play sequence table adjustment module is used for: (1) calculating the coincidence degree Dc2 between the feature data in each advertisement's associated keyword data set, as extracted by the keyword extraction module, and the feature data in the target portrait dataset (a sketch follows); (2) reordering all advertisements in the advertisement playing sequence table in descending order of their Dc2 values to obtain the adjusted advertisement playing sequence table.
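A minimal sketch of the adjustment module's two duties, assuming the same set-overlap form for Dc2 as suggested for Dc1 above (an assumption, since the formula is image-only):

```python
def dc2(keyword_set: set, portrait_set: set) -> float:
    """Assumed Jaccard-style overlap between an advertisement's keyword data
    set and the target portrait data set; the granted formula may differ."""
    if not keyword_set or not portrait_set:
        return 0.0
    return len(keyword_set & portrait_set) / len(keyword_set | portrait_set)

def reorder_playlist(playlist, keyword_sets, portrait_set):
    """Reorder ad IDs in descending order of Dc2 (best-matching ads first).
    `playlist` is a list of ad IDs; `keyword_sets` maps ad ID -> keyword set.
    Both shapes are hypothetical."""
    return sorted(playlist, key=lambda ad: dc2(keyword_sets[ad], portrait_set),
                  reverse=True)
```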
In this embodiment, the intelligent media management system based on the VOC vehicle-owner big data platform is applied to an advertisement delivery system with multi-angle monitoring equipment; the delivery system plays the advertisements to be delivered according to the advertisement playing sequence table, and the monitoring equipment acquires multi-angle monitoring video stream data of all target users in the delivery area of the advertisement delivery equipment. The identity feature recognition module performs image recognition on the frame-separated images of the captured video stream data with its image recognition unit, extracting the features reflected in the images that belong to the same categories as the feature data stored in the identity tag.
Of course, in other embodiments, the relevant advertisement delivery system and multi-angle monitoring device may also be used as part of the intelligent media management system based on the VOC owner big data platform in this embodiment. And further, the integrated coordination control is carried out on the processes of data acquisition, data processing, data analysis and advertisement delivery which are required to be completed in the embodiment.
The embodiment also comprises an advertisement accurate delivery method based on the user portrait. The accurate delivery method is applied to the intelligent media management system based on the VOC vehicle owner big data platform in the embodiment, and as shown in fig. 2, the accurate delivery method comprises the following steps:
step one, obtaining a user tag of a current user, as shown in fig. 3, wherein the specific process is as follows:
1. Facial features of each current user in the advertising area are obtained.
2. Perform face recognition on each current user in turn, query an advertisement analysis database containing the user portrait data sets of a number of historical users according to the recognition result, and make the following judgment:
(1) When the current user's facial features match the feature data in one historical user's facial feature data, acquire all the feature data in that historical user's user tags.
(2) When the current user's facial features match none of the historical users' facial feature data, judge the current user to be a newly added user and establish empty user tags for that user.
The user portrait data set comprises the corresponding historical user's facial feature data and user tags, and the user tags comprise an identity tag, a liked tag and a disliked tag.
3. Acquire multi-angle images of each newly added user, perform image recognition on them, and supplement the feature data in the newly added user's identity tag according to the recognition results; the supplemented feature data include the user number, gender, age range, wearing style and other features, where "other features" means identifiable features other than gender, age range and wearing style that are useful for distinguishing the user's identity.
Step two, a target image data set of the current user group is established, as shown in fig. 4, and the specific process is as follows:
1. Setting a historical user proportion critical value q0, and calculating the duty ratio q of the current user identified as the historical user in the advertisement putting area in the current user group.
2. Judging the size relation between q and q0, and making the following decision according to the judging result:
(1) When q ≥ q0, extract the feature data in the preference labels of all identified historical users and, after de-duplication, use it as the target portrait dataset of the current user group.
(2) When q < q0, extract the feature data in the preference labels of all identified historical users, and sequentially calculate the coincidence degree Dc1 between the content of each newly added user's identity label and the content of each historical user's identity label (Dc1 as defined above).
Merge the two parts of feature data (that of the identified historical users, plus the preference-label data of the historical user whose identity label has the largest coincidence with each newly added user's) and, after de-duplication, use the result as the target portrait dataset of the current user group.
Step three, adjusting the playing sequence of advertisements in the advertisement playing sequence table, wherein the specific process is as follows:
1. Acquire the keyword dataset associated with each advertisement in the advertisement playing sequence table; the feature data in the keyword dataset are preset keywords related to the content of the advertisement currently played.
2. Acquire the feature data in the target portrait dataset and calculate the coincidence degree Dc2 between the keyword dataset associated with each advertisement and the target portrait dataset (Dc2 as defined above).
3. Reorder all advertisements in the advertisement playing sequence table in descending order of their Dc2 values to obtain the readjusted advertisement playing sequence table.
The adjustment method for the advertisement playing sequence table in the advertisement delivery system provided in this embodiment mainly rests on the following principle and implementation logic:
Because this embodiment draws on the data in the created advertisement analysis database, all users in the advertisement delivery area are identified by face recognition at delivery time, which distinguishes whether each belongs to the historical users in the advertisement analysis database or is a newly added user not yet collected there.
For historical users, the portrait process, i.e., the enrichment of feature data in the user tag, has already been completed in the advertisement analysis database. When the large majority of users within the delivery area are historical users, the needs and preferences of those historical users can be taken to represent the current user population as a whole; a target portrait dataset describing the current user group's likes or needs is obtained by fetching the corresponding historical users' preference labels and extracting their feature data.
When the number of newly added users in the delivery area reaches a certain level, portraying the group cannot rely on the historical users alone. Real-time analysis of the newly added users would clearly be insufficient, but because this implementation can query an advertisement analysis dataset of sufficient sample size and richness, it can recognize the identity features of each newly added user (achievable by image recognition technology), compare them with the user tags in the dataset, extract the best-matching historical user, and temporarily use that historical user's tags as the newly added user's, so as to obtain features for the newcomer's preference label. Since a user's identity features (e.g., age, height, gender, dress and make-up, physiological characteristics) correlate strongly with the user's needs or preferences (the features in the preference label), this approximate substitution should be highly reliable. With this technical scheme, a target portrait dataset can be obtained even for a user group containing many newly added users.
After the target portrait dataset of the user group in the delivery area is obtained, this embodiment further compares its feature data with the keyword data sets of the advertisements to be played to find their degree of overlap; the higher the overlap, the more the user group is the target clientele of the advertisement, and the more preferentially that advertisement is delivered. On this logic, the embodiment reorders the advertisement playing sequence table, ensuring that the best-suited advertisements are delivered to the target group first.
Example 2
The present embodiment provides an advertisement analysis database containing a plurality of historical user data. The advertisement analysis database is the advertisement analysis database mentioned in embodiment 1. The data in the advertisement analysis database realizes the accurate portrait of the user's interests and hobbies, thereby being capable of carrying out accurate advertisement targeted marketing to the user.
The data in the advertisement analysis database is mainly obtained by recognizing users' identity features in scenes such as elevators, garages and shops, and by analyzing the users' acceptance evaluation results for video advertisements. The data within the advertisement analysis database mainly includes the following:
(1) Facial feature data, mainly used to distinguish the identities of different users and serving as each user's unique identity mark; the advertisement analysis database also assigns each user a dedicated user number according to this mark.
(2) Identity feature data, which is rich in content and covers all obtainable features useful for distinguishing a user's identity, including age, height, posture, dress and make-up, physiological state and the like; these features have reference value for judging the user's type of work, behavioral habits, demand characteristics, hobbies, social group and the like.
(3) Objects of user preference; this data is obtained through user feedback on different types of advertisements, is continuously updated and optimized, and can essentially characterize what the user likes.
(4) Objects of user indifference or aversion; this data is likewise obtained through user feedback on different types of advertisements, is continuously updated and optimized, and can essentially describe what the user currently ignores or dislikes.
In this embodiment, as shown in fig. 5, the method for creating the advertisement analysis database is as follows:
step one, establishing user labels of all users
1. In the advertisement playing process, facial features of each user are sequentially acquired, and facial recognition is carried out on the facial features.
2. According to the face recognition result, the advertisement analysis database is queried, and whether the face characteristics of the current user are matched with the face characteristics of a certain historical user in the advertisement analysis database is judged:
(1) If yes, the current user is skipped.
(2) Otherwise, an empty user label is established for the current user, wherein the user label comprises an identity label, a liked label and an disliked label.
3. And acquiring multi-angle images of each user, and supplementing characteristic data in the identity tags of each user according to the image recognition result of the multi-angle images.
In this step, portrait profiling of each user, whether newly added or historical, can be achieved; as long as a user appears in the target area and can be captured, the user can be profiled and analyzed. This allows the advertisement analysis database established in this embodiment to grow large, with sufficiently rich samples, laying a data foundation for the application development later built on this database.
In this embodiment, as shown in FIG. 6, the supplemental feature data in the identity tag includes user number, gender, age, wear style and other features representing identifiable non-gender, age and wear style features useful for distinguishing user identity features.
The age range in the identity tag is one of 0-10, 10-20, 20-30, 30-50, 50-70 or over 70 years old, classified according to the image recognition result, and the wearing style in the identity tag includes leisure, business, sports, children's or elderly wear. In this embodiment, age is considered to have an important effect on a user's needs, so the age feature is one of the identity features that must be considered. Meanwhile, because a user's occupation cannot be directly acquired through conventional image information collection, classifying users' wearing styles allows their occupational or social identities to be roughly divided to a certain extent.
Likewise, the other features in the identity tag reflect whether the user wears glasses, wears a hat, wears lipstick, wears high-heeled shoes, keeps a beard, wears a wristwatch and the like; if such a feature is present, feature data reflecting it is added to the other features, otherwise nothing is added. The other features in identity tags are very typical user-distinguishing features with great relevance to different users' consumption needs. Women wearing high-heeled shoes and lipstick, for example, may be more receptive to advertising for clothing and cosmetics; a person who keeps long hair is generally less concerned with shavers; people losing their hair may be more interested in hair-growth products, health products and the like.
In fact, once more varied feature extraction techniques are applied, this embodiment can also obtain more and different types of identity features; the richer the obtained features, the more refined the classification of users.
Step two, obtaining the characteristic data of the advertisement currently played
1. And acquiring the playing time length T of each advertisement to be played and the keyword data set associated with each advertisement.
The feature data in the keyword data set are preset keywords related to the content of the advertisement currently played. The feature data within the keyword dataset for each advertisement includes at least:
(1) Keywords reflecting advertised promotional products.
(2) Reflecting keywords of the target customer group for which the advertisement is directed.
(3) Keywords reflecting the speaker of the advertisement or the character image of the advertisement.
(4) High frequency or special keywords in the advertisement word.
(5) The duration of the advertisement is classified.
(6) And classifying the styles of the advertisements.
In this embodiment, rich keywords are provided for each advertisement, covering the types of information a customer may receive from it. When a user shows approval of an advertisement or gives positive feedback on its content, some or all of the features in that advertisement's keyword dataset may be considered objects of the user's attention or preference. Conversely, when a user exhibits aversion or negative feedback toward an advertisement, the user may be considered indifferent or averse to certain features in its keyword dataset. In this way, once a large enough sample of a user's feedback on different types of advertisements has been collected, the user's preferences can essentially be analyzed, and a portrait of those preferences realized.
Step three, obtaining feedback data of each user for advertisement playing
1. Acquire the voice stream data generated by all users in the advertisement delivery area during advertisement playing, the monitoring video stream data of all users in the area, and any instruction sent by one or more users in the area requesting to switch the currently played advertisement.
The mode of the instruction of the user for switching the currently played advertisement comprises key input, voice interaction and gesture interaction. The voice interaction is realized by identifying voice keywords which are sent by a user and require to switch the advertisement which is currently played, the gesture interaction is realized by identifying characteristic gestures which are sent by the user and require to switch the advertisement which is currently played, and the key input indicates a key input instruction which is directly input by the user through keys and requires to switch the advertisement which is currently played.
The voice key words are obtained by a voice recognition algorithm according to real-time voice stream data recognition, the characteristic gestures are obtained by a video action recognition algorithm according to real-time video stream data recognition, and the key input instruction is obtained through an entity switching key module installed on an advertisement playing site.
In this embodiment, the feedback of the user mainly includes the following aspects:
(1) The user's expression changes while viewing the advertisement.
(2) The user's direct discussion of the advertisement, such as talking about an actor or spokesperson in the advertisement, or about the effect of the product, etc.
(3) Gesture actions made by the user while viewing the advertisement. For example, a user's hand is directed to the advertisement playing device, prompting other users to watch, reflecting that the user is concerned about the advertisement currently being played.
(4) The time when the user is focused on viewing a particular advertisement.
(5) The user requests to switch the currently playing advertisement. This directly reflects that the user dislikes the advertisement.
In addition, other types of feedback can be extracted under mature technical conditions and applied to later data analysis, such as laughter of a user, characteristic actions in other detail, and the like.
2. And judging whether an instruction for switching the currently played advertisement is received, if so, assigning 1 to the feature quantity SW reflecting the instruction, and otherwise, assigning 0 to the SW.
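The three instruction channels reduce to the single flag SW; a minimal sketch, with the trigger vocabularies and all inputs hypothetical:

```python
def switch_flag(key_pressed: bool, voice_keywords: set, gestures: set) -> int:
    """Return SW = 1 if any channel carried a 'switch the current ad'
    instruction, else 0. `voice_keywords` come from speech recognition on the
    live voice stream, `gestures` from video action recognition."""
    SWITCH_WORDS = {"switch", "next ad", "change it"}   # assumed trigger words
    SWITCH_GESTURES = {"wave_away"}                     # assumed trigger gesture
    if key_pressed or (SWITCH_WORDS & voice_keywords) or (SWITCH_GESTURES & gestures):
        return 1
    return 0
```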
Step four, calculating the acceptance evaluation value of each user on the current advertisement
1. Perform voice recognition on the voice stream data, extract the keywords matching the feature data in the keyword data set, and count their number N1.
2. Perform video action recognition on the video stream data, extract the gesture actions representing user feedback on the currently played advertisement, and count their number N2.
The gesture actions of the user for feeding back the currently played advertisement comprise nodding, clapping, pointing the hand to the advertisement playing interface, head raising or turning actions of switching the head from a non-direct-view state to a direct-view state, and the like, which are generated during the advertisement playing process.
3. Extract the characteristic actions reflecting each user's eye attention position changes, and from them calculate each user's attention duration t_n for the currently played advertisement, where n is the user number of the current user.
The attention duration t_n of user n for the currently played advertisement is calculated as in the formula given above, where t1n is the direct-view duration, t2n the eye-closing duration, t3n the head-down duration, and t4n the turned-away duration of user n during the current advertisement playing.
In this embodiment, when counting the duration of a user's attention to an advertisement, both the duration of directly viewing the advertisement playing interface and the durations of non-direct-view states are considered. The durations judged to be non-attention states are removed, and the average is then taken with the duration judged to be an attention state, yielding a relatively accurate attention duration.
4. Sample the frame-separated images of the video stream data at the sampling frequency, perform image recognition on the sampled frames, extract each user's facial expression, classify it as like, ignore or dislike, count the number of each of the three expression classes per user, and calculate each class's proportion of that user's total sample size.
5. And acquiring the value of the SW.
6. The acceptance evaluation value E_n of each user for the current advertisement is calculated by the formula of step S36 above.
There, n is the user number of the current user; E_n is the evaluation value of user n for the currently played advertisement, with E_n ≥ 0 and a larger E_n reflecting higher acceptance of the currently played media; t_n/T is the attention concentration of user n on the currently played advertisement; k1, k2, k3 and k4 are the influence factors of voice feedback, gesture feedback, expression feedback and attention concentration on the overall evaluation; m1, m2 and m3 are the scores of a single voice keyword, a single gesture action, and attention concentration; and a, b and c are the scores of the like, ignore and dislike expressions, whose proportions in user n's total frame-sampled images are p1,n, p2,n and p3,n respectively.
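Putting the quantities of steps 1-5 together, a sketch of the scoring using the reconstructed step-S36 formula (itself an assumption); every default weight is an illustrative placeholder, since the patent fixes none of them:

```python
def acceptance_score(N1, N2, p, t_n, T, SW,
                     k=(1.0, 1.0, 1.0, 1.0),       # k1..k4: channel influence factors
                     m=(1.0, 1.0, 1.0),            # m1..m3: per-item scores
                     expr_scores=(2.0, 0.0, -1.0)  # a, b, c: like / ignore / dislike
                     ):
    """E_n for one user, clamped at 0 to honor E_n >= 0."""
    k1, k2, k3, k4 = k
    m1, m2, m3 = m
    a, b, c = expr_scores
    p1, p2, p3 = p                              # expression proportions from step 4
    base = (k1 * m1 * N1                        # voice-keyword feedback (step 1)
            + k2 * m2 * N2                      # gesture feedback (step 2)
            + k3 * (a * p1 + b * p2 + c * p3)   # expression feedback (step 4)
            + k4 * m3 * (t_n / T))              # attention concentration (step 3)
    return max(0.0, (1 - SW) * base)            # SW = 1 (switch requested) zeroes it

# Example: 2 matched keywords, 1 gesture, mostly 'like' expressions,
# 20 s of attention during a 30 s advertisement, no switch request.
print(acceptance_score(N1=2, N2=1, p=(0.7, 0.2, 0.1), t_n=20, T=30, SW=0))
```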
In this embodiment, expression recognition may be performed using a neural network algorithm trained on a large number of samples. Mature, directly applicable products exist for voice recognition, video action recognition and the like, so this embodiment does not describe them in detail.
In this embodiment, the various types of feedback a user gives on a played advertisement are extracted from the user's voice stream and video stream data through voice recognition, image recognition and video action recognition; after this feedback is quantified by the method provided here, an evaluation result reflecting the user's acceptance of the current advertisement is obtained. This result reflects the user's likes and dislikes regarding the current advertisement and can then be used to characterize the user's needs or interests.
Step five, establishing or updating advertisement analysis database
1. Set a high threshold E_h and a low threshold E_l for E_n, where E_h is the threshold at which the user is judged to like the currently playing advertisement and E_l the threshold at which the user is judged to dislike it, with E_h > E_l > 0.
2. When E_n ≥ E_h and p1,n + p2,n ≥ p3,n, add the feature data in the keyword dataset associated with the currently played advertisement to the current user's liked label, de-duplicate the feature data of the supplemented label, and delete from the current user's disliked label any feature data identical to feature data in the keyword dataset.
3. When E_n ≤ E_l and p2,n + p3,n ≥ p1,n, add the feature data in the keyword dataset associated with the currently played advertisement to the current user's disliked label, de-duplicate the feature data of the supplemented label, and delete from the current user's liked label any feature data matching feature data in the keyword dataset.
4. Update each user's user label to obtain each user's new user portrait data set, and create the advertisement analysis database.
Wherein, as shown in fig. 7, the user portrait data set includes facial feature data and user tags of the corresponding user.
The most central content of the advertisement analysis database is the liked and disliked labels obtained from analysis of user behavior, which are the direct data for the later analysis of user needs. In this embodiment, a user's likes and dislikes can be estimated directly from the feedback given while watching an advertisement, and they should coincide with some or all of the features in that advertisement's keyword dataset. Therefore, after each advertisement is played, this embodiment determines the user's actual attitude toward it by analyzing and counting the user's feedback information, and then, when the specified conditions are satisfied, writes the advertisement's keyword dataset into the features of the current user's liked or disliked label.
To avoid misclassification, the determination of a user's attitude requires a fairly strict check. The determination process of this embodiment introduces dedicated thresholds set from expert experience as the basis for judging the user's true attitude; the thresholds E_h and E_l were settled after repeated verification and are therefore quite reliable, further ensuring that the final user portrait is accurate and dependable.
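A minimal sketch of the step-five update rule, operating on the hypothetical UserPortrait structure sketched in embodiment 1; the threshold values are placeholders for the expert-tuned constants:

```python
def update_user_tags(user, E_n, p, ad_keywords, E_h=3.0, E_l=1.0):
    """Apply the like/dislike update rules of step five.
    `user` carries `liked_tag` and `disliked_tag` sets, `p = (p1, p2, p3)` are
    the expression proportions, `ad_keywords` is the ad's keyword feature set."""
    p1, p2, p3 = p
    if E_n >= E_h and p1 + p2 >= p3:
        user.liked_tag |= ad_keywords       # add + de-duplicate (set union)
        user.disliked_tag -= ad_keywords    # drop now-contradicted dislikes
    elif E_n <= E_l and p2 + p3 >= p1:
        user.disliked_tag |= ad_keywords
        user.liked_tag -= ad_keywords
```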
Example 3
In this embodiment, a system for creating an advertisement analysis database is provided, which uses the method for creating an advertisement analysis database included in embodiment 2 to implement the process of creating and updating the advertisement analysis database.
As shown in fig. 8, the creation system includes a history user inquiry module, an advertisement feature data extraction module, a user feedback data extraction module, a face recognition module, an image recognition module, a voice recognition module, a video action recognition module, a user tag creation module, an acceptance evaluation value calculation module, and a database creation module.
The historical user query module is used for querying the advertisement analysis database and extracting the user portrait datasets of the recorded historical users, where each user portrait dataset comprises the facial feature data and the user tags of a historical user, and the user tags comprise an identity tag, a liked tag, and a disliked tag.
The advertisement feature data extraction module is used for extracting, when an advertisement is played by the advertisement delivery system, the playing duration T of the advertisement and the keyword dataset associated with it.
The user feedback data extraction module is used for: (1) acquiring, while the advertisement delivery system plays each advertisement, the voice information generated by the users watching the advertisement in the advertisement delivery area, obtaining voice stream data associated with each advertisement; (2) acquiring multi-angle monitoring video of all users watching advertisements in the advertisement delivery area, obtaining video stream data associated with each advertisement; (3) acquiring any instruction requesting the switch of the currently playing advertisement; when such an instruction is acquired, the feature quantity SW representing the switching instruction is assigned 1, otherwise SW is assigned 0.
The face recognition module is used for obtaining an image dataset by framing the video stream data, extracting the facial features of each user appearing in the image dataset, completing the comparison of the current user's facial features with those of each historical user in the advertisement analysis database, and distinguishing newly added users from historical users.
The image recognition module is used for performing image recognition on the image dataset obtained by framing the video stream data so as to (1) obtain the various feature data reflecting the identity features of newly added users, and (2) extract each user's expression during advertisement playing and classify it as one of liked, ignored, or disliked.
The voice recognition module is used for performing speech recognition on the voice stream data so as to (1) acquire the voice interaction instructions, issued by users during advertisement playing, requesting the switch of the currently playing advertisement, and (2) extract all words in the voice stream data and find the keywords matching the feature data in the keyword dataset.
The video action recognition module is used for performing video action recognition on the video stream data so as to (1) extract the gesture interaction instructions, issued by a user, requesting the switch of the currently playing advertisement; (2) extract the gesture actions, made by a user, representing feedback on the currently playing advertisement; and (3) extract the characteristic actions reflecting changes of a user's gaze position during the current advertisement playing.
The user tag establishment module is used for establishing an empty user tag for each identified newly added user, and for supplementing the identity tag of the corresponding user with the various feature data, acquired by the image recognition module, that reflect the newly added user's identity features.
The acceptance evaluation value calculation module is used for: (1) acquiring the keywords, identified by the voice recognition module from the voice stream data, that match the feature data in the keyword dataset, and counting their number N_1; (2) acquiring the gesture actions, recognized by the video action recognition module, that reflect user feedback on the currently playing advertisement, and counting their number N_2; (3) acquiring the characteristic actions, recognized by the video action recognition module, that reflect changes of a user's gaze position during the current advertisement playing, and calculating from them the attention duration t_n of the current user for the currently playing advertisement, where n denotes the user number of the current user. The attention duration t_n of the user numbered n for the currently playing advertisement is calculated as:

$$t_n = \frac{t_{1n} + (T - t_{2n} - t_{3n} - t_{4n})}{2}$$

In the above formula, T denotes the playing duration of the current advertisement; t_{1n} denotes the direct-view duration of the user numbered n during the current advertisement playing; t_{2n} the eye-closed duration; t_{3n} the head-down duration; and t_{4n} the head-turned duration.
(4) Acquiring the number of each of the three expression classification results of each user identified by the image recognition module, and calculating the proportion of each of the three classes in the total sample. (5) Acquiring the value of SW. (6) Calculating the acceptance evaluation value E_n of each user for the current advertisement by an evaluation formula whose quantities are defined as follows: n denotes the user number of the current user; E_n denotes the evaluation value of the user numbered n for the currently playing advertisement, E_n ≥ 0, and a larger E_n reflects higher user acceptance of the currently playing multimedia; the formula also uses the attention concentration of the user numbered n on the currently playing advertisement; k_1 denotes the influence factor of voice-information feedback on the overall acceptance evaluation result; k_2 the influence factor of gesture-action feedback; k_3 the influence factor of expression feedback; k_4 the influence factor of attention concentration; m_1 denotes the score of a single keyword in the voice-information feedback; m_2 the score of a single gesture action in the gesture-action feedback; m_3 the score of the attention concentration; a denotes the score of a liked expression, and p_{1,n} the proportion of the alternate-frame-sampled images in which the expression of the user numbered n is classified as liked; b denotes the score of an ignored expression, and p_{2,n} the corresponding proportion classified as ignored; c denotes the score of a disliked expression, and p_{3,n} the corresponding proportion classified as disliked.
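A plausible concrete form of this evaluation formula (an assumption for illustration, taking the attention concentration as t_n/T and letting a switch request, SW = 1, zero the score) is:

$$E_n = (1 - SW)\left[k_1 m_1 N_1 + k_2 m_2 N_2 + k_3\,(a\,p_{1,n} + b\,p_{2,n} + c\,p_{3,n}) + k_4 m_3 \frac{t_n}{T}\right]$$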
The database creation module is used for: (1) setting, according to expert experience, a high threshold E_h and a low threshold E_l of E_n, where E_h is the critical value above which a user is deemed to like the currently playing advertisement, E_l is the critical value below which a user is deemed to dislike it, and E_l > 0; (2) when E_n ≥ E_h and p_{1,n} + p_{2,n} ≥ p_{3,n}, adding the feature data in the keyword dataset associated with the currently playing advertisement to the liked tag of the current user, deduplicating the supplemented liked tag, and deleting from the current user's disliked tag any feature data identical to feature data in the keyword dataset; when E_n ≤ E_l and p_{2,n} + p_{3,n} ≥ p_{1,n}, adding the feature data in the keyword dataset associated with the currently playing advertisement to the disliked tag of the current user, deduplicating the supplemented disliked tag, and deleting from the current user's liked tag any feature data matching feature data in the keyword dataset; (3) updating the user tags of each user in turn to obtain a new user portrait dataset for each user, thereby completing the creation or updating of the advertisement analysis database. Each user portrait dataset comprises the facial feature data and the user tags of the corresponding user.
The advertisement analysis database of this embodiment is empty when first created. Once the user portrait dataset of the first historical user has been entered, the creation system determines whether the current user is a newly added user or a historical user by comparing the current user's facial features with those of the historical users in the database; it then records the user portrait dataset of a newly added user into the database, or updates the user tags in the existing user portrait dataset of a historical user.
Example 4
On the basis of the foregoing embodiment, this embodiment provides a method for evaluating the acceptance of an advertisement by a user based on feature recognition, as shown in fig. 9, including the steps of:
Step one, obtaining characteristic data of the advertisement currently played
The playing duration T of each advertisement to be played and the keyword dataset associated with each advertisement are acquired.
The feature data in the keyword dataset are keywords, preset before delivery, that relate to the content of the currently playing advertisement. The feature data within the keyword dataset of each advertisement include at least the following (an illustrative representation is sketched after this list):
(1) Keywords reflecting the product promoted by the advertisement.
(2) Keywords reflecting the target customer group at which the advertisement is aimed.
(3) Keywords reflecting the spokesperson or the character image of the advertisement.
(4) High-frequency or special keywords in the advertisement copy.
(5) The duration classification of the advertisement.
(6) The style classification of the advertisement.
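For illustration only, such a keyword dataset could be represented as a simple mapping; all field names and values below are hypothetical and not prescribed by this embodiment:

```python
# Hypothetical keyword dataset for one advertisement; every name and value
# here is illustrative, not part of the specification.
ad_keywords = {
    "product": ["electric SUV", "fast charging"],      # (1) promoted product
    "target_group": ["young families", "commuters"],   # (2) target customer group
    "spokesperson": ["brand ambassador"],              # (3) speaker / character image
    "slogan_terms": ["range", "smart cockpit"],        # (4) high-frequency keywords
    "duration_class": "15s",                           # (5) duration classification
    "style_class": "humorous",                         # (6) style classification
}
```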
Step two, obtaining feedback data of each user for advertisement playing
1. The voice stream data generated during advertisement playing by all users in the advertisement delivery area are acquired, the video stream data monitoring all users in the advertisement delivery area are acquired, and any instruction, issued by one or more users in the advertisement delivery area, requesting the switch of the currently playing advertisement is captured.
A user may issue the instruction requesting the switch of the currently playing advertisement by key input, voice interaction, or gesture interaction. Voice interaction is realized by recognizing voice keywords, uttered by the user, requesting the switch of the currently playing advertisement; gesture interaction is realized by recognizing characteristic gestures, made by the user, requesting the switch; key input denotes a switching instruction entered directly by the user through keys.
The voice keywords are recognized by a speech recognition algorithm from the real-time voice stream data; the characteristic gestures are obtained by a video action recognition algorithm from the real-time video stream data; the key input instruction is acquired through a physical switching key module installed at the advertisement playing site.
In this embodiment, the feedback of the user mainly includes the following aspects:
(1) Changes in the user's expression while watching the advertisement, including liked, ignored, or disliked.
(2) Users' direct discussion of the advertisement, such as talking about an actor or spokesperson in the advertisement or about the effect of the product.
(3) Gesture actions made by the user while watching the advertisement. For example, a user pointing a hand at the advertisement playing device to prompt other users to pay attention reflects that the user is interested in the currently playing advertisement.
(4) The length of time for which the user's attention is focused on a particular advertisement.
(5) The user requests to switch the currently playing advertisement. This directly reflects that the user dislikes the advertisement.
In addition, under more mature technical conditions, other types of feedback can be extracted and applied to the later data analysis, such as a user's laughter or other detailed characteristic actions.
2. Whether an instruction requesting the switch of the currently playing advertisement has been received is determined; if so, the feature quantity SW reflecting the instruction is assigned 1, otherwise SW is assigned 0.
Step three, calculating the acceptance evaluation value of each user on the current advertisement
1. Speech recognition is performed on the voice stream data, the keywords matching the feature data in the keyword dataset are extracted, and their number N_1 is counted.
2. The gesture actions representing user feedback on the currently playing advertisement are extracted, and their number N_2 is counted.
The gesture actions by which a user gives feedback on the currently playing advertisement include nodding, clapping, pointing a hand at the advertisement playing interface, and head-raising or head-turning actions switching the head from a non-direct-view state to a direct-view state, produced during advertisement playing.
3. The characteristic actions reflecting changes in each user's gaze position are extracted, and each user's attention duration t_n for the currently playing advertisement is calculated from them, where n denotes the user number of the current user.
The attention duration t_n of the user numbered n for the currently playing advertisement is calculated as follows:

$$t_n = \frac{t_{1n} + (T - t_{2n} - t_{3n} - t_{4n})}{2}$$

In the above formula, T denotes the playing duration of the current advertisement; t_{1n} denotes the direct-view duration of the user numbered n during the current advertisement playing; t_{2n} the eye-closed duration; t_{3n} the head-down duration; and t_{4n} the head-turned duration.
In this embodiment, when counting the duration of a user's attention to the advertisement, both the time the user spends looking directly at the advertisement playing interface and the time the user attends to the advertisement in a non-direct-view state are considered. The durations judged to be in a non-attention state (eyes closed, head down, head turned away) are removed, and the result is averaged with the duration judged to be in the attention state, which yields a comparatively accurate attention duration.
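For illustration, take a 30-second advertisement (T = 30 s) during which user n looks directly at the screen for t_{1n} = 12 s, closes the eyes for t_{2n} = 3 s, lowers the head for t_{3n} = 5 s, and turns the head away for t_{4n} = 4 s; the formula above then gives

$$t_n = \frac{12 + (30 - 3 - 5 - 4)}{2} = \frac{12 + 18}{2} = 15\ \text{s},$$

the average of the confirmed direct-view time and the time not spent in any non-attention state.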
4. The framed images of the video stream data are sampled at alternate frames according to the sampling frequency, and image recognition is performed on the sampled images. Each user's facial expression is extracted and classified as liked, ignored, or disliked; the number of each of the three expression classes is counted for each user, and the proportion of each class in that user's total sample is calculated.
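A small sketch of this sampling-and-proportion step, assuming per-frame expression labels are already available from the classifier; the stride value and label names are illustrative:

```python
from collections import Counter

def expression_ratios(frame_labels, stride=5):
    """frame_labels: per-frame expression labels ('like'/'ignore'/'dislike')
    for one user; stride models the alternate-frame sampling frequency."""
    sampled = frame_labels[::stride]          # alternate-frame sampling
    counts = Counter(sampled)
    total = len(sampled) or 1                 # avoid division by zero
    return {cls: counts.get(cls, 0) / total  # p_{1,n}, p_{2,n}, p_{3,n}
            for cls in ("like", "ignore", "dislike")}
```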
5. The value of SW is acquired.
6. The acceptance evaluation value E_n of each user for the current advertisement is calculated by the evaluation formula already given in the preceding embodiments, whose quantities are defined as follows: n denotes the user number of the current user; E_n denotes the evaluation value of the user numbered n for the currently playing advertisement, E_n ≥ 0, and a larger E_n reflects higher user acceptance of the currently playing multimedia; the formula also uses the attention concentration of the user numbered n on the currently playing advertisement; k_1 denotes the influence factor of voice-information feedback on the overall acceptance evaluation result; k_2 the influence factor of gesture-action feedback; k_3 the influence factor of expression feedback; k_4 the influence factor of attention concentration; m_1 denotes the score of a single keyword in the voice-information feedback; m_2 the score of a single gesture action in the gesture-action feedback; m_3 the score of the attention concentration; a denotes the score of a liked expression, and p_{1,n} the proportion of the alternate-frame-sampled images in which the expression of the user numbered n is classified as liked; b denotes the score of an ignored expression, and p_{2,n} the corresponding proportion classified as ignored; c denotes the score of a disliked expression, and p_{3,n} the corresponding proportion classified as disliked.
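A minimal sketch of this scoring step, assuming the weighted-sum form of E_n sketched in embodiment 3 (itself an assumption) with illustrative weights and scores:

```python
def acceptance_score(N1, N2, p1, p2, p3, t_n, T, SW,
                     k=(1.0, 1.0, 1.0, 1.0),   # k1..k4, illustrative influence factors
                     m=(1.0, 1.0, 1.0),        # m1..m3, illustrative scores
                     a=1.0, b=0.0, c=-1.0):    # expression scores (assumed values)
    """Compute E_n under the assumed weighted-sum form; a switch request zeroes it."""
    k1, k2, k3, k4 = k
    m1, m2, m3 = m
    score = (k1 * m1 * N1                       # voice-keyword feedback
             + k2 * m2 * N2                     # gesture-action feedback
             + k3 * (a * p1 + b * p2 + c * p3)  # expression feedback
             + k4 * m3 * (t_n / T))             # attention concentration t_n / T
    return max(0.0, (1 - SW) * score)           # enforce E_n >= 0
```

Setting c negative lets predominantly disliked expressions pull the score down, while the final clamp keeps E_n ≥ 0 as required.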
The method provided by this embodiment identifies multiple types of features from the feedback users give while an advertisement plays, and from them derives an acceptance evaluation of the advertisement by each user. Because it collects several kinds of user feedback, the resulting acceptance evaluation is more accurate and can serve as a basis for assessing advertisement delivery effectiveness.
Example 5
This embodiment provides a method for timely analysis of user demand in a business-district scenario; it is a further development of the method of embodiment 4 and realizes the most direct and rapid prediction or evaluation of the demand of a specific user. As shown in fig. 10, the method includes the following steps:
Step 1, acquiring the facial features of the current user in the advertisement delivery area.
Step 2, performing face recognition on the current user, querying the advertisement analysis database (the advertisement analysis database of the preceding embodiments), which contains the user portrait datasets of a plurality of historical users, according to the face recognition result, and making the following judgment:
(1) When the facial features of the current user match the facial feature data of one of the historical users, all the feature data in that historical user's user tags are obtained.
(2) When the facial features of the current user match none of the historical users' facial feature data, the current user is judged to be a newly added user, and an empty user tag is established for this newly added user.
Each user portrait dataset comprises the facial feature data and the user tags of the corresponding historical user; the user tags include an identity tag, a liked tag, and a disliked tag.
Step 3, acquiring multi-angle images of the newly added user, performing image recognition on them, and supplementing the feature data in the identity tag of the newly added user according to the recognition result. The feature data supplemented into the identity tag include the user number, gender, age bracket, wearing style, and other features, where "other features" denotes recognizable features, beyond gender, age bracket, and wearing style, that help distinguish the user's identity.
Step 4, comparing all the feature data in the identity tag with the identity tags of all historical users in the advertisement analysis database, and calculating the feature coincidence ratio Dc3 between them. A simple form for Dc3, adopted here as an illustrative assumption, is the overlap ratio of the two identity-feature sets:
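$$Dc3 = \frac{|F_{cur} \cap F_{hist}|}{|F_{cur} \cup F_{hist}|}$$

where F_cur denotes the set of identity-tag feature data of the current user and F_hist that of a historical user; both symbols are hypothetical notation introduced here for illustration.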
Step 5, extracting the feature data in the liked tag and disliked tag of the historical user whose feature coincidence ratio Dc3 with the current user is largest in the advertisement analysis database, and filling those feature data into the user portrait dataset of the newly added user, completing the timely analysis of the current user's demand.
As the above process shows, the method of this embodiment can analyze and identify a user as soon as the user appears on the scene, establish an estimated feature-and-behavior portrait dataset, predict the user's likes and dislikes, and thereby realize timely analysis of user demand based on that prediction. Such analysis is more timely and effective and does not require long-term "tracking" and evaluation of the user. The method therefore has high practical value. Meanwhile, the accuracy of the timely analysis depends strongly on the sample size of the advertisement analysis database containing the user portrait datasets of historical users: the larger the database's sample size, the more accurate the timely analysis.
The logic of this embodiment's method is as follows: first, the facial features of the user in the specific scene are acquired, and it is determined whether a data sample of this user is already recorded in the advertisement analysis database. If so, the contents of the liked and disliked tags recorded for this user are extracted directly and used as the user's portrait dataset for analyzing and predicting the user's demand. If no data sample of the user is recorded, the user's identity features are extracted first; then the liked and disliked tags of the recorded historical user whose identity features are most similar to the current user's (as judged by Dc3) are extracted and used as the current user's portrait dataset, from which the user's demand is analyzed.
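A compact sketch of this lookup logic, using the Jaccard-style Dc3 assumed above; the in-memory database layout and field names are purely illustrative:

```python
def timely_profile(face_id, identity_feats, db):
    """Return (liked, disliked) tags for the current user. Each db row is an
    illustrative dict: {"face": ..., "identity": set, "liked": set,
    "disliked": set}; db is assumed non-empty."""
    for row in db:
        if row["face"] == face_id:              # historical user: use own tags
            return row["liked"], row["disliked"]

    def dc3(a, b):                              # assumed overlap ratio (Jaccard)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    # New user: borrow the tags of the most similar historical user.
    nearest = max(db, key=lambda row: dc3(identity_feats, row["identity"]))
    return nearest["liked"], nearest["disliked"]
```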
Example 6
This embodiment provides a method for matching user demand with advertisement content; it is developed on the basis of the preceding embodiments and is used to select, from the advertisements currently awaiting delivery, the advertisement that best matches the current user. The matching method includes the following steps:
Step 1, acquiring the keyword datasets of all advertisements currently awaiting delivery; each keyword dataset is the keyword dataset established in any of the preceding embodiments and contains keywords reflecting the various feature data of the advertisement content.
Step 2, acquiring the user portrait dataset of the current user; this user portrait dataset is the final result obtained by the method for timely analysis of user demand in a business-district scenario provided in embodiment 5.
Step 3, calculating the matching degree Dc4 between the feature data in each advertisement's keyword dataset and the data in the current user's portrait dataset. A simple form for Dc4, adopted here as an illustrative assumption, scores overlap with the liked tag positively and overlap with the disliked tag negatively:
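$$Dc4 = \frac{|K \cap L|}{|K|} - \frac{|K \cap D|}{|K|}$$

where K denotes the advertisement's keyword set and L and D the feature sets of the current user's liked and disliked tags; the notation and the penalty term for disliked overlap are assumptions introduced for illustration.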
Step 4, taking the advertisement with the largest Dc4 value as the advertisement best matching the current user, thereby completing the matching of user demand with advertisement content.
The advertisement matched in this way best fits the user's actual demand and can therefore achieve the best promotion effect. In practice, the best-matching advertisement should be delivered preferentially to the identified current user.
The matching method adopted in this embodiment uses feature matching. In the matching process, the features representing user demand (the features in the liked and disliked tags) were themselves obtained from the user's feedback during historical advertisement playing, and those feature data are keywords of the corresponding advertisements. Feature matching against the actual advertisements to be delivered therefore succeeds easily, and, given the consistency and long-term stability of user preferences, its results are correspondingly accurate.
Example 7
This embodiment provides a garage giant-screen MAX intelligent terminal with an intelligent voice interaction function. The terminal updates the list of advertisements to be delivered in the advertisement playing sequence table according to users' interaction with the terminal while advertisements play. The scheme of this embodiment is a further development and application of the technical schemes and results of the preceding embodiments, and the terminal adopts some of the processing methods and device modules described in those embodiments.
Specifically, as shown in fig. 12, the garage giant-screen MAX intelligent terminal provided in this embodiment includes an advertisement playing module, a voice acquisition module, a video monitoring module, an advertisement feature data extraction module, a user feedback data extraction module, an image recognition module, a voice recognition module, a video action recognition module, a man-machine interaction module, an acceptance evaluation value calculation module, and an advertisement playing sequence updating module.
The advertisement playing module is used for playing each advertisement to be delivered in turn according to the advertisement playing sequence table, and for switching the advertisement being played after receiving a switching instruction from the man-machine interaction module. The advertisement playing module is the garage giant-screen MAX display screen.
The voice acquisition module is used for acquiring, while the advertisement playing module plays each advertisement, the voice information generated by the user group watching the advertisement around the advertisement playing module. The voice acquisition module comprises several sound pickups arranged around the garage giant-screen MAX display screen and distributed on the display-surface side of the screen.
The video monitoring module is used for monitoring, from multiple angles and while the advertisement playing module plays each advertisement, the user group watching the advertisement around the advertisement playing module. The viewing range of the video monitoring module is the display-surface side of the garage giant-screen MAX display screen, and the module comprises several monitoring cameras shooting that range from different angles.
The advertisement feature data extraction module is used for extracting the playing time length T of each advertisement played by the advertisement playing module and a keyword data set associated with the advertisement.
The user feedback data extraction module is used for: (1) receiving the voice information collected by the voice acquisition module to obtain voice stream data associated with each advertisement; (2) receiving the multi-angle monitoring video collected by the video monitoring module to obtain video stream data associated with each advertisement; (3) acquiring the switching instruction, issued through the man-machine interaction module, requesting the switch of the currently playing advertisement; when a switching instruction is received, the feature quantity SW representing the switching instruction is assigned 1, otherwise SW is assigned 0.
The image recognition module is used for performing image recognition on the image dataset obtained by framing the video stream data, extracting each user's expression during advertisement playing, and classifying it as one of liked, ignored, or disliked. The image recognition module includes an expression recognition unit, which uses a neural network recognition algorithm trained on a large training set to classify the users' expressions in the images.
The voice recognition module is used for performing speech recognition on the voice stream data so as to (1) acquire the voice interaction instructions, issued by users during advertisement playing, requesting the switch of the currently playing advertisement, and (2) extract all words in the voice stream data and find the keywords matching the feature data in the keyword dataset.
The voice recognition module comprises a voice interaction instruction extraction unit and a keyword extraction unit. The voice interaction instruction extraction unit sends extracted voice interaction instructions to the voice interaction unit of the man-machine interaction module; the keyword extraction unit sends the extracted keywords matching the feature data in the keyword dataset to the acceptance evaluation value calculation module.
The video action recognition module is used for performing video action recognition on the video stream data so as to (1) extract the gesture interaction instructions, issued by a user, requesting the switch of the currently playing advertisement; (2) extract the gesture actions, made by a user, representing feedback on the currently playing advertisement; and (3) extract the characteristic actions reflecting changes of a user's gaze position during the current advertisement playing.
The video action recognition module comprises a gesture interaction instruction extraction unit, a gesture action feedback extraction unit, and an eye feature action extraction unit. The gesture interaction instruction extraction unit sends extracted gesture interaction instructions to the gesture interaction unit of the man-machine interaction module; the gesture action feedback extraction unit and the eye feature action extraction unit send their extracted feature data to the acceptance evaluation value calculation module.
The man-machine interaction module is used for acquiring instructions issued by users requesting the switch of the currently playing advertisement and for issuing the switching instruction. As shown in fig. 13, a user may request the switch by key input, voice interaction, or gesture interaction. The man-machine interaction module includes a physical key module for receiving key input instructions entered directly by users; it further includes a voice interaction unit and a gesture interaction unit. The voice interaction unit obtains the voice interaction instructions requesting the switch, recognized by the voice recognition module from the real-time voice stream data; the gesture interaction unit obtains the gesture interaction instructions requesting the switch, recognized by the video action recognition module from the real-time video stream data.
The acceptance evaluation value calculation module is used for: (1) acquiring the keywords identified by the voice recognition module that match the feature data in the keyword dataset, and counting their number N_1; (2) acquiring the gesture actions recognized by the video action recognition module that represent user feedback on the currently playing advertisement, and counting their number N_2; (3) acquiring the characteristic actions recognized by the video action recognition module that reflect changes of a user's gaze position during the current advertisement playing, and calculating from them the attention duration t_n of the current user for the currently playing advertisement:

$$t_n = \frac{t_{1n} + (T - t_{2n} - t_{3n} - t_{4n})}{2}$$

where T denotes the playing duration of the current advertisement, t_{1n} the direct-view duration of the user numbered n during the current advertisement playing, t_{2n} the eye-closed duration, t_{3n} the head-down duration, and t_{4n} the head-turned duration; (4) acquiring the number of each of the three expression classification results of each user identified by the image recognition module, and calculating the proportion of each class in the total sample; (5) acquiring the value of SW; (6) calculating the acceptance evaluation value E_n of each user for the current advertisement by the evaluation formula of the preceding embodiments, whose quantities are defined as follows: n denotes the user number of the current user; E_n denotes the evaluation value of the user numbered n for the currently playing advertisement, E_n ≥ 0, and a larger E_n reflects higher user acceptance of the currently playing multimedia; the formula also uses the attention concentration of the user numbered n on the currently playing advertisement; k_1 denotes the influence factor of voice-information feedback on the overall acceptance evaluation result; k_2 the influence factor of gesture-action feedback; k_3 the influence factor of expression feedback; k_4 the influence factor of attention concentration; m_1 denotes the score of a single keyword in the voice-information feedback; m_2 the score of a single gesture action in the gesture-action feedback; m_3 the score of the attention concentration; a denotes the score of a liked expression, and p_{1,n} the proportion of the alternate-frame-sampled images in which the expression of the user numbered n is classified as liked; b denotes the score of an ignored expression, and p_{2,n} the corresponding proportion classified as ignored; c denotes the score of a disliked expression, and p_{3,n} the corresponding proportion classified as disliked.
The advertisement playing sequence updating module is used for: (1) obtaining the average acceptance evaluation result Ē_i of each advertisement played from the advertisement playing sequence table within one update period; a natural form, assumed here, is the mean of the evaluation values collected for advertisement i over all users and plays in the period:

$$\bar{E}_i = \frac{1}{N_i}\sum_{j=1}^{N_i} E_{i,j}$$

where i denotes the number of each advertisement in the advertisement playing sequence table and N_i the number of evaluation values collected for it; (2) ranking all advertisements played within the update period in descending order of Ē_i to obtain a score ranking table of the played advertisements; (3) acquiring the advertisements to be added and their number, deleting from the advertisement playing sequence table the same number of lowest-ranked played advertisements in the score ranking table, and adding the advertisements to be added to the advertisement playing sequence table, thereby completing the update of the advertisement playing sequence table.
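A minimal sketch of this update cycle; the in-memory structures are illustrative, and it is assumed that fewer new advertisements are added than were scored in the period:

```python
def update_play_sequence(play_sequence, period_scores, new_ads):
    """Sketch of one update cycle. period_scores maps advertisement id to the
    list of E_n values collected in the period."""
    avg = {ad: sum(vals) / len(vals) for ad, vals in period_scores.items() if vals}
    ranking = sorted(avg, key=avg.get, reverse=True)              # score ranking table
    to_drop = set(ranking[-len(new_ads):]) if new_ads else set()  # lowest-ranked ads
    kept = [ad for ad in play_sequence if ad not in to_drop]
    return kept + list(new_ads)                                   # updated sequence table
```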
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (10)

1. A method for evaluating user recognition of advertisements based on feature recognition, characterized by comprising the following steps:

S1: obtaining feature data of the currently playing advertisement:
obtaining the playing duration T of each played advertisement and the keyword dataset associated with each advertisement;

S2: obtaining each user's feedback data on advertisement playing, comprising:
obtaining the voice stream data generated during advertisement playing by all users in the advertisement delivery area, monitoring the video stream data of all users in the advertisement delivery area, and capturing any instruction, issued by one or more users in the advertisement delivery area, requesting to switch the currently playing advertisement; determining whether an instruction requesting to switch the currently playing advertisement has been received: if yes, assigning the feature quantity SW reflecting the instruction a value of 1; otherwise, assigning SW a value of 0;

S3: calculating each user's recognition evaluation value for the current advertisement, comprising the following process:

S31: performing speech recognition on the voice stream data, extracting the keywords matching the feature data in the keyword dataset, and counting their number N_1;

S32: performing video action recognition on the video stream data; extracting the gesture actions representing user feedback on the currently playing advertisement, and counting their number N_2;

S33: performing video action recognition on the video stream data; extracting the characteristic actions reflecting changes in each user's gaze position, and calculating from them each user's attention duration t_n for the currently playing advertisement, where n denotes the user number of the current user;

S34: sampling the framed images of the video stream data at alternate frames according to the sampling frequency; performing image recognition on the sampled images; extracting each user's facial expression and classifying it as liked, ignored, or disliked; counting, for each user, the number of each of the three expression classes, and calculating the proportion of each class in that user's total sample;

S35: obtaining the value of SW;

S36: calculating each user's recognition evaluation value E_n for the current advertisement by the following formula:

In the above formula, n denotes the user number of the current user; E_n denotes the evaluation value of the user numbered n for the currently playing advertisement, E_n ≥ 0, and a larger E_n reflects higher user acceptance of the currently playing multimedia; the formula further uses the attention concentration of the user numbered n on the currently playing advertisement; k_1 denotes the influence factor of voice-information feedback on the overall recognition evaluation result; k_2 denotes the influence factor of gesture-action feedback on the overall recognition evaluation result; k_3 denotes the influence factor of expression feedback on the overall recognition evaluation result; k_4 denotes the influence factor of attention concentration on the overall recognition evaluation result; m_1 denotes the score of a single keyword in the voice-information feedback; m_2 denotes the score of a single gesture action in the gesture-action feedback; m_3 denotes the score of attention concentration; a denotes the score of a liked expression, and p_{1,n} the proportion, in the total alternate-frame-sampled images, of expressions of the user numbered n classified as liked; b denotes the score of an ignored expression, and p_{2,n} the corresponding proportion classified as ignored; c denotes the score of a disliked expression, and p_{3,n} the corresponding proportion classified as disliked.

2. The method for evaluating user recognition of advertisements based on feature recognition according to claim 1, characterized in that: in step S1, the associated keywords are set before each advertisement is delivered, and the feature data in the keyword dataset associated with each advertisement include at least:
(1) keywords reflecting the product promoted by the advertisement;
(2) keywords reflecting the target customer group at which the advertisement is aimed;
(3) keywords reflecting the spokesperson or the character image of the advertisement;
(4) high-frequency or special keywords in the advertisement copy;
(5) the duration classification of the advertisement;
(6) the style classification of the advertisement.

3. The method for evaluating user recognition of advertisements based on feature recognition according to claim 1, characterized in that: in step S2, a user may issue the instruction requesting to switch the currently playing advertisement by key input, voice interaction, or gesture interaction; voice interaction is realized by recognizing voice keywords, uttered by the user, requesting the switch of the currently playing advertisement; gesture interaction is realized by recognizing characteristic gestures, made by the user, requesting the switch of the currently playing advertisement; key input denotes a switching instruction entered directly by the user through keys.

4. The method for evaluating user recognition of advertisements based on feature recognition according to claim 3, characterized in that: in step S2, the voice keywords are recognized by a speech recognition algorithm from the real-time voice stream data; the characteristic gestures are obtained by a video action recognition algorithm from the real-time video stream data; the key input instruction is acquired through a physical switching key module installed at the advertisement playing site.

5. The method for evaluating user recognition of advertisements based on feature recognition according to claim 1, characterized in that: in step S2, the users' feedback data on advertisement playing include the following:
(1) changes in the user's expression while watching the advertisement, including liked, ignored, or disliked;
(2) whether the user's direct discussion of the advertisement involves content of the keyword dataset corresponding to the advertisement;
(3) gesture actions made by the user while watching the advertisement;
(4) the length of time for which the user's attention is focused on an advertisement;
(5) whether the user requests to switch the currently playing advertisement.

6. The method for evaluating user recognition of advertisements based on feature recognition according to claim 5, characterized in that the length of time for which a user's attention is focused on an advertisement is calculated by the following formula:

In the above formula, t_n denotes the attention duration of the user numbered n for the currently playing advertisement; t_{1n} denotes the direct-view duration of the user numbered n during the current advertisement playing; t_{2n} denotes the eye-closed duration of the user numbered n during the current advertisement playing; t_{3n} denotes the head-down duration of the user numbered n during the current advertisement playing; t_{4n} denotes the head-turned duration of the user numbered n during the current advertisement playing.

7. The method for evaluating user recognition of advertisements based on feature recognition according to claim 1, characterized in that: in step S32, the gesture actions by which a user gives feedback on the currently playing advertisement include nodding, clapping, pointing a hand at the advertisement playing interface, and head-raising or head-turning actions switching the head from a non-direct-view state to a direct-view state, produced during advertisement playing.

8. The method for evaluating user recognition of advertisements based on feature recognition according to claim 1, characterized in that the voice stream data are acquired through sound pickups installed at the advertisement playing device.

9. The method for evaluating user recognition of advertisements based on feature recognition according to claim 1, characterized in that the video stream data are acquired through video monitoring devices installed around the advertisement playing device.

10. The method for evaluating user recognition of advertisements based on feature recognition according to claim 1, characterized in that: in step S34, expression recognition is performed by a neural network algorithm trained on a large number of samples.
CN202111481967.2A 2021-06-21 2021-06-21 A method for evaluating user recognition of advertisements based on feature recognition Active CN114155034B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111481967.2A CN114155034B (en) 2021-06-21 2021-06-21 A method for evaluating user recognition of advertisements based on feature recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110687605.2A CN113393275B (en) 2021-06-21 2021-06-21 Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform
CN202111481967.2A CN114155034B (en) 2021-06-21 2021-06-21 A method for evaluating user recognition of advertisements based on feature recognition

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110687605.2A Division CN113393275B (en) 2021-06-21 2021-06-21 Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform

Publications (2)

Publication Number Publication Date
CN114155034A CN114155034A (en) 2022-03-08
CN114155034B true CN114155034B (en) 2025-01-03

Family

ID=77623437

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111481967.2A Active CN114155034B (en) 2021-06-21 2021-06-21 A method for evaluating user recognition of advertisements based on feature recognition
CN202110687605.2A Active CN113393275B (en) 2021-06-21 2021-06-21 Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110687605.2A Active CN113393275B (en) 2021-06-21 2021-06-21 Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform

Country Status (1)

Country Link
CN (2) CN114155034B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114255075A (en) * 2021-06-21 2022-03-29 安徽西柚酷媒信息科技有限公司 Advertisement updating method of advertisement putting equipment
CN117557949A (en) * 2023-04-12 2024-02-13 无锡八英里电子科技有限公司 Escalator safety device based on image recognition edge calculation
CN116109355B (en) * 2023-04-12 2023-06-16 广东玄润数字信息科技股份有限公司 Advertisement delivery analysis method, system and storage medium based on preference data
CN116503113B (en) * 2023-06-27 2023-09-12 深圳依时货拉拉科技有限公司 Vehicle body advertisement operation management method, system, computer equipment and storage medium
CN116823352A (en) * 2023-07-14 2023-09-29 菏泽学义广告设计制作有限公司 Intelligent advertisement design system based on remote real-time interaction
CN118229356A (en) * 2024-04-12 2024-06-21 广州致奥科技有限公司 An advertisement delivery and user behavior interaction system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114255075A (en) * 2021-06-21 2022-03-29 安徽西柚酷媒信息科技有限公司 Advertisement updating method of advertisement putting equipment

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2116969A4 (en) * 2006-12-28 2011-11-23 Sharp Kk Advertisement distribution system, advertisement distribution server, advertisement distribution method, program, and recording medium
WO2010078539A2 (en) * 2009-01-04 2010-07-08 Robert Thomas Kulakowski Advertising profiling and targeting system
JP2012113355A (en) * 2010-11-19 2012-06-14 Japan Research Institute Ltd Advertisement information provision system, advertisement information provision method and advertisement information provision program
CN102306360A (en) * 2011-06-23 2012-01-04 迈普通信技术股份有限公司 Feedback method of launching effect of unidirectional video advertising and system
US10410245B2 (en) * 2013-05-15 2019-09-10 OpenX Technologies, Inc. System and methods for using a revenue value index to score impressions for users for advertisement placement
CN103888803B (en) * 2014-02-25 2017-05-03 四川长虹电器股份有限公司 Method and system for controlling insertion of advertisement by television program voice
CN104573619A (en) * 2014-07-25 2015-04-29 北京智膜科技有限公司 Method and system for analyzing big data of intelligent advertisements based on face identification
CN206378900U (en) * 2016-10-24 2017-08-04 西安文理学院 A kind of advertisement delivery effect evaluation system based on mobile terminal
CN106971317A (en) * 2017-03-09 2017-07-21 杨伊迪 The advertisement delivery effect evaluation analyzed based on recognition of face and big data and intelligently pushing decision-making technique
CN109874125A (en) * 2019-01-29 2019-06-11 上海博泰悦臻网络技术服务有限公司 The car owner's authorization method and system of bluetooth key, storage medium and vehicle Cloud Server
CN110070393A (en) * 2019-06-19 2019-07-30 成都大象分形智能科技有限公司 Ads on Vehicles interacts jettison system under line based on cloud artificial intelligence
CN111526419A (en) * 2020-04-29 2020-08-11 四川虹美智能科技有限公司 Vending machine advertisement recommendation method
CN111882361A (en) * 2020-07-31 2020-11-03 苏州云开网络科技有限公司 Audience accurate advertisement pushing method and system based on artificial intelligence and readable storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114255075A (en) * 2021-06-21 2022-03-29 安徽西柚酷媒信息科技有限公司 Advertisement updating method of advertisement putting equipment

Also Published As

Publication number Publication date
CN113393275B (en) 2021-12-14
CN113393275A (en) 2021-09-14
CN114155034A (en) 2022-03-08

Similar Documents

Publication Publication Date Title
CN114155034B (en) A method for evaluating user recognition of advertisements based on feature recognition
CN113435924B (en) VOC car owner cloud big data platform
CN111310019B (en) Information recommendation method, information processing method, system and equipment
JP7207836B2 (en) A system for evaluating audience engagement
CN113379460A (en) Advertisement accurate delivery method based on user portrait
WO2019149005A1 (en) Offline interactive advertisement system
CN106971317A (en) The advertisement delivery effect evaluation analyzed based on recognition of face and big data and intelligently pushing decision-making technique
US11632590B2 (en) Computer-implemented system and method for determining attentiveness of user
US8706544B1 (en) Method and system for automatically measuring and forecasting the demographic characterization of customers to help customize programming contents in a media network
KR20100107036A (en) Laugh detector and system and method for tracking an emotional response to a media presentation
CN113377327A (en) Huge curtain MAX intelligent terminal in garage that possesses intelligent voice interaction function
CN107146096B (en) Intelligent video advertisement display method and device
CN104573619A (en) Method and system for analyzing big data of intelligent advertisements based on face identification
CN102129644A (en) Intelligent advertising system having functions of audience characteristic perception and counting
WO2021031600A1 (en) Data collection method and apparatus, computer device, and storage medium
CN106446266A (en) Method for recommending favorite content to user and content recommending system
CN110519617A (en) Video comments processing method, device, computer equipment and storage medium
CN108876430B (en) Advertisement pushing method based on crowd characteristics, electronic equipment and storage medium
CN109978618A (en) Advertisement interacts jettison system under line based on cloud artificial intelligence
CN113469737A (en) Advertisement analysis database creation system
CN108229994A (en) A kind of information-pushing method and device
CN110415023B (en) Elevator advertisement recommendation method, device, equipment and storage medium
CN113506124B (en) Method for evaluating media advertisement putting effect in intelligent business district
CN104835059A (en) Somatosensory interaction technology-based intelligent advertisement delivery system
CN111526419A (en) Vending machine advertisement recommendation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant