
CN118312267B - Interaction method, device, equipment and storage medium based on artificial intelligence - Google Patents


Info

Publication number
CN118312267B
CN118312267B (application CN202410721137.XA)
Authority
CN
China
Prior art keywords
interaction
interactive
portrait
recommended
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410721137.XA
Other languages
Chinese (zh)
Other versions
CN118312267A (en)
Inventor
刘宏
陈军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pinkuo Information Technology Co ltd
Original Assignee
Shenzhen Pinkuo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pinkuo Information Technology Co ltd filed Critical Shenzhen Pinkuo Information Technology Co ltd
Priority to CN202410721137.XA priority Critical patent/CN118312267B/en
Publication of CN118312267A publication Critical patent/CN118312267A/en
Application granted granted Critical
Publication of CN118312267B publication Critical patent/CN118312267B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • G06F9/453Help systems
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3438Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment monitoring of user actions
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2457Query processing with adaptation to user needs
    • G06F16/24578Query processing with adaptation to user needs using ranking
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/248Presentation of query results
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract


The present invention relates to the technical field of intelligent interaction, and discloses an interaction method, device, equipment and storage medium based on artificial intelligence. The method continuously collects a user's interaction information, uses a pre-trained model to build the user's interaction portrait from that information, analyzes the portrait to generate a recommended interaction interface and a parsing framework, captures user interaction data in real time, parses the user's behavior through the framework, further corrects the user portrait according to the parsing result, and adjusts the recommended interaction interface according to the updated portrait. Through such real-time updating the system can provide a more personalized user experience, the dynamically adjusted interaction interface makes the user's interaction with the system more efficient, and the prior-art problem that users unfamiliar with smart terminal products find it difficult to use them efficiently is solved.

Description

Interaction method, device, equipment and storage medium based on artificial intelligence
Technical Field
The present invention relates to the field of intelligent interaction technologies, and in particular, to an interaction method, apparatus, device, and storage medium based on artificial intelligence.
Background
With the development of electronic technology, the functions of intelligent terminal products (such as computers and mobile phones) have become increasingly complex, so that users unfamiliar with these products cannot use many of their functions smoothly; as a result, users who purchase expensive, high-performance intelligent terminal products often never use the high-end functions that justify the price.
At present, some intelligent terminal products provide function-recommendation features so that users can quickly become familiar with how to operate the product. However, the recommendation behavior of these products is relatively fixed: for different users it contains a large number of redundant functions, while at the same time commonly needed functions that some users actually require are not included in the recommendations.
Disclosure of Invention
The invention aims to provide an interaction method, device, equipment and storage medium based on artificial intelligence, so as to solve the prior-art problem that users unfamiliar with intelligent terminal products find it difficult to use them efficiently.
The present invention is thus achieved, in a first aspect, by providing an artificial intelligence based interaction method, comprising:
Continuously collecting interaction information of an interaction object, and constructing an interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model;
Performing multidimensional analysis processing on the interactive object based on the interactive image to obtain a recommended interactive interface and an interactive analysis frame corresponding to the interactive object;
And acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis frame so as to correct the interaction image of the interaction object, and adjusting the recommended interaction interface based on the corrected interaction image.
Preferably, the step of continuously collecting the interaction information of the interaction object and constructing the interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model comprises the following steps:
collecting interaction behaviors of the interaction objects, recording type marks and time marks corresponding to the interaction behaviors, and binding the type marks and the time marks with the interaction behaviors to obtain interaction information of the interaction objects;
performing object feature extraction processing of an interactive object on the interactive information according to a pre-trained user portrait intelligent model so as to obtain a plurality of object features of the interactive object;
Performing object tracing processing on each object feature to obtain an object vector group of the interactive object to which each object feature points; the object vector group comprises a plurality of object image intervals to which the object feature points and an interval confidence degree corresponding to each object image interval, wherein an object image interval describes one interactive image of the interactive object, and the interval confidence degree describes how likely the interactive object is to correspond to that object image interval;
carrying out integrated analysis processing on each object vector group to obtain the overall certainty factor of each object image interval; the overall certainty factor is a superposition result of the interval certainty factor of each object vector group in the object image interval;
And judging the overall certainty factor of each object image section according to the certainty standard so as to exclude the object image sections which do not meet the certainty standard, and taking the object image sections which meet the certainty standard as the interactive image of the interactive object.
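As a rough illustration of the confidence superposition and judgment steps above (the interval names, confidence values, and the 0.5 certainty standard below are invented for illustration and do not appear in the patent), the per-feature interval confidences can be summed per object image interval and then filtered against the certainty standard:

```python
# Sketch of aggregating interval confidences into an interaction portrait.
# All interval labels, confidence values, and the 0.5 threshold are
# illustrative assumptions, not values given in the patent.

def build_portrait(object_vector_groups, certainty_standard=0.5):
    """Superpose the interval confidence of every object vector group per
    object image interval, then keep only intervals meeting the standard."""
    overall = {}
    for group in object_vector_groups:        # one group per object feature
        for interval, confidence in group.items():
            overall[interval] = overall.get(interval, 0.0) + confidence
    # Exclude intervals that do not meet the certainty standard.
    return {iv: c for iv, c in overall.items() if c >= certainty_standard}

# Example: three object features, each pointing at portrait intervals.
groups = [
    {"frequent-camera-user": 0.25, "gamer": 0.125},
    {"frequent-camera-user": 0.5},
    {"gamer": 0.125, "novice": 0.0625},
]
portrait = build_portrait(groups, certainty_standard=0.5)
# Only "frequent-camera-user" (overall confidence 0.75) survives the standard.
```

The superposition here is a plain sum; the patent leaves the exact superposition rule open, so a weighted or normalized combination would fit the same skeleton.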
Preferably, the step of pre-training the user portrayal smart model comprises:
acquiring a plurality of sets of training data; the training data comprises interactive information data and object feature data, wherein the interactive information data is used for describing interactive information of an interactive object, and the object feature data is used for describing object features of the interactive object;
Constructing an input layer, a convolution layer, three full connection layers and an output layer;
Substituting each set of the training data into the input layer;
The input layer receives each set of collected training data and transmits it to the convolution layer; the convolution layer performs feature collection on each set of training data to obtain the interactive mapping features of each set of training data; an interactive mapping feature describes the mapping relation between the interactive information data and the object feature data in the training data, and this mapping relation is used to map interactive information onto the object features corresponding to it;
The three fully connected layers perform successive vector flattening processing on the various interactive mapping features extracted by the convolution layer, so as to flatten them into one-dimensional vector features; the one-dimensional vector features provide a basic representation of the various interactive mapping features;
The output layer outputs the one-dimensional vector features flattened by the fully connected layers.
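The layer stack described above (input → convolution → three fully connected layers → output) can be sketched as a shape trace. The patent does not specify kernel sizes, channel counts, or layer widths, so every dimension below (8×8 input, 3×3 kernel, FC widths 64/32/16) is a hypothetical placeholder:

```python
# Shape trace for the described stack: input -> convolution -> three fully
# connected layers -> output. All concrete dimensions are assumptions; the
# patent fixes only the layer types, not their sizes.

def conv2d_output_shape(h, w, kernel=3, stride=1, padding=0):
    """Spatial output size of a 2-D convolution (standard formula)."""
    return ((h + 2 * padding - kernel) // stride + 1,
            (w + 2 * padding - kernel) // stride + 1)

def trace_shapes(h=8, w=8, channels=4, fc_widths=(64, 32, 16)):
    shapes = [("input", (1, h, w))]
    ch, cw = conv2d_output_shape(h, w)         # feature collection (conv layer)
    shapes.append(("conv", (channels, ch, cw)))
    flat = channels * ch * cw                  # flatten feature maps to 1-D
    shapes.append(("flatten", (flat,)))
    for i, width in enumerate(fc_widths, 1):   # three fully connected layers
        shapes.append((f"fc{i}", (width,)))
    shapes.append(("output", (fc_widths[-1],)))  # output layer emits the vector
    return shapes

shapes = trace_shapes()
```

The trace makes the flattening step concrete: a 4-channel 6×6 feature map becomes a 144-element one-dimensional vector before the fully connected layers reduce it.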
Preferably, the step of performing multidimensional analysis processing on the interactive object based on the interactive image to obtain a recommended interactive interface and an interactive analysis frame corresponding to the interactive object includes:
Analyzing and processing object recommendation functions of each object image section of the interactive image respectively to obtain recommendation functions of each object image section corresponding to the interactive image and function attribution labels corresponding to each recommendation function, and generating corresponding function priority indexes of the recommendation functions according to the overall certainty factor of each object image section;
According to the function attribution labels and the function priority indexes of the recommended functions, listing the recommended functions to obtain a recommended function list; the recommending function list is provided with a parallel structure and a nested structure, and each recommending function is arranged in the recommending function list in the form of the parallel structure or the nested structure;
Generating corresponding function link ports according to the recommended functions; the function link port is used for enabling the interaction object to realize function interaction with the recommendation function;
According to the recommended function list, performing tabulation processing on each function link port to obtain a recommended interaction interface corresponding to the interaction object;
Performing predictive analysis processing on the interaction behavior based on the recommended interaction interface to obtain a plurality of possible interaction behaviors of the interaction object on the recommended interaction interface, and performing expansion analysis processing on the interaction portrait according to the various possible interaction behaviors to obtain a plurality of portrait modification orientations of the interaction portrait; wherein a portrait modification orientation describes a direction in which the interactive portrait of the interactive object may be modified;
and taking the various portrait modification orientations of the interactive portrait together as the interactive parsing framework of the interactive portrait.
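A minimal sketch of how the recommended-function list in the steps above might be assembled (the function names, attribution labels, and priority values are invented for illustration): functions are grouped by attribution label, giving the nested structure, and ordered within each group by the priority index derived from the overall certainty factor.

```python
# Hypothetical sketch: group recommended functions by attribution label
# (nested structure) and sort each group by priority index. The function
# names, labels, and priority values are illustrative only.

def build_recommendation_list(functions):
    """functions: list of (name, attribution_label, priority_index)."""
    grouped = {}
    for name, label, priority in functions:
        grouped.setdefault(label, []).append((name, priority))
    # Highest-priority function first within each attribution group.
    return {label: [name for name, _ in sorted(entries, key=lambda e: -e[1])]
            for label, entries in grouped.items()}

recs = build_recommendation_list([
    ("night-mode-photo", "camera", 0.9),
    ("panorama", "camera", 0.4),
    ("battery-saver", "system", 0.7),
])
# Each listed name would then be wired to a function link port on the
# recommended interaction interface.
```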
Preferably, the step of acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis frame to correct an interaction image of the interaction object, and performing adjustment processing on the recommended interaction interface based on the corrected interaction image includes:
acquiring real-time interaction information of the interaction object on the recommended interaction interface;
Performing interactive behavior analysis processing on the real-time interactive information according to the interactive analysis frame to obtain correction vectors of the real-time interactive information corresponding to the portrait correction directions; the correction vector is used for describing the tendency degree of the real-time interaction information in each portrait correction direction;
based on the correction vectors of the real-time interaction information corresponding to the image correction directions, correcting the interaction images;
Generating a plurality of new recommended functions based on the revised interactive image, and analyzing and processing list positions of the new recommended functions based on the recommended function list to obtain setting positions of the new recommended functions in the recommended function list;
generating corresponding newly-added link ports according to the newly-added recommending functions, and setting the newly-added link ports corresponding to the newly-added recommending functions at corresponding positions in the recommending interactive interface according to the setting positions of the newly-added recommending functions in the recommending function list.
Preferably, the step of performing interactive behavior analysis processing on the real-time interactive information according to the interactive analysis framework to obtain correction vectors of the real-time interactive information corresponding to the portrait correction orientations includes:
According to each portrait correction direction of the interactive analysis frame, analyzing and processing the direction degree of the real-time interactive information to obtain a preliminary vector between the real-time interactive information and each portrait correction direction;
Analyzing and processing mutual rejection degree of the preliminary vectors between the real-time interaction information and each portrait correction direction to obtain rejection parameters between the preliminary vectors; the rejection parameters are used for describing mutual exclusivity among the pointing degrees of the image correction orientations fed back by the preliminary vectors;
And carrying out vector adjustment processing on each preliminary vector based on rejection parameters among the preliminary vectors to obtain each correction vector corresponding to the minimum rejection parameter.
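One very simplified reading of the rejection-parameter adjustment above can be sketched as follows. The patent does not give the rejection computation, so this sketch assumes a declared list of mutually exclusive correction-direction pairs and resolves each conflict by suppressing the weaker preliminary vector, which trivially minimizes that pair's rejection:

```python
# Illustrative sketch of vector adjustment under mutual rejection: for each
# pair of portrait correction directions assumed mutually exclusive, suppress
# the weaker preliminary vector so the remaining vectors no longer conflict.
# Direction names, scores, and the exclusivity rule are all assumptions.

def adjust_vectors(preliminary, exclusive_pairs):
    """preliminary: {direction: tendency score in [0, 1]}.
    exclusive_pairs: direction pairs that cannot both hold."""
    corrected = dict(preliminary)
    for a, b in exclusive_pairs:
        if corrected.get(a, 0.0) > 0 and corrected.get(b, 0.0) > 0:
            weaker = a if corrected[a] < corrected[b] else b
            corrected[weaker] = 0.0  # zeroing one side removes the conflict
    return corrected

vectors = adjust_vectors(
    {"advanced-user": 0.8, "novice-user": 0.3, "camera-heavy": 0.6},
    exclusive_pairs=[("advanced-user", "novice-user")],
)
```

A fuller implementation would treat rejection as a continuous cost and search for the score assignment minimizing it, rather than hard-zeroing one direction.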
Preferably, the step of correcting the interactive portrait based on the correction vector to which the real-time interactive information corresponds to each portrait correction direction includes:
taking each object image interval fed back by the interactive image as a reference interval;
Generating corresponding target sections according to the image correction directions by taking the reference section as a reference; the target interval is used for describing the object image interval when the correction vector fully points to the portrait correction pointing;
And performing conversion processing on each target section according to each correction vector to obtain a correction section corresponding to each correction vector, performing intersection analysis on the reference section and each correction section to obtain an intersection section, and taking the intersection section as the interaction image after correction processing.
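The interval-correction steps above can be sketched with one-dimensional numeric intervals. The numeric ranges and the linear-interpolation conversion rule below are assumptions for illustration; the patent specifies only that each target interval is converted by its correction vector and that the result is intersected with the reference interval:

```python
# Sketch of correcting an object image interval: move the reference interval
# toward the target interval by the correction vector's magnitude, then take
# the intersection. Numeric ranges and the linear rule are illustrative.

def interpolate(reference, target, weight):
    """Shift an interval from reference toward target by weight in [0, 1];
    weight == 1.0 means the correction vector fully points at the target."""
    (r0, r1), (t0, t1) = reference, target
    return (r0 + weight * (t0 - r0), r1 + weight * (t1 - r1))

def intersect(a, b):
    """Intersection of two closed intervals, or None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

reference = (20.0, 60.0)   # e.g. a hypothetical usage-skill score interval
target = (40.0, 80.0)      # the interval if the correction fully applied
corrected = interpolate(reference, target, weight=0.5)
revised = intersect(reference, corrected)   # interaction image after correction
```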
In a second aspect, the present invention provides an interaction device based on artificial intelligence, configured to implement an interaction method based on artificial intelligence according to any one of the first aspect, including:
the portrait construction module is used for continuously collecting interaction information of the interaction object and constructing an interaction portrait of the interaction object according to the interaction information through a pre-trained user portrait intelligent model;
The portrait analysis module is used for carrying out multidimensional analysis processing on the interactive objects based on the interactive images to obtain recommended interactive interfaces and interactive analysis frames corresponding to the interactive objects;
The interactive correction module is used for acquiring real-time interactive information of the interactive object on the recommended interactive interface, carrying out interactive behavior analysis processing on the real-time interactive information according to the interactive analysis frame so as to correct the interactive image of the interactive object, and carrying out adjustment processing on the recommended interactive interface based on the corrected interactive image.
In a third aspect, the present invention provides a computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing an artificial intelligence based interaction method of any of the first aspects when the computer program is executed.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform an artificial intelligence based interaction method according to any of the first aspects.
The invention provides an interaction method based on artificial intelligence, which has the following beneficial effects:
According to the invention, the user's interaction information is continuously collected; the user's interaction portrait is constructed from that information by a pre-trained model; the portrait is analyzed to generate a recommended interaction interface and a parsing framework; user interaction data are captured in real time and the behavior is parsed through the framework; the user portrait is further corrected according to the parsing result, and the recommended interaction interface is adjusted according to the updated portrait. Through real-time updating the system can provide a more personalized user experience, the dynamically adjusted interface lets the user interact with the system more efficiently, and the prior-art problem that users unfamiliar with intelligent terminal products find it difficult to use them efficiently is solved.
Drawings
FIG. 1 is a schematic diagram of steps of an interaction method based on artificial intelligence according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an interaction device based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The implementation of the present invention will be described in detail below with reference to specific embodiments.
Referring to fig. 1 and 2, a preferred embodiment of the present invention is provided.
In a first aspect, the present invention provides an artificial intelligence based interaction method, comprising:
S1: continuously collecting interaction information of an interaction object, and constructing an interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model;
S2: performing multidimensional analysis processing on the interactive object based on the interactive image to obtain a recommended interactive interface and an interactive analysis frame corresponding to the interactive object;
s3: and acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis frame so as to correct the interaction image of the interaction object, and adjusting the recommended interaction interface based on the corrected interaction image.
Specifically, in step S1 of the embodiment provided by the present invention, the interaction behavior of the user in the interaction terminal, such as clicking, scrolling, browsing, searching, etc., is monitored and recorded in real time, and it is noted that the monitored items also include the content of specific interactions of the user, the number of times, frequency, order, etc. of various interaction behaviors.
More specifically, the interactive terminal may be a personal computer, a smart phone, or other smart terminal products capable of executing software programs.
More specifically, using log systems, event tracking, and other monitoring tools to collect data, useful information may be further extracted as features from the raw interaction data, which may include user behavior patterns, preferences, and the like.
More specifically, a machine learning model is trained using the historical data and the extracted features to construct a user representation model, real-time interaction data is input into the trained user representation model, and a real-time user interaction representation is generated.
It can be appreciated that by continuously collecting user interaction information, the intelligent model can more accurately understand the user's behavior and preferences, improving the accuracy of the user portrait; a refined user portrait in turn helps the system provide more personalized services and content.
Specifically, in step S2 of the embodiment provided by the present invention, multi-angle analysis is performed on the features in the interactive image, and the historical data and the real-time interactive data of the user are combined to predict the possible future demands and behaviors of the user.
More specifically, according to the characteristics in the interactive image and the prediction of the user demand, a personalized interactive interface is designed, wherein the interactive interface is a port for setting functional links, so that the user can directly use various functions of the interactive terminal through the ports without searching the functions through the interface in the original design of the interactive terminal.
More specifically, a framework is developed for the recommended interactive interface, so that the interactive behavior of the user can be analyzed in real time, more and more accurate user portraits of the user fed back by the real-time interactive information of the user in the recommended interactive interface are identified, and the recommended interactive interface and the interactive analysis framework are continuously optimized according to the user feedback and the interactive data.
Specifically, in step S3 of the embodiment provided by the present invention, behavior data of a user on a recommended interactive interface is obtained in real time, including browsing and using the recommended interactive interface by the user.
More specifically, the real-time data is analyzed by using the interactive parsing framework to identify the behavior patterns and intentions of the user, and to process and interpret the interactive behavior of the user, and the existing interactive portraits are revised and updated according to the real-time interactive information of the user, so as to ensure that the portraits reflect the latest user preferences and behaviors.
More specifically, according to the revised interactive portraits, the recommended interactive interface is dynamically adjusted, such as changing the ordering of the recommended content, adjusting the layout of the UI elements, and the like, and simultaneously, a user feedback mechanism, such as scoring, commenting or direct feedback buttons, can be integrated in the interface, so that users can directly provide comments on the recommended content or interface design, and the user portraits and the interface can be further refined by utilizing the feedback.
More specifically, iterative optimization is performed according to the monitoring analysis result, the interactive analysis frame and the recommended interactive interface are continuously adjusted, and a development flow of rapid iteration is maintained so as to ensure the sensitivity and the adaptability of the system.
It can be appreciated that by monitoring and updating the user portraits in real time, the system can more accurately predict user needs, provide a more personalized interactive experience, and the user portraits updated in real time can more accurately reflect the current preferences of the users, thereby improving the accuracy of the recommendation system.
The invention provides an interaction method based on artificial intelligence, which has the following beneficial effects:
According to the invention, the user's interaction information is continuously collected; the user's interaction portrait is constructed from that information by a pre-trained model; the portrait is analyzed to generate a recommended interaction interface and a parsing framework; user interaction data are captured in real time and the behavior is parsed through the framework; the user portrait is further corrected according to the parsing result, and the recommended interaction interface is adjusted according to the updated portrait. Through real-time updating the system can provide a more personalized user experience, the dynamically adjusted interface lets the user interact with the system more efficiently, and the prior-art problem that users unfamiliar with intelligent terminal products find it difficult to use them efficiently is solved.
Preferably, the step of continuously collecting the interaction information of the interaction object and constructing the interaction image of the interaction object according to the interaction information through a pre-trained user portrait intelligent model comprises the following steps:
S11: collecting interaction behaviors of the interaction objects, recording type marks and time marks corresponding to the interaction behaviors, and binding the type marks and the time marks with the interaction behaviors to obtain interaction information of the interaction objects;
s12: performing object feature extraction processing of an interactive object on the interactive information according to a pre-trained user portrait intelligent model so as to obtain a plurality of object features of the interactive object;
S13: performing object tracing processing on each object feature to obtain an object vector group of the interactive object to which each object feature points; the object vector group comprises a plurality of object image intervals to which the object feature points and an interval confidence degree corresponding to each object image interval, wherein an object image interval describes one interactive image of the interactive object, and the interval confidence degree describes how likely the interactive object is to correspond to that object image interval;
S14: carrying out integrated analysis processing on each object vector group to obtain the overall certainty factor of each object image interval; the overall certainty factor is a superposition result of the interval certainty factor of each object vector group in the object image interval;
S15: and judging the overall certainty factor of each object image section according to the certainty standard so as to exclude the object image sections which do not meet the certainty standard, and taking the object image sections which meet the certainty standard as the interactive image of the interactive object.
Specifically, the behavior of the user is monitored and recorded, including clicking, browsing and the like, the captured behavior is marked with a time stamp and a type label, each behavior is ensured to have a definite time and type context, and the time stamp and the type label are bound with the corresponding user behavior to form structured interaction data.
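The binding of a type label and a time stamp to each captured behavior described above can be sketched as a structured event record (the event names, payload fields, and timestamps below are invented for illustration):

```python
# Minimal sketch of binding a type mark and a time mark to each captured
# interaction behavior, forming structured interaction data. Event names,
# payload fields, and timestamps are hypothetical.
import time

def record_event(events, behavior_type, payload, timestamp=None):
    """Append one structured record: behavior payload + type mark + time mark."""
    events.append({
        "type": behavior_type,   # type mark, e.g. "click" or "scroll"
        "time": timestamp if timestamp is not None else time.time(),
        "payload": payload,      # what was interacted with, and how
    })
    return events

log = []
record_event(log, "click", {"target": "settings-icon"}, timestamp=1700000000.0)
record_event(log, "scroll", {"distance_px": 420}, timestamp=1700000001.5)
# Each record now carries an explicit time and type context, ready for
# feature extraction by the user portrait model.
```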
More specifically, the pre-trained user portrait intelligent model is used to analyze the interaction data and extract the user's object features, which may include interests, preferences, behavior patterns and the like. Each feature is traced to its source to determine its contribution to the user portrait, forming an object vector group: each object feature corresponds to portrait intervals and certainty factors, where a portrait interval describes the user's tendency in a certain aspect and the certainty factor represents the probability of that tendency.
More specifically, all the object vector groups are integrated, and the interval certainty factors of each image section are superposed to form the overall certainty factor. This step is the key to synthesizing the user interaction image and aims to form a comprehensive, multi-dimensional representation of user characteristics.
More specifically, the overall certainty factor is evaluated against the set certainty criterion, and the image intervals that do not meet the criterion are eliminated, ensuring that the final user interaction portrait includes only features with high certainty. All image intervals meeting the certainty criterion are integrated into the user interaction portrait, which represents the user's behavior patterns and preferences for subsequent personalized recommendation and interaction interface adjustment.
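Steps S13–S15 can be sketched as follows, assuming each object vector group is a mapping from object image intervals to interval certainty factors; the superposition rule (simple addition) and the threshold value are illustrative assumptions.

```python
from collections import defaultdict

def build_portrait(vector_groups, threshold=1.0):
    """Superpose the interval certainty factors across all object vector
    groups (S14), then keep only the image intervals whose overall certainty
    meets the certainty criterion (S15)."""
    overall = defaultdict(float)
    for group in vector_groups:             # one group per traced object feature
        for interval, certainty in group.items():
            overall[interval] += certainty  # superposition of interval certainties
    # intervals meeting the criterion together form the interaction portrait
    return {iv: c for iv, c in overall.items() if c >= threshold}

groups = [
    {"tech_enthusiast": 0.6, "bargain_hunter": 0.2},   # feature 1's vector group
    {"tech_enthusiast": 0.7, "casual_browser": 0.3},   # feature 2's vector group
]
portrait = build_portrait(groups, threshold=1.0)
```

Here only the `tech_enthusiast` interval accumulates enough certainty to survive the criterion; the other candidate intervals are excluded.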
It can be understood that, through detailed behavior tracking and feature extraction, the constructed user portrait is more comprehensive and reflects the real demands and preferences of the user more accurately. An accurate user portrait greatly improves the relevance of a recommendation system, thereby increasing user satisfaction and engagement, and the user interaction interface can be adjusted in real time according to the user portrait to provide a more personalized and attractive user experience.
Preferably, the step of pre-training the user portrait intelligent model comprises:
S121: acquiring a plurality of sets of training data; the training data comprises interactive information data and object feature data, wherein the interactive information data is used for describing interactive information of an interactive object, and the object feature data is used for describing object features of the interactive object;
S122: constructing an input layer, a convolution layer, three full connection layers and an output layer;
S123: substituting each set of the training data into the input layer;
S124: the input layer receives the collected training data of each group and transmits the training data of each group to the convolution layer, and the convolution layer is used for performing feature collection on the training data of each group so as to obtain interactive mapping features of the training data of each group; the interactive mapping feature is used for describing a mapping relation between the interactive information data and the object feature data in the training data, and the mapping relation is used for carrying out mapping processing on the interactive information so as to obtain the object feature corresponding to the interactive information;
S125: the three full connection layers are used for carrying out continuous vector flattening processing on the various interactive mapping features extracted by the convolution layer so as to flatten the various interactive mapping features into one-dimensional vector features; the one-dimensional vector features are used for carrying out basic graphic expression on various interactive mapping features;
S126: the output layer is used for outputting the one-dimensional vector features flattened by the fully connected layers.
Specifically, sets of interaction information data and object feature data are prepared, which should represent the interaction behavior of the user and its corresponding features.
More specifically, a neural network model architecture is designed that includes an input layer, a convolution layer, three full-connection layers, and an output layer, each of which is designed with a specific function and connection mode, and training data is sent to the input layer, where the data needs to be subjected to a certain preprocessing, such as normalization.
More specifically, in the convolution layer, feature extraction is performed on input data to obtain interactive mapping features, local features and modes in the data are identified by utilizing a filter of the convolution layer, the features extracted by the convolution layer are further abstracted and integrated through three layers of full-connection layers, and the layers map and flatten high-dimensional features into one-dimensional vectors, so that the processing of an output layer is facilitated.
More specifically, the output layer is responsible for outputting the one-dimensional vector features flattened by the fully connected layers; these feature vectors will be used for subsequent interactive portrait construction or other related tasks.
It can be understood that the convolution layer can effectively extract local features and patterns from the interaction data, improving the model's understanding of user behavior. The fully connected layers convert high-dimensional features into one-dimensional vectors so that the features are comprehensively represented in the final user portrait. Through training on a large amount of data, the model generalizes better to unseen data, improving the accuracy of prediction and classification, and can construct a more detailed and accurate user portrait from complex data relations. Compared with traditional feature engineering, this automatic feature extraction obtains more accurate feature expressions more quickly.
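A toy forward pass through the described architecture (input layer, convolution layer, three fully connected layers, output layer) might look like the NumPy sketch below; the layer sizes, random weights, and ReLU activations are assumptions for illustration, since the patent does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution with ReLU: x (length L), kernels (K, k)
    -> (K, L-k+1) interactive mapping feature maps."""
    K, k = kernels.shape
    L = x.shape[0]
    out = np.empty((K, L - k + 1))
    for i in range(L - k + 1):
        out[:, i] = kernels @ x[i:i + k]
    return np.maximum(out, 0.0)

def dense(x, W, b):
    """One fully connected layer with ReLU."""
    return np.maximum(W @ x + b, 0.0)

# Hypothetical sizes: 16 interaction-info values in, 8 object-feature scores out.
x = rng.standard_normal(16)                       # input layer: one training sample
fmaps = conv1d(x, rng.standard_normal((4, 3)))    # convolution layer: feature collection
h = fmaps.ravel()                                 # flatten feature maps to a vector
h = dense(h, rng.standard_normal((32, h.size)), np.zeros(32))  # fully connected 1
h = dense(h, rng.standard_normal((16, 32)), np.zeros(16))      # fully connected 2
out = rng.standard_normal((8, 16)) @ h            # fully connected 3 -> output layer
```

Training (loss, backpropagation) is omitted; the sketch only shows how the layers chain the interaction information into a one-dimensional feature vector.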
Preferably, the step of performing multidimensional analysis processing on the interactive object based on the interactive image to obtain a recommended interactive interface and an interactive analysis frame corresponding to the interactive object includes:
S21: analyzing and processing object recommendation functions of each object image section of the interactive image respectively to obtain recommendation functions of each object image section corresponding to the interactive image and function attribution labels corresponding to each recommendation function, and generating corresponding function priority indexes of the recommendation functions according to the overall certainty factor of each object image section;
S22: according to the function attribution labels and the function priority indexes of the recommended functions, listing the recommended functions to obtain a recommended function list; the recommending function list is provided with a parallel structure and a nested structure, and each recommending function is arranged in the recommending function list in the form of the parallel structure or the nested structure;
S23: generating corresponding function link ports according to the recommended functions; the function link port is used for enabling the interaction object to realize function interaction with the recommendation function;
S24: according to the recommended function list, performing tabulation processing on each function link port to obtain a recommended interaction interface corresponding to the interaction object;
S25: performing predictive analysis processing on the interaction behavior based on the recommended interaction interface to obtain a plurality of possible interaction behaviors of the interaction object on the recommended interaction interface, and performing expansion analysis processing on the interactive portrait according to the various possible interaction behaviors to obtain a plurality of portrait correction orientations of the interactive portrait; wherein a portrait correction orientation is used for describing a correction direction of the interactive portrait of the interactive object;
S26: taking the various portrait correction orientations of the interactive portrait together as the interactive parsing framework of the interactive portrait.
Specifically, different object image sections in the user portrait are analyzed, corresponding functions are recommended for each section, function attribution labels are allocated, and a function priority index is generated based on the certainty factor of the object image sections.
More specifically, the recommended functions are ordered and listed according to the attribution labels and the priority index, and a recommended function list with a parallel structure and a nested structure is formed.
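One plausible way to build such a list, assuming each recommended function carries a (name, attribution label, priority index) triple, is to nest functions under their labels and order both levels by priority; the grouping rule and example data are illustrative assumptions.

```python
from collections import defaultdict

def build_function_list(functions):
    """Group recommended functions by attribution label (nested structure)
    and order each group's entries, and the groups themselves, by descending
    function priority index (parallel structure)."""
    groups = defaultdict(list)
    for name, label, priority in functions:
        groups[label].append((name, priority))
    return {
        label: [name for name, _ in sorted(entries, key=lambda e: -e[1])]
        for label, entries in sorted(
            groups.items(),
            key=lambda kv: -max(p for _, p in kv[1]))  # lead with the strongest group
    }

funcs = [
    ("compare_specs", "shopping", 0.9),
    ("price_alerts", "shopping", 0.7),
    ("tech_news_feed", "reading", 0.8),
]
flist = build_function_list(funcs)
```

The resulting ordered mapping can then drive the tabulation of function link ports into the recommended interaction interface.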
More specifically, a corresponding function link port is generated for each recommended function, and interactive functions can be directly realized through the function link ports, so that the situation that a user needs to find and use the functions according to the inherent design of the interactive terminal in the traditional design is avoided, and the recommended interactive interface for the user is designed by listing the function link ports according to a recommended function list.
More specifically, by predicting and analyzing the interaction behavior on the recommended interface, the possible interaction behaviors of the user are deduced, and further analysis and correction guidance for the user portrait are derived from these predictions. The portrait correction orientations are integrated to form an interaction parsing framework, which is used to guide the correction direction of the user interaction portrait.
It can be understood that the customized interaction interface aiming at the user portrait is created according to the specific portrait recommendation personalized function of the user, so that the intuitiveness and usability of the interface are improved, and the user experience is enhanced.
Meanwhile, an interaction analysis frame corresponding to the recommended interaction interface is generated, further analysis and reaction can be carried out on the interaction behavior of the user on the recommended interaction interface, a direction is provided for continuous optimization of the user portrait, and the user portrait is dynamically updated along with the change of the user behavior.
Preferably, the step of acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis frame to correct an interaction image of the interaction object, and performing adjustment processing on the recommended interaction interface based on the corrected interaction image includes:
S31: acquiring real-time interaction information of the interaction object on the recommended interaction interface;
S32: performing interactive behavior analysis processing on the real-time interactive information according to the interactive analysis frame to obtain correction vectors of the real-time interactive information corresponding to the portrait correction directions; the correction vector is used for describing the tendency degree of the real-time interaction information in each portrait correction direction;
S33: based on the correction vectors of the real-time interaction information corresponding to the image correction directions, correcting the interaction images;
S34: generating a plurality of new recommended functions based on the revised interactive image, and analyzing and processing list positions of the new recommended functions based on the recommended function list to obtain setting positions of the new recommended functions in the recommended function list;
S35: generating corresponding newly-added link ports according to the newly-added recommending functions, and setting the newly-added link ports corresponding to the newly-added recommending functions at corresponding positions in the recommending interactive interface according to the setting positions of the newly-added recommending functions in the recommending function list.
Specifically, real-time interaction data of a user on a recommended interaction interface is collected, and the real-time interaction information is analyzed by utilizing an interaction analysis framework to determine the tendency degree of the real-time interaction information in each portrait correction direction, namely a correction vector.
More specifically, the interactive portrait of the user is updated and revised in real time according to the revised vector, a new recommended function is generated according to the revised portrait of the user, the proper position of the new recommended function in the existing recommended function list is analyzed and determined, a corresponding new link port is created for the new recommended function, and the ports are arranged to the corresponding positions in the recommended interactive interface according to the positions in the function list.
It can be understood that through real-time interaction data analysis, the user portrait can be dynamically updated to reflect the user's latest preferences and behaviors, and the recommended interaction interface can be adjusted according to the dynamically updated portrait to better meet the user's personalized demands. By correcting the user portrait, the system can predict user demands more accurately and thereby provide better-fitting function recommendations. The layout of the recommended interaction interface is optimized according to the importance of the newly added functions and the user's demands, improving operating convenience, and through continuous interaction analysis and interface adjustment the system aims to provide a smoother, more intuitive user experience.
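The placement of a newly added recommended function at the list position its priority warrants (steps S34–S35) could be sketched as below; keeping each attribution group as a descending-priority list, and reusing that position for the new link port, are assumptions for illustration.

```python
import bisect

def insert_new_function(group, name, priority):
    """group: list of (priority, name) kept in descending priority order.
    Insert the newly added recommended function at the slot its priority
    warrants and return that setting position, which is where the
    corresponding newly added link port goes in the recommended interface."""
    keys = [-p for p, _ in group]          # bisect needs ascending keys
    idx = bisect.bisect_left(keys, -priority)
    group.insert(idx, (priority, name))
    return idx

shopping = [(0.9, "compare_specs"), (0.7, "price_alerts")]
pos = insert_new_function(shopping, "coupon_wallet", 0.8)
```

After the insertion, `shopping` holds the three functions in priority order and `pos` gives the position at which the new port is set in the interface.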
Preferably, the step of performing interactive behavior analysis processing on the real-time interactive information according to the interactive analysis framework to obtain correction vectors of the real-time interactive information corresponding to the portrait correction orientations includes:
S321: according to each portrait correction direction of the interactive analysis frame, analyzing and processing the direction degree of the real-time interactive information to obtain a preliminary vector between the real-time interactive information and each portrait correction direction;
S322: analyzing and processing mutual rejection degree of the preliminary vectors between the real-time interaction information and each portrait correction direction to obtain rejection parameters between the preliminary vectors; the rejection parameters are used for describing mutual exclusivity among the pointing degrees of the image correction orientations fed back by the preliminary vectors;
S323: and carrying out vector adjustment processing on each preliminary vector based on rejection parameters among the preliminary vectors to obtain each correction vector corresponding to the minimum rejection parameter.
Specifically, for each portrait modification orientation defined in the interactive parsing framework, real-time interactive information is analyzed, and preliminary vectors characterizing the degrees of these orientations are generated.
More specifically, the repellency between different preliminary vectors, i.e. whether there is mutual exclusion or conflict of the user behavior trends represented by two or more vectors, is analyzed, and a repellency parameter is calculated, which describes the degree of repellency between the user behavior trends represented by the respective preliminary vectors.
More specifically, the preliminary vectors are adjusted according to the rejection parameters to reduce mutual exclusivity between the vectors, resulting in final correction vectors having minimized rejection parameters.
It can be appreciated that, by considering the rejection between behavior tendencies, more accurate user behavior correction vectors can be generated, improving the accuracy of the user portrait. Analysis of the rejection parameters and adjustment of the vectors help improve the decision efficiency of the recommendation system and reduce erroneous recommendations, and the adjusted correction vectors better reflect the user's real preferences, providing a more personalized user experience.
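A simple interpretation of steps S321–S323, in which the rejection parameter of two mutually exclusive orientations is taken as the product of their preliminary tendencies and the weaker side of each conflict is damped, is sketched below; this particular rejection measure and damping rule are illustrative assumptions, not the patent's prescribed formulas.

```python
def adjust_vectors(prelim, exclusive_pairs):
    """prelim: portrait correction orientation -> preliminary tendency in [0, 1].
    exclusive_pairs: orientation pairs whose tendencies conflict.
    The rejection parameter of a pair is the product of its two tendencies;
    the weaker side of each conflicting pair is damped so the resulting
    correction vectors carry (near) minimal rejection."""
    vec = dict(prelim)
    for a, b in exclusive_pairs:
        rejection = vec[a] * vec[b]
        if rejection > 0:
            weaker = a if vec[a] <= vec[b] else b
            vec[weaker] *= 1 - rejection   # damp the weaker, keep the dominant
    return vec

prelim = {"price_sensitive": 0.8, "premium_seeker": 0.5, "tech_curious": 0.6}
adjusted = adjust_vectors(prelim, [("price_sensitive", "premium_seeker")])
```

Here the conflicting `premium_seeker` tendency is reduced while the dominant `price_sensitive` tendency and the unconflicted `tech_curious` tendency are preserved.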
Preferably, the step of correcting the interactive portrait based on the correction vector to which the real-time interactive information corresponds to each portrait correction direction includes:
S331: taking each object image interval fed back by the interactive image as a reference interval;
S332: generating corresponding target sections according to the image correction orientations by taking the reference section as a reference; the target interval is used for describing the object image interval when the correction vector fully points to the portrait correction orientation;
S333: and performing conversion processing on each target section according to each correction vector to obtain a correction section corresponding to each correction vector, performing intersection analysis on the reference section and each correction section to obtain an intersection section, and taking the intersection section as the interaction image after correction processing.
Specifically, the existing object image sections in the interactive image are used as reference sections, and these sections represent the current image state of the user.
More specifically, depending on the image correction direction, a series of target sections representing ideal states that the user image should reach if the correction vector is fully directed in a particular image correction direction are generated using the reference section as a starting point.
More specifically, the target interval is transformed using the correction vector to obtain a corrected interval, which involves scaling, shifting or other transformation to reflect the user representation changes under the influence of the correction vector.
More specifically, intersection analysis is performed on the reference section and all the correction sections to determine their common part, namely the intersection section, which represents the user portrait after all correction factors are considered. The intersection section obtained by the analysis is taken as the corrected interactive portrait, which more accurately reflects the user's current interaction situation and preferences.
It can be understood that through the process, the interactive image of the user can be ensured to be continuously updated, the latest behavior and preference of the user are reflected in real time, and the more accurate user image can improve the matching degree of a recommendation system, so that the correlation of the recommended content and the satisfaction degree of the user are improved.
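Treating each interval as a numeric (lo, hi) pair, the target-interval conversion and intersection of steps S331–S333 might be sketched as follows; linear interpolation by the correction-vector weight is an assumed conversion rule, and the example orientations and numbers are hypothetical.

```python
def lerp_interval(ref, target, w):
    """Move the reference interval toward the target interval by correction
    weight w in [0, 1] (w = 1 means the vector fully points to the orientation)."""
    return (ref[0] + w * (target[0] - ref[0]), ref[1] + w * (target[1] - ref[1]))

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def correct_interval(reference, targets, weights):
    """reference: the (lo, hi) reference interval; targets/weights keyed by
    portrait correction orientation. Each target is converted by its
    correction weight, then all corrected intervals are intersected with
    the reference to yield the corrected portrait interval."""
    result = reference
    for key, target in targets.items():
        corrected = lerp_interval(reference, target, weights[key])
        result = intersect(result, corrected)
        if result is None:
            return None   # corrections are irreconcilable
    return result

ref = (0.2, 0.8)
targets = {"more_tech": (0.5, 1.0), "less_price": (0.0, 0.6)}
weights = {"more_tech": 0.5, "less_price": 0.5}
new_iv = correct_interval(ref, targets, weights)
```

With these numbers the two half-applied corrections narrow the reference interval from both sides, giving the corrected portrait interval.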
Referring to fig. 2, in a second aspect, the present invention provides an interaction device based on artificial intelligence, for implementing an interaction method based on artificial intelligence according to any one of the first aspect, including:
the portrait construction module is used for continuously collecting interaction information of the interaction object and constructing an interaction portrait of the interaction object according to the interaction information through a pre-trained user portrait intelligent model;
The portrait analysis module is used for carrying out multidimensional analysis processing on the interactive objects based on the interactive images to obtain recommended interactive interfaces and interactive analysis frames corresponding to the interactive objects;
The interactive correction module is used for acquiring real-time interactive information of the interactive object on the recommended interactive interface, carrying out interactive behavior analysis processing on the real-time interactive information according to the interactive analysis frame so as to correct the interactive image of the interactive object, and carrying out adjustment processing on the recommended interactive interface based on the corrected interactive image.
In this embodiment, for specific implementation of each module in the above embodiment of the apparatus, please refer to the description in the above embodiment of the method, and no further description is given here.
In a third aspect, the present invention provides a computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing an artificial intelligence based interaction method of any of the first aspects when the computer program is executed.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform an artificial intelligence based interaction method according to any of the first aspects.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (7)

1.一种基于人工智能的交互方法,其特征在于,包括:1. An interactive method based on artificial intelligence, characterized by comprising: 持续采集交互对象的交互信息,并通过预先训练的用户画像智能模型根据所述交互信息构建所述交互对象的交互画像;Continuously collect interaction information of the interaction object, and construct an interaction profile of the interaction object according to the interaction information through a pre-trained user profile intelligent model; 基于所述交互画像对所述交互对象进行多维度的分析处理,得到对应所述交互对象的推荐交互界面和交互解析框架;Performing multi-dimensional analysis and processing on the interactive object based on the interactive portrait to obtain a recommended interactive interface and an interactive analysis framework corresponding to the interactive object; 获取所述交互对象在所述推荐交互界面上的实时交互信息,根据所述交互解析框架对所述实时交互信息进行交互行为解析处理,以对所述交互对象的交互画像进行修正处理,并基于修正处理后的所述交互画像对所述推荐交互界面进行调整处理;Acquiring real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis framework to correct the interaction portrait of the interaction object, and adjusting the recommended interaction interface based on the corrected interaction portrait; 持续采集交互对象的交互信息,并通过预先训练的用户画像智能模型根据所述交互信息构建所述交互对象的交互画像的步骤包括:The steps of continuously collecting interaction information of the interaction object and constructing an interaction profile of the interaction object according to the interaction information through a pre-trained user profile intelligent model include: 采集所述交互对象的交互行为,并记录对应所述交互行为的类型标记和时间标记,将所述类型标记和所述时间标记与所述交互行为进行绑定处理,以得到所述交互对象的交互信息;Collecting the interactive behavior of the interactive object, and recording the type tag and the time tag corresponding to the interactive behavior, and binding the type tag and the time tag with the interactive behavior to obtain the interactive information of the interactive object; 根据预先训练的用户画像智能模型对所述交互信息进行交互对象的对象特征提取处理,以得到所述交互对象的若干个对象特征;Performing object feature extraction processing on the 
interaction information according to a pre-trained user portrait intelligent model to obtain a plurality of object features of the interaction object; 对各个所述对象特征分别进行对象溯源处理,以得到各个所述对象特征指向的所述交互对象的对象向量组;其中,所述对象向量组包括对象特征指向所述交互对象的若干对象形象区间与对应各个所述对象形象区间的区间确信度,所述对象形象区间用于描述所述交互对象的一种交互画像,所述区间确信度用于描述所述交互对象对应所述对象形象区间的可能性程度;Performing object tracing processing on each of the object features respectively to obtain an object vector group of the interactive object pointed to by each of the object features; wherein the object vector group includes a number of object image intervals pointed to by the object features and interval confidences corresponding to each of the object image intervals, the object image intervals are used to describe an interactive portrait of the interactive object, and the interval confidences are used to describe the possibility that the interactive object corresponds to the object image intervals; 对各个所述对象向量组进行整合分析处理,以得到各个所述对象形象区间的整体确信度;其中,所述整体确信度为各个所述对象向量组在所述对象形象区间的所述区间确信度的叠加结果;Performing integrated analysis on each of the object vector groups to obtain the overall confidence of each of the object image intervals; wherein the overall confidence is the superposition result of the interval confidences of each of the object vector groups in the object image interval; 根据确信标准对各个所述对象形象区间的整体确信度进行判断处理,以排除不符合所述确信标准的所述对象形象区间,并将符合所述确信标准的所述对象形象区间共同作为所述交互对象的交互画像;The overall confidence of each of the object image intervals is judged according to the confidence standard to exclude the object image intervals that do not meet the confidence standard, and the object image intervals that meet the confidence standard are collectively used as the interactive portraits of the interactive objects; 基于所述交互画像对所述交互对象进行多维度的分析处理,得到对应所述交互对象的推荐交互界面和交互解析框架的步骤包括:The step of performing multi-dimensional analysis and processing on the interactive object based on the interactive portrait to obtain a recommended interactive interface and an interactive analysis framework corresponding to the 
interactive object includes: 对所述交互画像的各个所述对象形象区间分别进行对象推荐功能的分析处理,以得到对应所述交互画像的各个所述对象形象区间的推荐功能和对应各个所述推荐功能的功能归属标签,并根据各个所述对象形象区间的整体确信度生成对应的所述推荐功能的功能优先指数;Performing object recommendation function analysis and processing on each of the object image intervals of the interactive portrait respectively, so as to obtain the recommended functions corresponding to each of the object image intervals of the interactive portrait and the function attribution labels corresponding to each of the recommended functions, and generating the function priority index of the corresponding recommended function according to the overall confidence of each of the object image intervals; 根据各个所述推荐功能的功能归属标签和功能优先指数,对各个所述推荐功能进行列表化处理,以得到推荐功能列表;其中,所述推荐功能列表具有并列结构与嵌套结构,各个所述推荐功能以所述并列结构或所述嵌套结构的形式设置在所述推荐功能列表中;According to the function attribution label and function priority index of each of the recommended functions, each of the recommended functions is tabulated to obtain a recommended function list; wherein the recommended function list has a parallel structure and a nested structure, and each of the recommended functions is arranged in the recommended function list in the form of the parallel structure or the nested structure; 根据各个所述推荐功能生成对应的功能链接端口;其中,所述功能链接端口用于供所述交互对象实现与所述推荐功能之间的功能交互;Generate a corresponding function link port according to each of the recommended functions; wherein the function link port is used for the interactive object to realize functional interaction with the recommended function; 根据所述推荐功能列表,对各个所述功能链接端口进行列表化处理,得到对应所述交互对象的推荐交互界面;According to the recommended function list, each of the function link ports is tabulated to obtain a recommended interaction interface corresponding to the interaction object; 基于所述推荐交互界面进行交互行为的预测分析处理,得到所述交互对象在所述推荐交互界面上的若干种可能的交互行为,并根据若干种所述可能的交互行为对所述交互画像进行拓展分析处理,得到所述交互画像的若干种画像修正指向;其中,所述画像修正指向用于描述所述交互对象的交互画像的修正方向;Based on the recommended interaction interface, a predictive analysis process of the interaction behavior is performed to obtain several possible 
interaction behaviors of the interaction object on the recommended interaction interface, and an extended analysis process is performed on the interaction portrait according to the several possible interaction behaviors to obtain several portrait correction directions of the interaction portrait; wherein the portrait correction direction is used to describe the correction direction of the interaction portrait of the interaction object; 将所述交互画像的各种所述画像修正指向共同作为所述交互画像的交互解析框架;All the portrait correction directions of the interaction portrait are collectively used as the interaction parsing framework of the interaction portrait; 获取所述交互对象在所述推荐交互界面上的实时交互信息,根据所述交互解析框架对所述实时交互信息进行交互行为解析处理,以对所述交互对象的交互画像进行修正处理,并基于修正处理后的所述交互画像对所述推荐交互界面进行调整处理的步骤包括:The steps of obtaining real-time interaction information of the interaction object on the recommended interaction interface, performing interaction behavior analysis processing on the real-time interaction information according to the interaction analysis framework to correct the interaction portrait of the interaction object, and adjusting the recommended interaction interface based on the corrected interaction portrait include: 获取所述交互对象在所述推荐交互界面上的实时交互信息;Acquire real-time interaction information of the interaction object on the recommendation interaction interface; 根据所述交互解析框架对所述实时交互信息进行交互行为解析处理,以得到所述实时交互信息对应各个所述画像修正指向的修正向量;其中,所述修正向量用于描述所述实时交互信息在各个所述画像修正指向上的倾向程度;Performing interactive behavior analysis processing on the real-time interactive information according to the interactive analysis framework to obtain correction vectors corresponding to each of the portrait correction directions of the real-time interactive information; wherein the correction vectors are used to describe the degree of inclination of the real-time interactive information in each of the portrait correction directions; 基于所述实时交互信息对应各个所述画像修正指向的修正向量,对所述交互画像进行修正处理;Based on the correction vectors corresponding to the correction points of the portraits in the real-time 
interaction information, the interaction portraits are corrected; 基于修正处理后的所述交互画像生成若干个新增推荐功能,并基于所述推荐功能列表对各个所述新增推荐功能进行列表位置的分析处理,得到各个所述新增推荐功能在所述推荐功能列表中的设置位置;Generate a number of newly added recommended functions based on the modified interaction portrait, and analyze the list position of each newly added recommended function based on the recommended function list to obtain the setting position of each newly added recommended function in the recommended function list; 根据各个所述新增推荐功能生成对应的新增链接端口,并根据各个所述新增推荐功能在所述推荐功能列表中的设置位置将对应各个所述新增推荐功能的各个所述新增链接端口设置在所述推荐交互界面中对应的位置。A corresponding new link port is generated according to each of the new recommended functions, and each of the new link ports corresponding to each of the new recommended functions is set at a corresponding position in the recommendation interaction interface according to the setting position of each of the new recommended functions in the recommended function list. 2.如权利要求1所述的一种基于人工智能的交互方法,其特征在于,所述用户画像智能模型的预先训练的步骤包括:2. The artificial intelligence-based interaction method according to claim 1, wherein the step of pre-training the user portrait intelligent model comprises: 获取若干组训练数据;其中,所述训练数据包括交互信息数据和对象特征数据,所述交互信息数据用于描述交互对象的交互信息,所述对象特征数据用于描述所述交互对象的对象特征;Acquire several sets of training data; wherein the training data includes interaction information data and object feature data, the interaction information data is used to describe the interaction information of the interaction object, and the object feature data is used to describe the object feature of the interaction object; 构建输入层、卷积层、三层全连接层以及输出层;Construct the input layer, convolutional layer, three fully connected layers, and output layer; 将各组所述训练数据代入至所述输入层;Substituting each group of the training data into the input layer; 所述输入层级接收采集到的各组所述训练数据,并将各组所述训练数据传输至所述卷积层,所述卷积层用于对各组所述训练数据进行特征采集,以获取各组所述训练数据的交互映射特征;其中,所述交互映射特征用于描述所述训练数据中的所述交互信息数据与所述对象特征数据的映射关系,所述映射关系用于对所述交互信息进行映射处理,以得到对应所述交互信息的所述对象特征;The input layer receives each group of the collected training data, and 
transmits each group of the training data to the convolution layer, and the convolution layer is used to collect features of each group of the training data to obtain interactive mapping features of each group of the training data; wherein the interactive mapping features are used to describe the mapping relationship between the interactive information data and the object feature data in the training data, and the mapping relationship is used to map the interactive information to obtain the object features corresponding to the interactive information; 三层所述全连接层用于对所述卷积层提取出的各种所述交互映射特征进行连续的向量展平处理,以将各种所述交互映射特征展平为一维向量特征;所述一维向量特征用于对各种所述交互映射特征进行基础的图形表达;The three fully connected layers are used to perform continuous vector flattening processing on the various interactive mapping features extracted by the convolutional layer, so as to flatten the various interactive mapping features into one-dimensional vector features; the one-dimensional vector features are used to perform basic graphical expressions on the various interactive mapping features; 所述输出层用于输出展开的所述一维向量特征。The output layer is used to output the expanded one-dimensional vector features. 3.如权利要求1所述的一种基于人工智能的交互方法,其特征在于,根据所述交互解析框架对所述实时交互信息进行交互行为解析处理,以得到所述实时交互信息对应各个所述画像修正指向的修正向量的步骤包括:3. 
The artificial-intelligence-based interaction method according to claim 1, wherein performing interaction behavior analysis on the real-time interaction information according to the interaction analysis framework, to obtain the correction vector of the real-time interaction information for each portrait correction direction, comprises:

analyzing, for each portrait correction direction of the interaction analysis framework, the degree to which the real-time interaction information points toward that direction, to obtain a preliminary vector between the real-time interaction information and each portrait correction direction;

analyzing the mutual repulsion between the preliminary vectors to obtain repulsion parameters between them, wherein a repulsion parameter describes the mutual exclusivity between the pointing degrees toward the portrait correction directions fed back by the preliminary vectors;

and adjusting each preliminary vector based on the repulsion parameters between the preliminary vectors, to obtain the correction vectors corresponding to the minimum repulsion parameter.

4.
The artificial-intelligence-based interaction method according to claim 1, wherein correcting the interaction portrait based on the correction vector of the real-time interaction information for each portrait correction direction comprises:

taking each object image interval fed back by the interaction portrait as a reference interval;

generating, with the reference interval as a baseline, a corresponding target interval for each portrait correction direction, wherein a target interval describes the object image interval that results when the correction vector points entirely toward that portrait correction direction;

and converting each target interval according to the corresponding correction vector to obtain a corrected interval for each correction vector, performing an intersection analysis on the reference interval and the corrected intervals to obtain an intersection interval, and taking the intersection interval as the corrected interaction portrait.

5.
An artificial-intelligence-based interaction device for implementing the artificial-intelligence-based interaction method according to any one of claims 1 to 4, comprising:

a portrait construction module, configured to continuously collect interaction information of an interaction object and to construct an interaction portrait of the interaction object from the interaction information through a pre-trained user portrait intelligent model;

a portrait analysis module, configured to perform multi-dimensional analysis on the interaction object based on the interaction portrait, to obtain a recommended interaction interface and an interaction analysis framework corresponding to the interaction object;

and an interaction correction module, configured to obtain real-time interaction information of the interaction object on the recommended interaction interface, perform interaction behavior analysis on the real-time interaction information according to the interaction analysis framework so as to correct the interaction portrait of the interaction object, and adjust the recommended interaction interface based on the corrected interaction portrait.

6. A computer device comprising a memory and a processor, the memory storing a computer program runnable on the processor, wherein the processor, when executing the computer program, implements the artificial-intelligence-based interaction method according to any one of claims 1 to 4.

7.
A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when run by a processor, causes the processor to execute the artificial-intelligence-based interaction method according to any one of claims 1 to 4.
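The interval-correction step recited in claim 4 can be illustrated with a minimal sketch. The claim does not specify the conversion formula, so the linear interpolation weight and the `(low, high)` interval representation below are assumptions for illustration only, not the claimed implementation:

```python
def correct_portrait(reference, targets, correction_weights):
    """Illustrative sketch of the claim-4 correction step.

    reference: (low, high) object image interval fed back by the portrait.
    targets: one (low, high) target interval per portrait correction direction.
    correction_weights: one scalar in [0, 1] per direction; 1.0 means the
        real-time behaviour points entirely toward that direction.
    """
    corrected = []
    for (t_low, t_high), w in zip(targets, correction_weights):
        # Convert each target interval toward the reference according to the
        # correction weight (linear interpolation is an assumption here).
        low = reference[0] + w * (t_low - reference[0])
        high = reference[1] + w * (t_high - reference[1])
        corrected.append((low, high))

    # Intersect the reference interval with every corrected interval;
    # the intersection interval becomes the corrected portrait.
    low = max([reference[0]] + [c[0] for c in corrected])
    high = min([reference[1]] + [c[1] for c in corrected])
    if low > high:
        return None  # empty intersection: no common interval remains
    return (low, high)
```

With a reference interval of `(0, 10)`, target intervals `(2, 8)` and `(4, 12)`, and weights `0.5` each, the corrected intervals are `(1, 9)` and `(2, 11)`, and their intersection with the reference yields `(2, 9)`.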
CN202410721137.XA 2024-06-05 2024-06-05 Interaction method, device, equipment and storage medium based on artificial intelligence Active CN118312267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410721137.XA CN118312267B (en) 2024-06-05 2024-06-05 Interaction method, device, equipment and storage medium based on artificial intelligence


Publications (2)

Publication Number Publication Date
CN118312267A CN118312267A (en) 2024-07-09
CN118312267B true CN118312267B (en) 2024-08-13

Family

ID=91727641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410721137.XA Active CN118312267B (en) 2024-06-05 2024-06-05 Interaction method, device, equipment and storage medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN118312267B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118396127B (en) * 2024-06-28 2024-09-13 深圳品阔信息技术有限公司 Artificial intelligence-based conversation generation method, device, equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN115248896A (en) * 2022-07-25 2022-10-28 数效(深圳)科技有限公司 User portrait optimization method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10782986B2 (en) * 2018-04-20 2020-09-22 Facebook, Inc. Assisting users with personalized and contextual communication content
CN113781082B (en) * 2020-11-18 2023-04-07 京东城市(北京)数字科技有限公司 Method and device for correcting regional portrait, electronic equipment and readable storage medium

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN115248896A (en) * 2022-07-25 2022-10-28 数效(深圳)科技有限公司 User portrait optimization method

Also Published As

Publication number Publication date
CN118312267A (en) 2024-07-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant