CN112306326A - Online self-service conversation method and device, computer equipment and computer-readable medium
- Publication number: CN112306326A
- Application number: CN202011209905.1A
- Authority: CN (China)
- Prior art keywords: conversation; module; page; preset; scene
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0485—Scrolling or panning (under G06F3/0484, GUI interaction techniques for the control of specific functions or operations)
- G06F40/295—Named entity recognition (under G06F40/20, natural language analysis)
- G06Q40/08—Insurance (under G06Q40/00, finance)
Abstract
The application belongs to the technical field of intelligent decision making and provides an online self-service conversation method, an online self-service conversation device, computer equipment, and a computer-readable storage medium. In response to a user's online operation, the module information contained in the page module corresponding to the operation is acquired; the conversation scene corresponding to the operation is identified from the module information; a preset script corresponding to the conversation scene is acquired based on the conversation scene; and a conversation is pushed according to the preset script. Because the conversation scene of the module where the user is located is identified and used as a key parameter for script pushing, a mapping between the conversation scene and scripts is established for the identified scene, and the script best suited to the scene is recalled according to its script label, different scenes receive different, purpose-specific scripts, which improves conversation accuracy and conversation communication efficiency.
Description
Technical Field
The present application relates to the field of intelligent decision making technologies, and in particular, to an online self-service conversation method, apparatus, computer device, and computer-readable storage medium.
Background
At present, when a self-service conversation is carried out online through a robot, the robot generally makes a preset response to the question asked by the customer only after the user actively clicks the robot. For example, with a robot in the insurance industry, an ordinary user does not know what kinds of problems the robot can solve; even if the user actively clicks the robot, the robot can only answer according to the preset response for whatever question the user happens to ask. As a result, the efficiency of the self-service conversation is low and its effect is poor.
Disclosure of Invention
The application provides an online self-service conversation method, an online self-service conversation device, computer equipment and a computer-readable storage medium, which can solve the problem of low online self-service conversation efficiency in the prior art.
In a first aspect, the present application provides an online self-service conversation method, including: in response to an online operation of a user, acquiring module information contained in the page module corresponding to the online operation, where the page module is the page module selected by the user from a plurality of page modules on the online operation page; identifying a conversation scene corresponding to the online operation according to the module information; and acquiring a preset script corresponding to the conversation scene based on the conversation scene, and pushing a conversation according to the preset script.
In a second aspect, the present application further provides an online self-service conversation device, including: an acquisition unit, configured to, in response to an online operation of a user, acquire module information contained in the page module corresponding to the online operation, where the page module is the page module selected by the user from a plurality of page modules on the online operation page; an identification unit, configured to identify a conversation scene corresponding to the online operation according to the module information; and a pushing unit, configured to acquire a preset script corresponding to the conversation scene based on the conversation scene and push a conversation according to the preset script.
In a third aspect, the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program and the processor implements the steps of the online self-service conversation method when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, causes the processor to perform the steps of the online self-service conversation method.
The application provides an online self-service conversation method, an online self-service conversation device, computer equipment and a computer-readable storage medium. In response to a user's online operation, the module information contained in the page module corresponding to the operation is acquired; the conversation scene corresponding to the operation is identified from the module information; a preset script corresponding to the conversation scene is acquired based on the conversation scene; and a conversation is pushed according to the preset script. Because the conversation scene of the module where the user is located is identified and used as a key parameter for script pushing, a mapping between the conversation scene and scripts is established for the identified scene, and the script best suited to the scene is recalled according to its script label, different scenes receive different, purpose-specific scripts, which improves conversation accuracy and conversation communication efficiency.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of an online self-service conversation method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a first sub-flow of the online self-service conversation method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a second sub-flow of the online self-service conversation method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a third sub-flow of the online self-service conversation method according to an embodiment of the present application;
fig. 5 is a schematic diagram of a fourth sub-flow of the online self-service conversation method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a fifth sub-flow of the online self-service conversation method according to an embodiment of the present application;
fig. 7 is a schematic block diagram of an online self-service conversation device according to an embodiment of the present application; and
fig. 8 is a schematic block diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Referring to fig. 1, fig. 1 is a schematic flow chart of an online self-service conversation method according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps S11-S13:
S11, in response to an online operation of a user, acquiring module information contained in the page module corresponding to the online operation, where the page module is the page module selected by the user from a plurality of page modules on the online operation page.
Specifically, in response to the online operation of the user, the module information may be acquired directly, or the page information of the page corresponding to the online operation may be acquired together with the module information contained in the page module corresponding to the online operation, where the page module is the module operated by the user among the plurality of modules contained in the online operation page, the page information includes a page ID, and the module information includes a module ID. In this way the module information contained in the page module corresponding to the online operation is obtained.
Generally, a website has a plurality of web pages and each page has a plurality of modules. Each page corresponds to a page ID and a page theme, and each module likewise corresponds to a module ID and a module theme; that is, a page corresponds to a large theme, a module corresponds to a smaller theme contained in that large theme, i.e. to a specific scene, and the specific conversation content corresponds to the specific content contained in that scene. One module therefore corresponds to one conversation scene, and the conversation scene can be judged from the module to which the user slides. When a user enters a web page and operates on it, whether through an input device such as a mouse on a desktop or notebook computer, or through gesture actions on a smart terminal such as a smartphone, the online operation can be monitored and the page information of the page corresponding to the operation can be acquired as the user operates; the page information may include the page ID of the page, the module ID of the module where the user's mouse is located, and so on. For example, when the scene-recognition-based self-service conversation method is applied to marketing insurance products through an intelligent robot embedded in a web page, for every insurance product on sale the user performs online operations on the page after entering it and before opening the robot conversation interface. Once the user's online operation on the page is detected, the front end of the page can send page information such as the page ID of the page where the user is located and the module ID of the module whose theme the user is viewing to the robot background, so that the robot background obtains the page information and module information of the page corresponding to the online operation and, from information such as the page ID and the module content, identifies the conversation scene corresponding to the module where the user is located.
S12, identifying the conversation scene corresponding to the online operation according to the module information.
Specifically, the conversation scene corresponding to the online operation may be identified according to the page information and the module information corresponding to the online operation. After the page information and module information of the user's online operation are acquired, for example the page ID and the module ID, the conversation scene corresponding to the online operation can be identified, because the page information and module information have a preset correspondence with conversation scenes. For example, in a web page for insurance sales, an insurance product may have a benefit demonstration scene, an insured-person information filling scene, a guarantee details scene, and so on, each displayed by a separate module on the page. Each module corresponds to a page ID, a module ID on the page, and the like; the page information and module information of each module have a preset correspondence with the scene of that module, so the scene corresponding to the module (benefit demonstration, insured-person information filling, guarantee details, etc.) can be obtained from the page information and module information, and the conversation scene of the conversation is then determined from the scene corresponding to the module. In one example, as shown in table 1 below, each scene corresponds to preset page information:
Table 1
S13, acquiring a preset script corresponding to the conversation scene based on the conversation scene, and pushing the conversation according to the preset script.
Specifically, each scene is the conversation scene in which a corresponding conversation takes place, and a preset script for the scene is set according to the theme and content associated with the scene. After the scene corresponding to the online operation is identified, the preset script associated with that scene is retrieved according to the mapping between scenes and preset scripts; the script most suitable for the scene can be recalled according to its script label, for example based on features such as semantics and popularity, and the conversation is pushed according to the preset script. Scene-based script recommendation is thus achieved: different scenes on different pages receive different scripts, the script corresponds accurately to the scene, and the accuracy and communication efficiency of the conversation are improved.
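As a minimal sketch of how such a scene-to-script mapping and label-based recall might look (the scene names, labels and scoring weights below are illustrative assumptions, not taken from the patent):

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Script:
    text: str
    labels: Set[str] = field(default_factory=set)  # semantic / topic tags of the script
    popularity: float = 0.0                         # "hotness" feature

# Hypothetical mapping from conversation scene to candidate preset scripts.
SCENE_SCRIPTS = {
    "benefit_demo": [
        Script("Would you like to see how the benefits grow over ten years?",
               {"benefit", "projection"}, popularity=0.8),
        Script("I can compare this product's benefits with similar plans.",
               {"benefit", "comparison"}, popularity=0.5),
    ],
    "insured_info_filling": [
        Script("Need help filling in the insured person's information?",
               {"form", "insured"}, popularity=0.9),
    ],
}

def recall_script(scene: str, context_labels: Set[str]) -> Optional[Script]:
    """Recall the script best suited to the scene by label overlap and popularity."""
    candidates = SCENE_SCRIPTS.get(scene, [])
    if not candidates:
        return None
    # Illustrative scoring: label overlap is weighted above raw popularity.
    return max(candidates,
               key=lambda s: 2 * len(s.labels & context_labels) + s.popularity)

best = recall_script("benefit_demo", {"benefit", "projection"})
print(best.text if best else "no preset script for this scene")
```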
In the embodiment of the application, module information contained in the page module corresponding to the online operation is acquired in response to the user's online operation, the conversation scene corresponding to the online operation is identified according to the module information, the preset script corresponding to the conversation scene is acquired based on the conversation scene, and the conversation is pushed according to the preset script. The conversation scene of the module where the user is located is identified and used as an important parameter for pushing the conversation's script; a mapping between the conversation scene and scripts is established for the identified scene, and the script most suitable for the scene is recalled according to its script label, so that different scenes are served with scripts of different purposes. In the traditional technology, by contrast, the specific scene from which the user enters the robot is not used as a parameter: when users enter the robot from different insurance product pages and from different positions on a page (for example, the applicant information filling module and the guarantee details module are different, and the questions users ask there are likely to differ), the predicted questions shown to the user (the robot's guesses) are all the same. The present scheme therefore improves the accuracy of the conversation and the efficiency of conversation communication.
Further, referring to table 2, different scripts with preset priorities may also be set for the same scene. The priorities may be set according to the order of the business logic of the specific service, and the scripts are pushed in descending order of their preset priorities according to the communication plan with the user.
Table 2
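A short sketch of priority-ordered pushing for one scene follows; the priority values and script texts are assumptions made only for illustration:

```python
# Hypothetical priority table for a single scene (lower number = higher priority).
PRIORITIZED_SCRIPTS = {
    "insured_info_filling": [
        (1, "First, shall I explain which documents you need at hand?"),
        (2, "Would you like tips on common mistakes when filling this form?"),
        (3, "I can also estimate your premium once the form is complete."),
    ],
}

def scripts_in_push_order(scene: str):
    """Yield the scene's scripts from highest to lowest preset priority."""
    for _, text in sorted(PRIORITIZED_SCRIPTS.get(scene, [])):
        yield text

for script in scripts_in_push_order("insured_info_filling"):
    print(script)
```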
Referring to fig. 2, fig. 2 is a schematic diagram of a first sub-flow of the online self-service conversation method according to an embodiment of the present application. In this embodiment, the module information includes module content, the module content includes content such as text, pictures, voice and tables, and the step of acquiring the module information contained in the page module corresponding to the online operation in response to the online operation of the user includes:
S21, detecting a sliding track of the online operation;
S22, identifying the target position where the sliding track currently stays;
S23, identifying the target page area to which the target position belongs according to the target position;
S24, acquiring the module ID to which the target page area belongs according to the target page area;
S25, acquiring the module content corresponding to the module ID according to the module ID.
The online operation may be a sliding track of a mouse, or a sliding track of a gesture operation on a smart device.
Specifically, when the user operates on a page, the sliding track of the online operation can be detected by monitoring the specific area of the page where the user is located; the target position where the sliding track currently stays is identified; the target page area to which the target position belongs is identified from the target position; the module ID to which the target page area belongs is acquired from the target page area; and the module content corresponding to the module ID can be acquired from the module ID on the basis of the preset association between module IDs and module content. In this way the module information contained in the page module corresponding to the online operation is acquired in response to the user's online operation.
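A minimal sketch of resolving the stay position of a sliding track to a module ID and its content is shown below; the module names, bounding boxes and content strings are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ModuleArea:
    module_id: str
    x1: int
    y1: int
    x2: int
    y2: int  # bounding box of the module on the page

    def contains(self, x: int, y: int) -> bool:
        return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2

# Assumed layout of one insurance product page (coordinates are illustrative).
PAGE_MODULES = [
    ModuleArea("benefit_demo", 0, 0, 1200, 400),
    ModuleArea("insured_info_filling", 0, 400, 1200, 900),
    ModuleArea("guarantee_details", 0, 900, 1200, 1500),
]

MODULE_CONTENT = {  # preset association between module ID and module content
    "benefit_demo": "Benefit demonstration: projected returns of product A ...",
    "insured_info_filling": "Please fill in the insured person's name, ID number ...",
    "guarantee_details": "Guarantee details: coverage, exclusions, waiting period ...",
}

def resolve_module(track: list):
    """Take the last point of the sliding track as the current stay position and
    look up the module it falls in, returning (module_id, module_content)."""
    if not track:
        return None
    x, y = track[-1]
    for area in PAGE_MODULES:
        if area.contains(x, y):
            return area.module_id, MODULE_CONTENT[area.module_id]
    return None

print(resolve_module([(300, 120), (310, 450), (320, 470)]))
# -> ('insured_info_filling', "Please fill in the insured person's name, ...")
```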
Referring to fig. 3, fig. 3 is a schematic diagram of a second sub-flow of the online self-service conversation method according to an embodiment of the present application. In this embodiment, the step of identifying the target page area to which the target position belongs according to the target position includes:
S31, identifying the target page to which the target position belongs according to the target position;
S32, determining the target module contained in the target page to which the target position belongs according to the target page;
S33, taking the target module as the target page area.
Specifically, since the same web page contains different modules corresponding to different scenes, the different scenes can be described by those different modules. When the target page area to which the target position belongs is identified according to the target position, the target page to which the target position belongs is identified first, the target module contained in that target page is then determined according to the target page, and the target module is taken as the target page area. For example, for every insurance product on sale, after the user enters a page and before the user enters the robot conversation interface, the front end of the page sends, in parameter form, the product ID corresponding to the page where the user is located and the module corresponding to the page position (for example, the module of the benefit demonstration scene, the module of the insured-person information filling scene, or the module of the guarantee details scene) to the robot background, so that the robot background can identify the specific scene of the module where the user is located, accurately recognize the user's scene, and provide an accurate script for it.
Referring to fig. 4, fig. 4 is a schematic diagram of a third sub-flow of the online self-service conversation method according to an embodiment of the present application. In this embodiment, the step of identifying the target page to which the target position belongs according to the target position includes:
S41, acquiring the correspondence between a preset page and the preset page position range corresponding to the preset page;
S42, judging whether the target position is included in the preset page position range according to the target position;
S43, if the target position is included in the preset page position range, judging that the preset page is the target page;
S44, if the target position is not included in the preset page position range, judging that the preset page is not the target page.
Specifically, a corresponding preset page position range is set in advance for each page, that is, which positions belong to which page. After the target position is obtained, the correspondence between each preset page and its preset page position range is acquired, and whether the target position falls within the preset page position range is judged according to the target position. If the target position falls within the preset page position range, the preset page is judged to be the target page; if not, the preset page is judged not to be the target page. This continues until the target page corresponding to the target position is identified.
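The position-range judgment can be sketched as follows; the page IDs and pixel ranges are illustrative assumptions:

```python
# Assumed correspondence between preset pages and their vertical position ranges
# within a long scrolling view (ranges are illustrative pixel offsets).
PAGE_RANGES = {
    "product_A_overview": (0, 2000),
    "product_A_purchase": (2000, 5000),
}

def find_target_page(target_y: int):
    """Return the page whose preset position range contains the target position."""
    for page_id, (start, end) in PAGE_RANGES.items():
        if start <= target_y < end:
            return page_id            # this preset page is the target page
    return None                       # no preset page contains the position

print(find_target_page(2300))  # -> 'product_A_purchase'
```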
Referring to fig. 5, fig. 5 is a schematic diagram of a fourth sub-flow of the online self-service conversation method according to an embodiment of the present application. In this embodiment, the step of acquiring the module content corresponding to the module ID according to the module ID includes:
S51, acquiring the text corresponding to the module ID according to the module ID;
S52, performing named entity recognition on the text to recognize the named entities contained in the text.
Named entity recognition (NER), also called "proper name recognition", refers to recognizing entities with specific meaning in text, mainly including names of people, places, organizations, proper nouns, and the like.
Specifically, after the target page is determined, the module where the user is located is determined from the target page and the module ID of that module is obtained; the text corresponding to the module ID is acquired according to the module ID, based on the mapping between module IDs and module content; named entity recognition is performed on the text to recognize the named entities it contains; the scene (namely the conversation scene) corresponding to the target page is recognized according to those named entities; and the preset script corresponding to the scene is acquired based on the scene and the conversation is pushed according to the preset script.
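As a sketch, the named entity recognition step could be implemented with an off-the-shelf NER model; the example below uses spaCy purely for illustration (the patent does not name a particular NER tool), and the module text is invented:

```python
import spacy

# Off-the-shelf English model; any NER model could be substituted here.
nlp = spacy.load("en_core_web_sm")

def extract_named_entities(module_text: str):
    """Run NER over the text of the page module and return (entity, label) pairs."""
    doc = nlp(module_text)
    return [(ent.text, ent.label_) for ent in doc.ents]

module_text = ("Product A from Example Insurance Co. offers coverage in Shanghai "
               "starting from January 2021 for insured persons under 60.")
print(extract_named_entities(module_text))
# e.g. [('Example Insurance Co.', 'ORG'), ('Shanghai', 'GPE'), ('January 2021', 'DATE'), ...]
```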
Further, the step of identifying the conversation scene corresponding to the online operation according to the module information includes:
screening out the target named entity matching the theme corresponding to the page module from the named entities;
and determining the conversation scene corresponding to the page module according to the target named entity, thereby identifying the conversation scene corresponding to the online operation.
Specifically, after the module ID of the module where the user is located is obtained, the module corresponding to that module ID is the target module where the user is located. The named entities in the target module are then obtained, and the theme of the target module can also be obtained; the target named entity matching the theme of the page module is screened out from the named entities, and the conversation scene of the page module is determined from the target named entity, so that the conversation scene corresponding to the online operation is identified. For example, the area corresponding to the specific page module of the page where the user is located is monitored, named entity recognition is performed on the page module to identify the named entities in it, the part the user is most interested in is obtained, and the question the user is most likely to ask is predicted, or the question the user is most interested in is recommended according to the principle of collaborative recommendation. Collaborative recommendation, also called collaborative filtering, recommends information of interest to a user by exploiting the preferences of a group with shared interests and common experience: individuals respond to information (for example by scoring it) through a collaborative mechanism, these responses are recorded and used to filter information, and the filtered results in turn help others screen information.
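A toy sketch of matching recognized entities against the module theme to settle on a conversation scene might look like this (the theme keywords and scene names are assumptions for illustration):

```python
# Hypothetical keyword sets describing each module theme / conversation scene.
SCENE_KEYWORDS = {
    "benefit_demo": {"benefit", "return", "projection", "yield"},
    "insured_info_filling": {"insured", "applicant", "id", "birthday"},
    "guarantee_details": {"coverage", "exclusion", "waiting", "claim"},
}

def identify_scene(entities, module_theme):
    """Keep the entities that match the module theme's keywords and pick the scene
    whose keyword set overlaps those target entities the most."""
    keywords = SCENE_KEYWORDS.get(module_theme, set())
    target_entities = {e.lower() for e in entities if e.lower() in keywords}
    if not target_entities:
        # Fall back to the module theme itself when no entity matches it.
        return module_theme if module_theme in SCENE_KEYWORDS else None
    return max(SCENE_KEYWORDS,
               key=lambda scene: len(SCENE_KEYWORDS[scene] & target_entities))

print(identify_scene(["Coverage", "Exclusion", "Shanghai"], "guarantee_details"))
# -> 'guarantee_details'
```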
Referring to fig. 6, fig. 6 is a schematic diagram of a fifth sub-flow of the online self-service conversation method according to an embodiment of the present application. In this embodiment, before the step of acquiring the preset script corresponding to the conversation scene based on the conversation scene and pushing the conversation according to the preset script, the method further includes:
S61, counting the stay time of the sliding track at the target position;
S62, judging whether the stay time is greater than or equal to a preset time threshold;
S63, if the stay time is greater than or equal to the preset time threshold, executing the step of acquiring the preset script corresponding to the scene based on the scene and pushing the conversation according to the preset script;
S64, if the stay time is less than the preset time threshold, not executing that step; the stay time of the sliding track at the target position may continue to be counted until it is greater than or equal to the preset time threshold or the user leaves the target position.
Specifically, the preset script corresponding to the conversation scene of the module where the user is located need not be obtained and pushed only when the user clicks into the conversation interface; it can also be pushed to the user actively, so that the user is helped to understand the content of the web page module through active prompting. When the user's stay time at the target position exceeds the preset time threshold, the user can be judged to be interested in the web page content at that position and absorbed in understanding it; at that moment the preset script corresponding to the conversation scene is acquired based on the scene and the conversation is pushed according to the preset script. The stay time at the target position is judged by counting the stay time of the sliding track at the target position and checking whether it is greater than or equal to the preset time threshold. If it is, the step of acquiring the preset script corresponding to the scene and pushing the conversation according to it is executed; if it is less than the threshold, that step is not executed, and the stay time of the sliding track at the target position can continue to be counted until it reaches the threshold or the user leaves the target position. For example, when self-service is provided through an intelligent robot on a web page, the user can be prompted to get help through an active bubble: the user not only sees scene-based recommendations after clicking the robot, but when the user stays on a certain page, or on a certain module of a page, for a certain time, a bubble is triggered that actively guides the customer. This avoids the situation where the user is interested in some part of the page but, doubting whether the robot can understand the question, never asks it. Since users often have some psychological threshold about asking the robot, proactively bubbling up after gaining insight into the question the user is most likely to have lowers that threshold exactly when the user is most likely to be in doubt, guides the user to consult the robot when puzzled, and improves the service efficiency of the self-service conversation.
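A minimal sketch of the dwell-time trigger follows; the threshold value and the callback are illustrative assumptions:

```python
import time

DWELL_THRESHOLD_S = 5.0  # assumed preset time threshold

class DwellMonitor:
    """Counts how long the sliding track stays on one module and fires an active
    push once the stay time reaches the preset threshold."""

    def __init__(self, on_trigger):
        self.on_trigger = on_trigger
        self.current_module = None
        self.entered_at = None
        self.triggered = False

    def update(self, module_id):
        now = time.monotonic()
        if module_id != self.current_module:   # user moved to another module: reset
            self.current_module, self.entered_at, self.triggered = module_id, now, False
            return
        if (not self.triggered and self.entered_at is not None
                and now - self.entered_at >= DWELL_THRESHOLD_S):
            self.triggered = True
            self.on_trigger(module_id)         # actively push the scene's preset script

monitor = DwellMonitor(lambda m: print(f"push preset script for scene of module {m}"))
```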
Further, in an embodiment, the step of acquiring the preset script corresponding to the conversation scene based on the conversation scene and pushing the conversation according to the preset script includes:
acquiring preset question options corresponding to the conversation scene based on the conversation scene, and pushing the preset question options to the user for the user to select, so as to carry out the conversation.
Specifically, the question the user is most likely to ask in the scene can be predicted by combining the scene, or the questions corresponding to the content the user wants to know can be shown to the user in advance, so that the user only needs to click to see the answer. This reduces the online input required of the user, makes the experience more intelligent, and improves the accuracy and efficiency of the conversation. In the prior art, by contrast, the user does not know what problems the robot can solve unless the user actively clicks the robot; a self-service conversation mode that can only respond passively lacks active guidance, and active guidance improves the service efficiency of the self-service conversation.
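A sketch of pushing scene-specific question options (the option texts are invented for illustration):

```python
# Hypothetical preset question options per conversation scene.
SCENE_QUESTIONS = {
    "guarantee_details": [
        "What exactly does product A cover?",
        "Is there a waiting period before the coverage starts?",
        "How do I file a claim?",
    ],
}

def push_question_options(scene: str):
    """Return the preset question options for the scene; the front end renders them
    as clickable choices so the user can see an answer with one click."""
    return SCENE_QUESTIONS.get(scene, [])

for option in push_question_options("guarantee_details"):
    print("-", option)
```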
Further, before the step of acquiring the preset script corresponding to the conversation scene based on the conversation scene and pushing the conversation according to the preset script, the method further includes:
receiving conversation information input by the user;
judging whether the conversation information is clear;
if the conversation information is unclear, predicting the complete conversation information corresponding to it based on the conversation scene, and executing, according to the complete conversation information, the step of acquiring the preset script corresponding to the conversation scene based on the conversation scene and pushing the conversation according to the preset script.
Specifically, because users differ in their language habits and expressions, it is impossible for all users to converse in a uniform standard language, and all manners of expression need to be covered in order to provide the most accurate possible service to every user. After the conversation information input by the user is received, it may be text or voice; if it is voice, it is first converted into text by speech recognition, and the following judgment is then made on the text: whether the conversation information is clear and complete, that is, whether the intention described by the user's information can be understood. If the conversation information is clear and complete, the preset script corresponding to the conversation scene is obtained directly from the user's conversation information based on the conversation scene, and the conversation is pushed according to the preset script. If the conversation information is unclear, the complete conversation information corresponding to it is predicted based on the conversation scene so as to complete the user's conversation information, and the steps of acquiring the preset script corresponding to the conversation scene based on the scene and pushing the conversation according to it are executed with the completed information. In this way the user can, with higher probability, be understood more accurately, and actively predicting the intention of the user's conversation information improves the service efficiency of the self-service conversation. For example, on the display page of an insurance product, the user's intention can be learned by automatically completing the omitted entity: because the user's scene is known, when the user asks an underspecified question such as "what can be saved", the robot can automatically complete the intention of the question, namely which product it refers to, according to the specific scene of the page where the user is located. If the user is on the page of risk category A, the robot completes the user's question into one about risk category A, which improves the response accuracy of the conversation and the service efficiency of the self-service conversation.
According to the embodiment of the application, active insight based on the conversation scene and automatic entity completion make it easier for the robot to understand the user's input and intention, improve the robot's response accuracy, and give a better user experience. In the prior art, by contrast, self-service conversation is a passive response mode that lacks guidance of the user and entity completion; real users ask questions in an open manner and may omit the subject and other words, so a robot that can only converse passively according to the literal content of the user's question has a lower recognition rate, which reduces the efficiency of the self-service conversation and harms the user experience. The present scheme can therefore improve the service efficiency of the self-service conversation and improve the user experience.
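A toy sketch of scene-based completion of an underspecified utterance (the product names, the clarity heuristic and the completion template are all illustrative assumptions):

```python
# Assumed mapping from conversation scene to the product it concerns.
SCENE_PRODUCT = {
    "product_A_guarantee_details": "product A",
    "product_B_guarantee_details": "product B",
}

def is_clear(utterance: str) -> bool:
    """Crude clarity heuristic: the utterance is 'clear' if it already names a product."""
    return any(p in utterance.lower() for p in ("product a", "product b"))

def complete_utterance(utterance: str, scene: str) -> str:
    """If the utterance is unclear, fill in the omitted entity from the scene."""
    if is_clear(utterance):
        return utterance
    product = SCENE_PRODUCT.get(scene)
    return f"{utterance} (about {product})" if product else utterance

print(complete_utterance("What can be saved?", "product_A_guarantee_details"))
# -> 'What can be saved? (about product A)'
```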
It should be noted that the online self-service conversation methods described in the above embodiments may recombine the technical features of different embodiments as needed to obtain combined implementations, all of which fall within the protection scope claimed in the present application.
Referring to fig. 7, fig. 7 is a schematic block diagram of an online self-service conversation device according to an embodiment of the present application. Corresponding to the online self-service conversation method, the embodiment of the application also provides an online self-service conversation device. As shown in fig. 7, the online self-service conversation device includes units for executing the above online self-service conversation method, and the device may be configured in a computer device. Specifically, referring to fig. 7, the online self-service conversation device 70 includes an acquiring unit 71, an identifying unit 72, and a pushing unit 73.
The acquiring unit 71 is configured to, in response to an online operation of a user, acquire module information included in a page module corresponding to the online operation, where the page module is a page module selected by the user among a plurality of page modules on an online operation page;
the identifying unit 72 is configured to identify a conversation scene corresponding to the online operation according to the module information;
and the pushing unit 73 is configured to acquire a preset script corresponding to the conversation scene based on the conversation scene, and push the conversation according to the preset script.
In one embodiment, the obtaining unit 71 includes:
a detection subunit, configured to detect a sliding trajectory of the on-line operation;
the first identification subunit is used for identifying the current staying target position of the sliding track;
the second identification subunit is used for identifying a target page area to which the target position belongs according to the target position;
the first obtaining subunit is configured to obtain, according to the target page area, a module ID to which the target page area belongs;
and the second acquisition subunit is used for acquiring the module content corresponding to the module ID according to the module ID.
In one embodiment, the second identification subunit comprises:
the third identification subunit is used for identifying a target page to which the target position belongs according to the target position;
a first determining subunit, configured to determine, according to the target page, a target module included in the target page to which the target position belongs;
and the second determining subunit is used for taking the target module as a target page area.
In one embodiment, the third identification subunit comprises:
the third acquiring subunit is used for acquiring a corresponding relation between a preset page and a preset page position range corresponding to the preset page;
the judging subunit is used for judging whether the target position is included in the preset page position range or not according to the target position;
and the judging subunit is used for judging that the preset page is the target page if the target position is included in the preset page position range.
In one embodiment, the second acquiring subunit includes:
the fourth acquiring subunit is configured to acquire, according to the module ID, a text corresponding to the module ID;
and an entity recognition subunit, used for performing named entity recognition on the text to recognize the named entities contained in the text.
In one embodiment, the online self-service conversation device 70 further includes:
the statistical unit is used for counting the staying time of the sliding track at the target position;
the judging unit is used for judging whether the staying time is larger than or equal to a preset time threshold value or not;
and the execution unit is used for executing the step of acquiring the preset script corresponding to the scene based on the scene and pushing the conversation according to the preset script if the stay time is greater than or equal to the preset time threshold.
In an embodiment, the pushing unit 73 is specifically configured to acquire preset question options corresponding to the conversation scene based on the conversation scene, and push the preset question options to the user for the user to select, so as to carry out the conversation.
It should be noted that, as will be clear to those skilled in the art, the specific implementation processes of the online self-service conversation device and of each unit may refer to the corresponding descriptions in the foregoing method embodiments; for convenience and brevity of description, they are not repeated here.
Meanwhile, the division and connection modes of the units in the online self-service conversation device are only used for illustration, in other embodiments, the online self-service conversation device may be divided into different units as required, and the units in the online self-service conversation device may also adopt different connection sequences and modes to complete all or part of the functions of the online self-service conversation device.
The above-described online self-service conversation device may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 8.
Referring to fig. 8, fig. 8 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a computer device such as a desktop computer or a server, or may be a component or part of another device.
Referring to fig. 8, the computer device 500 includes a processor 502, a memory, and a network interface 505 connected by a system bus 501, wherein the memory may include a non-volatile storage medium 503 and an internal memory 504, and the memory may also be a volatile computer-readable storage medium.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, causes the processor 502 to perform the online self-service conversation method described above.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 can execute the online self-service conversation method.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 8 is a block diagram of only the portion of the configuration relevant to the present application and does not constitute a limitation on the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components. For example, in some embodiments the computer device may include only a memory and a processor, in which case the structures and functions of the memory and the processor are consistent with the embodiment shown in fig. 8 and are not described again here.
The processor 502 is configured to run the computer program 5032 stored in the memory to implement the following steps: in response to an online operation of a user, acquiring module information contained in the page module corresponding to the online operation, where the page module is the page module selected by the user from a plurality of page modules on the online operation page; identifying a conversation scene corresponding to the online operation according to the module information; and acquiring a preset script corresponding to the conversation scene based on the conversation scene, and pushing the conversation according to the preset script.
In an embodiment, when the processor 502 implements the step of obtaining module information included in the page module corresponding to the online operation in response to the online operation of the user, the following steps are specifically implemented:
detecting a sliding track of the on-line operation;
identifying a target position where the sliding track stays currently;
identifying a target page area to which the target position belongs according to the target position;
acquiring a module ID of the target page area according to the target page area;
and acquiring the module content corresponding to the module ID according to the module ID.
In an embodiment, when the processor 502 implements the step of identifying the target page area to which the target position belongs according to the target position, the following steps are specifically implemented:
identifying a target page to which the target position belongs according to the target position;
determining a target module contained in the target page to which the target position belongs according to the target page;
and taking the target module as a target page area.
In an embodiment, when the processor 502 implements the step of identifying the target page to which the target position belongs according to the target position, the following steps are specifically implemented:
acquiring a corresponding relation between a preset page and a preset page position range corresponding to the preset page;
judging whether the target position is included in the preset page position range or not according to the target position;
and if the target position is included in the preset page position range, judging that the preset page is the target page.
In an embodiment, when the processor 502 implements the step of obtaining the module content corresponding to the module ID according to the module ID, the following steps are specifically implemented:
acquiring a text corresponding to the module ID according to the module ID;
and carrying out named entity recognition on the text to recognize the named entities contained in the text.
In an embodiment, before the step of acquiring the preset script corresponding to the conversation scene based on the conversation scene and pushing the conversation according to the preset script, the processor 502 further implements the following steps:
counting the stay time of the sliding track at the target position;
judging whether the stay time is greater than or equal to a preset time threshold;
and if the stay time is greater than or equal to the preset time threshold, executing the step of acquiring the preset script corresponding to the scene based on the scene and pushing the conversation according to the preset script.
In an embodiment, when the processor 502 implements the step of acquiring the preset script corresponding to the conversation scene based on the conversation scene and pushing the conversation according to the preset script, the following steps are specifically implemented:
acquiring preset question options corresponding to the conversation scene based on the conversation scene, and pushing the preset question options to the user for the user to select, so as to carry out the conversation.
It should be understood that in the embodiment of the present application, the processor 502 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the processes in the method for implementing the above embodiments may be implemented by a computer program, and the computer program may be stored in a computer readable storage medium. The computer program is executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present application also provides a computer-readable storage medium. The computer-readable storage medium may be a non-volatile or a volatile computer-readable storage medium; it stores a computer program which, when executed by a processor, causes the processor to execute the steps of the online self-service conversation method described in the above embodiments.
The computer readable storage medium may be an internal storage unit of the aforementioned device, such as a hard disk or a memory of the device. The computer readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the apparatus.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The storage medium is a physical, non-transitory storage medium, and may be any physical storage medium capable of storing a computer program, such as a USB disk, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; the components and steps of the examples have been described above in general functional terms in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the application can be combined, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on this understanding, the part of the technical solution of the present application that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes instructions for causing an electronic device (which may be a personal computer, a terminal, or a network device) to perform all or part of the steps of the methods of the embodiments of the present application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. An online self-service conversation method, comprising:
responding to an online operation of a user, and acquiring module information contained in a page module corresponding to the online operation, wherein the page module is the page module selected by the user from a plurality of page modules on an online operation page;
recognizing a conversation scene corresponding to the online operation according to the module information;
and acquiring, based on the conversation scene, a preset script corresponding to the conversation scene, and pushing a conversation according to the preset script.
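Purely as an illustrative sketch, and not the patent's implementation: the three claimed steps can be read as resolving the operated page module to its module information, mapping that information to a conversation scene, and pushing the preset script registered for that scene. All names, keywords, and scripts below are invented placeholders.

```typescript
// Hypothetical data shapes and mappings; nothing here comes from the patent itself.
interface ModuleInfo {
  moduleId: string;
  content: string; // text extracted from the page module the user operated on
}

// Assumed keyword-to-scene mapping used to recognize the conversation scene (step 2).
const SCENE_KEYWORDS: Record<string, string> = {
  premium: "insurance-pricing",
  claim: "claims-guidance",
};

// Assumed scene-to-script mapping holding the preset scripts (step 3).
const SCENE_SCRIPTS: Record<string, string> = {
  "insurance-pricing": "Would you like help estimating the premium for this product?",
  "claims-guidance": "Do you need help filing a claim for this product?",
};

function recognizeScene(info: ModuleInfo): string | undefined {
  const text = info.content.toLowerCase();
  const hit = Object.keys(SCENE_KEYWORDS).find((kw) => text.includes(kw));
  return hit ? SCENE_KEYWORDS[hit] : undefined;
}

function handleOnlineOperation(info: ModuleInfo, push: (msg: string) => void): void {
  const scene = recognizeScene(info); // module information -> conversation scene
  if (!scene) return;
  const script = SCENE_SCRIPTS[scene]; // conversation scene -> preset script
  if (script) push(script); // push the conversation
}
```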
2. The online self-service conversation method according to claim 1, wherein the step of acquiring, in response to the online operation of the user, the module information contained in the page module corresponding to the online operation comprises:
detecting a sliding track of the online operation;
identifying a target position at which the sliding track currently stays;
identifying a target page area to which the target position belongs according to the target position;
acquiring a module ID of the target page area according to the target page area;
and acquiring the module content corresponding to the module ID according to the module ID.
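As a sketch only, assuming each page module is rendered as a DOM element carrying a hypothetical data-module-id attribute, the chain from sliding track to module content in claim 2 could look like the following; the selector and field names are illustrative, not part of the claims.

```typescript
// Illustrative only: browser-side tracking of the sliding track and resolution of the
// position it stays at to a page module's ID and content.
interface ResolvedModule {
  moduleId: string;
  content: string;
}

let lastPosition = { x: 0, y: 0 };

// Detect the sliding track of the online operation and remember where it currently stays.
document.addEventListener("mousemove", (e: MouseEvent) => {
  lastPosition = { x: e.clientX, y: e.clientY };
});

function resolveCurrentModule(): ResolvedModule | null {
  // The element under the target position stands in for the target page area.
  const el = document.elementFromPoint(lastPosition.x, lastPosition.y);
  const moduleEl = el?.closest<HTMLElement>("[data-module-id]") ?? null;
  if (!moduleEl) return null;
  // Read the module ID, then the module content keyed by that ID.
  return {
    moduleId: moduleEl.dataset.moduleId ?? "",
    content: moduleEl.innerText,
  };
}
```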
3. The online self-service conversation method according to claim 2, wherein the step of identifying the target page area to which the target position belongs according to the target position comprises:
identifying a target page to which the target position belongs according to the target position;
determining a target module contained in the target page to which the target position belongs according to the target page;
and taking the target module as a target page area.
4. The online self-service conversation method according to claim 3, wherein the step of identifying the target page to which the target position belongs according to the target position comprises:
acquiring a corresponding relation between a preset page and a preset page position range corresponding to the preset page;
judging, according to the target position, whether the target position falls within the preset page position range;
and if the target position is included in the preset page position range, judging that the preset page is the target page.
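A minimal sketch of the hit test in claim 4, assuming each preset page's position range is stored as a rectangle; the page names and coordinates are made-up placeholders.

```typescript
// Assumed correspondence between preset pages and their preset position ranges.
interface PageRange {
  page: string;
  left: number;
  top: number;
  right: number;
  bottom: number;
}

const PRESET_PAGE_RANGES: PageRange[] = [
  { page: "product-detail", left: 0, top: 0, right: 1280, bottom: 800 },
  { page: "order-confirm", left: 0, top: 800, right: 1280, bottom: 1600 },
];

// Judge whether the target position falls within a preset page position range.
function findTargetPage(x: number, y: number): string | undefined {
  const hit = PRESET_PAGE_RANGES.find(
    (r) => x >= r.left && x <= r.right && y >= r.top && y <= r.bottom,
  );
  return hit?.page; // the preset page whose range contains the target position
}
```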
5. The online self-service conversation method according to claim 2, wherein the step of obtaining the module content corresponding to the module ID according to the module ID comprises:
acquiring a text corresponding to the module ID according to the module ID;
and carrying out named entity recognition on the text to recognize the named entities contained in the text.
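Claim 5 does not fix a particular named entity recognition technique; the sketch below stands in for a trained NER model or service with a tiny gazetteer lookup, purely to illustrate turning the module text into typed entities. All entity types and entries are invented.

```typescript
// Stand-in for real named entity recognition: a simple dictionary (gazetteer) lookup.
const ENTITY_GAZETTEER: Record<string, string[]> = {
  PRODUCT: ["critical illness insurance", "annuity plan"],
  ACTION: ["claim", "renewal"],
};

function recognizeEntities(text: string): Array<{ type: string; value: string }> {
  const lower = text.toLowerCase();
  const found: Array<{ type: string; value: string }> = [];
  for (const [type, values] of Object.entries(ENTITY_GAZETTEER)) {
    for (const value of values) {
      if (lower.includes(value)) found.push({ type, value });
    }
  }
  return found; // named entities contained in the module text
}
```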
6. The online self-service conversation method according to claim 2, wherein before the step of acquiring a preset script corresponding to the conversation scene based on the conversation scene and pushing a conversation according to the preset script, the method further comprises:
counting the stay time of the sliding track at the target position;
judging whether the stay time is greater than or equal to a preset time threshold;
and if the stay time is greater than or equal to the preset time threshold, executing the step of acquiring the preset script corresponding to the conversation scene based on the conversation scene and pushing a conversation according to the preset script.
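A sketch of the dwell check in claim 6, with an assumed threshold value and callback name: any movement of the sliding track restarts the count, and only a stay time at or above the preset time threshold triggers the push of the preset script.

```typescript
// Assumed preset time threshold; the real value would be configured elsewhere.
const DWELL_THRESHOLD_MS = 2000;
let dwellTimer: number | undefined;

function onPositionChanged(pushForCurrentModule: () => void): void {
  // Restart counting the stay time whenever the sliding track moves.
  if (dwellTimer !== undefined) window.clearTimeout(dwellTimer);
  dwellTimer = window.setTimeout(() => {
    // Stay time >= preset threshold: go on to fetch the preset script and push the conversation.
    pushForCurrentModule();
  }, DWELL_THRESHOLD_MS);
}
```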
7. The online self-service conversation method according to any one of claims 1 to 6, wherein the step of acquiring a preset script corresponding to the conversation scene based on the conversation scene and pushing a conversation according to the preset script comprises:
and acquiring a preset question option corresponding to the conversation scene based on the conversation scene, and pushing the preset question option to the user for the user to select, so as to carry out the conversation.
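As an illustrative sketch of claim 7 (scene names and option texts are fabricated), the conversation scene can map to a list of preset question options that are pushed for the user to choose from instead of a single script.

```typescript
// Assumed scene-to-options mapping; selecting an option starts the conversation.
const SCENE_OPTIONS: Record<string, string[]> = {
  "insurance-pricing": ["How is the premium calculated?", "Are there any discounts?"],
  "claims-guidance": ["What documents do I need?", "How long does a claim take?"],
};

function pushQuestionOptions(scene: string, push: (options: string[]) => void): void {
  const options = SCENE_OPTIONS[scene];
  if (options && options.length > 0) push(options); // user picks one to carry out the conversation
}
```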
8. An online self-service conversation device, comprising:
an acquisition unit, which is used for responding to an online operation of a user and acquiring module information contained in a page module corresponding to the online operation;
an identification unit, which is used for identifying a conversation scene corresponding to the online operation according to the module information;
and a pushing unit, which is used for acquiring, based on the conversation scene, a preset script corresponding to the conversation scene and pushing a conversation according to the preset script.
9. A computer device, comprising a memory and a processor coupled to the memory; the memory is used for storing a computer program; and the processor is used for running the computer program to perform the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when being executed by a processor, realizes the steps of the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011209905.1A CN112306326A (en) | 2020-11-03 | 2020-11-03 | Online self-service conversation method and device, computer equipment and computer readable medium |
PCT/CN2021/091277 WO2022095377A1 (en) | 2020-11-03 | 2021-04-30 | Online self-service dialogue method and apparatus, computer device, and computer readable medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011209905.1A CN112306326A (en) | 2020-11-03 | 2020-11-03 | Online self-service conversation method and device, computer equipment and computer readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112306326A true CN112306326A (en) | 2021-02-02 |
Family
ID=74333134
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011209905.1A Pending CN112306326A (en) | 2020-11-03 | 2020-11-03 | Online self-service conversation method and device, computer equipment and computer readable medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112306326A (en) |
WO (1) | WO2022095377A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113536111A (en) * | 2021-06-11 | 2021-10-22 | 北京十一贝科技有限公司 | Insurance knowledge content recommendation method and device and terminal equipment |
WO2022095377A1 (en) * | 2020-11-03 | 2022-05-12 | 平安科技(深圳)有限公司 | Online self-service dialogue method and apparatus, computer device, and computer readable medium |
CN117078270A (en) * | 2023-10-17 | 2023-11-17 | 彩讯科技股份有限公司 | Intelligent interaction method and device for network product marketing |
WO2024230570A1 (en) * | 2023-05-10 | 2024-11-14 | 上海任意门科技有限公司 | Artificial intelligence device dialog control method, apparatus and device, and medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119002744A (en) * | 2023-11-21 | 2024-11-22 | 北京字跳网络技术有限公司 | Method, apparatus, device and storage medium for information interaction |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102624675B (en) * | 2011-01-27 | 2014-08-06 | 腾讯科技(深圳)有限公司 | Self-service customer service system and method |
KR20140143610A (en) * | 2013-06-07 | 2014-12-17 | 엘지전자 주식회사 | Mobile terminal and operation method thereof |
CN106897884A (en) * | 2017-01-24 | 2017-06-27 | 武汉奇米网络科技有限公司 | The method and system of quick guiding visitor consulting |
CN108805694B (en) * | 2018-05-24 | 2023-11-17 | 广州金翰网络科技有限公司 | Credit consultation service method, apparatus, device and computer readable storage medium |
CN112306326A (en) * | 2020-11-03 | 2021-02-02 | 平安科技(深圳)有限公司 | Online self-service conversation method and device, computer equipment and computer readable medium |
- 2020-11-03: CN application CN202011209905.1A filed; published as CN112306326A (legal status: Pending)
- 2021-04-30: PCT application PCT/CN2021/091277 filed; published as WO2022095377A1 (Application Filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008136530A (en) * | 2006-11-30 | 2008-06-19 | Daiichikosho Co Ltd | Automatic recording data output system |
CN105162892A (en) * | 2015-10-15 | 2015-12-16 | 戚克明 | Language technique exercise treatment method, apparatus and system, and language technique exercise supervision method |
CN110580122A (en) * | 2019-08-21 | 2019-12-17 | 阿里巴巴集团控股有限公司 | Question-answer type interactive processing method, device and equipment |
CN111858872A (en) * | 2020-04-10 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | Question-answer interaction method and device, electronic equipment and storage medium |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022095377A1 (en) * | 2020-11-03 | 2022-05-12 | 平安科技(深圳)有限公司 | Online self-service dialogue method and apparatus, computer device, and computer readable medium |
CN113536111A (en) * | 2021-06-11 | 2021-10-22 | 北京十一贝科技有限公司 | Insurance knowledge content recommendation method and device and terminal equipment |
CN113536111B (en) * | 2021-06-11 | 2024-06-07 | 北京十一贝科技有限公司 | Recommendation method and device for insurance knowledge content and terminal equipment |
WO2024230570A1 (en) * | 2023-05-10 | 2024-11-14 | 上海任意门科技有限公司 | Artificial intelligence device dialog control method, apparatus and device, and medium |
CN117078270A (en) * | 2023-10-17 | 2023-11-17 | 彩讯科技股份有限公司 | Intelligent interaction method and device for network product marketing |
CN117078270B (en) * | 2023-10-17 | 2024-02-02 | 彩讯科技股份有限公司 | Intelligent interaction method and device for network product marketing |
Also Published As
Publication number | Publication date |
---|---|
WO2022095377A1 (en) | 2022-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112306326A (en) | Online self-service conversation method and device, computer equipment and computer readable medium | |
CN109145280B (en) | Information pushing method and device | |
US10453082B2 (en) | Accredited advisor management system | |
US10572778B1 (en) | Machine-learning-based systems and methods for quality detection of digital input | |
US11163778B2 (en) | Integrating virtual and human agents in a multi-channel support system for complex software applications | |
US10380380B1 (en) | Protecting client personal data from customer service agents | |
CN109817312A (en) | A kind of medical treatment guidance method and computer equipment | |
CN110020009B (en) | Online question and answer method, device and system | |
CN110597952A (en) | Information processing method, server, and computer storage medium | |
CN111985249A (en) | Semantic analysis method and device, computer-readable storage medium and electronic equipment | |
US11775674B2 (en) | Apparatus and method for recommending user privacy control | |
KR102041259B1 (en) | Apparatus and Method for Providing reading educational service using Electronic Book | |
CN111428032B (en) | Content quality evaluation method and device, electronic equipment and storage medium | |
US20140358631A1 (en) | Method and apparatus for generating frequently asked questions | |
CN108268450B (en) | Method and apparatus for generating information | |
CN105488039A (en) | Query method and device | |
US20230267475A1 (en) | Systems and methods for automated context-aware solutions using a machine learning model | |
CN110597965B (en) | Emotion polarity analysis method and device for article, electronic equipment and storage medium | |
CN113434653A (en) | Method, device and equipment for processing query statement and storage medium | |
KR102500949B1 (en) | System for providing mentoring services and operating method thereof | |
CN107590676A (en) | Method, apparatus, equipment and the computer-readable storage medium provided personalized service | |
WO2020124962A1 (en) | Product recommendation method and apparatus based on data analysis and terminal device | |
CN110929014B (en) | Information processing method, information processing device, electronic equipment and storage medium | |
US20130230248A1 (en) | Ensuring validity of the bookmark reference in a collaborative bookmarking system | |
US12047652B2 (en) | Information processing method, apparatus, and computer storage medium for real-time video applications |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210202 |