CN112686052A - Test question recommendation method, test question training method, electronic equipment and storage device - Google Patents


Info

Publication number: CN112686052A (application CN202011582885.2A)
Authority: CN (China)
Prior art keywords: sample, test question, question, representation, user
Legal status: Granted
Application number: CN202011582885.2A
Other languages: Chinese (zh)
Other versions: CN112686052B (en)
Inventors: 凌超, 沙晶, 付瑞吉, 王士进, 魏思, 胡国平
Assignee (original and current): iFlytek Co Ltd
Application filed by iFlytek Co Ltd; priority to CN202011582885.2A
Publication of CN112686052A; application granted; publication of CN112686052B
Legal status: Active

Classifications

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a test question recommendation method, a training method of the related model, an electronic device and a storage device. The training method of the test question recommendation model includes: acquiring first sample test question pairs belonging to a user; acquiring an initial user representation of the user and an initial test question pair representation of each first sample test question pair; inputting the initial user representation and the initial test question pair representations into the test question recommendation model to obtain a final user representation of the user and a final test question pair representation of each first sample test question pair, where the final user representation contains the semantic information of the user itself and the semantic information of the first sample test question pairs belonging to the user; predicting, with the final user representation and the final test question pair representations, the sample adaptation degree between the user and the sample selected question in each group of first sample test question pairs; and adjusting the network parameters of the test question recommendation model based on the sample adaptation degree. With this scheme, the accuracy of test question recommendation can be improved.

Description

Test question recommendation method, test question training method, electronic equipment and storage device
Technical Field
The present application relates to the field of natural language processing technologies, and in particular, to a test question recommendation method, a test question training method, an electronic device, and a storage device.
Background
Adaptive teaching is an important idea in education: the actual situations of users such as students and examinees are taken into account during teaching, and targeted teaching strategies are adopted according to differences among users. Test questions are an indispensable part of teaching, and users continuously consolidate and deepen their learning by working through them. It is therefore necessary to recommend test questions to users in a targeted manner so as to match the personalized differences of different users. In view of this, how to improve the accuracy of test question recommendation has become an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a test question recommendation method, a training method of the related model, an electronic device and a storage device, so that the accuracy of test question recommendation can be improved.
In order to solve the above problem, a first aspect of the present application provides a method for training a test question recommendation model, including: acquiring first sample test question pairs belonging to a user, where each group of first sample test question pairs includes a sample original question and a sample selected question, and the sample selected question is a test question selected by the user from a plurality of recommended test questions of the sample original question; acquiring an initial user representation of the user and an initial test question pair representation of each first sample test question pair; inputting the initial user representation and the initial test question pair representations into the test question recommendation model to obtain a final user representation of the user and a final test question pair representation of each first sample test question pair, where the final user representation includes the semantic information of the user itself and the semantic information of the first sample test question pairs belonging to the user; predicting, with the final user representation and the final test question pair representations, the sample adaptation degree between the user and the sample selected question in each group of first sample test question pairs; and adjusting the network parameters of the test question recommendation model based on the sample adaptation degree.
In order to solve the above problem, a second aspect of the present application provides a test question recommendation method, including: acquiring an original question and a plurality of candidate test questions of a target user, and acquiring a final user representation of the target user, where the final user representation is obtained with the training method of the test question recommendation model in the first aspect; taking each candidate test question together with the original question as a group of test question pairs, and acquiring a test question pair representation of each group of test question pairs; obtaining the predicted adaptation degree between the target user and each candidate test question by using the final user representation and each group's test question pair representation; and recommending candidate test questions to the target user based on the predicted adaptation degree.
In order to solve the above problem, a third aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, where the memory stores program instructions and the processor is configured to execute the program instructions to implement the training method of the test question recommendation model in the first aspect or the test question recommendation method in the second aspect.
In order to solve the above problem, a fourth aspect of the present application provides a storage device, which stores program instructions capable of being executed by a processor, where the program instructions are used to implement the method for training the test question recommendation model in the first aspect or implement the method for recommending test questions in the second aspect.
According to the above scheme, first sample test question pairs belonging to a user are acquired, where each first sample test question pair comprises a sample original question and a sample selected question, and the sample selected question is the test question selected by the user from a plurality of recommended test questions of the sample original question; an initial user representation of the user and an initial test question pair representation of each first sample test question pair are acquired; the initial user representation and the initial test question pair representations are input into the test question recommendation model to obtain a final user representation of the user and a final test question pair representation of each first sample test question pair, where the final user representation comprises the semantic information of the user itself and the semantic information of the first sample test question pairs belonging to the user; on this basis, the sample adaptation degree between the user and the sample selected question in each group of first sample test question pairs is predicted using the final user representation and the final test question pair representations, and the network parameters of the test question recommendation model are adjusted based on the sample adaptation degree. Because the final user representation contains the semantic information of the user itself and the semantic information of the first sample test question pairs belonging to the user, and the sample selected questions in those pairs are adapted to the user, the final user representation contains not only the user's own semantic information but also the semantic information of adapted test questions. This helps improve the accuracy of predicting the sample adaptation degree with the final user representation, which in turn helps improve the accuracy of adjusting the network parameters, i.e., the accuracy of the test question recommendation model, and ultimately the accuracy of test question recommendation.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a training method for a test question recommendation model according to the present application;
FIG. 2 is a block diagram of an embodiment of a user and a first pair of sample questions belonging to the user;
FIG. 3 is a block diagram of an embodiment of a user question versus interaction diagram;
FIG. 4 is a block diagram of an embodiment of a relational prediction model;
FIG. 5 is a schematic flow chart diagram illustrating an embodiment of a method for training a relational prediction model;
FIG. 6 is a flowchart illustrating an embodiment of a method for recommending test questions according to the present application;
FIG. 7 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 8 is a block diagram of an embodiment of a memory device according to the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of the training method of a test question recommendation model according to the present application. Specifically, the method may include the following steps:
Step S11: acquiring first sample test question pairs belonging to the user.
In the embodiments of the present disclosure and other embodiments of the present disclosure, the user may include a student at a school in each stage of a kindergarten, a primary school, a junior middle school, a high school, a university, etc., or a social person such as a company employee, etc., and is not limited herein. In addition, the specific disciplines related to the test questions can be set according to the practical application. For example, may include, but is not limited to: the disciplines of mathematics, physics, chemistry, biology, etc. are not limited herein.
In the embodiment of the present disclosure, each group of first sample test question pairs may include a sample original question and a sample selected question, where the sample selected question is a test question selected by the user from a plurality of recommended test questions of the sample original question. For example, the sample original question may be test question 1 in a certain teaching unit, and the recommended test questions may include other test questions belonging to the same teaching unit as test question 1 (e.g., test question 2, test question 3, etc.); in the case that the user selects test question 3, test question 3 may be used as the sample selected question. For convenience of description, the first sample test question pair composed of test question 1 and test question 3 may be denoted i1-i3, and the other cases can be deduced by analogy, which are not listed one by one here.
It should be noted that the specific number of users is not limited. That is, there may be one user, or there may be two, three, four, and so on, which may be set according to the actual application. In one implementation scenario, in order to enable the model, during training, to distinguish for each user the semantic information of first sample test question pairs that are not adapted to that user (i.e., negative samples), and thus to strengthen the learning of the semantic information of the first sample test question pairs adapted to that user (i.e., positive samples), thereby further improving the accuracy of test question recommendation, the number of users may be set to be multiple.
In one implementation scenario, please refer to fig. 2, which is a schematic diagram of an embodiment of users and the first sample test question pairs belonging to them. As shown in fig. 2, the number of users may be 3, denoted u1, u2 and u3 for convenience of description. The first sample test question pairs belonging to user u1 include: i1-i3; the first sample test question pairs belonging to user u2 include: i1-i3, i2-i5 and i4-i6; and the first sample test question pairs belonging to user u3 include: i1-i3, i3-i4 and i4-i6. The specific meanings of these pairs can refer to the related description above and are not repeated here. The above example does not limit the users or the first sample test question pairs belonging to them in actual applications; they may be set according to the actual situation, and no limitation is imposed here.
Step S12: an initial user representation of the user is obtained, and an initial test question pair representation of a first sample test question pair is obtained.
In one implementation scenario, the initial user representation of the user may be a random vector. For example, the initial user representation may be randomly initialized to a vector of dimension d.
In another implementation scenario, the initial user representation of the user may also be derived from the extracted user attribute features. User attributes may specifically include, but are not limited to: user age, user gender, user grade (e.g., first grade, second grade, etc.), user school segment (e.g., primary school, junior middle school, high school, etc.), school grade (e.g., provincial grade, city grade, district grade, etc.), and so forth, without limitation.
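As an illustration of the two options above, the following minimal sketch (Python with NumPy) builds an initial user representation either by random initialization or from user attribute features. The dimension d = 64, the attribute vocabulary sizes and the averaging used to fuse the attribute embeddings are all assumptions, not values prescribed by this application.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # embedding dimension; the text only calls it "d", 64 is an assumption

# Option 1: random initialization of the initial user representation.
u_init = rng.normal(scale=0.1, size=d)

# Option 2 (hypothetical encoding): derive the representation from user attribute
# features by looking up a small embedding per attribute value and averaging.
attr_tables = {
    "gender": rng.normal(scale=0.1, size=(2, d)),        # e.g. male / female
    "grade": rng.normal(scale=0.1, size=(12, d)),         # grade 1..12
    "school_level": rng.normal(scale=0.1, size=(3, d)),   # primary / junior / senior
}
user_attrs = {"gender": 1, "grade": 8, "school_level": 1}
u_init_from_attrs = np.mean(
    [attr_tables[k][v] for k, v in user_attrs.items()], axis=0
)
```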
In one implementation scenario, the initial test question pair representation of the first sample test question pair may be obtained by vector mapping of the sample original question and the sample selected question. Specifically, the sample original question may be segmented to obtain a plurality of original-question words, and the sample selected question may be segmented to obtain a plurality of selected-question words; vector mapping is then performed on the original-question words to obtain their word vector representations, and on the selected-question words to obtain their word vector representations. On this basis, the combination of the word vector representations of the original-question words may be used as the original-question vector representation, and the combination of the word vector representations of the selected-question words as the selected-question vector representation; finally, the original-question vector representation and the selected-question vector representation may be fused to obtain the initial test question pair representation.
In a specific implementation scenario, in order to improve word segmentation accuracy, the sample original question and the sample selected question may be preprocessed before word segmentation. The preprocessing may specifically include at least one of: converting test questions in image format into text format, removing characters irrelevant to the test question, converting test question formulas into a preset format, and the like, without limitation. Specifically, a test question in image format may be converted into text format by OCR (Optical Character Recognition) or similar means. The characters irrelevant to the test question may include, but are not limited to: web page tags (e.g., </body>, etc.), test question scores (e.g., 2, 5, etc.), and characters such as line feeds and tabs, without limitation. The preset format may include, but is not limited to, the LaTeX format. For example, since formulas often carry rich syntactic and semantic information and are usually in MathML (Mathematical Markup Language) format in web pages, they can be uniformly converted into LaTeX. In addition, when a test question contains an image, the image can be removed directly if its presence or absence does not affect comprehension of the test question.
In another specific implementation scenario, considering that test questions in subjects such as mathematics and physics often contain both formulas and text, the test question may first be analyzed to identify the formulas and the text in it, and on this basis, different word segmentation methods may be used for the text and the formulas respectively. Specifically, for text, segmentation tools such as LTP or jieba may be employed; the formulas can be segmented based on rules, for example, splitting on both sides of relation characters such as the equal sign "=" and the greater-than sign ">". Taking "angle ABC = 60" as an example, the text "angle" and the formula "ABC = 60" can be recognized, so that the text "angle" is segmented into the word "angle", and the formula "ABC = 60" is segmented into "ABC", "=", "60"; the final segmentation of "angle ABC = 60" is therefore: "angle", "ABC", "=", "60". Other cases can be deduced by analogy and are not listed one by one here.
In another specific implementation scenario, word vector mapping tools such as word2vec may be used to perform vector mapping on the original-question words and the selected-question words, so that each original-question word or selected-question word is finally mapped to a d-dimensional vector.
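The segmentation and vector mapping steps above can be sketched as follows. The whitespace handling, the relation-character rule and the randomly initialized word-vector table are simplifying assumptions standing in for a real segmentation tool (e.g., LTP or jieba) and a trained word2vec model.

```python
import re
import numpy as np

rng = np.random.default_rng(0)
d = 64  # word-vector dimension (assumption)

def segment_question(text):
    """Split a question into text tokens and formula tokens.

    Formula handling follows the rule described above: split on both sides of
    relation characters such as "=" and ">".  The whitespace split for text is
    a naive stand-in for a real segmentation tool.
    """
    tokens = []
    for chunk in text.strip().split():
        if re.search(r"[=<>]", chunk):                       # treat as a formula
            tokens += [t for t in re.split(r"([=<>])", chunk) if t]
        else:                                                # treat as plain text
            tokens.append(chunk)
    return tokens

# Hypothetical word-vector table standing in for a trained word2vec model:
vocab = {}
def word_vector(token):
    if token not in vocab:
        vocab[token] = rng.normal(scale=0.1, size=d)
    return vocab[token]

tokens = segment_question("angle ABC=60")                    # -> ["angle", "ABC", "=", "60"]
question_vectors = np.stack([word_vector(t) for t in tokens])  # shape (4, d)
```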
In another implementation scenario, in order to improve the accuracy of the initial test question pair representation, a first sample semantic representation of the sample original question may be extracted, and a second sample semantic representation of the sample selected question may be extracted, where the first sample semantic representation contains semantic information of a plurality of test question attributes of the sample original question, and the second sample semantic representation contains semantic information of a plurality of test question attributes of the sample selected question; on this basis, the first sample semantic representation and the second sample semantic representation may be fused to obtain the initial test question pair representation. In this way, because the two representations carry semantic information of a plurality of test question attributes of the sample original question and of the sample selected question respectively, fusing them allows the initial test question pair representation to incorporate the semantic information of both questions at the level of the plurality of test question attributes, which can improve the accuracy of the initial test question pair representation.
In a specific implementation scenario, the plurality of test question attributes may specifically include, but are not limited to: the main knowledge point of the test question, the solution method of the test question, the question-setting way of the test question, the difficulty of the test question, and so on, without limitation. The main knowledge point represents the knowledge point the question mainly examines; taking mathematical test questions as an example, it may include, but is not limited to, trigonometric functions, Vieta's theorem, and the like. The solution method represents the technique used to solve the question; taking mathematical test questions as an example, it may include, but is not limited to, substitution, elimination, assumption, and the like. The question-setting way represents the object the question asks to solve for; taking mathematical test questions as an example, it may include, but is not limited to, the maximum value, the minimum value, and the like. The difficulty indicates how hard the question is to answer, such as high, medium or low difficulty, without limitation. By setting the above test question attributes, the semantic information of a test question can be extracted from multiple angles and aspects, which helps further enrich the semantic information in the initial test question pair representation and improve its accuracy.
In another specific implementation scenario, in order to improve the efficiency of semantic extraction, attribute semantic extraction networks may be trained in advance, with each test question attribute corresponding to one attribute semantic extraction network; the attribute semantic extraction networks corresponding to the plurality of test question attributes may then be used to extract semantic information, and the semantic information they extract may be fused to obtain the sample semantic representation. For example, for the sample original question, the word vector representations of its original-question words may be obtained first and then input into the attribute semantic extraction networks corresponding to the plurality of test question attributes (e.g., main knowledge point, solution method, question-setting way, difficulty), so as to obtain the semantic information corresponding to each attribute respectively (e.g., main-knowledge-point semantic information, solution-method semantic information, question-setting semantic information, difficulty semantic information); this semantic information may then be concatenated to obtain the first sample semantic representation of the sample original question. The way of obtaining the original-question word vector representations can refer to the related description above and is not repeated here. For the sample selected question, a processing procedure similar to that of the sample original question may be employed to obtain the second sample semantic representation of the sample selected question. The attribute semantic extraction network may specifically include, but is not limited to, a CNN (Convolutional Neural Network) and the like, without limitation. In addition, the specific training process of the attribute semantic extraction network may refer to the steps in the related embodiments below, which are not repeated here.
In another specific implementation scenario, the first sample semantic representation and the second sample semantic representation may be spliced to fuse the first sample semantic representation and the second sample semantic representation to obtain an initial test question pair representation.
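A minimal sketch of the attribute-level extraction and fusion described above is given below, assuming one 1-D convolutional network per test question attribute with max pooling, and concatenation both for fusing the per-attribute semantics and for fusing the first and second sample semantic representations; the layer shapes and sizes are assumptions.

```python
import torch
import torch.nn as nn

d, h = 64, 32  # word-vector size and per-attribute semantic size (assumptions)
ATTRIBUTES = ["main_knowledge_point", "solution_method", "question_setting", "difficulty"]

class AttributeSemanticExtractor(nn.Module):
    """One 1-D CNN per test question attribute (shapes assumed), with max pooling;
    the per-attribute semantics are concatenated into a sample semantic representation."""
    def __init__(self):
        super().__init__()
        self.convs = nn.ModuleDict(
            {a: nn.Conv1d(d, h, kernel_size=3, padding=1) for a in ATTRIBUTES}
        )

    def forward(self, word_vectors):                 # (num_words, d)
        x = word_vectors.T.unsqueeze(0)              # (1, d, num_words)
        outs = [self.convs[a](x).max(dim=2).values.squeeze(0) for a in ATTRIBUTES]
        return torch.cat(outs)                       # (len(ATTRIBUTES) * h,)

extractor = AttributeSemanticExtractor()
original_q = torch.randn(12, d)                      # word vectors of the sample original question
selected_q = torch.randn(9, d)                       # word vectors of the sample selected question
first_repr = extractor(original_q)                   # first sample semantic representation
second_repr = extractor(selected_q)                  # second sample semantic representation
pair_repr = torch.cat([first_repr, second_repr])     # initial test question pair representation
```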
Step S13: inputting the initial user representation and the initial test question pair representations into the test question recommendation model to obtain a final user representation of the user and a final test question pair representation of each first sample test question pair.
In the disclosed embodiment, the end-user representation includes: semantic information of the user and semantic information of a first sample question pair belonging to the user. The semantic information of the user itself may include semantic information related to only the attribute of the user itself, and the semantic information of different users may be different, and specifically may include but is not limited to: age semantic information, gender semantic information, grade semantic information, school district semantic information, grade semantic information, and the like, which are not limited herein.
In an implementation scenario, the test question recommendation model may perform semantic extraction on the initial user representation and the initial test question pair representation respectively to extract deep semantic information represented by the initial user and deep semantic information represented by the initial test question pair respectively, and on this basis, the deep semantic information of the user and the deep semantic information of the first sample test question pair belonging to the user may be fused to obtain the final user representation.
In another implementation scenario, to improve efficiency, the test question recommendation model may specifically include a graph neural network (GNN), so that a user-test-question-pair interaction graph may be constructed using the initial user representation and the initial test question pair representations, and the interaction graph may then be input into the graph neural network to obtain the final user representation and the final test question pair representations. In this way, by setting the test question recommendation model to include a graph neural network, constructing the user-test-question-pair interaction graph from the initial user representation and the initial test question pair representations, and inputting the interaction graph into the graph neural network, the graph neural network can be used to aggregate the semantic information of test question pairs and of users, which improves the efficiency and accuracy of information aggregation and thereby the accuracy of the final user representation and of the final test question pair representations.
In a specific implementation scenario, please refer to fig. 2 and fig. 3 in combination; fig. 3 is a schematic framework diagram of an embodiment of the user-test-question-pair interaction graph. As shown in fig. 2 and fig. 3, fig. 3 is the test question pair interaction graph obtained by unfolding fig. 2 around user u2; interaction graphs can likewise be unfolded around users u1 and u3 in fig. 2, which is not described again in detail here. The information aggregation process of the graph neural network is briefly described below with reference to fig. 3. In the semantic information aggregation process, the semantic information of the nodes is acquired layer by layer. For example, at layer 0, the semantic information of user u2 is the initial user representation of user u2, the semantic information of test question pair i1-i3 is the initial test question pair representation of pair i1-i3, the semantic information of pair i2-i5 is the initial test question pair representation of pair i2-i5, and the semantic information of pair i4-i6 is the initial test question pair representation of pair i4-i6. For convenience of description, the semantic information of the user at layer 0 may be denoted e_u^(0), and the semantic information of a test question pair at layer 0 may be denoted e_v^(0).
At the k-th layer, during information aggregation, the semantic information e_v^(k-1) of all the test question pairs belonging to the user at the previous layer (i.e., layer k-1) and the semantic information e_u^(k-1) of the user itself at the previous layer are combined after normalization, and the semantic information e_u^(k) of the user at the k-th layer is obtained through an activation function (such as sigmoid). Specifically, it can be expressed as:

e_u^(k) = σ( W^(k) · ( Σ_{v∈N(u)} e_v^(k-1) / √(|N(u)| · |N(v)|) + e_u^(k-1) ) )    (1)

In the above formula (1), σ(·) represents the activation function, W^(k) represents the network parameters of the graph neural network at the k-th layer, N(u) represents the set of test question pairs belonging to the user, |N(u)| · |N(v)| is the normalization term formed from the numbers of neighbors of the user and of the test question pair, and · represents the product operation.
After multi-layer information aggregation, the semantic information at each layer is obtained. On this basis, the semantic information of the user at all layers can be fused (e.g., concatenated) as the final user representation e_u of the user, and the semantic information of a test question pair at all layers can be fused (e.g., concatenated) as the final test question pair representation e_v of that pair. Referring to fig. 3, the semantic information of user u2 at layer 0, layer 1 and layer 2 may be fused to finally obtain the final user representation of user u2, and the other cases can be deduced by analogy, which are not listed one by one here.
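The layer-wise aggregation of formula (1) and the fusion of per-layer semantics can be sketched as follows on a toy interaction graph built from fig. 2. The embedding size, the number of layers, the sigmoid activation and the symmetric normalization follow one reasonable reading of the description above; they are assumptions rather than the only possible implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_layers = 64, 2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def propagate(user_emb, pair_emb, user_pairs, pair_users, weights):
    """Run the layer-wise aggregation of formula (1) and return per-layer embeddings.
    user_pairs[u] is N(u), the test question pairs belonging to user u;
    pair_users[v] is N(v), the users linked to pair v."""
    user_layers, pair_layers = [user_emb], [pair_emb]
    for k in range(num_layers):
        W = weights[k]
        new_user = {}
        for u, e_u in user_layers[-1].items():
            agg = sum(pair_layers[-1][v] / np.sqrt(len(user_pairs[u]) * len(pair_users[v]))
                      for v in user_pairs[u])
            new_user[u] = sigmoid(W @ (agg + e_u))
        new_pair = {}
        for v, e_v in pair_layers[-1].items():
            agg = sum(user_layers[-1][u] / np.sqrt(len(pair_users[v]) * len(user_pairs[u]))
                      for u in pair_users[v])
            new_pair[v] = sigmoid(W @ (agg + e_v))
        user_layers.append(new_user)
        pair_layers.append(new_pair)
    return user_layers, pair_layers

# Toy interaction graph following fig. 2 (embedding shapes are assumptions).
user_pairs = {"u1": ["i1-i3"], "u2": ["i1-i3", "i2-i5", "i4-i6"], "u3": ["i1-i3", "i3-i4", "i4-i6"]}
pair_users = {"i1-i3": ["u1", "u2", "u3"], "i2-i5": ["u2"], "i3-i4": ["u3"], "i4-i6": ["u2", "u3"]}
user_emb = {u: rng.normal(scale=0.1, size=d) for u in user_pairs}
pair_emb = {v: rng.normal(scale=0.1, size=d) for v in pair_users}
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(num_layers)]

user_layers, pair_layers = propagate(user_emb, pair_emb, user_pairs, pair_users, weights)
# Final representations: fuse (here, concatenate) the per-layer semantic information.
final_user = {u: np.concatenate([layer[u] for layer in user_layers]) for u in user_pairs}
final_pair = {v: np.concatenate([layer[v] for layer in pair_layers]) for v in pair_users}
```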
In another specific implementation scenario, the test question recommendation model is obtained through several rounds of training, and in the current round the final user representation obtained in the previous round can be used as the initial user representation of the corresponding user, so that the accuracy of the final user representation is continuously improved over the rounds. For example, in the 2nd round of training, the final user representation obtained by the test question recommendation model in the 1st round may be used as the initial user representation of the corresponding user, and so on for the other rounds, which are not described again here.
In yet another specific implementation scenario, the initial test question pair representation of the first sample test question pair remains unchanged over the several rounds of training of the test question recommendation model. Its specific obtaining process can refer to the foregoing related description and is not repeated here. In this case, after the initial test question pair representations are obtained in the 1st round of training, they can be used directly as the initial test question pair representations in each subsequent round, which helps reduce the complexity of training the test question recommendation model.
Step S14: predicting, by using the final user representation and the final test question pair representations, the sample adaptation degree between the user and the sample selected question in each group of first sample test question pairs.
In one implementation scenario, in the case that there is only one user, the final user representation of the user may be multiplied by the final test question pair representation of each first sample test question pair, so as to obtain the sample adaptation degree between the user and the sample selected question in the corresponding first sample test question pair.
In another implementation scenario, when there are multiple users, for each user the final user representation of that user may be multiplied by the final test question pair representations of all the first sample test question pairs, so as to obtain the sample adaptation degree between that user and the sample selected question in each first sample test question pair. Referring to fig. 2, the final user representation e_{u2} of user u2 may be multiplied respectively by the final test question pair representations of the first sample test question pairs i1-i3, i2-i5, i3-i4 and i4-i6, so as to obtain the sample adaptation degree between user u2 and the sample selected question i3 in pair i1-i3, between user u2 and the sample selected question i5 in pair i2-i5, between user u2 and the sample selected question i4 in pair i3-i4, and between user u2 and the sample selected question i6 in pair i4-i6. Other cases can be deduced by analogy and are not listed one by one here.
Step S15: adjusting the network parameters of the test question recommendation model based on the sample adaptation degree.
In one implementation scenario, in the case that there is only one user, the actual adaptation degree between the user and the sample selected question in each first sample test question pair belonging to the user may be set to a first value (e.g., 1), so that the network parameters of the test question recommendation model may be adjusted using the difference between the sample adaptation degree and the actual adaptation degree.
In another implementation scenario, in the case that there are multiple users, the actual adaptation degrees between each of the users and the sample selected question in each group of first sample test question pairs can be obtained: the actual adaptation degree between a user and the sample selected question in a first sample test question pair belonging to that user is a first value (e.g., 1), the actual adaptation degree between a user and the sample selected question in a first sample test question pair not belonging to that user is a second value (e.g., 0), and the first value is greater than the second value; the network parameters of the test question recommendation model can then be adjusted using the difference between the sample adaptation degrees and the actual adaptation degrees. In this way, by setting the actual adaptation degrees as above and adjusting the network parameters with the difference between the sample adaptation degrees and the actual adaptation degrees, the training process helps the representation of each user discern the semantic information of first sample test question pairs not adapted to that user (i.e., negative samples) and further strengthens the learning of the semantic information of first sample test question pairs adapted to that user (i.e., positive samples), thereby further improving the accuracy of test question recommendation.
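As one reasonable reading of "adjusting the network parameters using the difference between the sample adaptation degrees and the actual adaptation degrees", the sketch below computes a binary cross-entropy loss over the predicted adaptation degrees against the 0/1 actual adaptation degrees implied by fig. 2; the application itself does not name a specific loss function, so this choice is an assumption.

```python
import numpy as np

def bce_loss(scores, labels):
    """Binary cross-entropy between predicted adaptation degrees (dot products)
    and the 0/1 actual adaptation degrees; an assumed concrete loss."""
    probs = 1.0 / (1.0 + np.exp(-scores))   # squash dot products to (0, 1)
    eps = 1e-9
    return -np.mean(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))

# Actual adaptation degrees for users u1..u3 against pairs i1-i3, i2-i5, i3-i4, i4-i6
# (1 if the pair belongs to the user, 0 otherwise), following fig. 2.
labels = np.array([
    [1, 0, 0, 0],   # u1: i1-i3
    [1, 1, 0, 1],   # u2: i1-i3, i2-i5, i4-i6
    [1, 0, 1, 1],   # u3: i1-i3, i3-i4, i4-i6
], dtype=float)
scores = np.random.default_rng(0).normal(size=labels.shape)  # stand-in predictions
loss = bce_loss(scores, labels)
```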
According to the above scheme, first sample test question pairs belonging to a user are acquired, where each first sample test question pair comprises a sample original question and a sample selected question, and the sample selected question is the test question selected by the user from a plurality of recommended test questions of the sample original question; an initial user representation of the user and an initial test question pair representation of each first sample test question pair are acquired; the initial user representation and the initial test question pair representations are input into the test question recommendation model to obtain a final user representation of the user and a final test question pair representation of each first sample test question pair, where the final user representation comprises the semantic information of the user itself and the semantic information of the first sample test question pairs belonging to the user; on this basis, the sample adaptation degree between the user and the sample selected question in each group of first sample test question pairs is predicted using the final user representation and the final test question pair representations, and the network parameters of the test question recommendation model are adjusted based on the sample adaptation degree. Because the final user representation contains the semantic information of the user itself and the semantic information of the first sample test question pairs belonging to the user, and the sample selected questions in those pairs are adapted to the user, the final user representation contains not only the user's own semantic information but also the semantic information of adapted test questions. This helps improve the accuracy of predicting the sample adaptation degree with the final user representation, which in turn helps improve the accuracy of adjusting the network parameters, i.e., the accuracy of the test question recommendation model, and ultimately the accuracy of test question recommendation.
Referring to fig. 4, fig. 4 is a schematic framework diagram of an embodiment of a relationship prediction model. It should be noted that the relationship prediction model is used to predict the test question relationship between test questions; thus, during test question recommendation, in addition to recommending test questions for an original question, the test question relationship between the original question and the recommended test questions can be further predicted and output to the user, which helps improve the interpretability of test question recommendation. As shown in fig. 4, the relationship prediction model may specifically include attribute semantic extraction networks corresponding to a plurality of test question attributes, an interactive semantic extraction network, and a test question relationship prediction network. The training process of the relationship prediction model is described in detail below with reference to the flow diagram shown in fig. 5:
Step S51: acquiring a plurality of groups of second sample test question pairs, and acquiring third sample test questions labeled with actual test question attributes.
In the embodiment of the present disclosure, the second sample test question pair includes a first sample test question and a second sample test question, and the second sample test question pair is labeled with the actual test question relationship between the first sample test question and the second sample test question. It should be noted that the test question relationship may include, but is not limited to: context variant, question-setting variant, condition variant, similar, duplicate, irrelevant, and so on, without limitation. A context variant means that the background materials of the two test questions are similar; a question-setting variant means that the question-setting ways of the two test questions are similar; a condition variant means that the solution conditions of the two test questions are similar; similar means that the two test questions are roughly the same in terms of, for example, context, question setting and conditions; duplicate means that the two test questions are exactly the same; irrelevant means that the two test questions are not similar in any of these respects.
In one implementation scenario, the test question attributes may specifically include, but are not limited to: the main knowledge points of the test questions, the solution mode of the test questions, the question setting mode of the test questions, the difficulty of the test questions, and the like may be referred to the related description in the foregoing embodiments, and are not described herein again.
Step S52: training the attribute semantic extraction networks several times with the third sample test questions until a preset condition is met.
Referring to fig. 4, the relational prediction model may include attribute semantic extraction networks corresponding to a plurality of test question attributes, respectively. For example, it may include: an attribute semantic extraction network corresponding to the test question main knowledge point, an attribute semantic extraction network corresponding to the test question solving mode, an attribute semantic extraction network corresponding to the test question setting mode, an attribute semantic extraction network corresponding to the test question difficulty and the like, which are not limited herein.
Taking the attribute semantic extraction network corresponding to the main knowledge point of the test question as an example, please continue to refer to fig. 4: a third sample test question may be segmented to obtain a plurality of words, and vector mapping may then be performed on these words to obtain a third vector representation of the third sample test question; the specific processes of word segmentation and vector mapping can refer to the related description in the foregoing embodiments and are not repeated here. On this basis, the third vector representation can be input into the attribute semantic extraction network corresponding to the main knowledge point to obtain the semantic information of the third sample test question related to the main knowledge point; this semantic information can then be input into a fully connected layer for prediction to obtain the predicted main knowledge point of the third sample test question, and the network parameters of the attribute semantic extraction network corresponding to the main knowledge point can be adjusted using the difference between the actual main knowledge point labeled on the third sample test question and the predicted main knowledge point.
In an implementation scenario, the training process of the attribute semantic extraction network corresponding to the attributes of other test questions may refer to the above process and so on, and no example is given here.
In another implementation scenario, the attribute semantic extraction network may specifically include, but is not limited to, a convolutional neural network, so that a hidden-layer output of the convolutional neural network may be used as the semantic information related to the main knowledge point of the test question.
In another implementation scenario, the loss value between the actual main knowledge point labeled on the third sample test question and the predicted main knowledge point may specifically be calculated with a cross-entropy loss function, so that the loss value can be used to adjust the network parameters of the attribute semantic extraction network corresponding to the main knowledge point of the test question.
In yet another implementation scenario, the preset condition may include any one of: the loss value is less than a preset loss threshold (e.g., 0.1, etc.), and the training number reaches a preset number threshold (e.g., 1000, etc.), which is not limited herein.
In the embodiment of the present disclosure, after the attribute semantic extraction network is trained for several times until the preset condition is satisfied, the attribute semantic extraction network may be considered to be converged. Because the number of the third sample test questions marked with the actual test question attributes is large in a real scene, the network performance of the attribute semantic extraction network can be greatly improved by depending on a large number of the third sample test questions marked with the actual test question attributes, on the basis, even if the second sample test questions marked with the actual test question relationships are rare, the overall effect of the relationship prediction model can be ensured as much as possible, and the dependence of model training on few samples can be greatly reduced.
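A minimal training sketch for one attribute semantic extraction network (the main-knowledge-point branch) is given below, assuming a 1-D convolutional network with max pooling, a fully connected prediction layer and a cross-entropy loss as described above; the layer sizes, the number of knowledge-point classes and the optimizer are assumptions. The other attribute branches would be trained the same way on their own labels.

```python
import torch
import torch.nn as nn

d, h, num_knowledge_points = 64, 32, 50  # sizes are assumptions

conv = nn.Conv1d(d, h, kernel_size=3, padding=1)  # attribute semantic extraction network
head = nn.Linear(h, num_knowledge_points)         # fully connected prediction layer
optimizer = torch.optim.Adam(list(conv.parameters()) + list(head.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(word_vectors, actual_knowledge_point):
    """word_vectors: (num_words, d) word vectors of a third sample test question;
    actual_knowledge_point: integer label annotated on that question."""
    semantics = conv(word_vectors.T.unsqueeze(0)).max(dim=2).values  # (1, h)
    logits = head(semantics)                                         # (1, num_knowledge_points)
    loss = criterion(logits, torch.tensor([actual_knowledge_point]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

loss = train_step(torch.randn(15, d), actual_knowledge_point=7)
```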
Step S53: extracting a first sample test question representation of the first sample test question and a second sample test question representation of the second sample test question by using the attribute semantic extraction networks.
With continued reference to fig. 4, the first sample test question and the second sample test question may be input into the left branch and the right branch of fig. 4, respectively, to obtain the first sample test question representation and the second sample test question representation. It should be noted that the word vector mapping and the attribute semantic extraction networks of the left branch and the right branch are the same; the only difference between the two branches is the input data. Specifically, the first sample test question may be segmented to obtain a plurality of words, and vector mapping may then be performed on these words to obtain a first vector representation of the first sample test question; the specific processes of word segmentation and vector mapping can refer to the related description above and are not repeated here. The first vector representation can then be input into the attribute semantic extraction networks corresponding to the plurality of test question attributes to obtain the semantic information of each attribute (i.e., the rectangles filled with different shades in fig. 4), and finally the combination of the semantic information of the plurality of test question attributes is used as the first sample test question representation. The process of acquiring the second sample test question representation can refer to that of acquiring the first sample test question representation and is not described here again.
Step S54: extracting the sample interactive semantic representation of the second sample test question pair by using the interactive semantic extraction network.
In an embodiment of the disclosure, the sample interactive semantic representation includes: semantic relatedness between sample words in the first sample test question and sample words in the second sample test question.
With continued reference to fig. 4, in order to improve the accuracy of test question relationship prediction, the context information of the test questions may be further encoded to help distinguish word ambiguity. Specifically, the first vector representation containing each sample word in the first sample test question may be input into a bidirectional long short-term memory network (Bi-LSTM) to obtain a forward semantic representation and a backward semantic representation of each sample word; on this basis, the forward and backward semantic representations of each sample word may be fused (e.g., added) to obtain the sample word representation of that word. Likewise, the second vector representation containing each sample word in the second sample test question may be input into the bidirectional long short-term memory network, and the sample word representation of each of its sample words obtained through a similar process. For convenience of description, the sample word representations of the sample words in the first sample test question may be denoted [h_a1, h_a2, …, h_an], and the sample word representations of the sample words in the second sample test question denoted [h_b1, h_b2, …, h_bm].
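The contextual encoding described above can be sketched as follows, assuming a bidirectional LSTM whose forward and backward hidden states are fused by addition into the sample word representations h_a1…h_an and h_b1…h_bm; the hidden size is an assumption.

```python
import torch
import torch.nn as nn

d, hdim = 64, 32  # word-vector and hidden sizes (assumptions)

bilstm = nn.LSTM(input_size=d, hidden_size=hdim, bidirectional=True, batch_first=True)

def contextual_word_representations(word_vectors):    # (num_words, d)
    out, _ = bilstm(word_vectors.unsqueeze(0))         # (1, num_words, 2 * hdim)
    fwd, bwd = out[..., :hdim], out[..., hdim:]        # forward / backward states
    return (fwd + bwd).squeeze(0)                      # fuse by addition -> (num_words, hdim)

h_a = contextual_word_representations(torch.randn(12, d))  # first sample test question words
h_b = contextual_word_representations(torch.randn(9, d))   # second sample test question words
```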
In one implementation scenario, the semantic relatedness may include a semantic relatedness between a sample term representation of each sample term in the first sample question and a sample global representation of the second sample question. In the above manner, the semantic relevance is set to include: the semantic relevance between the sample word representation of each sample word in the first sample test question and the sample global representation of the second sample test question can be modeled from the dimension of each sample word in the first sample test question and the global level of the second sample test question, and the significance of each sample word in the first sample test question can be highlighted.
In another implementation scenario, the semantic relatedness may further include the semantic relatedness between the sample word representation of each sample word in the first sample test question and a sample local representation of the second sample test question. In this way, the semantic relatedness is further modeled from the dimension of each sample word in the first sample test question against the local level of the second sample test question, which helps alleviate the problem of two questions differing in surface wording while sharing the same semantics.
In yet another implementation scenario, the semantic relatedness may further include a semantic relatedness between the sample term representation of each sample term in the second sample question and the sample global representation of the first sample question. In the above manner, the semantic relevance is set to include: the semantic relevance between the sample word representation of each sample word in the second sample test question and the sample global representation of the first sample test question can be modeled from the dimension of each sample word in the second sample test question and the global level of the first sample test question, and the significance of each sample word in the second sample test question can be highlighted.
In yet another implementation scenario, the semantic relatedness may further include the semantic relatedness between the sample word representation of each sample word in the second sample test question and a sample local representation of the first sample test question. In this way, the semantic relatedness is further modeled from the dimension of each sample word in the second sample test question against the local level of the first sample test question, which also helps alleviate the problem of two questions differing in surface wording while sharing the same semantics.
It should be noted that the semantic correlation may specifically be cosine similarity. In addition, the semantic relevancy is modeled from two dimensions such as the dimension of each sample word in the first sample test question, the dimension of each sample word in the second sample test question and the like, and a plurality of angles such as global semantics, local semantics and the like, so that the richness of sample interaction semantic representation can be favorably improved greatly, and the accuracy of test question relation prediction can be favorably improved.
In one specific implementation scenario, the sample word representation of the sample word at the end of a sample test question may be taken as the sample global representation. Specifically, when the sample global representation is that of the first sample test question, the sample test question here is the first sample test question, i.e., the sample word representation of the word at the end of the first sample test question may be taken as its sample global representation; taking the sample word representations [h_a1, h_a2, …, h_an] of the first sample test question and [h_b1, h_b2, …, h_bm] of the second sample test question as an example, h_an can be taken as the sample global representation of the first sample test question. Alternatively, when the sample global representation is that of the second sample test question, the sample word representation of the word at the end of the second sample test question may be taken as its sample global representation; with the same example, h_bm can be taken as the sample global representation of the second sample test question.
In another specific implementation scenario, the average of the sample word representations of all sample words in a sample test question may be used as the sample global representation. Specifically, when the sample global representation is that of the first sample test question, the average of the sample word representations of all sample words in the first sample test question may be used as its sample global representation; with the example above, the average of h_a1, h_a2, …, h_an can be used as the sample global representation of the first sample test question. Alternatively, when the sample global representation is that of the second sample test question, the average of the sample word representations of all sample words in the second sample test question may be used as its sample global representation; with the same example, the average of h_b1, h_b2, …, h_bm can be used as the sample global representation of the second sample test question.
Therefore, when computing the semantic relatedness between the sample word representation of each sample word in the first sample test question and the sample global representation of the second sample test question, or between the sample word representation of each sample word in the second sample test question and the sample global representation of the first sample test question, the sample global representation does not change with the sample word.
In yet another specific implementation scenario, the sample target word closest in position to the sample reference word may be determined in the sample test question, and the sample word representation of that sample target word used as the sample local representation. Specifically, when the sample local representation is that of the first sample test question, the sample test question refers to the first sample test question, the sample target word is a sample word in the first sample test question, and the reference sample word is a sample word in the second sample test question. With the sample word representations of the first sample test question denoted [h_a1, h_a2, …, h_an] and those of the second sample test question denoted [h_b1, h_b2, …, h_bm], for the reference sample word b1 in the second sample test question, the sample word representation (e.g. h_a1) of the sample target word (e.g. a1) closest in position to b1 in the first sample test question can serve as the sample local representation of the first sample test question. Alternatively, when the sample local representation is that of the second sample test question, the sample test question is the second sample test question, the sample target word is a sample word in the second sample test question, and the reference sample word is a sample word in the first sample test question; for the reference sample word a1 in the first sample test question, the sample word representation (e.g. h_b1) of the sample target word (e.g. b1) closest in position to a1 in the second sample test question can serve as the sample local representation of the second sample test question.
In yet another specific implementation scenario, the sample target word with the largest semantic relatedness to the sample reference word may be determined in the sample test question, and the sample word representation of that sample target word used as the sample local representation. Specifically, when the sample local representation is that of the first sample test question, the sample test question refers to the first sample test question, the sample target word is a sample word in the first sample test question, and the reference sample word is a sample word in the second sample test question. With the sample word representations of the first sample test question denoted [h_a1, h_a2, …, h_an] and those of the second sample test question denoted [h_b1, h_b2, …, h_bm], for the reference sample word b1 in the second sample test question, the sample word representation (e.g. h_a2) of the sample target word (e.g. a2) having the largest semantic relatedness to b1 in the first sample test question can serve as the sample local representation of the first sample test question. Alternatively, when the sample local representation is that of the second sample test question, the sample target word is a sample word in the second sample test question and the reference sample word is a sample word in the first sample test question; for the reference sample word a1 in the first sample test question, the sample word representation (e.g. h_b2) of the sample target word (e.g. b2) having the largest semantic relatedness to a1 in the second sample test question can serve as the sample local representation of the second sample test question.
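The two ways of choosing the sample local representation could be sketched as follows; how "closest in position" maps across questions of different lengths, and the function names, are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def local_repr_by_position(h_target: torch.Tensor, ref_index: int) -> torch.Tensor:
    """Take the target-question word whose position is closest to that of the
    reference word (here simply the same index, clamped to the question length)."""
    idx = min(ref_index, h_target.size(0) - 1)
    return h_target[idx]

def local_repr_by_relevance(h_target: torch.Tensor, h_ref_word: torch.Tensor) -> torch.Tensor:
    """Take the target-question word with the largest cosine similarity to the
    reference word's representation."""
    sims = F.normalize(h_target, dim=-1) @ F.normalize(h_ref_word, dim=-1)
    return h_target[sims.argmax()]
```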
It can be seen that in the process of obtaining the semantic correlation between the sample word representation of each sample word in the first sample test question and the sample local representation of the second sample test question, or in the process of obtaining the semantic correlation between the sample word representation of each sample word in the second sample test question and the sample local representation of the first sample test question, the sample local representation may vary with the sample word.
Step S55: input the first sample test question representation, the second sample test question representation, and the sample interactive semantic representation into the test question relation prediction network to obtain the sample test question relation between the first sample test question and the second sample test question.
As shown in fig. 4, the first sample test question representation, the second sample test question representation, and the sample interactive semantic representation may specifically be concatenated, and the concatenated semantic representation input to the test question relation prediction network to obtain the sample test question relation between the first sample test question and the second sample test question.
In one implementation scenario, the test question relation prediction network may include, but is not limited to, a fully connected layer followed by softmax.
In another implementation scenario, the test question relation prediction network may output prediction probability values for several preset test question relations. The preset test question relations may include, but are not limited to: situation variation, question variation, condition variation, similar, repeated, and irrelevant; reference may be made to the related descriptions above, which are not repeated here.
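A minimal sketch of such a prediction head is given below; the class name, the dimensions, and the choice of a single linear layer are illustrative assumptions (the embodiment only requires, e.g., a fully connected layer and softmax).

```python
import torch
import torch.nn as nn

class RelationPredictionHead(nn.Module):
    """Concatenates the two test question representations with the interactive
    semantic representation and scores the 6 preset test question relations."""
    def __init__(self, dim_q: int, dim_inter: int, num_relations: int = 6):
        super().__init__()
        self.fc = nn.Linear(2 * dim_q + dim_inter, num_relations)

    def forward(self, repr_a, repr_b, repr_inter):
        x = torch.cat([repr_a, repr_b, repr_inter], dim=-1)
        # raw scores; applying softmax to them yields the prediction
        # probability of each preset relation (situation variation, ...)
        return self.fc(x)
```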
Step S56: adjust the network parameters of the interactive semantic extraction network and the test question relation prediction network based on the difference between the actual test question relation and the sample test question relation.
Specifically, the actual test question relation and the prediction probability values of the several preset test question relations can be processed with a preset loss function to obtain a loss value, and the loss value can then be used to adjust the network parameters of the interactive semantic extraction network and the test question relation prediction network.
In one implementation scenario, the actual test question relation may be encoded as a one-hot vector label. Taking the case where the preset test question relations are situation variation, question variation, condition variation, similar, repeated, and irrelevant as an example, the actual test question relation can be encoded as a 1×6 vector label in which the element at the position corresponding to the actual relation is set to 1 and the other elements are set to 0. For instance, if the actual test question relation is "situation variation", it may be encoded as the one-hot vector label [1 0 0 0 0 0]; the other relations can be encoded analogously and are not enumerated here.
In another implementation scenario, the preset loss function may include, but is not limited to, a cross-entropy loss function.
It should be noted that, to make the relation prediction model as accurate as possible, the interactive semantic extraction network and the test question relation prediction network may be trained for several rounds with the second sample test question pairs until a preset condition is satisfied; that is, when the preset condition is not satisfied, step S53 and the subsequent steps may be executed again. The preset condition may be either of the following: the loss value is less than a preset loss threshold (e.g., 0.1), or the number of training rounds reaches a preset count threshold (e.g., 1000), which is not limited here.
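A hedged sketch of the second-stage loop described in steps S55–S56 follows; the data layout (batches carrying the two question representations, the question pair inputs, and an integer relation label in {0..5}), the optimizer choice, and the variable names are assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn

def train_stage_two(predictor, interaction_net, batches,
                    max_steps=1000, loss_threshold=0.1):
    """Only the interactive semantic extraction network and the relation
    prediction network are updated; the attribute semantic extraction
    networks trained in stage one are left untouched."""
    criterion = nn.CrossEntropyLoss()  # takes the one-hot label as a class index
    optimizer = torch.optim.Adam(
        list(predictor.parameters()) + list(interaction_net.parameters()))
    for step, (repr_a, repr_b, pair_inputs, relation_label) in enumerate(batches):
        logits = predictor(repr_a, repr_b, interaction_net(pair_inputs))
        loss = criterion(logits, relation_label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # preset conditions: loss below a threshold or a training-count cap
        if loss.item() < loss_threshold or step + 1 >= max_steps:
            break
```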
Different from the foregoing embodiment, the semantic information of the several test question attributes is extracted by attribute semantic extraction networks that correspond to those attributes respectively and that belong to the relation prediction model, which further includes an interactive semantic extraction network and a test question relation prediction network. Several groups of second sample test question pairs are obtained, each consisting of a first sample test question and a second sample test question and labeled with the actual test question relation between them, together with third sample test questions labeled with actual test question attributes. The attribute semantic extraction networks are first trained for several rounds with the third sample test questions until a preset condition is satisfied. They are then used to extract the first sample test question representation of the first sample test question and the second sample test question representation of the second sample test question, while the interactive semantic extraction network extracts the sample interactive semantic representation of the second sample test question pair, which includes the semantic relatedness between sample words in the first sample test question and sample words in the second sample test question. The first sample test question representation, the second sample test question representation, and the sample interactive semantic representation are input into the test question relation prediction network to obtain the sample test question relation between the two test questions, and the network parameters of the interactive semantic extraction network and the test question relation prediction network are adjusted based on the difference between the actual and sample test question relations. Training of the relation prediction model can therefore proceed in stages: the first stage relies on a large number of third sample test questions labeled with actual test question attributes to train the attribute semantic extraction networks, greatly improving their performance, so that even if second sample test question pairs labeled with actual test question relations are scarce in the second stage, the overall effect of the relation prediction model can still be ensured as far as possible, greatly reducing the dependence of model training on scarce labeled samples.
Referring to fig. 6, fig. 6 is a flowchart illustrating an embodiment of a method for recommending test questions according to the present application. Specifically, the method may include the steps of:
Step S61: acquire the original question and a plurality of candidate test questions of a target user, and acquire the final user representation of the target user.
In the embodiment of the present disclosure, the end-user representation is obtained by using any one of the above training methods of the test question recommendation model. Specifically, after the training of the test question recommendation model is finished, the end user representation of each user can be obtained, and on the basis, the end user representation of the target user can be obtained through screening.
In one implementation scenario, the target user may be any one of several users. Specifically, after a user finishes the current test question, the current test question can be taken as the original question and that user as the target user, so that test questions can be recommended to the user based on the current test question. The current test question may be one the user answered incorrectly or one the user answered correctly, which is not limited here.
In one implementation scenario, the candidate test questions may be all test questions in the question bank, or may be test questions of the same teaching unit as the original questions, which is not limited herein.
Step S62: take each candidate test question together with the original question as a group of test question pairs, and obtain the test question pair representation of each group of test question pairs.
For example, original question A and candidate test question 1, candidate test question 2, and candidate test question 3 may respectively form test question pair 1 (i.e., original question A and candidate test question 1), test question pair 2 (i.e., original question A and candidate test question 2), and test question pair 3 (i.e., original question A and candidate test question 3). Other cases can be deduced by analogy and are not enumerated here.
In addition, a first semantic representation of the original question in the test question pair and a second semantic representation of the candidate test question in the test question pair can be extracted; the first semantic representation contains semantic information of several test question attributes of the original question, and the second semantic representation contains semantic information of the same test question attributes of the candidate test question, so that the two can be fused to obtain the test question pair representation. Reference may be made to the related steps in the foregoing disclosed embodiments, which are not repeated here.
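For illustration, the fusion of the two semantic representations into a test question pair representation could be as simple as the concatenation below; the fusion operator is an assumption of the sketch, since the embodiment only states that the two representations are fused.

```python
import torch

def build_pair_representation(sem_original: torch.Tensor,
                              sem_candidate: torch.Tensor) -> torch.Tensor:
    """Fuse the semantic representation of the original question with that of
    a candidate test question to form the test question pair representation."""
    return torch.cat([sem_original, sem_candidate], dim=-1)
```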
In order to improve the semantic extraction efficiency, the attribute semantic extraction network corresponding to the test question attributes in fig. 4 may be used to extract semantic information of the test question attributes, which may specifically refer to the relevant description in the foregoing disclosed embodiment, and is not described herein again.
Step S63: obtain the predicted adaptation degree between the target user and each candidate test question using the final user representation and each test question pair representation.
Specifically, the final user representation may be multiplied by the test question pair representation of each test question pair to obtain the predicted adaptation degree between the target user and the candidate test question in that pair. Reference may be made to the related description in the foregoing embodiments, which is not repeated here.
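A minimal sketch of this multiplication, assuming the final user representation and the test question pair representations share the same dimensionality (the function name and tensor layout are assumptions):

```python
import torch

def predicted_adaptation(user_repr: torch.Tensor,
                         pair_reprs: torch.Tensor) -> torch.Tensor:
    """user_repr: [d]; pair_reprs: [k, d] for k test question pairs.
    The inner product yields one predicted adaptation degree per candidate."""
    return pair_reprs @ user_repr  # shape [k]
```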
Step S64: recommend candidate test questions to the target user based on the predicted adaptation degree.
In one implementation scenario, candidate test questions may be recommended to the target user in descending order of predicted adaptation degree; that is, the candidate test questions can be arranged from the largest predicted adaptation degree to the smallest for the user to choose from. Continuing the example of candidate test questions 1, 2, and 3: if their predicted adaptation degrees are 0.9, 0.8, and 0.85 respectively, they can be recommended to the user in the order candidate test question 1, candidate test question 3, candidate test question 2.
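Reproducing the example above, ranking by predicted adaptation degree is a plain descending sort (the dictionary of scores is only illustrative):

```python
scores = {"candidate test question 1": 0.9,
          "candidate test question 2": 0.8,
          "candidate test question 3": 0.85}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['candidate test question 1', 'candidate test question 3', 'candidate test question 2']
```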
In another implementation scenario, to enhance the reliability of test question recommendation, each group of test question pairs may additionally be predicted to obtain the predicted test question relation between the original question and the candidate test question in the pair, so that candidate test questions can be recommended to the target user in descending order of predicted adaptation degree while the predicted test question relation between each recommended candidate test question and the original question is output. In this way, the relation between each recommended test question and the original question accompanies the recommendation, which strengthens the interpretability of test question recommendation and thus improves its reliability.
In a specific implementation scenario, the predicted test question relation may be obtained by a relation prediction model, which may include attribute semantic extraction networks corresponding to the several test question attributes, an interactive semantic extraction network, and a test question relation prediction network. Specifically, the attribute semantic extraction networks may be used to extract a first test question representation of the original question and a second test question representation of the candidate test question, and the interactive semantic extraction network may be used to extract the interactive semantic representation of the test question pair, which includes the semantic relatedness between words in the original question and words in the candidate test question; the first test question representation, the second test question representation, and the interactive semantic representation are then input into the test question relation prediction network to obtain the predicted test question relation between the original question and the candidate test question. Reference may be made to the related description in the foregoing embodiments, which is not repeated here.
It should be noted that the semantic relevance may include at least one of the following: semantic relevancy between the word representation of each word in the original question and the global representation of the candidate test question, and semantic relevancy between the word representation of each word in the original question and the local representation of the candidate test question; and/or, the semantic relatedness may include at least one of: semantic relatedness between the word representation of each word in the candidate test question and the global representation of the original question, and semantic relatedness between the word representation of each word in the candidate test question and the local representation of the original question. Reference may be made to the related description in the foregoing embodiments, which are not repeated herein.
Furthermore, obtaining the global representation may include at least one of: taking the word representation of the word at the end of the test question as the global representation, or taking the average of the word representations of all words in the test question as the global representation; here the test question refers to the original question when the global representation is that of the original question, and to the candidate test question when the global representation is that of the candidate test question. Reference may be made to the related description in the foregoing embodiments, which is not repeated here.
Furthermore, obtaining the local representation may include at least one of: determining the target word closest in position to the reference word in the test question and taking its word representation as the local representation, or determining the target word with the largest semantic relatedness to the reference word in the test question and taking its word representation as the local representation. When the local representation is that of the original question, the test question refers to the original question, the target word is a word in the original question, and the reference word is a word in the candidate test question; when the local representation is that of the candidate test question, the test question refers to the candidate test question, the target word is a word in the candidate test question, and the reference word is a word in the original question. Reference may be made to the related description in the foregoing embodiments, which is not repeated here.
According to the above scheme, the original question and several candidate test questions of the target user are obtained, and the final user representation of the target user is obtained with the training method of the test question recommendation model described in any of the foregoing items, which helps improve the accuracy of the final user representation. Each candidate test question is then paired with the original question as a group of test question pairs, the test question pair representation of each group is obtained, the predicted adaptation degree between the target user and each candidate test question is obtained using the final user representation and each test question pair representation, and candidate test questions are recommended to the target user based on the predicted adaptation degree, which helps improve the accuracy of test question recommendation.
Referring to fig. 7, fig. 7 is a schematic frame diagram of an embodiment of an electronic device 70 of the present application. The electronic device 70 includes a memory 71 and a processor 72 coupled to each other; the memory 71 stores program instructions, and the processor 72 is configured to execute the program instructions to implement the steps in any of the above embodiments of the training method of the test question recommendation model, or the steps in any of the above embodiments of the test question recommendation method. Specifically, the electronic device 70 may include, but is not limited to, a desktop computer, a notebook computer, a tablet computer, a mobile phone, a server, and the like.
Specifically, the processor 72 is configured to control itself and the memory 71 to implement the steps in any of the above embodiments of the training method of the test question recommendation model, or the steps in any of the above embodiments of the test question recommendation method. The processor 72 may also be referred to as a CPU (Central Processing Unit) and may be an integrated circuit chip having signal processing capabilities. The processor 72 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor or any conventional processor. In addition, the processor 72 may be implemented jointly by multiple integrated circuit chips.
In some disclosed embodiments, the processor 72 is configured to obtain a first pair of sample questions belonging to the user; each group of first sample question pairs comprises a sample question and a sample choice question, wherein the sample choice question is a test question selected by a user from a plurality of recommended questions of the sample question; the processor 72 is configured to obtain an initial user representation of the user, and obtain an initial test question pair representation of the first sample test question pair; the processor 72 is configured to input the initial user representation and the initial test question pair representation into the test question recommendation model to obtain a final user representation of the user and a final test question pair representation of the first sample test question pair; wherein the end-user representation comprises: semantic information of the user and semantic information of a first sample question pair belonging to the user; the processor 72 is configured to predict, by using the final user representation and the final test question pair representation, a sample adaptation degree of a sample choice between the user and each group of the first sample test question pairs; the processor 72 is configured to adjust network parameters of the test question recommendation model based on the sample adaptation degree.
According to the above scheme, first sample test question pairs belonging to a user are obtained, each consisting of a sample original question and a sample choice question, where the sample choice question is the test question the user selected from several questions recommended for the sample original question. An initial user representation of the user and an initial test question pair representation of each first sample test question pair are obtained and input into the test question recommendation model to obtain the final user representation of the user and the final test question pair representation of each first sample test question pair, where the final user representation includes the semantic information of the user and the semantic information of the first sample test question pairs belonging to the user. The sample adaptation degree between the user and the sample choice question in each group of first sample test question pairs is then predicted from the final user representation and the final test question pair representations, and the network parameters of the test question recommendation model are adjusted based on the sample adaptation degree. Because the final user representation contains both the user's own semantic information and the semantic information of the first sample test question pairs belonging to the user, and the sample choice questions in those pairs are adapted to the user, the final user representation carries the semantics of both the user and the adapted test questions. This helps improve the accuracy of the sample adaptation degree predicted from the final user representation, hence the accuracy of the parameter adjustment, i.e. of the test question recommendation model, and ultimately the accuracy of test question recommendation.
In some disclosed embodiments, the test question recommendation model includes a graph neural network, and the processor 72 is configured to construct a user test question interaction graph using the initial user representation and the initial test question pair representation; the processor 72 is configured to input the user question interaction diagram into the graph neural network, resulting in an end user representation and an end question pair representation.
Different from the foregoing embodiment, the test question recommendation model is configured to include a graph neural network: a user test question interaction graph is constructed from the initial user representation and the initial test question pair representations and input into the graph neural network to obtain the final user representation and the final test question pair representations. The graph neural network can thus aggregate test question pair semantics with user semantics, improving the efficiency and accuracy of information aggregation and hence the accuracy of the final user representation and the final test question pair representations.
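As a rough sketch of how a graph neural network could aggregate user semantics with test question pair semantics over the user test question interaction graph, one mean-aggregation propagation step (without the learned weights a real graph neural network would add) might look like this; the adjacency layout and function name are assumptions of the sketch.

```python
import torch

def gnn_propagation_step(user_reprs: torch.Tensor,
                         pair_reprs: torch.Tensor,
                         adj: torch.Tensor):
    """adj[i, j] = 1 if user i interacted with test question pair j.
    Each user representation absorbs the mean of its neighbouring pair
    representations, and vice versa."""
    deg_u = adj.sum(dim=1, keepdim=True).clamp(min=1)      # [U, 1]
    deg_p = adj.sum(dim=0, keepdim=True).t().clamp(min=1)  # [P, 1]
    new_user_reprs = (adj @ pair_reprs) / deg_u
    new_pair_reprs = (adj.t() @ user_reprs) / deg_p
    return new_user_reprs, new_pair_reprs
```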
In some disclosed embodiments, there are multiple users, and the processor 72 is configured to obtain the actual adaptation degrees of the multiple users to the sample choice questions in each group of first sample test question pairs. The actual adaptation degree of a user to the sample choice question in a first sample test question pair belonging to that user is a first value, the actual adaptation degree of the user to the sample choice question in a first sample test question pair not belonging to that user is a second value, and the first value is larger than the second value. The processor 72 is configured to adjust the network parameters of the test question recommendation model using the difference between the sample adaptation degree and the actual adaptation degree.
Different from the foregoing embodiment, when there are multiple users, the actual adaptation degrees of the multiple users to the sample choice questions in each group of first sample test question pairs are obtained, with the actual adaptation degree of a user to the sample choice question in a pair belonging to that user set to a first value, that to the sample choice question in a pair not belonging to the user set to a second value, and the first value larger than the second value. Adjusting the network parameters of the test question recommendation model with the difference between the sample adaptation degree and the actual adaptation degree thus helps each user's representation, during training, discern the semantic information of first sample test question pairs that do not fit the user (i.e. negative samples) while further strengthening the learning of first sample test question pairs that do fit the user (i.e. positive samples), further improving the accuracy of test question recommendation.
In some disclosed embodiments, the test question recommendation model is obtained through several rounds of training, and the initial test question pair representation of each first sample test question pair remains unchanged during those rounds; and/or, the test question recommendation model is obtained through several rounds of training, and the processor 72 is configured to use the final user representation obtained in the previous round of training as the initial user representation of the corresponding user.
Different from the previous embodiment, keeping the initial test question pair representation of each first sample test question pair unchanged across the training rounds reduces the complexity of training the test question recommendation model, and using the final user representation from the previous round as the initial user representation of the corresponding user allows the accuracy of the final user representation to improve continuously over the rounds.
In some disclosed embodiments, the processor 72 is configured to extract a first sample semantic representation of the sample original question and a second sample semantic representation of the sample choice question; the first sample semantic representation contains semantic information of several test question attributes of the sample original question, and the second sample semantic representation contains semantic information of several test question attributes of the sample choice question. The processor 72 is configured to fuse the first sample semantic representation and the second sample semantic representation to obtain the initial test question pair representation.
Different from the foregoing embodiment, the semantic information of the sample original question and of the sample choice question at the level of the several test question attributes can be fused into the initial test question pair representation, which helps improve the accuracy of the initial test question pair representation.
In some disclosed embodiments, the semantic information of the plurality of test question attributes is extracted by using attribute semantic extraction networks corresponding to the plurality of test question attributes, respectively, the attribute semantic extraction networks corresponding to the plurality of test question attributes are included in the relationship prediction model, the relationship prediction model further includes an interactive semantic extraction network and a test question relationship prediction network, and the processor 72 is configured to obtain a plurality of sets of second sample test question pairs and obtain a third sample test question labeled with an actual test question attribute; the second sample test question pair comprises a first sample test question and a second sample test question, and the second sample test question pair is marked with an actual test question relation between the first sample test question and the second sample test question; the processor 72 is configured to train the attribute semantic extraction network for a plurality of times by using the third sample test question until a preset condition is met; the processor 72 is configured to extract a first sample question representation of the first sample question and a second sample question representation of the second sample question using the attribute semantic extraction network; and, the processor 72 is configured to extract a sample interactive semantic representation of the second sample question pair using an interactive semantic extraction network; wherein the sample interaction semantic representation comprises: semantic relevance between sample words in the first sample test question and sample words in the second sample test question; the processor 72 is configured to input the first sample test question representation, the second sample test question representation, and the sample interactive semantic representation into the test question relationship prediction network to obtain a sample test question relationship between the first sample test question and the second sample test question; the processor 72 is configured to adjust network parameters of the interactive semantic extraction network and the test question relationship prediction network based on a difference between the actual test question relationship and the sample test question relationship.
Different from the foregoing embodiment, the semantic information of the several test question attributes is extracted by attribute semantic extraction networks corresponding to those attributes, which belong to the relation prediction model together with an interactive semantic extraction network and a test question relation prediction network. Several groups of second sample test question pairs, each comprising a first sample test question and a second sample test question and labeled with the actual test question relation between them, are obtained together with third sample test questions labeled with actual test question attributes. The attribute semantic extraction networks are trained for several rounds with the third sample test questions until a preset condition is satisfied, then used to extract the first sample test question representation and the second sample test question representation, while the interactive semantic extraction network extracts the sample interactive semantic representation of the second sample test question pair, which includes the semantic relatedness between sample words in the first sample test question and sample words in the second sample test question. These representations are input into the test question relation prediction network to obtain the sample test question relation, and the network parameters of the interactive semantic extraction network and the test question relation prediction network are adjusted based on the difference between the actual and sample test question relations. Training of the relation prediction model can therefore proceed in stages: the first stage relies on a large number of attribute-labeled third sample test questions to train the attribute semantic extraction networks, greatly improving their performance, so that even if relation-labeled second sample test question pairs are scarce in the second stage, the overall effect of the relation prediction model can still be ensured as far as possible, greatly reducing the dependence of model training on scarce labeled samples.
In some disclosed embodiments, the semantic relatedness includes at least one of: the semantic relatedness between the sample word representation of each sample word in the first sample test question and the sample global representation of the second sample test question, and the semantic relatedness between the sample word representation of each sample word in the first sample test question and the sample local representation of the second sample test question; and/or, the semantic relatedness includes at least one of: the semantic relatedness between the sample word representation of each sample word in the second sample test question and the sample global representation of the first sample test question, and the semantic relatedness between the sample word representation of each sample word in the second sample test question and the sample local representation of the first sample test question.
Different from the foregoing embodiment, modeling the semantic relatedness from both dimensions (each sample word in the first sample test question and each sample word in the second sample test question) and from both the global and local semantic angles greatly enriches the sample interactive semantic representation, which helps improve the accuracy of test question relation prediction.
In some disclosed embodiments, the processor 72 is configured to use the sample word representation of the sample word at the end of the sample question as a sample global representation; the processor 72 is configured to use an average of the sample word representations of all sample words in the sample question as a sample global representation; and under the condition that the sample global representation is the sample global representation of the first sample test question, the sample test question is the first sample test question, and under the condition that the sample global representation is the sample global representation of the second sample test question, the sample test question is the second sample test question.
In some disclosed embodiments, the processor 72 is configured to determine the sample target word closest in position to the sample reference word in the sample test question and use its sample word representation as the sample local representation, or to determine the sample target word with the largest semantic relatedness to the sample reference word in the sample test question and use its sample word representation as the sample local representation. When the sample local representation is that of the first sample test question, the sample test question is the first sample test question, the sample target word is a sample word in the first sample test question, and the reference sample word is a sample word in the second sample test question; when the sample local representation is that of the second sample test question, the sample test question is the second sample test question, the sample target word is a sample word in the second sample test question, and the reference sample word is a sample word in the first sample test question.
In some disclosed embodiments, the processor 72 is configured to obtain the original question and several candidate test questions of a target user, and to obtain the final user representation of the target user, where the final user representation is obtained using the steps in the above embodiments of the training method of the test question recommendation model. The processor 72 is configured to take each candidate test question together with the original question as a group of test question pairs and obtain the test question pair representation of each group, to obtain the predicted adaptation degree between the target user and each candidate test question using the final user representation and each test question pair representation, and to recommend candidate test questions to the target user based on the predicted adaptation degree.
Different from the foregoing embodiment, the original question and several candidate test questions of the target user are obtained, and the final user representation of the target user is obtained with the training method of the test question recommendation model described in any of the foregoing items, which helps improve the accuracy of the final user representation. Each candidate test question is then paired with the original question as a group of test question pairs, the test question pair representation of each group is obtained, the predicted adaptation degree between the target user and each candidate test question is obtained using the final user representation and each test question pair representation, and candidate test questions are recommended to the target user based on the predicted adaptation degree, which helps improve the accuracy of test question recommendation.
In some disclosed embodiments, the processor 72 is configured to predict each group of test question pairs respectively to obtain a predicted test question relationship between the original questions and the candidate test questions in the test question pairs, and the processor 72 is configured to recommend the candidate test questions to the target user and output the predicted test question relationship between the candidate test questions and the original questions according to a descending order of the predicted adaptation degree.
Different from the foregoing embodiment, each group of test question pairs is predicted to obtain the predicted test question relation between the original question and the candidate test question in the pair, and candidate test questions are recommended to the target user in descending order of predicted adaptation degree while the predicted test question relation between each recommended candidate test question and the original question is output. Outputting this relation alongside the recommendation strengthens the interpretability of test question recommendation and thus improves its reliability.
Referring to fig. 8, fig. 8 is a schematic frame diagram of an embodiment of a storage device 80 of the present application. The storage device 80 stores program instructions 81 executable by a processor, and the program instructions 81 are used to implement the steps in any of the above embodiments of the training method of the test question recommendation model, or the steps in any of the above embodiments of the test question recommendation method.
According to the scheme, the accuracy of test question recommendation can be improved.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (13)

1. A training method of a test question recommendation model is characterized by comprising the following steps:
acquiring a first sample test question pair belonging to a user; each group of the first sample question pairs comprises a sample question and a sample choice question, wherein the sample choice question is a test question selected by the user in a plurality of recommended questions of the sample question;
acquiring an initial user representation of the user, and acquiring an initial test question pair representation of the first sample test question pair;
inputting the initial user representation and the initial test question pair representation into a test question recommendation model to obtain a final user representation of the user and a final test question pair representation of the first test question pair; wherein the end-user representation comprises: the semantic information of the user and the semantic information of the first sample question pair belonging to the user;
predicting the sample adaptation degree of the user and the sample choice in each group of the first sample question pairs by using the final user representation and the final test question pair representation;
and adjusting the network parameters of the test question recommendation model based on the sample adaptation degree.
2. The method of claim 1, wherein the test question recommendation model comprises a graph neural network; inputting the initial user representation and the initial test question pair representation into a test question recommendation model to obtain a final user representation of the user and a final test question pair representation of the first sample test question pair, wherein the method comprises the following steps:
constructing a user test question interaction graph by using the initial user representation and the initial test question pair representation;
and inputting the user test question interaction diagram into the diagram neural network to obtain the final user representation and the final test question pair representation.
3. The method of claim 1, wherein there are a plurality of users, and the adjusting the network parameters of the test question recommendation model based on the sample adaptation degree comprises:
acquiring actual adaptation degrees of the plurality of users to the sample choices in each group of the first sample question pairs respectively; wherein the actual adaptation degree of the user to the sample choice in the first sample question pair belonging to the user is a first numerical value, the actual adaptation degree of the user to the sample choice in the first sample question pair not belonging to the user is a second numerical value, and the first numerical value is larger than the second numerical value;
and adjusting the network parameters of the test question recommendation model by using the difference between the sample adaptation degree and the actual adaptation degree.
4. The method of claim 1, wherein the test question recommendation model is obtained through a plurality of training processes, and an initial test question pair representation of the first test question pair remains unchanged during the plurality of training processes of the test question recommendation model;
and/or the test question recommendation model is obtained through a plurality of times of training; the obtaining an initial user representation of the user comprises:
and taking the final user representation obtained by the last training as an initial user representation corresponding to the user.
5. The method of claim 1, wherein obtaining an initial test question pair representation of the first sample test question pair comprises:
extracting a first sample semantic representation of the sample original question, and extracting a second sample semantic representation of the sample selected question; wherein the first sample semantic representation comprises semantic information of a plurality of test question attributes of the sample original question, and the second sample semantic representation comprises semantic information of a plurality of test question attributes of the sample selected question;
and fusing the first sample semantic representation and the second sample semantic representation to obtain the initial test question pair representation.
6. The method according to claim 5, wherein the semantic information of the plurality of test question attributes is extracted by using attribute semantic extraction networks corresponding to the plurality of test question attributes, the attribute semantic extraction networks corresponding to the plurality of test question attributes are included in a relationship prediction model, the relationship prediction model further comprises an interactive semantic extraction network and a test question relationship prediction network, and the training of the relationship prediction model comprises:
acquiring a plurality of groups of second sample test question pairs, and acquiring third sample test questions marked with actual test question attributes; the second sample test question pair comprises a first sample test question and a second sample test question, and the second sample test question pair is marked with an actual test question relation between the first sample test question and the second sample test question;
training the attribute semantic extraction network for a plurality of times by using the third sample test question until a preset condition is met;
extracting a first sample test question representation of the first sample test question and a second sample test question representation of the second sample test question by using the attribute semantic extraction network; and,
extracting sample interactive semantic representations of the second sample test question pairs by using the interactive semantic extraction network; wherein the sample interaction semantic representation comprises: semantic relatedness between sample words in the first sample test question and sample words in the second sample test question;
inputting the first sample test question representation, the second sample test question representation and the sample interactive semantic representation into the test question relation prediction network to obtain a sample test question relation between the first sample test question and the second sample test question;
and adjusting network parameters of the interactive semantic extraction network and the test question relation prediction network based on the difference between the actual test question relation and the sample test question relation.
7. The method of claim 6, wherein the semantic relatedness comprises at least one of: the semantic relatedness between the sample word representation of each sample word in the first sample test question and the sample global representation of the second sample test question, and the semantic relatedness between the sample word representation of each sample word in the first sample test question and the sample local representation of the second sample test question;
and/or, the semantic relatedness comprises at least one of: the semantic relatedness between the sample word representation of each sample word in the second sample test question and the sample global representation of the first sample test question, and the semantic relatedness between the sample word representation of each sample word in the second sample test question and the sample local representation of the first sample test question.
8. The method of claim 7, wherein the step of obtaining the sample global representation comprises at least one of:
taking the sample word representation of the sample word positioned at the tail end of the sample test question as the sample global representation;
taking the average value of the sample word representations of all sample words in the sample test question as the sample global representation;
wherein the sample question is the first sample question if the sample global representation is a sample global representation of the first sample question, and the sample question is the second sample question if the sample global representation is a sample global representation of the second sample question.
9. The method according to claim 7, wherein the step of obtaining the sample local representation comprises at least one of:
determining a sample target word whose position is closest to that of a sample reference word in the sample test question, and taking the sample word representation of the sample target word as the sample local representation;
determining a sample target word in the sample test question whose semantic relatedness with a sample reference word is the largest, and taking the sample word representation of the sample target word as the sample local representation;
wherein, in a case that the sample local representation is the sample local representation of the first sample test question, the sample test question is the first sample test question, the sample target word is a sample word in the first sample test question, and the sample reference word is a sample word in the second sample test question; in a case that the sample local representation is the sample local representation of the second sample test question, the sample test question is the second sample test question, the sample target word is a sample word in the second sample test question, and the sample reference word is a sample word in the first sample test question.
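A sketch of the two variants of claim 9, selecting the target word either by position or by largest relatedness to the reference word; the cosine measure and all names are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def local_representation(target_word_reprs, ref_index=0, ref_repr=None):
    """Pick one target word of the other question as the sample local representation.

    target_word_reprs: (T, d) word representations of the target question.
    ref_index: position of the reference word (position-based variant).
    ref_repr: (d,) representation of the reference word (relatedness-based variant).
    """
    if ref_repr is not None:   # variant 2: target word most related to the reference word
        sims = F.cosine_similarity(target_word_reprs, ref_repr.unsqueeze(0), dim=-1)
        return target_word_reprs[sims.argmax()]
    # variant 1: target word whose position is closest to the reference word's position
    positions = torch.arange(target_word_reprs.size(0))
    return target_word_reprs[(positions - ref_index).abs().argmin()]
```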
10. A test question recommendation method, characterized by comprising:
obtaining an original question of a target user and a plurality of candidate test questions, and obtaining a final user representation of the target user; wherein the final user representation is obtained by the training method of the test question recommendation model according to any one of claims 1 to 9;
respectively taking each candidate test question and the original question as a group of test question pairs, and obtaining a test question pair representation of each group of test question pairs;
obtaining the predicted adaptation degree between the target user and each candidate test question by using the final user representation and each test question pair representation;
and recommending candidate test questions to the target user based on the predicted adaptation degrees.
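For intuition only, the following sketch mirrors the inference flow of claim 10: score each (original question, candidate) pair representation against the final user representation, then keep the best-adapted candidates. The dot-product-plus-sigmoid scorer, the threshold and all names are stand-in assumptions, not the scoring head the trained recommendation model actually uses.

```python
import torch

def predict_adaptation_degrees(user_repr, pair_reprs):
    """user_repr: (d,) final user representation; pair_reprs: (N, d), one vector per
    (original question, candidate test question) pair."""
    return torch.sigmoid(pair_reprs @ user_repr)          # (N,) degrees in (0, 1)

def recommend(candidates, degrees, top_k=5, threshold=0.5):
    """Keep the candidates best adapted to the target user."""
    order = torch.argsort(degrees, descending=True).tolist()
    return [(candidates[i], float(degrees[i])) for i in order[:top_k]
            if float(degrees[i]) >= threshold]
```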
11. The method of claim 10, wherein before the recommending of the candidate test questions to the target user based on the predicted adaptation degrees, the method further comprises:
performing prediction on each group of test question pairs respectively to obtain a predicted test question relation between the original question and the candidate test question in the test question pair;
the recommending of the candidate test questions to the target user based on the predicted adaptation degrees comprises:
and recommending the candidate test questions to the target user in descending order of the predicted adaptation degrees and outputting the predicted test question relation between each candidate test question and the original question.
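A small, self-contained sketch of the ordered output described in claim 11; the relation labels and all names are hypothetical:

```python
def recommend_with_relations(candidates, degrees, relations, top_k=5):
    """Return candidates sorted by predicted adaptation degree (largest first),
    each paired with its predicted relation to the original question."""
    ranked = sorted(zip(candidates, degrees, relations), key=lambda x: x[1], reverse=True)
    return [{"question": q, "adaptation": round(float(d), 3), "relation": r}
            for q, d, r in ranked[:top_k]]

# Example usage with toy values:
print(recommend_with_relations(
    ["Q17", "Q42", "Q08"], [0.61, 0.88, 0.35],
    ["variant", "similar", "prerequisite"], top_k=2))
# -> Q42 first, then Q17, each with its predicted relation
```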
12. An electronic device, comprising a memory and a processor coupled to each other, wherein the memory stores program instructions, and the processor is configured to execute the program instructions to implement the training method of the test question recommendation model according to any one of claims 1 to 9, or to implement the test question recommendation method according to any one of claims 10 to 11.
13. A storage device storing program instructions executable by a processor to implement the training method of the test question recommendation model according to any one of claims 1 to 9 or the test question recommendation method according to any one of claims 10 to 11.
CN202011582885.2A 2020-12-28 2020-12-28 Test question recommendation and related model training method, electronic equipment and storage device Active CN112686052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011582885.2A CN112686052B (en) 2020-12-28 2020-12-28 Test question recommendation and related model training method, electronic equipment and storage device

Publications (2)

Publication Number Publication Date
CN112686052A true CN112686052A (en) 2021-04-20
CN112686052B CN112686052B (en) 2023-12-01

Family

ID=75453560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011582885.2A Active CN112686052B (en) 2020-12-28 2020-12-28 Test question recommendation and related model training method, electronic equipment and storage device

Country Status (1)

Country Link
CN (1) CN112686052B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114037571A (en) * 2021-10-27 2022-02-11 南京谦萃智能科技服务有限公司 Test question expansion method and related device, electronic equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016179938A1 (en) * 2015-05-14 2016-11-17 百度在线网络技术(北京)有限公司 Method and device for question recommendation
CN108182275A (en) * 2018-01-24 2018-06-19 上海互教教育科技有限公司 A kind of mathematics variant training topic supplying system and correlating method
WO2020237869A1 (en) * 2019-05-31 2020-12-03 平安科技(深圳)有限公司 Question intention recognition method and apparatus, computer device, and storage medium
CN112069295A (en) * 2020-09-18 2020-12-11 科大讯飞股份有限公司 Similar question recommendation method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
于冰; 袁贝贝; 庄可馨; 潜静; 倪晓燕; 章美仁: "A Learning Recommendation System Applying Convolutional Neural Networks" (in Chinese), 福建电脑 (Fujian Computer), no. 04 *
何彬; 李心宇; 陈蓓蕾; 夏盟; 曾致中: "A Test Question Knowledge Point Annotation Model Based on Deep Mining of Attribute Relations" (in Chinese), 南京信息工程大学学报(自然科学版) (Journal of Nanjing University of Information Science & Technology, Natural Science Edition), no. 06 *

Also Published As

Publication number Publication date
CN112686052B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN109522553B (en) Named entity identification method and device
CN111241237B (en) Intelligent question-answer data processing method and device based on operation and maintenance service
CN112131366A (en) Method, device and storage medium for training text classification model and text classification
CN107818164A (en) A kind of intelligent answer method and its system
CN114565104A (en) Language model pre-training method, result recommendation method and related device
CN113239169A (en) Artificial intelligence-based answer generation method, device, equipment and storage medium
CN106126619A (en) A kind of video retrieval method based on video content and system
CN112100332A (en) Word embedding expression learning method and device and text recall method and device
CN112287069B (en) Information retrieval method and device based on voice semantics and computer equipment
WO2021082086A1 (en) Machine reading method, system, device, and storage medium
Das et al. Sentence embedding models for similarity detection of software requirements
CN112581327B (en) Knowledge graph-based law recommendation method and device and electronic equipment
CN110968725B (en) Image content description information generation method, electronic device and storage medium
CN114297399B (en) Knowledge graph generation method, system, storage medium and electronic device
CN113761887A (en) Matching method and device based on text processing, computer equipment and storage medium
CN113392179A (en) Text labeling method and device, electronic equipment and storage medium
CN114282528A (en) Keyword extraction method, device, equipment and storage medium
CN117034921A (en) Prompt learning training method, device and medium based on user data
CN116541520A (en) Emotion analysis method and device, electronic equipment and storage medium
CN116561272A (en) Open domain visual language question answering method, device, electronic equipment and storage medium
CN115203388A (en) Machine reading understanding method and device, computer equipment and storage medium
CN110969005A (en) Method and device for determining similarity between entity corpora
CN112686052B (en) Test question recommendation and related model training method, electronic equipment and storage device
CN113761151A (en) Synonym mining, question answering method, apparatus, computer equipment and storage medium
Xu et al. A survey of machine reading comprehension methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant