CN119719321A - Query statement generation method, device, equipment and storage medium - Google Patents
- Publication number
- CN119719321A (application number CN202411767948.XA)
- Authority
- CN
- China
- Prior art keywords
- query
- initial
- prompt information
- query statement
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/31—Indexing; Data structures therefor; Storage structures
- G06F16/316—Indexing structures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/041—Abduction
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Human Computer Interaction (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application belongs to the field of artificial intelligence and relates to a method for generating query statements, comprising the steps of: receiving query information input by a user and extracting query parameters from the query information; screening context prompt information corresponding to the query parameters from a preset database; processing the query parameters and the context prompt information with a pre-trained large language model to generate an initial query statement; parsing the initial query statement to obtain the initial table names and initial column names corresponding to it; performing schema linking between those names and the preset database to generate schema-linked prompt information; and generating a target query statement with the pre-trained large language model according to the schema-linked prompt information. The application also relates to blockchain technology: the query statement generation results and the like can be stored in a blockchain. The application can improve the accuracy of query statement generation.
Description
Technical Field
The application relates to the field of artificial intelligence, and in particular to a method, device, equipment, and storage medium for generating query statements.
Background
With the rapid development of artificial intelligence, large language models (LLMs) have shown strong capabilities in natural language processing. In particular, the technology of converting natural language into Structured Query Language (Text2SQL) is widely applied in database question answering, information retrieval, and related fields, and has achieved remarkable progress.
At present, natural language input by a user can be converted into accurate SQL statements through Text2SQL conversion, and the resulting SQL statements can be used for efficient querying of a database.
However, although Text2SQL conversion has achieved substantial results, it still faces many challenges in practice. In particular, when processing complex database information or complicated user intentions, current Text2SQL techniques struggle to accurately understand the user's natural language input, resulting in low accuracy of the generated query statements.
Disclosure of Invention
The embodiments of the application aim to provide a method, device, equipment, and storage medium for generating query statements, with the main purpose of improving the accuracy of query statement generation.
In order to solve the above technical problems, an embodiment of the present application provides a method for generating a query statement, which adopts the following technical scheme:
receiving query information input by a user, and extracting query parameters of the query information;
according to the query parameters, the context prompt information corresponding to the query parameters is screened out from a preset database;
processing the query parameters and the context prompt information using a pre-trained large language model to generate an initial query statement;
parsing the initial query statement to obtain initial table names and initial column names corresponding to the initial query statement, and performing schema linking between the initial table names and initial column names and the preset database to generate schema-linked prompt information;
And generating a target query statement using the pre-trained large language model according to the schema-linked prompt information.
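The five claimed steps can be sketched end to end. This is a minimal illustration only: the helper functions and the `llm` callable are hypothetical stand-ins (the patent does not fix a prompt format or model interface), and the real extraction, screening, and linking steps are detailed in the embodiments below.

```python
def extract_parameters(query_info):
    # Toy parameter extraction: tokenize and deduplicate, preserving order.
    seen, params = set(), []
    for word in query_info.split():
        if word not in seen:
            seen.add(word)
            params.append(word)
    return params

def screen_context(params, schema):
    # Keep only the tables whose names appear among the query parameters.
    return {table: cols for table, cols in schema.items() if table in params}

def schema_link(initial_sql, schema):
    # Confirm which referenced tables actually exist in the preset database.
    present = [table for table in schema if table in initial_sql]
    return f"tables confirmed in schema: {present}; draft: {initial_sql}"

def generate_query(query_info, llm, schema):
    """Parameters -> context -> initial SQL -> schema linking -> target SQL."""
    params = extract_parameters(query_info)
    context = screen_context(params, schema)
    initial_sql = llm(f"context: {context}; params: {params}")
    linked_prompt = schema_link(initial_sql, schema)
    return llm(linked_prompt)
```

In practice each stage would be far richer; the sketch only fixes the data flow between the five steps.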
Further, after receiving the query information input by the user, the method further comprises:
Identifying a data type of the query information;
if the data type is identified to be in a non-text form, acquiring a text conversion method of the non-text form;
and converting the query information into a text form according to the text conversion method.
Further, extracting the query parameters of the query information includes:
extracting keywords from the query information using a preset keyword extraction method to obtain a plurality of keywords of the query information;
Performing a deduplication operation on the keywords to obtain a plurality of merged keywords;
and converting the plurality of merged keywords according to a preset format to obtain the query parameters of the query information.
Further, after the contextual prompt information corresponding to the query parameter is screened out from the preset database, the method further includes:
Sensitive data identification is carried out on the context prompt information;
if sensitive data is identified in the context prompt information, a preset desensitization method is used to desensitize the sensitive data in the context prompt information, obtaining desensitized context prompt information.
Further, the processing of the query parameters and the context prompt information using a pre-trained large language model to generate an initial query statement includes:
acquiring a fine-tuning dataset according to the context prompt information;
Performing a fine-tuning operation on the pre-trained large language model using the fine-tuning dataset to obtain a fine-tuned large language model;
and processing the query parameters through the fine-tuned large language model according to the context prompt information to obtain the initial query statement.
Further, after the target query statement is generated using the pre-trained large language model according to the schema-linked prompt information, the method further comprises:
executing the target query statement;
if the target query statement is successfully executed, a target answer corresponding to the target query statement is obtained, and the target answer is fed back to the user;
If the target query statement fails to be executed, detecting an error reason through the pre-trained large language model, and correcting according to the error reason to obtain a corrected target query statement;
And re-executing the corrected target query statement until it is successfully executed, obtaining the target answer, and feeding the target answer back to the user.
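The execute-detect-correct loop just claimed can be sketched with SQLite; the `repair` callable stands in for the pre-trained large language model's error-driven correction, and its signature is an assumption, not part of the patent text.

```python
import sqlite3

def execute_with_repair(conn, sql, repair, max_attempts=3):
    """Execute the target query; on failure, ask the model stub to repair it."""
    for _ in range(max_attempts):
        try:
            # Successful execution yields the target answer to feed back.
            return conn.execute(sql).fetchall()
        except sqlite3.Error as exc:
            # Pass the error reason back to obtain a corrected statement.
            sql = repair(sql, str(exc))
    raise RuntimeError("query could not be repaired within the attempt limit")
```

A query against a misspelled table raises `sqlite3.OperationalError` (a subclass of `sqlite3.Error`), which triggers one repair round before the corrected statement succeeds.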
Further, after the target answer corresponding to the target query statement is obtained upon successful execution and fed back to the user, the method further includes:
periodically acquiring the data whose access count within a preset time period exceeds a preset threshold, to obtain a high-frequency dataset;
establishing an index on key fields of the high-frequency dataset using a preset indexing technique to obtain index fields;
Storing the index fields in a search engine, and storing the high-frequency dataset in a preset cache;
And, when the target query statement is received again, querying the target answer from the preset cache according to the index fields.
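A minimal sketch of this hot-data path, assuming a simple in-process dictionary in place of the search engine and preset cache (the class name and threshold are illustrative):

```python
from collections import Counter

class HotDataCache:
    """Count accesses and promote frequently requested answers to a cache."""

    def __init__(self, threshold):
        self.threshold = threshold  # the preset access-count threshold
        self.hits = Counter()
        self.cache = {}

    def record_access(self, key, answer):
        self.hits[key] += 1
        if self.hits[key] > self.threshold:
            self.cache[key] = answer  # promote to the preset cache

    def lookup(self, key):
        # None signals a cache miss; the caller falls back to the database.
        return self.cache.get(key)
```

A real deployment would key the cache by the index fields built over the high-frequency dataset rather than by raw strings.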
In order to solve the above technical problems, the embodiment of the present application further provides a query statement generating device, which adopts the following technical scheme:
The extraction module is used for receiving query information input by a user and extracting query parameters of the query information;
The screening module is used for screening out the context prompt information corresponding to the query parameters from a preset database according to the query parameters;
The first generation module is used for processing the query parameters and the context prompt information by utilizing a pre-trained large language model to generate an initial query statement;
The schema linking module is used for parsing the initial query statement to obtain the initial table names and initial column names corresponding to the initial query statement, and for performing schema linking between those names and the preset database to generate schema-linked prompt information;
and the second generation module is used for generating a target query statement using the pre-trained large language model according to the schema-linked prompt information.
In order to solve the technical problem, the embodiment of the application also provides computer equipment, which comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the query statement generation method.
To solve the above technical problem, an embodiment of the present application further provides a computer readable storage medium storing a computer program, where the computer program implements the method for generating a query statement as described above when executed by a processor.
Compared with the prior art, the embodiment of the application has the following main beneficial effects:
By receiving the query information input by the user and extracting its query parameters, the query intention can be clarified and a basis provided for subsequent query statement generation, improving the accuracy of the generated query statements while reducing the search resources and information involved, and thereby improving generation efficiency;
According to the query parameters, screening the corresponding context prompt information from a preset database improves the efficiency and accuracy of subsequent query statement generation;
By parsing the query parameters according to the context prompt information to generate the initial query statement, the model can produce a more accurate initial query statement, improving the accuracy of initial query statement generation;
By inputting the schema-linked prompt information into the pre-trained large language model, the prompt information can be automatically analyzed and the corresponding query statement generated. This reduces the need to write query statements by hand and improves working efficiency; generating query statements with a pre-trained large language model also markedly reduces the error rate of manually written queries and improves the accuracy of the generated target query statement.
Drawings
In order to more clearly illustrate the solution of the present application, a brief description will be given below of the drawings required for the description of the embodiments of the present application, it being apparent that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained from these drawings without the exercise of inventive effort for a person of ordinary skill in the art.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method of generating a query statement in accordance with the present application;
FIG. 3 is a schematic diagram of one embodiment of a query statement generation device in accordance with the present application;
FIG. 4 is a schematic structural diagram of one embodiment of the computer device in accordance with the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms used in the description are for describing particular embodiments only and are not intended to limit the application. The terms "comprising" and "having", and any variations thereof, in the description, the claims, and the above description of the drawings are intended to cover non-exclusive inclusion. The terms "first", "second", and the like in the description, the claims, or the figures are used to distinguish between different objects and not necessarily to describe a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the person skilled in the art better understand the solution of the present application, the technical solution of the embodiment of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include a terminal device 101, a network 102, and a server 103, where the terminal device 101 may be a notebook 1011, a tablet 1012, or a cell phone 1013. Network 102 is the medium used to provide communication links between terminal device 101 and server 103. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables.
A user may interact with the server 103 via the network 102 using the terminal device 101 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, and the like, may be installed on the terminal device 101.
In addition to the notebook 1011, tablet 1012, or mobile phone 1013, the terminal device 101 may be any of various electronic devices having a display screen and supporting web browsing, such as an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, or the like.
The server 103 may be a server providing various services, such as a background server providing support for pages displayed on the terminal device 101.
It should be noted that the method for generating query statements provided by the embodiments of the present application is generally executed by the server/terminal device, and accordingly the device for generating query statements is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow chart of one embodiment of a method of generating a query statement in accordance with the present application is shown. The order of the steps in the flowchart may be changed, and some steps may be omitted, according to various needs. The method for generating query statements provided by the embodiments of the application can be applied to any scenario in which query statements need to be generated, and to products serving those scenarios. The method comprises the following steps:
step S201, receiving query information input by a user, and extracting query parameters of the query information.
In this embodiment, the electronic device (for example, the server/terminal device shown in FIG. 1) on which the method for generating query statements runs may receive query information input by the user through a wired or wireless connection. The wireless connection may include, but is not limited to, 3G/4G/5G, Wi-Fi, Bluetooth, WiMAX, ZigBee, UWB (ultra wideband), and other wireless connections now known or later developed. The execution subject of the present application may be a large language model, or simply the model. The query information is typically in the same format as the query training data used to train the large language model, although the training data may additionally contain noise. In the business scenario of generating query statements, the query information may be input through the user's handheld terminal or the like.
In this embodiment, query information input by a user is received, a preprocessing operation is performed on the query information to obtain preprocessed query information, keyword extraction is performed on the preprocessed query information to obtain a plurality of keywords, and the keywords are converted according to a predetermined format to obtain query parameters of the query information.
In this embodiment, the preprocessing operation includes, but is not limited to, text cleaning, word segmentation, and part-of-speech tagging. To improve user experience and the diversity of query information, the query information may exist in text, voice, or image form. When query information input by a user is received, its data type must be identified and the information converted into text form accordingly; keyword extraction is then performed on the text-form query information to obtain the corresponding query parameters.
In one embodiment, after receiving the query information input by the user, the method further comprises:
Identifying a data type of the query information;
if the data type is identified to be in a non-text form, acquiring a text conversion method of the non-text form;
and converting the query information into a text form according to the text conversion method.
In this embodiment, after receiving the query information input by the user, the data type is determined as follows: if the query information is of string type, it is directly determined to be in text form and no conversion is performed. If the query information is a file, its extension is checked: extensions such as .jpg or .png indicate an image, while extensions such as .wav indicate an audio file. When the query information is an image, its text is extracted by OCR technology to obtain text-form query information; when it is audio, it is converted into text by speech recognition technology (such as a Speech-to-Text API).
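The dispatch just described can be sketched as follows; the OCR and speech-recognition calls are stubbed, since the patent names only the technique families rather than concrete tools:

```python
import os

IMAGE_EXT = {".jpg", ".png"}
AUDIO_EXT = {".wav"}

def to_text(query, ocr=lambda p: f"<ocr:{p}>", asr=lambda p: f"<asr:{p}>"):
    """Route query input to a text converter based on its file extension."""
    ext = os.path.splitext(query)[1].lower()
    if ext in IMAGE_EXT:
        return ocr(query)   # image file: extract text via OCR
    if ext in AUDIO_EXT:
        return asr(query)   # audio file: speech-to-text conversion
    return query            # already plain text: no conversion needed
```

A production version would inspect file contents (or MIME type) rather than trusting the extension alone.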
In this embodiment, by supporting the query information input in various forms, the experience of the user can be improved, the flexibility of the query is improved, the query range is enlarged, and the accuracy of generating the query statement is improved.
In one embodiment, the extracting the query parameters of the query information includes:
extracting keywords of the query information by a preset keyword extraction method to obtain a plurality of keywords of the query information;
Performing duplication removal operation on the keywords to obtain a plurality of combined keywords;
and converting the plurality of merging keywords according to a preset format to obtain the query parameters of the query information.
In this embodiment, the preset keyword extraction method includes, but is not limited to: the TF-IDF method, which computes the term frequency (TF) and inverse document frequency (IDF) of each word in the text and selects keywords according to the combined TF-IDF value; topic-model methods such as Latent Dirichlet Allocation (LDA), which identify the topics of the query information and extract topic-related words as keywords; and neural-network methods (such as CNNs), which encode the query information and output a probability distribution over candidate keywords through a trained model.
After the keywords are extracted, repeated words may exist among them, so deduplication is performed to improve the accuracy of query parameter extraction. Deduplication may traverse all keywords against a predefined synonym dictionary to find semantically similar words, or compute the semantic similarity of every pair of keywords with a semantic-similarity method. Keywords that are semantically similar, or whose pairwise similarity exceeds a preset threshold, are merged to obtain the merged keywords, which are then converted according to a preset format to finally obtain the query parameters of the query information.
In this embodiment, extracting keywords from the query information allows its core content to be understood more quickly, which improves the efficiency of subsequent query statement generation, while merging the keywords improves the accuracy of that generation.
In one embodiment, in a vehicle-insurance claim scenario, the query information "query insurance policy number: 123456" submitted by a user through a mobile phone is received. Keyword extraction yields "insurance policy number: 123456", "insurance policy", "claim settlement record", and the like. The keywords are checked for semantic duplicates; if none exist, they are converted according to the predefined format to obtain the query parameters: customer_id=123456 AND insurance_type=insurance_policy AND query_type=claim_settlement_record.
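A toy version of this extract-merge-convert flow, with simple tokenisation standing in for the TF-IDF/LDA/neural extractors, and `difflib` string similarity (0.8 threshold assumed) standing in for semantic-similarity merging:

```python
import re
from difflib import SequenceMatcher

def extract_keywords(text, stopwords=frozenset({"the", "a", "for"})):
    """Tokenize, drop stopwords, and merge near-duplicate keywords."""
    words = [w for w in re.findall(r"[\w:]+", text.lower()) if w not in stopwords]
    merged = []
    for w in words:
        # keep w only if no already-kept keyword is >0.8 similar to it
        if not any(SequenceMatcher(None, w, m).ratio() > 0.8 for m in merged):
            merged.append(w)
    return merged

def to_params(keywords):
    # convert merged "key:value" keywords into a query-parameter mapping
    params = {}
    for kw in keywords:
        if ":" in kw:
            key, value = kw.split(":", 1)
            params[key] = value
    return params
```

Real semantic merging would use embeddings or a synonym dictionary as the embodiment describes; string similarity is only a stand-in here.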
In this embodiment, by receiving the query information input by the user and extracting its query parameters, the query intention can be clarified, providing a basis for the subsequently generated query statements; this improves the accuracy of subsequent query statement generation, reduces the search resources and information involved, and thereby improves the efficiency of subsequent query statement generation.
step S202, according to the query parameters, the context prompt information corresponding to the query parameters is screened out from a preset database.
In this embodiment, the preset database stores a plurality of different data tables, such as a claim record table and a customer ID table, together with example data for each table. According to the query parameters, the metadata of the corresponding data table is acquired from the preset database; for example, for the query parameter "claim record", the metadata of the claim record table is acquired, including the table name, column names, and so on. After the metadata is acquired, a table-structure description text is constructed by string concatenation, for example: claims(id INT, customer_id INT, status VARCHAR(20), amount DECIMAL(10,2)), where claims is the table name; id INT is an integer primary key; customer_id INT is the integer customer ID (such as the policy number); status VARCHAR(20) is a variable-length string field limited to 20 characters (e.g., the claim status: approved, pending, rejected); and amount DECIMAL(10,2) is a fixed-point field of at most 10 digits, the last two after the decimal point, used to record the claim amount. A preset number of rows are then randomly sampled from the claim record table as example data, their cell values are extracted, and the values are filled into the table-structure description text to obtain the context prompt information; for example, the finally generated context prompt information is: claims | id: 1, customer_id: 123456, status: 'approved', amount: 1000.00.
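The string-concatenation construction of the table-structure description text can be sketched directly (the column names and types follow the claims example above):

```python
def describe_table(name, columns):
    """Build the table-structure description text by string concatenation."""
    cols = ", ".join(f"{col} {ctype}" for col, ctype in columns)
    return f"{name}({cols})"

def fill_examples(description, row):
    # append one sampled row's cell values to the description as context
    cells = ", ".join(f"{k}: {v!r}" for k, v in row.items())
    return f"{description} | example row -> {cells}"
```

The sampled example rows would come from a random `SELECT ... LIMIT n` over the table; here they are passed in as a plain dict.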
In one embodiment, after the contextual prompt information corresponding to the query parameter is screened from the preset database, the method further includes:
Sensitive data identification is carried out on the context prompt information;
if the context prompt information is identified to have the sensitive data, a preset desensitization method is adopted to desensitize the sensitive data in the context prompt information, so that the desensitized context prompt information is obtained.
In this embodiment, the sensitive data is predefined, such as identity card numbers, bank card numbers, customer phone numbers, and specific addresses. To ensure the security of sensitive data in the preset database and prevent leakage, sensitive-data identification is performed on the context prompt information after it is generated; identified sensitive data is desensitized, and if no sensitive data is identified, no further processing is performed.
Specifically, a sensitive-data table is constructed in which each category of sensitive data is defined. The context prompt information is scanned with regular expressions, string-matching algorithms, or a dedicated sensitive-data detection tool to judge whether it contains sensitive data. If it does, a desensitization method appropriate to the type and characteristics of the data is selected: for example, an identity card number may be partially hidden, character-replaced, or encrypted, while for a bank card number only part of the digits may be shown, or the number may be hashed. The desensitized context prompt information is finally obtained.
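Illustrative masking rules for two of the named categories; the real sensitive-data table and the choice of desensitization method are deployment-specific, so both patterns and masks below are assumptions:

```python
import re

# Hypothetical rules: an 18-character ID number keeps its prefix and
# check digit; a 16-19 digit card number shows only its last 4 digits.
PATTERNS = [
    (re.compile(r"\b\d{17}[\dXx]\b"),
     lambda m: m.group()[:6] + "*" * 11 + m.group()[-1]),
    (re.compile(r"\b\d{16,19}\b"),
     lambda m: "*" * (len(m.group()) - 4) + m.group()[-4:]),
]

def desensitize(text):
    """Apply each masking rule in order; ID numbers are matched first so
    the broader card-number pattern cannot re-match them."""
    for pattern, mask in PATTERNS:
        text = pattern.sub(mask, text)
    return text
```

`re.sub` accepts a function as the replacement, which makes partial-hiding rules like these straightforward to express.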
In the embodiment, the sensitive data in the context prompt information is identified and desensitized, so that the safety of the data is enhanced, and unnecessary loss caused by data leakage is prevented.
In this embodiment, according to the query parameters, the context prompt information corresponding to the query parameters is screened from a preset database, so that the generation efficiency and the generation accuracy of the subsequent query statement can be improved.
step S203, processing the query parameters and the context prompt information by using a pre-trained large language model to generate an initial query statement.
In this embodiment, the large language model LLM (Large Language Models) includes, but is not limited to, a BERT model, etc., and is trained to obtain a trained large language model, where the trained large language model may generate an initial query statement according to a query parameter and a context prompt, and the pre-trained large language model has the ability to understand a natural language, analyze a complex query requirement, and generate a structured query language (SQL, etc.) according to the context prompt and the query parameter.
In this embodiment, the query parameters and the context prompt information are input together into the pre-trained large language model, which parses and converts the query parameters according to the context prompt information. The parsing and conversion process includes, but is not limited to, semantic understanding of keywords in the query parameters, logical processing of filtering conditions, and matching of database structures, and finally generates an initial query statement.
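The way the two inputs are combined is not fixed by this embodiment; a minimal sketch of assembling the query parameters and the context prompt information into a single model input might look as follows (the prompt wording and the `build_generation_prompt` helper are assumptions for illustration):

```python
def build_generation_prompt(query_params: dict, context_hint: str) -> str:
    """Assemble the text handed to the pre-trained large language model."""
    lines = [
        "Generate a SQL statement for the following request.",
        f"Schema context: {context_hint}",
        "Query parameters:",
    ]
    # Each extracted query parameter becomes one line of the prompt
    for key, value in query_params.items():
        lines.append(f"  {key} = {value}")
    return "\n".join(lines)
```

The assembled string would then be passed to the model's generation call, whose output is taken as the initial query statement.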
In one embodiment, the processing the query parameters and the context prompt information by using a pre-trained large language model to generate an initial query statement includes:
acquiring a fine-tuning data set according to the context prompt information;
Performing fine tuning operation on the pre-trained large language model by adopting the fine tuning data set to obtain a fine-tuned large language model;
and processing the query parameters through the fine-tuned large language model according to the context prompt information to obtain the initial query statement.
In this embodiment, the fine-tuning operation (fine-tuning) means that, based on a pre-trained model, the model is further trained and adjusted with a small-scale data set of a specific task so as to adapt to a new task. A fine-tuning data set for the fine-tuning operation is obtained according to the context prompt information; the fine-tuning data set includes query parameters related to the context prompt information and corresponding query statement pairs (e.g., SQL statement pairs). The fine-tuning operation is performed on the pre-trained large language model with the fine-tuning data set, and includes adjusting parameters of the pre-trained large language model so that the model can better process the query parameters according to the context prompt information and generate a more accurate initial query statement.
The fine-tuning data set is acquired according to the context prompt information as follows. If each item of training data was pre-assigned a unique identifier when stored, the context prompt information is matched against the unique identifier of each item of training data in the training data set, and the matched items are screened out as the fine-tuning data set. If unique identifiers were not pre-assigned, a string matching algorithm and a structure matching algorithm are used to retrieve data from the training data set according to the content and structure related to the context prompt information, and the retrieved data is taken as the fine-tuning data set. Alternatively, all training data may be clustered in advance, the context prompt information matched against each cluster, and all data contained in the corresponding cluster selected as the fine-tuning data set.
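As a simple stand-in for the string and structure matching described above (the embodiment does not pin down a concrete algorithm), the selection step can be sketched with token overlap between the context prompt information and each training sample; the `select_finetune_set` helper and the sample layout are assumptions:

```python
def select_finetune_set(context_hint: str, training_data: list) -> list:
    """Screen training samples whose text shares tokens with the context hint.

    Each sample is assumed to be a dict with a "text" key; a sample is kept
    if any of its tokens appears in the context prompt information.
    """
    hint_tokens = set(context_hint.lower().split())
    return [
        sample for sample in training_data
        if hint_tokens & set(sample["text"].lower().split())
    ]
```

A production system would more likely use identifier lookup or pre-computed clusters, as the embodiment describes, with token overlap only as a fallback.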
In this embodiment, by selecting a training set more closely related to the initial query statement according to the context prompt information and performing the fine-tuning operation, a more accurate initial query statement can be generated, and the fine-tuning operation also enhances the generalization capability of the model.
In this embodiment, the query parameters are parsed according to the context prompt information to generate the initial query statement, so that the model can generate a more accurate initial query statement and the generation accuracy is improved. Generating the initial query statement through the large language model supports requests with complex query parameters, reduces the dependence on manually written query statements, improves working efficiency, and reduces the risk of errors.
S204, parsing the initial query statement to obtain an initial table name and an initial column name corresponding to the initial query statement, and carrying out mode linking between the initial table name and the initial column name and the preset database to generate prompt information after mode linking.
In this embodiment, mode linking (i.e., schema linking) refers to the process of matching and associating the initial table name and the initial column name in the initial query statement with the actual table structure and column structure in the preset database. The prompt information refers to a brief hint that helps the subsequent generation of the query statement, for example, "the query statement has been successfully linked to the users table of the preset database, and the two columns name and email are selected". Specifically, the initial query statement is parsed by a preset natural language processing method to identify the initial column names and the initial table name to which they belong; schema information of the preset database is acquired, the schema information including all table names to be matched in the preset database and the column names to be matched under each table; the initial table name is matched against the table names to be matched, and if the matching succeeds, the initial column names are matched against the column names to be matched under the matched table. When both match, the association relationship between the initial table name and the matched table name and between the initial column names and the matched column names is established, and the prompt information after mode linking is generated.
In this embodiment, a fault-tolerant mechanism is pre-constructed. The fault-tolerant mechanism includes checking whether a spelling error may exist (e.g., users misspelled as usres or uaers). If the initial table name does not match any of the table names to be matched, for example, the initial table name is user while the table name to be matched is users, the fault-tolerant mechanism automatically identifies and corrects the error in the initial table name, and the corrected initial table name is matched against the table names to be matched again. If this matching succeeds, execution continues with the column name matching steps described above until the prompt information after mode linking is generated; if the second matching still fails, an error message is generated, for example, "the table name user in the query statement does not exist in the database; please check whether the initial table name is correct".
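A minimal sketch of such a spelling-correction step, using standard-library fuzzy matching as a stand-in for the correction method, which this embodiment does not pin down (`correct_table_name` and the `cutoff` threshold are assumptions):

```python
import difflib

def correct_table_name(initial_name: str, tables_to_match: list,
                       cutoff: float = 0.6):
    """Return the closest real table name, or None if nothing is close enough."""
    matches = difflib.get_close_matches(initial_name, tables_to_match,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

If `correct_table_name` returns `None`, the error message described above would be produced instead of retrying the match.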
In this embodiment, the preset database needs to be updated regularly, so as to improve the accuracy of generating the subsequent target query statement.
In this embodiment, generating the prompt information after mode linking by performing mode linking with the preset database ensures the accuracy of subsequent query statement generation and avoids query failures caused by table name or column name errors. The parsing and mode linking of the initial query statement also provide a basis for the subsequent automatic generation of the target query statement, allowing a user to generate complex database query statements through simple natural language input and thereby lowering the query threshold.
S205, generating a target query statement by adopting the pre-trained large language model according to the prompt information after mode linking.
The prompt information after mode linking is input into the pre-trained large language model, which parses it to obtain a parsing result. At least one query statement template corresponding to the parsing result is acquired from a query language template library, each query statement template containing replaceable placeholders. The replaceable placeholders in the query statement template are replaced with the table names to be matched and the column names to be matched in the prompt information after mode linking, ensuring that the insertion positions of the table names and column names match the placeholders in the template, and the target query statement is generated after the replacement is completed.
For example, if the prompt information is "query the email of the user whose name is A in the users table", the selected template is "SELECT <placeholder 1> FROM <placeholder 2> WHERE <placeholder 3>"; the placeholders of the template are replaced according to the prompt information, and the finally generated target query statement is SELECT email FROM users WHERE name = 'A'.
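The placeholder replacement above can be sketched as follows; the `{...}` placeholder syntax and the `fill_template` helper are assumptions, since the embodiment does not specify a placeholder format:

```python
def fill_template(template: str, bindings: dict) -> str:
    """Replace each named placeholder in a query statement template."""
    query = template
    for placeholder, value in bindings.items():
        query = query.replace("{" + placeholder + "}", value)
    return query
```

Checking that every placeholder was bound (e.g., no `{` remains in the result) would guard against templates whose placeholders the prompt information failed to cover.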
In one embodiment, after the target query statement is generated according to the prompt information after mode linking by using the pre-trained large language model, the method further includes:
executing the target query statement;
if the target query statement is successfully executed, a target answer corresponding to the target query statement is obtained, and the target answer is fed back to the user;
If the target query statement fails to be executed, detecting an error reason through the pre-trained large language model, and correcting according to the error reason to obtain a corrected target query statement;
And re-executing the target query statement after correction until the corrected query statement is successfully executed, obtaining the target answer, and feeding back the target answer to the user.
In this embodiment, the target query statement is executed against the preset database. If execution succeeds, the target answer corresponding to the target query statement returned by the preset database is received and fed back to the user. If execution fails, a self-questioning mode is adopted to detect the error cause of the failure; the error cause includes an error type and an error position, and the error type includes a grammar error or a logic error. A corresponding correction suggestion is generated by the pre-trained large language model according to the error type and the error position; if a necessary condition is missing, a correction suggestion report is generated and fed back to the pre-trained large language model, which fuses the correction suggestion report with the target query statement to generate a corrected target query statement. The corrected query statement is then executed: if it succeeds, the target answer is fed back to the user; if it fails, a correction report is generated again and the statement is corrected again according to the report, the cycle repeating until the corrected query statement is successfully executed.
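The execute-detect-correct cycle above can be sketched as a loop. The `run_query` and `correct_query` callables stand in for the database execution and the model's correction step respectively, and the retry cap is an added safeguard not stated in the embodiment:

```python
def execute_with_correction(query, run_query, correct_query, max_rounds=5):
    """Run a query; on failure, obtain a corrected version and retry."""
    for _ in range(max_rounds):
        ok, result = run_query(query)
        if ok:
            return result  # the target answer, fed back to the user
        # On failure, result carries the error cause (type and position),
        # which the correction step uses to produce a revised statement.
        query = correct_query(query, result)
    raise RuntimeError("query still failing after correction rounds")
```

Without a cap, a statement the model cannot fix would loop forever, so bounding the rounds and surfacing a final error is a practical addition to the cycle described.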
In this embodiment, when the target query statement fails to execute, the system automatically detects and corrects the error cause, sparing the user the tedious process of manually checking and modifying the query statement and improving query efficiency. This realizes automatic generation of query statements together with intelligent, automated error detection and correction, improving the efficiency and reliability of the whole query process; through the cyclic correction step, the system can also continuously learn and optimize its query generation and correction capabilities, further improving performance and accuracy.
In another embodiment, after the target answer corresponding to the target query statement is obtained and fed back to the user upon successful execution of the target query statement, the method further includes:
periodically acquiring data whose access count within a preset time period exceeds a preset count threshold, to obtain a high-frequency data set;
establishing an index for the key fields of the high-frequency data set by adopting a preset index technology to obtain index fields;
Storing the index field into a search engine and storing the high-frequency data set into a preset cache;
And when the target query statement is received, querying the target answer from the preset cache according to the index field.
In this embodiment, data whose access count within a preset time period exceeds a preset count threshold (for example, data accessed more than 5 times within 24 hours) is periodically acquired to obtain a high-frequency data set. An index technology (such as a hash index or a B+ tree index) is adopted to index each item of the high-frequency data set, the index is stored in a search engine (such as Redis or Elasticsearch), and the high-frequency data set is stored in a memory or cache with a high processing speed; the expiration time and update policy of the cache are set so that the data in the memory or cache is updated periodically. When a target query statement is received, the data is preferentially looked up in the cache or memory according to the index field; if the target answer corresponding to the target query statement cannot be hit in the cache or memory, it is looked up in the preset database, and the target answer is fed back to the user.
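A minimal sketch of the cache-first lookup with an expiration time, using an in-process dictionary as a stand-in for the external cache and search engine named above (`HotDataCache` and its method names are illustrative assumptions):

```python
import time

class HotDataCache:
    """Serve high-frequency answers from memory before hitting the database."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self.store = {}  # index field -> (target answer, expiry timestamp)

    def put(self, index_field, answer):
        self.store[index_field] = (answer, time.time() + self.ttl)

    def get(self, index_field):
        entry = self.store.get(index_field)
        if entry is None:
            return None  # miss: caller falls back to the preset database
        answer, expiry = entry
        if time.time() > expiry:
            del self.store[index_field]  # expired: treat as a miss
            return None
        return answer
```

A `get` returning `None` corresponds to the cache-miss path described above, where the target answer is fetched from the preset database and could then be re-inserted with `put`.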
In this embodiment, screening out the high-frequency data set and establishing an index on the key field of each item of high-frequency data allows the target answer to be located quickly, improving query speed and narrowing the fields to be queried; storing the high-frequency data set in a cache or memory further exploits the speed of memory to accelerate queries while also reducing the load on the database.
In this embodiment, by inputting the prompt information after mode linking into the pre-trained large language model, the prompt information can be automatically parsed and a corresponding query statement generated. This reduces the need to manually write query statements and improves working efficiency; generating query statements with a pre-trained large language model also significantly reduces the error rate caused by manual writing and improves the accuracy of target query statement generation.
It should be emphasized that, to further ensure the privacy and security of the generated result and the target answer of the query statement, the generated result and the target answer may also be stored in a node of a blockchain.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, each block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of its information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiments of the present application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by computer readable instructions stored in a computer readable storage medium; when executed, the instructions may include the flows of the embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a query statement generating apparatus, where an embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various computer devices.
As shown in fig. 3, the query sentence generating device 300 in this embodiment includes an extracting module 301, a filtering module 302, a first generating module 303, a pattern linking module 304, and a second generating module 305. Wherein:
the extracting module 301 is configured to receive query information input by a user, and extract a query parameter of the query information;
in one embodiment, the apparatus further comprises:
the first identification module is used for identifying the data type of the query information;
the method acquisition module is used for acquiring a text conversion method of a non-text form if the data type is identified to be the non-text form;
and the text conversion module is used for converting the query information into a text form according to the text conversion method.
In one embodiment, the extracting module 301 includes:
the keyword extraction sub-module is used for extracting keywords of the query information by adopting a preset keyword extraction method to obtain a plurality of keywords of the query information;
the merging sub-module is used for carrying out de-duplication operation on the keywords to obtain a plurality of merging keywords;
And the format conversion sub-module is used for converting the plurality of merging keywords according to a preset format to obtain the query parameters of the query information.
The screening module 302 is configured to screen, according to the query parameter, context prompt information corresponding to the query parameter from a preset database;
in one embodiment, the apparatus further comprises:
The second identification module is used for carrying out sensitive data identification on the context prompt information;
And the desensitization module is used for carrying out desensitization treatment on the sensitive data in the context prompt information by adopting a preset desensitization method if the context prompt information is identified to exist in the sensitive data, so as to obtain the desensitized context prompt information.
A first generating module 303, configured to process the query parameters and the context prompt information by using a pre-trained large language model, and generate an initial query statement;
in one embodiment, the first generating module 303 includes:
The data acquisition sub-module is used for acquiring a fine-tuning data set according to the context prompt information;
The fine tuning sub-module is used for carrying out fine tuning operation on the pre-trained large language model by adopting the fine tuning data set to obtain a fine-tuned large language model;
and the first generation sub-module is used for processing the query parameters through the fine-tuned large language model according to the context prompt information to obtain the initial query statement.
The mode linking module 304 is configured to parse the initial query statement to obtain an initial table name and an initial column name corresponding to the initial query statement, and perform mode linking on the initial table name and the initial column name with the preset database to generate prompt information after mode linking;
and the second generating module 305 is configured to generate a target query sentence by using the pre-trained large language model according to the prompt information after the mode linking.
In one embodiment, the apparatus further comprises:
the execution module is used for executing the target query statement;
the first feedback module is used for obtaining a target answer corresponding to the target query statement if the target query statement is successfully executed, and feeding back the target answer to the user;
The correction module is used for detecting error reasons through the pre-trained large language model if the target query statement fails to be executed, correcting the error reasons according to the error reasons and obtaining corrected target query statements;
and the second feedback module is used for re-executing the corrected target query statement until the corrected query statement is successfully executed, obtaining the target answer, and feeding back the target answer to the user.
In one embodiment, the apparatus further comprises:
the periodic acquisition module is used for periodically acquiring data whose access count within a preset time period exceeds a preset count threshold, to obtain a high-frequency data set;
The index establishing module is used for establishing an index for the key fields of the high-frequency data set by adopting a preset index technology to obtain index fields;
The cache module is used for storing the index field into a search engine and storing the high-frequency data set into a preset cache;
and the query module is used for querying the target answer from the preset cache according to the index field when the target query statement is received.
In the embodiment, query parameters of the query information are extracted by receiving the query information input by a user, so that the query intention of the query information can be clarified, a basis is provided for a subsequently generated query sentence, the accuracy of the subsequent query sentence generation is further improved, search resources and information are reduced, and the efficiency of the subsequent query sentence generation is further improved;
According to the query parameters, the context prompt information corresponding to the query parameters is screened out from a preset database, so that the generation efficiency and the generation accuracy of subsequent query sentences can be improved;
The method and the device have the advantages that the query parameters are analyzed according to the context prompt information to generate the initial query statement, the model can generate more accurate initial query statement, and the accuracy of generating the initial query statement is improved;
By inputting the prompt information after mode linking into the pre-trained large language model, the prompt information can be automatically parsed and a corresponding query statement generated. This reduces the need to manually write query statements, improves working efficiency, significantly reduces the error rate caused by manual writing, and improves the accuracy of target query statement generation.
In order to solve the technical problems, the embodiment of the application also provides equipment (computer equipment). Referring specifically to fig. 4, fig. 4 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43 communicatively connected to each other via a system bus. It is noted that only a computer device 4 having a memory 41, a processor 42, and a network interface 43 is shown in the figure, but it should be understood that not all of the illustrated components are required to be implemented, and more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the computer device 4. Of course, the memory 41 may also comprise both an internal storage unit and an external storage device of the computer device 4. In this embodiment, the memory 41 is generally used to store the operating system and the various application software installed on the computer device 4, such as computer readable instructions of the query statement generation method. Further, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute computer readable instructions stored in the memory 41 or process data, for example, computer readable instructions for executing a method for generating the query statement.
The network interface 43 may comprise a wireless network interface or a wired network interface, which network interface 43 is typically used for establishing a communication connection between the computer device 4 and other electronic devices.
In the implementation process of the electronic device, query parameters of the query information are extracted upon receiving the query information input by the user, so that the query intention of the query information can be clarified and a basis is provided for the subsequently generated query statement, further improving the accuracy of subsequent query statement generation, reducing the search resources and information involved, and further improving the efficiency of subsequent query statement generation;
According to the query parameters, the context prompt information corresponding to the query parameters is screened out from a preset database, so that the generation efficiency and the generation accuracy of subsequent query sentences can be improved;
The method and the device have the advantages that the query parameters are analyzed according to the context prompt information to generate the initial query statement, the model can generate more accurate initial query statement, and the accuracy of generating the initial query statement is improved;
By inputting the prompt information after mode linking into the pre-trained large language model, the prompt information can be automatically parsed and a corresponding query statement generated. This reduces the need to manually write query statements, improves working efficiency, significantly reduces the error rate caused by manual writing, and improves the accuracy of target query statement generation.
The present application also provides another embodiment, namely, a storage medium (computer-readable storage medium) storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the method for generating a query statement as described above.
In the implementation process of the computer readable storage medium, query parameters of the query information are extracted upon receiving the query information input by a user, so that the query intention of the query information can be clarified and a basis is provided for the subsequently generated query statement, further improving the accuracy of subsequent query statement generation, reducing the search resources and information involved, and further improving the efficiency of subsequent query statement generation;
According to the query parameters, the context prompt information corresponding to the query parameters is screened out from a preset database, so that the generation efficiency and the generation accuracy of subsequent query sentences can be improved;
The method and the device have the advantages that the query parameters are analyzed according to the context prompt information to generate the initial query statement, the model can generate more accurate initial query statement, and the accuracy of generating the initial query statement is improved;
By inputting the prompt information after mode linking into the pre-trained large language model, the prompt information can be automatically parsed and a corresponding query statement generated. This reduces the need to manually write query statements, improves working efficiency, significantly reduces the error rate caused by manual writing, and improves the accuracy of target query statement generation.
Any third-party software tools or components appearing in the embodiments of the present application are presented by way of example only and do not represent actual use.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and may of course also be implemented by hardware, though in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the method according to the embodiments of the present application.
It is apparent that the above-described embodiments are only some embodiments of the present application, not all of them; the preferred embodiments of the present application are shown in the drawings, which do not limit the scope of the patent claims. The present application may be embodied in many different forms; these embodiments are provided so that the disclosure of the present application will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the embodiments described above may be modified, or equivalents may be substituted for some of their elements. All equivalent structures made using the content of the specification and the drawings of the application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of the application.
Claims (10)
1. The query statement generation method is characterized by comprising the following steps:
receiving query information input by a user, and extracting query parameters of the query information;
according to the query parameters, the context prompt information corresponding to the query parameters is screened out from a preset database;
processing the query parameters and the context prompt information by using a pre-trained large language model to generate an initial query sentence;
analyzing the initial query statement to obtain an initial table name and an initial column name corresponding to the initial query statement, and performing schema linking between the initial table name, the initial column name and the preset database to generate schema-linked prompt information;
and generating a target query statement by using the pre-trained large language model according to the schema-linked prompt information.
2. The method for generating a query statement according to claim 1, further comprising, after said receiving the query information input by the user:
Identifying a data type of the query information;
if the data type is identified to be in a non-text form, acquiring a text conversion method of the non-text form;
and converting the query information into a text form according to the text conversion method.
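A minimal sketch of the type dispatch in claim 2: text passes through unchanged, while a non-text form is routed to a matching conversion method. The `transcribe_audio` stub is a hypothetical stand-in; a real system would call a speech-to-text or OCR service here.

```python
def transcribe_audio(payload: bytes) -> str:
    # Stand-in for a real speech-to-text conversion method
    return payload.decode("utf-8", errors="ignore")

def to_text(query_info):
    if isinstance(query_info, str):      # already text: use as-is
        return query_info
    if isinstance(query_info, bytes):    # non-text form: apply conversion method
        return transcribe_audio(query_info)
    raise TypeError("unsupported query information type")

print(to_text(b"monthly sales"))
```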
3. The method for generating a query statement according to claim 1, wherein said extracting query parameters of the query information includes:
extracting keywords of the query information by a preset keyword extraction method to obtain a plurality of keywords of the query information;
performing a de-duplication operation on the keywords to obtain a plurality of merged keywords;
and converting the plurality of merged keywords according to a preset format to obtain the query parameters of the query information.
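The three steps of claim 3 (extract keywords, de-duplicate into merged keywords, convert to a preset format) can be sketched as below. The token split and the dict output format are illustrative assumptions; the claim's "preset keyword extraction method" and "preset format" are not specified.

```python
import re

def extract_query_parameters(query_text):
    # Step 1: extract keywords (simple token split stands in for the
    # preset keyword extraction method)
    keywords = re.findall(r"\w+", query_text.lower())
    # Step 2: de-duplicate while preserving order -> merged keywords
    merged = list(dict.fromkeys(keywords))
    # Step 3: convert to a preset format (here: a small dict)
    return {"keywords": merged}

print(extract_query_parameters("sales of sales by region"))
```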
4. The method for generating a query statement according to claim 1, further comprising, after said screening out context prompt information corresponding to said query parameters from a preset database:
Sensitive data identification is carried out on the context prompt information;
if sensitive data is identified in the context prompt information, desensitizing the sensitive data in the context prompt information by a preset desensitization method to obtain desensitized context prompt information.
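One common desensitization method is pattern-based masking, sketched below. The two patterns (an 11-digit phone number and an e-mail address) are example assumptions; a real preset desensitization method would cover whatever sensitive categories the database holds.

```python
import re

# Example sensitive-data patterns and their replacement masks
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{11}\b"), "[PHONE]"),         # 11-digit phone numbers
    (re.compile(r"\b[\w.]+@[\w.]+\b"), "[EMAIL]"),  # e-mail addresses
]

def desensitize(context_prompt: str) -> str:
    # Mask every detected piece of sensitive data in the context prompt
    for pattern, mask in SENSITIVE_PATTERNS:
        context_prompt = pattern.sub(mask, context_prompt)
    return context_prompt

print(desensitize("contact alice@example.com or 13800138000"))
```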
5. The method for generating a query statement according to claim 1, wherein said processing said query parameters and said context prompt information using a pre-trained large language model to generate an initial query statement comprises:
acquiring a fine adjustment data set according to the context prompt information;
Performing fine tuning operation on the pre-trained large language model by adopting the fine tuning data set to obtain a fine-tuned large language model;
and processing the query parameters through the fine-tuned large language model according to the context prompt information to obtain the initial query statement.
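The first step of claim 5, building a fine-tuning dataset from the context prompt information, might look like the sketch below. The field names `question` and `sql` and the prompt/completion pair format are assumptions for illustration; the actual fine-tuning of the model (e.g. with LoRA or full-parameter updates) is out of scope here.

```python
def build_finetune_dataset(context_prompts):
    # Turn each context prompt entry into a (prompt, completion) training
    # pair; the completion is assumed to hold a reference SQL statement
    return [
        {"prompt": cp["question"], "completion": cp["sql"]}
        for cp in context_prompts
    ]

context = [{"question": "total orders", "sql": "SELECT COUNT(*) FROM orders"}]
dataset = build_finetune_dataset(context)
print(dataset[0]["completion"])  # -> SELECT COUNT(*) FROM orders
```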
6. The method for generating a query statement according to claim 1, further comprising, after said generating a target query statement using the pre-trained large language model according to the schema-linked prompt information:
executing the target query statement;
if the target query statement is successfully executed, a target answer corresponding to the target query statement is obtained, and the target answer is fed back to the user;
If the target query statement fails to be executed, detecting an error reason through the pre-trained large language model, and correcting according to the error reason to obtain a corrected target query statement;
and re-executing the corrected target query statement until it is successfully executed, obtaining the target answer, and feeding the target answer back to the user.
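The execute-diagnose-correct loop of claim 6 can be sketched as a retry wrapper. Both the database executor and the LLM corrector are stubs here; in the described method the error cause would be passed to the pre-trained large language model, which returns a corrected statement.

```python
def run_with_self_correction(sql, execute, correct, max_retries=3):
    # Execute the target query statement; on failure, obtain the error
    # cause, ask the corrector for a fixed statement, and retry
    for _ in range(max_retries):
        try:
            return execute(sql)           # success: the target answer
        except Exception as err:
            sql = correct(sql, str(err))  # LLM-driven correction (stubbed)
    raise RuntimeError("query still failing after correction attempts")

# Stubs standing in for a real database and a real LLM corrector
def execute(sql):
    if "FORM" in sql:
        raise ValueError("syntax error near FORM")
    return [("answer",)]

def correct(sql, error):
    return sql.replace("FORM", "FROM")

print(run_with_self_correction("SELECT 1 FORM t", execute, correct))
```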
7. The method for generating a query statement according to claim 6, wherein after the target query statement is successfully executed, the target answer corresponding to the target query statement is obtained, and the target answer is fed back to the user, the method further comprises:
periodically acquiring, within a preset time period, data whose access count exceeds a preset count threshold, to obtain a high-frequency data set;
establishing an index for the key fields of the high-frequency data set by adopting a preset index technology to obtain index fields;
Storing the index field into a search engine and storing the high-frequency data set into a preset cache;
And when the target query statement is received, querying the target answer from the preset cache according to the index field.
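The cache-promotion logic of claim 7 might look like the sketch below: data whose access count passes a threshold is promoted into a cache and answered from there on later hits. The class and its interface are illustrative assumptions; indexing key fields into a search engine (e.g. an inverted index) is out of scope for this sketch.

```python
from collections import Counter

class HighFrequencyCache:
    # Promote data whose access count exceeds a preset threshold into a
    # cache keyed by the query statement (standing in for the index field)
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.hits = Counter()
        self.cache = {}

    def record_access(self, key, value):
        self.hits[key] += 1
        if self.hits[key] > self.threshold:  # high-frequency data detected
            self.cache[key] = value          # store in the preset cache

    def lookup(self, key):
        return self.cache.get(key)           # target answer from cache, if any

c = HighFrequencyCache(threshold=2)
for _ in range(3):
    c.record_access("SELECT * FROM orders", [("row",)])
print(c.lookup("SELECT * FROM orders"))
```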
8. A query statement generation apparatus, the apparatus comprising:
The extraction module is used for receiving query information input by a user and extracting query parameters of the query information;
The screening module is used for screening out the context prompt information corresponding to the query parameters from a preset database according to the query parameters;
The first generation module is used for processing the query parameters and the context prompt information by utilizing a pre-trained large language model to generate an initial query statement;
The schema link module is used for analyzing the initial query statement to obtain an initial table name and an initial column name corresponding to the initial query statement, and performing schema linking between the initial table name, the initial column name and the preset database to generate schema-linked prompt information;
and the second generation module is used for generating a target query statement by using the pre-trained large language model according to the schema-linked prompt information.
9. A computer device, the computer device comprising:
At least one processor, and
A memory communicatively coupled to the at least one processor, wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of generating a query statement as claimed in any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the method of generating a query statement as claimed in any one of claims 1 to 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411767948.XA CN119719321A (en) | 2024-12-03 | 2024-12-03 | Query statement generation method, device, equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411767948.XA CN119719321A (en) | 2024-12-03 | 2024-12-03 | Query statement generation method, device, equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119719321A true CN119719321A (en) | 2025-03-28 |
Family
ID=95079936
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411767948.XA Pending CN119719321A (en) | 2024-12-03 | 2024-12-03 | Query statement generation method, device, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119719321A (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119917637A (en) * | 2025-04-03 | 2025-05-02 | 侨远科技有限公司 | A method and system for optical cable monitoring and analysis based on large language model |
| CN120162428A (en) * | 2025-05-19 | 2025-06-17 | 天津市天河计算机技术有限公司 | Query statement generation method, system and storage medium |
| CN120353820A (en) * | 2025-06-19 | 2025-07-22 | 苏州元脑智能科技有限公司 | Structured query statement generation method, system, device and electronic equipment |
| CN120371853A (en) * | 2025-06-25 | 2025-07-25 | 山东亿云信息技术有限公司 | Text conversion query statement generation method, system, medium and equipment |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111783471B (en) | Semantic recognition method, device, equipment and storage medium for natural language | |
| US8452772B1 | Methods, systems, and articles of manufacture for addressing popular topics in a social sphere | |
| CN112287069B (en) | Information retrieval method and device based on voice semantics and computer equipment | |
| CN111767716B (en) | Method, device and computer equipment for determining multi-level industry information of an enterprise | |
| CN119719321A (en) | Query statement generation method, device, equipment and storage medium | |
| CN118377881A (en) | Intelligent question answering method, system, device, computer equipment and readable storage medium | |
| US11379527B2 (en) | Sibling search queries | |
| CN119960938B (en) | Interface calling method, device, computer equipment and storage medium | |
| CN119577148A (en) | A text classification method, device, computer equipment and storage medium | |
| CN116796730A (en) | Text error correction method, device, equipment and storage medium based on artificial intelligence | |
| CN119046432A (en) | Data generation method and device based on artificial intelligence, computer equipment and medium | |
| CN119201597A (en) | Log parsing method, device, computer equipment and medium based on artificial intelligence | |
| CN119691610A (en) | Text multi-label classification method, device, computer equipment and storage medium | |
| CN116521133B (en) | Software function safety requirement analysis method, device, equipment and readable storage medium | |
| CN119377366A (en) | Question and answer method, device, computer equipment and storage medium | |
| CN119599008A (en) | Element extraction method, element extraction device, computer equipment and storage medium | |
| CN119577078A (en) | Question and answer method, device, computer equipment and storage medium | |
| CN119293159A (en) | Query processing method, device, computer equipment and medium based on artificial intelligence | |
| CN119202152A (en) | Problem-solving method, device, computer equipment and medium based on artificial intelligence | |
| CN116166858B (en) | Information recommendation method, device, equipment and storage medium based on artificial intelligence | |
| CN115344679A (en) | Problem data processing method and device, computer equipment and storage medium | |
| CN114153946B (en) | A smart retrieval method, device, equipment and storage medium | |
| CN119669456A (en) | A search result generation method, device, computer equipment and storage medium | |
| CN119719292A (en) | Insurance question-and-answer method, device, equipment and storage medium | |
| CN116468028A (en) | Mechanism name error correction method, device, computer equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||