CN112163001A - High-concurrency query method, intelligent terminal and storage medium - Google Patents
High-concurrency query method, intelligent terminal and storage medium
- Publication number
- CN112163001A (application number CN202011023623.2A)
- Authority
- CN
- China
- Prior art keywords
- data
- query
- level cache
- hot
- cache
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0811—Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24532—Query optimisation of parallel queries
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1032—Reliability improvement, data loss prevention, degraded operation etc
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/16—General purpose computing application
- G06F2212/163—Server or database system
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application relates to a high-concurrency query method, an intelligent terminal and a storage medium. The method comprises the following steps: receiving a query request from a client; accessing a first-level cache in response to the received query request, and sending the query data to the client when query data corresponding to the query request exists in the first-level cache; accessing a second-level cache when no query data corresponding to the query request exists in the first-level cache; when query data corresponding to the query request exists in the second-level cache, filtering the query data and sending the filtered query data to the client for pre-display; and after the filtered query data has been sent to the client, accessing supplier data, acquiring the query data corresponding to the query request from the supplier data, filtering it, and sending the filtered query data to the client, so that the query data from the supplier data covers (i.e., overwrites) the query data from the second-level cache. In this way the data can be updated in a timely and accurate manner.
Description
Technical Field
The present application relates to the field of information query technologies, and in particular, to a high-concurrency query method, an intelligent terminal, and a storage medium.
Background
With the continuous progress of network information technology, mobile applications have developed rapidly to meet people's diversified needs. As services keep growing, the user base becomes larger and the access volume higher, but the system architecture designed at the outset of a ticket-purchasing processing system cannot support millions of accesses: product performance becomes slower, resource consumption grows, and users wait longer and longer.
For example, the number of people travelling during holidays rises, which directly increases the request volume of the system and the time consumed by its interfaces. In addition, because holiday tickets change quickly, the system's data tends to be updated in an untimely and inaccurate manner, resulting in a poor user experience.
Disclosure of Invention
In order to solve the problem that system data is not updated in a timely manner, the present application provides a high-concurrency query method, an intelligent terminal and a storage medium.
In a first aspect, the high concurrency query method provided by the present application adopts the following technical scheme:
a high concurrency query method comprises the following steps:
receiving a query request of a client;
responding to a received query request to access a first-level cache, and sending query data to a client when the query data corresponding to the query request exists in the first-level cache; the first-level cache stores hot data updated in real time;
when the first-level cache does not have the query data corresponding to the query request, accessing the second-level cache; when the query data corresponding to the query request exists in the secondary cache, filtering the query data, and sending the filtered query data to the client for pre-display; the secondary cache stores full data which is not updated in real time;
after the query data after filtering processing is sent to a client, accessing supplier data, wherein the supplier data stores full data updated in real time; and acquiring query data corresponding to the query request in the supplier data, filtering the query data, and sending the filtered query data to the client, so that the query data in the supplier data covers the query data in the secondary cache.
By adopting the above technical scheme, when a query request is received, the first-level cache is accessed first. If query data corresponding to the query request exists in the first-level cache, the query data is sent to the client and the request ends. If no such query data exists in the first-level cache, the second-level cache is accessed, and after the basic data from the second-level cache has been displayed by the client, the supplier data is accessed, so that the client covers the basic data from the second-level cache with the real-time data from the supplier data and thus displays real-time, accurate data.
On the other hand, the multi-level cache structure conveniently bears highly concurrent access, supports a high access volume, and effectively prevents cache breakdown.
Preferably, the filtering processing on the query data is specifically set as:
and after acquiring corresponding query data according to the received query request, performing field mapping, data format conversion and product strategy mapping on the query data.
By adopting the above technical scheme: since the data stored in the second-level cache and the supplier data are the original codes provided by the supplier, the client cannot use them directly. After field mapping, data format conversion and product strategy mapping are performed by the filtering processing, the records stored in the second-level cache and the supplier data are converted into data that the client can use, which makes it convenient for the client to display the data.
Preferably, the method further comprises the following steps:
when the client displays the query data in the supplier data, judging whether the query data is hot data, and if so, updating a first-level cache and a second-level cache according to the query data in the supplier data; and when the judgment result is negative, updating the secondary cache according to the query data in the supplier data.
By adopting the above technical scheme, the data in the first-level cache and the second-level cache can be conveniently updated according to the query data in the supplier data, which effectively ensures the accuracy and real-time performance of the data in both caches and also helps improve the efficiency of data queries.
Preferably, the method further comprises the following steps:
generating a query log according to all received query requests;
generating hot data according to the query log and the analysis rule, and classifying the hot data;
the hot data are sent to the MQ, the hot data in the MQ are consumed, and the hot data are actively inquired;
and updating the primary cache and the secondary cache in real time according to the query result of the hot data.
By adopting the above technical scheme, the hot data does not rely entirely on active queries and refreshes triggered by the client: the hot data in the MQ is actively queried, and the first-level cache and the second-level cache can be updated in real time according to the query results, so that the hot data is updated in a timely and accurate manner and its real-time accuracy is effectively ensured.
Preferably, the analysis rule used when generating the hot data according to the query log and the analysis rule is specifically set as:
and acquiring the request frequency of each query data in the query log within a set time period, and setting the query data with the request frequency higher than a set value as hot data.
By adopting the above technical scheme, the hot data is determined according to the request frequency of the query data within a set time period, and actively querying the hot data in the MQ helps improve the real-time accuracy of the hot data.
Preferably, the classifying the hot data specifically includes:
ordering hot data according to the request frequency of the hot data in a set time period;
dividing the hot data into t1 call data and t2 call data according to the sorting and classification rules of the hot data, wherein t1 and t2 are time periods for actively consuming the hot data in the MQ, and t1 < t2;
wherein the hot data with the highest request frequency is the t1 call data.
By adopting the above technical scheme, the hot data is classified according to its sorting and the classification rules, which makes it convenient to actively query the hot data in the MQ and improves the real-time accuracy of the hot data.
Preferably, the supplier data is actively queried according to the data in the secondary cache in a set time period, and the query result is stored in the secondary cache.
By adopting the above technical scheme, the supplier data is actively queried within a set time period according to the data in the second-level cache, which makes it convenient to update the data in the second-level cache and effectively ensures the integrity and accuracy of the second-level cache data.
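As a rough illustration only, such a periodic second-level-cache refresh could look like the sketch below; the `query_supplier` stub, the `interval_seconds` parameter and the plain dict standing in for the second-level cache are assumptions made for the example and are not specified by the application.

```python
import threading
import time

# Illustrative in-memory stand-in; a real system would use e.g. Redis for the
# second-level cache and an HTTP/RPC client for the supplier interface.
second_level_cache = {}          # key -> cached supplier record

def query_supplier(keys):
    """Assumed supplier lookup; returns the latest records for the given keys."""
    return {k: {"raw": f"supplier-record-for-{k}", "ts": time.time()} for k in keys}

def refresh_second_level_cache(interval_seconds=300):
    """Periodically re-query the supplier for every key already held in the
    second-level cache and overwrite the cached copies with the results."""
    while True:
        keys = list(second_level_cache.keys())
        if keys:
            fresh = query_supplier(keys)
            second_level_cache.update(fresh)   # query results stored back into L2
        time.sleep(interval_seconds)

# Run the refresh in the background so it does not block request handling.
threading.Thread(target=refresh_second_level_cache, daemon=True).start()
```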
Preferably, the method further comprises the following steps:
acquiring data in a first-level cache and a second-level cache;
judging whether the first-level cache and the second-level cache have full data of the pre-sale period or not, and inquiring supplier data when the first-level cache and the second-level cache do not have the full data of the pre-sale period;
wherein the full data for the pre-sale period is all data in t days in the future, and t > 0.
And judging whether the supplier data contains the full data of the pre-sale period or not, when the supplier data contains the full data of the pre-sale period, filtering the full data of the pre-sale period, and storing the filtered full data of the pre-sale period into a second-level cache or a first-level cache and a second-level cache.
By adopting the above technical scheme, the full data of the pre-sale period is stored in the second-level cache, or in both the first-level and second-level caches, which improves the integrity and accuracy of the data in the two caches and further improves the user experience.
In a second aspect, the present application provides an intelligent terminal, which adopts the following technical scheme:
an intelligent terminal comprising a memory and a processor, the memory having stored thereon a computer program that can be loaded by the processor and executed to carry out any of the methods described above.
By adopting the above technical scheme, the processor in the intelligent terminal can implement the above high-concurrency query method according to the computer program stored in the memory, so that data can be updated in a timely and accurate manner.
In a third aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium storing a computer program that can be loaded by a processor and executed to perform any of the methods described above.
By adopting the technical scheme, the corresponding program can be stored, and the data can be updated timely and accurately.
In summary, the present application includes at least one of the following beneficial technical effects:
1. When a query request is received, the first-level cache is accessed; if query data corresponding to the query request exists in the first-level cache, the query data is sent to the client and the request ends. If no such query data exists in the first-level cache, the second-level cache is accessed, and after the basic data from the second-level cache has been displayed by the client, the supplier data is accessed, so that the client covers the basic data from the second-level cache with the real-time data from the supplier data; the client can thus display real-time, accurate data, and the real-time accuracy of the data is improved;
2. By consuming and actively querying the hot data in the MQ, the first-level cache and the second-level cache can be updated in real time according to the query results, so that the hot data is updated in a timely and accurate manner and its real-time accuracy is effectively ensured.
Drawings
FIG. 1 is a block flow diagram of a high-concurrency query method according to an embodiment of the present application;
FIG. 2 is a flowchart of a high-concurrency query method according to an embodiment of the present application;
FIG. 3 is another flowchart of a high-concurrency query method according to an embodiment of the present application;
FIG. 4 is another flowchart of a high-concurrency query method according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to FIGS. 1-4.
The embodiment of the application discloses a high-concurrency query method. Referring to FIG. 1 and FIG. 2, the method comprises the following steps:
S10, receiving a request: the server receives a query request input by a client;
S11, first-level cache query: the server responds to the received query request to access the primary cache, and when query data corresponding to the query request exists in the primary cache, the server sends the query data to the client so that the client displays the query data updated in real time;
wherein, the first-level cache stores hot data updated in real time.
S12, second-level cache query: when the first-level cache does not have the query data corresponding to the query request, the server accesses the second-level cache; and when the query data corresponding to the query request exists in the second-level cache, the server filters the query data and sends the filtered query data to the client for pre-display, so that the client displays the basic data in the second-level cache, wherein the basic data is the query data which is not updated.
The second-level cache stores full data which is not updated in real time, the full data stored in the second-level cache is original codes provided by a supplier, and a client cannot directly use the original codes, so that query data in the second-level cache needs to be filtered.
Specifically, the filtering process is specifically set as: and after acquiring corresponding query data according to the query request received by the server, performing field mapping, data format conversion and product strategy mapping on the query data.
Since the full data stored in the second-level cache is the original code provided by the supplier, which the client cannot identify, the original code stored in the second-level cache needs to be converted. When query data corresponding to the query request exists in the second-level cache, field mapping and data format conversion need to be performed on the query data in turn. For example, if the original code uses the field name ABC while the client can only identify the field name abc, field mapping converts the field ABC in the original code into the field abc, so that the client can display the query data from the second-level cache.
During holidays or promotional activities, the client displays a corresponding theme, so product strategy mapping needs to be performed on the query data in the second-level cache according to that theme, so that the query data from the second-level cache is displayed with the matching theme.
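A minimal sketch of such a filtering pipeline is shown below. The `FIELD_MAP` table, the date format and the `ACTIVE_THEME` value are invented for illustration; the application only specifies that field mapping, data format conversion and product strategy mapping are performed in turn.

```python
from datetime import datetime

# Assumed mapping tables; real values would come from supplier/product configuration.
FIELD_MAP = {"ABC": "abc", "DEP_DT": "departure_date", "PRC": "price_cents"}
ACTIVE_THEME = "holiday"                      # product strategy currently in effect

def filter_query_data(raw_record: dict) -> dict:
    """Convert one raw supplier record (original code) into a client-usable record."""
    # 1) Field mapping: rename supplier field names to names the client recognises.
    record = {FIELD_MAP.get(k, k): v for k, v in raw_record.items()}

    # 2) Data format conversion: e.g. normalise the supplier's date string.
    if "departure_date" in record:
        record["departure_date"] = datetime.strptime(
            record["departure_date"], "%Y%m%d").strftime("%Y-%m-%d")

    # 3) Product strategy mapping: attach the theme the client should render.
    record["theme"] = ACTIVE_THEME
    return record

print(filter_query_data({"ABC": "G1234", "DEP_DT": "20201001", "PRC": 55300}))
```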
S13, supplier data query: the server sends the filtered query data to the client, so that the client accesses the supplier data after displaying the basic data in the secondary cache; and when the requested query data exists in the supplier data, filtering the query data, and sending the filtered query data to the client, so that the client covers the basic data in the secondary cache and displays the real-time data in the supplier data.
The supplier data is stored with full data updated in real time, and the supplier data is an original code provided by the supplier, and cannot be directly used by the client, so that the query data in the supplier data needs to be filtered.
S14, updating the cache: when the client displays the query data in the supplier data, judging whether the query data is hot data, and if so, updating a first-level cache and a second-level cache according to the query data in the supplier data; and when the judgment result is negative, updating the secondary cache according to the query data in the supplier data.
Specifically, when the server receives a query request, the server accesses the primary cache, and if query data corresponding to the query request exists in the primary cache, the server sends the query data to the client, so that the client displays real-time data in the primary cache; if the first-level cache does not have query data corresponding to the query request, the server accesses the second-level cache;
and when the query data corresponding to the query request exists in the second-level cache, the server filters the query data and sends the filtered query data to the client, so that the client displays the basic data in the second-level cache, wherein the basic data is the data which is not updated in the second-level cache. And when the client displays the basic data in the secondary cache, the server accesses the supplier data.
When the query data corresponding to the query request does not exist in the second-level cache, the server directly accesses the supplier data;
and when the supplier data contains the query data corresponding to the query request, the server filters the query data and sends the filtered query data to the client, so that the client covers the basic data in the secondary cache and displays the real-time data in the supplier data.
After the client displays real-time data in the supplier data, judging whether the query data is hot data, and if so, updating a primary cache and a secondary cache according to the query data in the supplier data; when the judgment result is no, updating the secondary cache according to the query data in the supplier data.
In addition, the supplier data can also be actively queried within a set time period according to the full data in the second-level cache, and the query results stored in the second-level cache, which improves the integrity and accuracy of the data in the second-level cache and effectively prevents cache breakdown. Cache breakdown means that when a hotspot key expires at a certain point in time, a large number of concurrent requests for that key arrive at that moment and are all sent on to the database.
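Steps S10 to S14 can be summarised in the following sketch. Every argument (`l1_cache`, `l2_cache`, `query_supplier`, `filter_fn`, `send`, `is_hot`) is an assumed stand-in supplied by the caller; the application fixes only the order of the lookups and which caches are updated, not these interfaces.

```python
def handle_query(key, l1_cache, l2_cache, query_supplier, filter_fn, send, is_hot):
    """Sketch of the tiered lookup described in steps S10-S14."""
    # S11: first-level cache holds hot data that is kept up to date in real time.
    if key in l1_cache:
        send(l1_cache[key])                       # hit: respond and end the request
        return

    # S12: second-level cache holds the full, but possibly stale, data set.
    if key in l2_cache:
        send(filter_fn(l2_cache[key]))            # pre-display the basic data

    # S13: always fall through to the supplier for real-time data.
    fresh_raw = query_supplier(key)
    fresh = filter_fn(fresh_raw)
    send(fresh)                                   # covers the pre-displayed data

    # S14: write back, updating both caches only for hot data.
    l2_cache[key] = fresh_raw
    if is_hot(key):
        l1_cache[key] = fresh
```

In this sketch the second-level cache keeps the supplier's raw records while the first-level cache keeps already filtered, client-ready data, which is one possible reading of the description above.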
Referring to FIG. 3, when the above-described query method is executed, the following steps are performed:
S20, generating a log: the server generates a query log according to all received query requests and updates the query log in real time as new query requests arrive;
S21, generating hot data: generating hot data according to the query log and the analysis rule, and classifying the hot data;
the analysis rule is specifically set as: acquiring the request frequency of the query data in the query log within a set time period, setting the query data with the request frequency higher than a set value as hot data, and storing the hot data in a first-level cache.
The classification processing specifically includes:
S211, sorting the hot data according to the request frequency of the hot data in a set time period;
S212, dividing the hot data into t1 call data and t2 call data according to the sorting and classification rules of the hot data, wherein t1 and t2 are time periods for actively consuming the hot data, and t1 < t2;
wherein the hot data with the highest request frequency is the t1 call data.
The classification rules are specifically set as: acquiring the number of hot data items, and when the number of hot data items is even, dividing the hot data evenly into t1 call data and t2 call data in descending order of request frequency within the set time period; when the number of hot data items is odd, first dividing the hot data evenly into t1 call data and t2 call data in descending order of request frequency within the set time period, and assigning the one remaining hot data item to the t2 call data.
For example, if there are 4 hot data items, t1 is 1s and t2 is 5s, the two hot data items with the higher request frequency within the set time period are classified as 1s call data and the two with the lower request frequency are classified as 5s call data; if there are 5 hot data items, the two with the higher request frequency are classified as 1s call data and the three with the lower request frequency are classified as 5s call data.
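The even/odd split described above reduces to taking the top half of the frequency-sorted hot data as t1 call data and the rest, including any leftover item, as t2 call data, as in this sketch:

```python
def classify_hot_data(hot_items):
    """Split hot data items, already sorted by descending request frequency,
    into t1 call data and t2 call data; with an odd count the extra item goes to t2."""
    half = len(hot_items) // 2          # integer division: odd leftover falls to t2
    return hot_items[:half], hot_items[half:]

# 4 items -> 2 go to the 1s group, 2 to the 5s group.
print(classify_hot_data(["A", "B", "C", "D"]))       # (['A', 'B'], ['C', 'D'])
# 5 items -> 2 go to the 1s group, 3 to the 5s group.
print(classify_hot_data(["A", "B", "C", "D", "E"]))  # (['A', 'B'], ['C', 'D', 'E'])
```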
S22, active query: the hot data are sent to the MQ, and the consumer consumes the hot data according to the time period corresponding to the hot data in the MQ and actively inquires the hot data;
the mq (message queue) message queue is a first-in first-out data structure in a basic data structure, and is generally used to solve the problems of application decoupling, asynchronous messages, traffic cut-off and the like, and implement a high-performance, high-availability, scalable and final consistency architecture.
S23, active updating cache: and updating the primary cache and the secondary cache in real time according to the query result.
For example, assuming that t1 is 1s and t2 is 5s, the server generates a query log according to all received query requests, obtains the frequency with which each query request from the client occurs within a set time period, and sets the query data corresponding to the query requests whose frequency is higher than a set value as hot data. It then divides the hot data into 1s call data and 5s call data according to the frequency of the hot data within the set time period in the query log and the classification rules, with the hot data having the highest frequency becoming the 1s call data.
After the hot data is classified, it is sent to the MQ. The consumer consumes the hot data in the MQ, that is, it calls the 1s call data every 1s and the 5s call data every 5s, performs the active queries and sends the query results to the server, and the server updates the first-level cache and the second-level cache in real time according to the query results.
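A possible consumer loop is sketched below, using Python queues and threads as stand-ins for real MQ topics, and re-enqueueing each key as a simplification (in the described system the log-analysis step would keep republishing hot keys). The `query_supplier` and `update_caches` callables are assumed to be supplied by the caller.

```python
import queue
import threading
import time

mq_1s, mq_5s = queue.Queue(), queue.Queue()    # stand-ins for two MQ topics

def consumer(mq, interval, query_supplier, update_caches):
    """Drain one topic every `interval` seconds, actively re-query each hot key
    at the supplier and push the result back into the caches."""
    while True:
        drained = []
        while not mq.empty():
            drained.append(mq.get())
        for key in drained:
            update_caches(key, query_supplier(key))
            mq.put(key)                        # re-enqueue so the key stays refreshed
        time.sleep(interval)

def start_consumers(query_supplier, update_caches):
    for mq, interval in ((mq_1s, 1), (mq_5s, 5)):
        threading.Thread(
            target=consumer, args=(mq, interval, query_supplier, update_caches),
            daemon=True,
        ).start()
```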
Referring to FIG. 4, when the above-described query method is performed, the following steps are performed:
S30, acquiring data: acquiring data in a first-level cache and a second-level cache;
S31, primary judgment: judging whether the first-level cache and the second-level cache have full data of the pre-sale period or not, and inquiring supplier data when the first-level cache and the second-level cache do not have the full data of the pre-sale period;
wherein the full data for the pre-sale period is all data in t days in the future, and t > 0.
S32, secondary judgment: and judging whether the supplier data contains the full data of the pre-sale period or not, when the supplier data contains the full data of the pre-sale period, filtering the full data of the pre-sale period, and storing the filtered full data of the pre-sale period into a second-level cache or a first-level cache and a second-level cache.
Specifically, if the pre-sale period of tickets is t days, the server acquires all data in the first-level cache and the second-level cache and judges whether the full data of the next t days exists in them; when it does not, the server queries the supplier data. The server then judges whether the full data of the next t days exists in the supplier data. When it does, the server performs field mapping, data format conversion and product strategy mapping on the data and judges whether hot data exists in the full data of the pre-sale period: when hot data exists, the hot data within the full data of the pre-sale period is stored in the first-level cache and all the full data of the pre-sale period is stored in the second-level cache; when no hot data exists, the full data of the pre-sale period is stored in the second-level cache. When a user queries pre-sale-period data through the client, the data can then be retrieved directly from the first-level or second-level cache.
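One way to express this pre-sale-period check, assuming the caches are keyed by date and that `query_supplier`, `filter_fn` and `is_hot` are supplied elsewhere, is the following sketch:

```python
from datetime import date, timedelta

def ensure_presale_window(l1_cache, l2_cache, query_supplier, filter_fn, is_hot, t_days):
    """Make sure the full data for the next `t_days` (the pre-sale period) is cached,
    fetching anything missing from the supplier, per steps S30-S32."""
    window = [date.today() + timedelta(days=d) for d in range(1, t_days + 1)]

    # S31: primary judgment, which days are missing from both cache levels?
    missing = [d for d in window if d not in l1_cache and d not in l2_cache]
    if not missing:
        return

    # S32: secondary judgment, take whatever the supplier has, filter and store it.
    # `query_supplier` is assumed to return a mapping of day -> raw record.
    for day, raw in query_supplier(missing).items():
        l2_cache[day] = filter_fn(raw)          # full pre-sale data always goes to L2
        if is_hot(day):
            l1_cache[day] = filter_fn(raw)      # hot pre-sale data also goes to L1
```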
The embodiment of the application also discloses an intelligent terminal, which comprises a memory and a processor, wherein the memory stores a computer program that can be loaded by the processor to execute the above high-concurrency query method.
The embodiment of the present application further discloses a computer-readable storage medium storing a computer program that can be loaded by a processor to execute the above high-concurrency query method. The computer-readable storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory or an optical disk.
The above embodiments are preferred embodiments of the present application, and the protection scope of the present application is not limited by them; all equivalent changes made according to the structure, shape and principle of the present application shall fall within the protection scope of the present application.
Claims (10)
1. A high concurrency query method is characterized by comprising the following steps:
receiving a query request of a client;
responding to a received query request to access a first-level cache, and sending query data to a client when the query data corresponding to the query request exists in the first-level cache; the first-level cache stores hot data updated in real time;
when the first-level cache does not have the query data corresponding to the query request, accessing the second-level cache; when the query data corresponding to the query request exists in the secondary cache, filtering the query data, and sending the filtered query data to the client for pre-display; the secondary cache stores full data which is not updated in real time;
after the query data after filtering processing is sent to a client, accessing supplier data, wherein the supplier data stores full data updated in real time; and acquiring query data corresponding to the query request in the supplier data, filtering the query data, and sending the filtered query data to the client, so that the query data in the supplier data covers the query data in the secondary cache.
2. The method according to claim 1, wherein the filtering of the query data is specifically configured to:
and after acquiring corresponding query data according to the received query request, performing field mapping, data format conversion and product strategy mapping on the query data.
3. The high concurrency query method according to claim 1, further comprising:
when the client displays the query data in the supplier data, judging whether the query data is hot data, and if so, updating a first-level cache and a second-level cache according to the query data in the supplier data; and when the judgment result is negative, updating the secondary cache according to the query data in the supplier data.
4. The high concurrency query method according to claim 1, further comprising:
generating a query log according to all received query requests;
generating hot data according to the query log and the analysis rule, and classifying the hot data;
the hot data are sent to the MQ, the hot data in the MQ are consumed, and the hot data are actively inquired;
and updating the primary cache and the secondary cache in real time according to the query result of the hot data.
5. The high concurrency query method according to claim 4, wherein the analysis rule in the generation of the hot data according to the query log and the analysis rule is specifically set as:
and acquiring the request frequency of each query data in the query log within a set time period, and setting the query data with the request frequency higher than a set value as hot data.
6. The high concurrency query method according to claim 4, wherein the classifying the hot data specifically comprises:
ordering hot data according to the request frequency of the hot data in a set time period;
dividing the hot data into t1 call data and t2 call data according to the sorting and classification rules of the hot data, wherein t1 and t2 are time periods for consuming the hot data in the MQ, and t1 < t2;
wherein the hot data with the highest request frequency is the t1 call data.
7. The highly concurrent query method according to claim 1, wherein: and actively inquiring the supplier data according to the data in the secondary cache in a set time period, and storing the inquiry result in the secondary cache.
8. The high concurrency query method according to claim 1, further comprising:
acquiring data in a first-level cache and a second-level cache;
judging whether the first-level cache and the second-level cache have full data of the pre-sale period or not, and inquiring supplier data when the first-level cache and the second-level cache do not have the full data of the pre-sale period;
wherein, the total data of the pre-sale period is all data in t days in the future, and t is greater than 0;
and judging whether the supplier data contains the full data of the pre-sale period or not, when the supplier data contains the full data of the pre-sale period, filtering the full data of the pre-sale period, and storing the filtered full data of the pre-sale period into a second-level cache or a first-level cache and a second-level cache.
9. An intelligent terminal, characterized by comprising a memory and a processor, said memory having stored thereon a computer program which can be loaded by the processor and which performs the method of any of claims 1-8.
10. A computer-readable storage medium, characterized by storing a computer program which can be loaded by a processor and which performs the method according to any of claims 1-8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011023623.2A CN112163001A (en) | 2020-09-25 | 2020-09-25 | High-concurrency query method, intelligent terminal and storage medium |
PCT/CN2020/134276 WO2022062184A1 (en) | 2020-09-25 | 2020-12-07 | High-concurrency query method, intelligent terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011023623.2A CN112163001A (en) | 2020-09-25 | 2020-09-25 | High-concurrency query method, intelligent terminal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112163001A true CN112163001A (en) | 2021-01-01 |
Family
ID=73863869
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011023623.2A Pending CN112163001A (en) | 2020-09-25 | 2020-09-25 | High-concurrency query method, intelligent terminal and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112163001A (en) |
WO (1) | WO2022062184A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113672640A (en) * | 2021-06-28 | 2021-11-19 | 深圳云之家网络有限公司 | Data query method and device, computer equipment and storage medium |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115022397B (en) * | 2022-06-16 | 2023-11-03 | 河南匠多多信息科技有限公司 | Interface parameter simplifying method and device, electronic equipment and storage medium |
CN115525686B (en) * | 2022-10-10 | 2023-06-13 | 中电金信软件有限公司 | Caching method and device for mapping configuration data |
CN115934583B (en) * | 2022-11-16 | 2024-07-12 | 智慧星光(安徽)科技有限公司 | Hierarchical caching method, device and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123238A (en) * | 2014-06-30 | 2014-10-29 | 海视云(北京)科技有限公司 | Data storage method and device |
CN108132958A (en) * | 2016-12-01 | 2018-06-08 | 阿里巴巴集团控股有限公司 | A kind of multi-level buffer data storage, inquiry, scheduling and processing method and processing device |
CN108984553A (en) * | 2017-06-01 | 2018-12-11 | 北京京东尚科信息技术有限公司 | Caching method and device |
CN109614404A (en) * | 2018-11-01 | 2019-04-12 | 阿里巴巴集团控股有限公司 | A kind of data buffering system and method |
CN109684358A (en) * | 2017-10-18 | 2019-04-26 | 北京京东尚科信息技术有限公司 | The method and apparatus of data query |
CN111291079A (en) * | 2020-02-20 | 2020-06-16 | 京东数字科技控股有限公司 | Data query method and device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7308426B1 (en) * | 1999-08-11 | 2007-12-11 | C-Sam, Inc. | System and methods for servicing electronic transactions |
CN102073726B (en) * | 2011-01-11 | 2014-08-06 | 百度在线网络技术(北京)有限公司 | Structured data import method and device for search engine system |
CN104935680B (en) * | 2015-06-18 | 2018-11-06 | 中国互联网络信息中心 | A kind of the recurrence Domain Name Service System and method of multi-layer shared buffer memory |
CN109446222A (en) * | 2018-08-28 | 2019-03-08 | 厦门快商通信息技术有限公司 | A kind of date storage method of Double buffer, device and storage medium |
- 2020-09-25 CN CN202011023623.2A patent/CN112163001A/en active Pending
- 2020-12-07 WO PCT/CN2020/134276 patent/WO2022062184A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2022062184A1 (en) | 2022-03-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |