CN106250492B - Index processing method and apparatus - Google Patents
Index processing method and apparatus
- Publication number
- CN106250492B CN201610623529.8A CN201610623529A
- Authority
- CN
- China
- Prior art keywords
- thread
- kernel
- subindex
- index
- reading
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/93—Document management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Data Mining & Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention provides an index processing method and apparatus. All cores of a server are divided into two core sets; read threads are restricted to run on the cores in the first core set, and write threads are restricted to run on the cores in the second core set. This prevents write threads from frequently modifying data in the cache and thereby lowering the hit rate of the data stored in the cache for read threads, and it increases the probability that a read thread, when it needs to read a first index, reads the first index directly from the cache, thereby improving query efficiency.
Description
Technical field
The present invention relates to the field of Internet technology, and in particular to an index processing method and apparatus.
Background art
A server can provide an information-publishing function and an information-query function for users. In general, the server manages documents by means of an index, where the index indicates the documents corresponding to each keyword.
In the prior art, the server uses different threads to provide the query function and the publishing function: a read thread provides the query function for users, and a write thread provides the publishing function for users. For the query function, the read thread needs to determine, according to the index, the keywords that match the query sequence entered by the user, and return the corresponding documents to the user according to the determined keywords; the read thread therefore performs read operations on the index. For the publishing function, the write thread needs to determine the keywords in the document published by the user and write into the index the correspondence between the determined keywords and the published document; the write thread therefore performs write operations on the index. When the read thread performs a read operation on the index, it first determines whether the index is stored in the cache memory (cache) of the core on which the read thread runs; if so, the index is read directly from the cache; otherwise, the index is read from memory and also stored into the cache, so that subsequent reads of the index can be served directly from the cache. When the write thread performs a write operation on the index, the data to be written is written into the cache of the core on which the write thread runs.
However, the prior art suffers from low query efficiency.
Summary of the invention
The present invention provides an index processing method and apparatus to solve the problem of low query efficiency in the prior art.
In a first aspect, the present invention provides an index processing method. The method is applied to a server in which all cores are divided into two core sets, where a read thread runs on the cores in the first core set, a write thread runs on the cores in the second core set, the read thread is used to provide an information-query function for users, and the write thread is used to provide an information-publishing function for users. The method includes:
the read thread reads a first index, where the first index indicates the correspondence between keywords and document identifiers;
the read thread determines, according to the first index and a query sequence entered by a user, the document identifier corresponding to the query sequence.
In one possible design, the first index includes N sub-indexes, where N is an integer greater than 0, and each of the N sub-indexes corresponds to K read threads, where K is an integer greater than or equal to 0.
Correspondingly, the read thread reading the first index includes: a first read thread reads a first sub-index, where the first read thread is a read thread corresponding to the first sub-index.
In one possible design, all cores in the first core set are divided into N core subsets, the N core subsets correspond one-to-one with the N sub-indexes, and the read threads corresponding to each sub-index run on the cores in the core subset corresponding to that sub-index.
In one possible design, before the reading of the index, the method further includes:
estimating the number of cores and the number of threads required by each sub-index;
determining, according to the number of cores required by each sub-index, the core subset corresponding to that sub-index;
determining, according to the number of threads required by each sub-index, the read threads corresponding to that sub-index.
In one possible design, the method further includes: the write thread updates a second index according to information to be published that is entered by a user.
In a second aspect, the present invention provides an index processing apparatus. The apparatus is a server in which all cores are divided into two core sets, where a read thread runs on the cores in the first core set, a write thread runs on the cores in the second core set, the read thread is used to provide an information-query function for users, and the write thread is used to provide an information-publishing function for users. The apparatus includes:
a first processing module of the read thread, configured to read a first index, where the first index indicates the correspondence between keywords and document identifiers;
a second processing module of the read thread, configured to determine, according to the first index and a query sequence entered by a user, the document identifier corresponding to the query sequence.
In one possible design, the first index includes N sub-indexes, where N is an integer greater than 0, and each of the N sub-indexes corresponds to K read threads, where K is an integer greater than or equal to 0.
Correspondingly, the processing module of the read thread includes: a processing submodule of a first read thread, configured to read a first sub-index, where the first read thread is a read thread corresponding to the first sub-index.
In one possible design, all cores in the first core set are divided into N core subsets, the N core subsets correspond one-to-one with the N sub-indexes, and the read threads corresponding to each sub-index run on the cores in the core subset corresponding to that sub-index.
In one possible design, the apparatus further includes an estimating module and a determining module.
The estimating module is configured to estimate the number of cores and the number of threads required by each sub-index.
The determining module is configured to: determine, according to the number of cores required by each sub-index, the core subset corresponding to that sub-index; and determine, according to the number of threads required by each sub-index, the read threads corresponding to that sub-index.
In one possible design, the apparatus further includes a processing module of the write thread, configured to update a second index according to information to be published that is entered by a user.
With the index processing method and apparatus provided by the present invention, all cores of the server are divided into two core sets, read threads are restricted to run on the cores in the first core set, and write threads are restricted to run on the cores in the second core set. This prevents write threads from frequently modifying data in the cache and thereby lowering the hit rate of the data stored in the cache for read threads, and it increases the probability that a read thread, when it needs to read the first index, reads the first index directly from the cache, thereby improving query efficiency.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of an application scenario of the index processing method of the present invention;
Fig. 2 is a flowchart of Embodiment 1 of the index processing method of the present invention;
Fig. 3 is a schematic diagram of an inverted index of the present invention;
Fig. 4 is a flowchart of Embodiment 2 of the index processing method of the present invention;
Fig. 5 is a comparison chart of query time consumption under different modes of the present invention;
Fig. 6 is a schematic diagram of dividing the first core set in the present invention;
Fig. 7 is a structural schematic diagram of Embodiment 1 of the index processing apparatus of the present invention.
Detailed description of the embodiments
In order to make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. It is apparent that the described embodiments are only a part, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The method of the present invention can be applied to any server that provides an information-query function and an information-publishing function. The server includes M processors, where M is an integer greater than 0, and each of the M processors may include one or more cores. All cores of the server are divided into two core sets, namely a first core set and a second core set. A read thread is bound to the first core set (that is, the read thread runs on the cores in the first core set), and a write thread is bound to the second core set (that is, the write thread runs on the cores in the second core set); the read thread is used to provide the query function for users, and the write thread is used to provide the publishing function for users. For example, as shown in Fig. 1, the server includes two processors, processor 1 and processor 2; processor 1 includes four cores (core 1, core 2, core 3, and core 4), and processor 2 includes two cores (core 5 and core 6). The core sets may, for example, be divided as follows: cores 1-5 are assigned to the first core set, and core 6 is assigned to the second core set. It should be noted that binding the read thread to the first core set restricts the read thread to run on the cores in the first core set, and binding the write thread to the second core set restricts the write thread to run on the cores in the second core set. In other words, the cores in the first core set run the read thread but not the write thread, and the cores in the second core set run the write thread but not the read thread.
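As a concrete illustration of this binding, the following is a minimal sketch assuming a Linux server and POSIX thread affinity (pthread_setaffinity_np); the core IDs mirror the Fig. 1 example, and the thread bodies are placeholders rather than the actual query and publishing logic.

```cpp
// Minimal sketch, assuming a Linux server and POSIX threads (compile with g++ -pthread).
// Core IDs follow the Fig. 1 example: cores 0-4 form the first core set, core 5 the second.
#include <pthread.h>
#include <sched.h>
#include <thread>
#include <vector>

// Restrict a thread to the given cores (Linux-specific, non-portable).
static void bind_to_cores(std::thread& t, const std::vector<int>& cores) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c : cores) CPU_SET(c, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &set);
}

void read_loop()  { /* serve query requests against the index */ }
void write_loop() { /* apply published documents to the index */ }

int main() {
    std::thread reader(read_loop);
    std::thread writer(write_loop);
    bind_to_cores(reader, {0, 1, 2, 3, 4});  // first core set: read thread only
    bind_to_cores(writer, {5});              // second core set: write thread only
    reader.join();
    writer.join();
    return 0;
}
```

Pinning at thread granularity keeps the query and publishing functions in one server process while still ensuring that a read thread and a write thread never share a core's cache.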
For the read thread, when reading the index, it first determines whether the index is stored in the cache of the core on which the read thread runs; if so, the index is read directly from the cache; otherwise, the index is read from memory and also stored into the cache, so that subsequent reads of the index can be served directly from the cache. For the write thread, the data to be written is written into the cache of the core on which the write thread runs. In the prior art, the operating system of the server assigns, based on load balancing, the core on which the read thread runs and the core on which the write thread runs, so a read thread and a write thread may run on the same core. When a read thread and a write thread run on the same core, the write thread frequently modifies the data in the core's cache, which lowers the hit rate of the data stored in the cache for the read thread: the read thread cannot read the required data directly from the cache and must read it from memory instead, so reading the data takes longer and query efficiency is reduced.
Fig. 2 is a flowchart of Embodiment 1 of the index processing method of the present invention. The method of this embodiment can be applied to any server that provides an information-query function and an information-publishing function. As shown in Fig. 2, the method of this embodiment may include:
Step 201: the read thread reads a first index.
In this step, the first index indicates the correspondence between keywords and document identifiers. The first index should be an index in the server that can provide the query function. The first index may, for example, be an inverted index, and the inverted index may, for example, be as shown in Fig. 3.
Step 202: the read thread determines, according to the first index and the query sequence entered by the user, the document identifier corresponding to the query sequence.
In this step, when the server is a forum server, the document identifier may specifically be a post identifier (ID, identification). The query sequence entered by the user may be sent to the server by a user device (for example, a mobile phone or a tablet computer) through a query request message. Step 202 may specifically be: matching the query sequence against the keywords in the first index, and determining, according to the keywords that match the query sequence, the document identifier corresponding to those keywords as the document identifier corresponding to the query sequence. For example, for the inverted index shown in Fig. 3, when the keywords matching the query sequence are keyword 1 + keyword 2, the document identifier corresponding to the query sequence is document identifier 3.
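The matching in step 202 can be viewed as intersecting the posting lists of the matched keywords in the inverted index. The sketch below illustrates this under that assumption (the index contents and names are hypothetical) and reproduces the Fig. 3 example in which keyword 1 + keyword 2 corresponds to document identifier 3.

```cpp
// Minimal sketch (index contents hypothetical): resolve a query by intersecting the
// posting lists of the matched keywords.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <unordered_map>
#include <vector>

using InvertedIndex = std::unordered_map<std::string, std::vector<int>>;  // keyword -> sorted doc IDs

std::vector<int> lookup(const InvertedIndex& index, const std::vector<std::string>& keywords) {
    std::vector<int> result;
    bool first = true;
    for (const auto& kw : keywords) {
        auto it = index.find(kw);
        if (it == index.end()) return {};  // an unmatched keyword yields no documents
        if (first) { result = it->second; first = false; continue; }
        std::vector<int> merged;
        std::set_intersection(result.begin(), result.end(),
                              it->second.begin(), it->second.end(),
                              std::back_inserter(merged));
        result = std::move(merged);
    }
    return result;
}

int main() {
    InvertedIndex index = {{"keyword1", {1, 3}}, {"keyword2", {2, 3}}};
    for (int doc : lookup(index, {"keyword1", "keyword2"}))
        std::cout << "matched document identifier " << doc << '\n';  // prints 3, as in Fig. 3
    return 0;
}
```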
In this embodiment, all cores of the server are divided into two core sets, read threads are restricted to run on the cores in the first core set, and write threads are restricted to run on the cores in the second core set. This prevents write threads from frequently modifying data in the cache and thereby lowering the hit rate of the data stored in the cache for read threads, and it increases the probability that a read thread, when it needs to read the first index, reads the first index directly from the cache, thereby improving query efficiency.
Fig. 4 is a flowchart of Embodiment 2 of the index processing method of the present invention. The method of this embodiment can be applied to any server that provides an information-query function and an information-publishing function. As shown in Fig. 4, the method of this embodiment may include:
Step 401: the read thread reads a first index.
In this step, the first index indicates the correspondence between keywords and document identifiers. The first index should be an index in the server that can provide the query function. The read thread reading the first index may specifically be: the read thread determines whether the first index is stored in the cache of the core on which the read thread runs; if so, the first index is read from the cache; otherwise, the first index is read from the memory of the server and stored into the cache. It should be noted that the core here should be a core in the first core set.
Optionally, the first index may include N sub-indexes, where N is an integer greater than 0, and each of the N sub-indexes corresponds to K read threads, where K is an integer greater than or equal to 0. Correspondingly, the read thread reading the first index includes: a first read thread reads a first sub-index, where the first read thread is a read thread corresponding to the first sub-index. That is, each sub-index is read by the read threads corresponding to that sub-index. Because the first index includes N sub-indexes and each sub-index corresponds to K read threads, the read threads corresponding to different sub-indexes can read their respective sub-indexes simultaneously, which further improves query efficiency. Fig. 5 shows a comparison of query time consumption under mode 1 and mode 2. In mode 1, the first index is divided into N sub-indexes, each sub-index corresponds to K read threads, and the cores on which the read threads and the write thread run are determined in the prior-art manner described above, so a read thread and a write thread may run on the same core. Mode 2 differs from mode 1 mainly in that, using the method of the present invention, the read threads are bound to the first core set and the write thread is bound to the second core set, so that a read thread and a write thread never run on the same core. As can be seen from Fig. 5, the query time consumption of mode 2 is greatly reduced compared with mode 1.
Optionally, all cores in the first core set are divided into N core subsets, the N core subsets correspond one-to-one with the N sub-indexes, and the read threads corresponding to each sub-index run on the cores in the core subset corresponding to that sub-index. The first core set may, for example, be divided as shown in Fig. 6, where it is assumed that the cores in the first core set are cores 1-5 and N=5. In Fig. 6, the correspondence between core subsets and sub-indexes may, for example, be: core subset 1 corresponds to sub-index 1, core subset 2 corresponds to sub-index 2, core subset 3 corresponds to sub-index 3, core subset 4 corresponds to sub-index 4, and core subset 5 corresponds to sub-index 5. It should be noted that, assuming the read threads corresponding to sub-index 1 are read thread 1 and read thread 2, and the core subset corresponding to sub-index 1 is core subset 1, this means that read thread 1 and read thread 2 run on the core in core subset 1 (that is, on core 1). By dividing all cores in the first core set into N core subsets that correspond one-to-one with the sub-indexes, and by running the read threads corresponding to each sub-index on the cores in the core subset corresponding to that sub-index, the range of cores on which each read thread runs is further restricted. This further improves the hit rate of the data stored in each core's cache, and thus further improves query efficiency.
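The Fig. 6 layout can be sketched in the same way as the earlier core-set binding, now at the granularity of core subsets: the K read threads of each sub-index are pinned to the cores of that sub-index's core subset. The sketch below again assumes Linux/POSIX thread affinity; N, K, and the core IDs are illustrative.

```cpp
// Minimal sketch of the Fig. 6 layout: the K read threads of sub-index i are pinned
// to core subset i (Linux-specific, compile with g++ -pthread).
#include <pthread.h>
#include <sched.h>
#include <thread>
#include <vector>

// Pin the calling thread to the given cores (non-portable).
static void pin_current_thread(const std::vector<int>& cores) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c : cores) CPU_SET(c, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
}

void serve_subindex(int subindex_id) { /* answer queries against this sub-index */ }

int main() {
    // N = 5 core subsets of the first core set; here each subset holds a single core.
    std::vector<std::vector<int>> core_subset = {{0}, {1}, {2}, {3}, {4}};
    const int K = 2;  // read threads per sub-index
    std::vector<std::thread> readers;
    for (int i = 0; i < static_cast<int>(core_subset.size()); ++i) {
        for (int k = 0; k < K; ++k) {
            readers.emplace_back([i, &core_subset] {
                pin_current_thread(core_subset[i]);  // e.g. read threads 1 and 2 of sub-index 1 on core subset 1
                serve_subindex(i + 1);
            });
        }
    }
    for (auto& t : readers) t.join();
    return 0;
}
```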
Optionally, before step 401, the method may further include: estimating the number of cores and the number of threads required by each sub-index; determining, according to the number of cores required by each sub-index, the core subset corresponding to that sub-index; and determining, according to the number of threads required by each sub-index, the read threads corresponding to that sub-index. Optionally, estimating the number of cores required by each sub-index may specifically be: determining the number of cores required by each sub-index according to the total number of cores in the server and a first scale factor (for example, the number of cores required by a sub-index = the total number of cores multiplied by the first scale factor), where the first scale factor is the ratio of the core resources occupied by that sub-index in the previous time period to the core resources occupied by all N sub-indexes. Estimating the number of threads required by each sub-index may specifically be: determining the number of threads required by each sub-index according to the total number of read threads in the server and a second scale factor (for example, the number of threads required by a sub-index = the total number of read threads multiplied by the second scale factor), where the second scale factor is the ratio of the thread resources occupied by that sub-index in the previous time period to the read-thread resources occupied by all N sub-indexes. It should be noted that estimating the number of cores and threads required by each sub-index, determining the core subset corresponding to each sub-index according to the required number of cores, and determining the read threads corresponding to each sub-index according to the required number of threads may specifically be performed by a thread in the server other than the read threads and the write thread.
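The estimate described above reduces to scaling the server's totals by each sub-index's share of recent usage: cores per sub-index is roughly the total core count times the first scale factor, and threads per sub-index is roughly the total read-thread count times the second scale factor. A minimal sketch of that arithmetic (structure and values are hypothetical):

```cpp
// Minimal sketch of the resource estimate:
//   cores_i   = total cores        * first scale factor of sub-index i
//   threads_i = total read threads * second scale factor of sub-index i,
// where each scale factor is the sub-index's share of usage in the previous time period.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct SubIndexPlan { int cores; int threads; };

std::vector<SubIndexPlan> plan(int total_cores, int total_read_threads,
                               const std::vector<double>& core_share,
                               const std::vector<double>& thread_share) {
    std::vector<SubIndexPlan> out;
    for (std::size_t i = 0; i < core_share.size(); ++i) {
        out.push_back({static_cast<int>(std::round(total_cores * core_share[i])),
                       static_cast<int>(std::round(total_read_threads * thread_share[i]))});
    }
    return out;
}

int main() {
    // Five sub-indexes sharing 5 read cores and 10 read threads (illustrative numbers).
    auto plans = plan(5, 10, {0.4, 0.2, 0.2, 0.1, 0.1}, {0.3, 0.3, 0.2, 0.1, 0.1});
    for (std::size_t i = 0; i < plans.size(); ++i)
        std::printf("sub-index %zu: %d core(s), %d read thread(s)\n",
                    i + 1, plans[i].cores, plans[i].threads);
    return 0;
}
```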
Step 402: the read thread determines, according to the first index and the query sequence entered by the user, the document identifier corresponding to the query sequence.
It should be noted that step 402 is similar to step 202 and is not described again here.
Step 403: the read thread obtains the corresponding document according to the document identifier corresponding to the query sequence.
In this step, the document may, for example, be a post. After the document is obtained, it can be returned to the user device through a query response message, so that the user device displays the document to the user.
Step 404: the write thread updates a second index according to the information to be published that is entered by the user.
In this step, the information to be published entered by the user may be sent to the server by the user device through a publishing request message. The second index indicates the correspondence between keywords and document identifiers. The second index should be an index in the server that can provide the publishing function. Optionally, the second index may be an index different from the first index described above, or may be the same index. By making the second index different from the first index, access conflicts caused by the read threads and the write thread needing to access the same index can be avoided, which further improves query efficiency. Optionally, when the second index is different from the first index, the second index and each of the sub-indexes described above have their own life cycles. When the life cycle of an index used for the publishing function (for example, index A) expires, index A is changed into an index used for the query function, and a new index (for example, index B) may be created for the publishing function. When the life cycle of an index used for the query function (for example, index C) expires, index C may be merged with another index used for the query function.
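The life-cycle handling described above amounts to rotating index roles: the publishing index becomes a query index when its life cycle ends and a fresh publishing index is created, while an expired query index is merged into another query index. A minimal sketch, with hypothetical types and a simplified merge:

```cpp
// Minimal sketch of the index life cycle: index A (publishing) expires and starts
// serving queries, index B is created for publishing, and an expired query index C
// is merged into another query index.
#include <cstddef>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

using Index = std::unordered_map<std::string, std::vector<int>>;  // keyword -> document identifiers

struct IndexManager {
    std::shared_ptr<Index> publishing = std::make_shared<Index>();  // "second index": receives writes
    std::vector<std::shared_ptr<Index>> querying;                   // indexes currently serving queries

    // Publishing index's life cycle ends: it starts serving queries, a fresh one takes the writes.
    void rotate_publishing() {
        querying.push_back(publishing);
        publishing = std::make_shared<Index>();
    }

    // A query index's life cycle ends: merge its postings into another query index and drop it.
    void merge_expired(std::size_t expired, std::size_t target) {
        for (const auto& entry : *querying[expired]) {
            auto& dst = (*querying[target])[entry.first];
            dst.insert(dst.end(), entry.second.begin(), entry.second.end());
        }
        querying.erase(querying.begin() + static_cast<std::ptrdiff_t>(expired));
    }
};

int main() {
    IndexManager mgr;
    (*mgr.publishing)["keyword1"].push_back(7);  // write thread records a published document
    mgr.rotate_publishing();                     // index A now serves queries; index B created
    return 0;
}
```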
It should be noted that there is no restriction on the order of execution between step 404 and steps 401-403.
Fig. 7 is a structural schematic diagram of Embodiment 1 of the index processing apparatus of the present invention. The index processing apparatus may be implemented as part or all of a server through software, hardware, or a combination of both. All cores in the server are divided into two core sets, where read threads run on the cores in the first core set, write threads run on the cores in the second core set, the read threads are used to provide the query function for users, and the write threads are used to provide the publishing function for users. As shown in Fig. 7, the index processing apparatus includes a first processing module 701 of the read thread and a second processing module 702 of the read thread. The first processing module 701 of the read thread is configured to read a first index, where the first index indicates the correspondence between keywords and document identifiers. The second processing module 702 of the read thread is configured to determine, according to the first index read by the first processing module 701 and the query sequence entered by the user, the document identifier corresponding to the query sequence.
The apparatus of this embodiment can be used to execute the technical solution of the method embodiment shown in Fig. 2; its implementation principles and technical effects are similar and are not described again here.
Embodiment 2 of the index processing apparatus
Optionally, on the basis of Embodiment 1 of the index processing apparatus of the present invention, the first index further includes N sub-indexes, where N is an integer greater than 0, and each of the N sub-indexes corresponds to K read threads, where K is an integer greater than or equal to 0.
Correspondingly, the first processing module 701 of the read thread includes: a processing submodule of a first read thread, configured to read a first sub-index, where the first read thread is a read thread corresponding to the first sub-index.
Optionally, all cores in the first core set are divided into N core subsets, the N core subsets correspond one-to-one with the N sub-indexes, and the read threads corresponding to each sub-index run on the cores in the core subset corresponding to that sub-index.
Optionally, the apparatus of this embodiment may further include an estimating module and a determining module. The estimating module is configured to estimate the number of cores and the number of threads required by each sub-index. The determining module is configured to: determine, according to the number of cores required by each sub-index, the core subset corresponding to that sub-index; and determine, according to the number of threads required by each sub-index, the read threads corresponding to that sub-index.
Further optionally, the apparatus of this embodiment may further include a processing module of the write thread, configured to update a second index according to the information to be published that is entered by the user.
The apparatus of this embodiment can be used to execute the technical solution of the method embodiment shown in Fig. 4; its implementation principles and technical effects are similar and are not described again here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be implemented by hardware controlled by program instructions. The aforementioned program may be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some or all of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. An index processing method, characterized in that the method is applied to a server in which all cores are divided into two core sets, wherein a read thread runs on the cores in the first core set, a write thread runs on the cores in the second core set, the read thread is used to provide an information-query function for users, and the write thread is used to provide an information-publishing function for users; the method comprises:
the read thread reads a first index, wherein the first index indicates the correspondence between keywords and document identifiers;
the read thread determines, according to the first index and a query sequence entered by a user, the document identifier corresponding to the query sequence, wherein the query sequence matches the keywords.
2. The method according to claim 1, characterized in that the first index includes N sub-indexes, N being an integer greater than 0, and each of the N sub-indexes corresponds to K read threads, K being an integer greater than or equal to 0;
correspondingly, the read thread reading the first index comprises: a first read thread reads a first sub-index, wherein the first read thread is a read thread corresponding to the first sub-index.
3. The method according to claim 2, characterized in that all cores in the first core set are divided into N core subsets, the N core subsets correspond one-to-one with the N sub-indexes, and the read threads corresponding to each sub-index run on the cores in the core subset corresponding to that sub-index.
4. The method according to claim 3, characterized in that, before the reading of the index, the method further comprises:
estimating the number of cores and the number of threads required by each sub-index;
determining, according to the number of cores required by each sub-index, the core subset corresponding to that sub-index;
determining, according to the number of threads required by each sub-index, the read threads corresponding to that sub-index.
5. The method according to any one of claims 1-4, characterized in that the method further comprises:
the write thread updates a second index according to information to be published that is entered by a user.
6. An index processing apparatus, characterized in that the apparatus is a server, or the apparatus is integrated in the server, and all cores in the server are divided into two core sets, wherein a read thread runs on the cores in the first core set, a write thread runs on the cores in the second core set, the read thread is used to provide an information-query function for users, and the write thread is used to provide an information-publishing function for users; the apparatus comprises:
a first processing module of the read thread, configured to read a first index, wherein the first index indicates the correspondence between keywords and document identifiers;
a second processing module of the read thread, configured to determine, according to the first index and a query sequence entered by a user, the document identifier corresponding to the query sequence, wherein the query sequence matches the keywords.
7. The apparatus according to claim 6, characterized in that the first index includes N sub-indexes, N being an integer greater than 0, and each of the N sub-indexes corresponds to K read threads, K being an integer greater than or equal to 0;
correspondingly, the first processing module of the read thread includes: a processing submodule of a first read thread, configured to read a first sub-index, wherein the first read thread is a read thread corresponding to the first sub-index.
8. The apparatus according to claim 7, characterized in that all cores in the first core set are divided into N core subsets, the N core subsets correspond one-to-one with the N sub-indexes, and the read threads corresponding to each sub-index run on the cores in the core subset corresponding to that sub-index.
9. The apparatus according to claim 8, characterized in that the apparatus further comprises an estimating module and a determining module;
the estimating module is configured to estimate the number of cores and the number of threads required by each sub-index;
the determining module is configured to: determine, according to the number of cores required by each sub-index, the core subset corresponding to that sub-index; and determine, according to the number of threads required by each sub-index, the read threads corresponding to that sub-index.
10. The apparatus according to any one of claims 6-9, characterized in that the apparatus further comprises:
a processing module of the write thread, configured to update a second index according to information to be published that is entered by a user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610623529.8A CN106250492B (en) | 2016-07-28 | 2016-07-28 | The processing method and processing device of index |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610623529.8A CN106250492B (en) | 2016-07-28 | 2016-07-28 | The processing method and processing device of index |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106250492A CN106250492A (en) | 2016-12-21 |
CN106250492B true CN106250492B (en) | 2019-11-19 |
Family
ID=57606919
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610623529.8A Active CN106250492B (en) | 2016-07-28 | 2016-07-28 | The processing method and processing device of index |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106250492B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109408613A (en) * | 2018-08-14 | 2019-03-01 | 广东神马搜索科技有限公司 | Index structure operating method, device and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1193600A2 (en) * | 1995-02-20 | 2002-04-03 | Hitachi, Ltd. | Memory control apparatus and its control method |
CN102171695A (en) * | 2008-10-05 | 2011-08-31 | 微软公司 | Efficient large-scale joining for querying of column based data encoded structures |
CN102346714A (en) * | 2011-10-09 | 2012-02-08 | 西安交通大学 | Consistency maintenance device for multi-kernel processor and consistency interaction method |
CN103207907A (en) * | 2013-03-28 | 2013-07-17 | 新浪网技术(中国)有限公司 | Method and device for combining index files |
CN104484131A (en) * | 2014-12-04 | 2015-04-01 | 珠海金山网络游戏科技有限公司 | Device and corresponding method for processing data of multi-disk servers |
CN105740164A (en) * | 2014-12-10 | 2016-07-06 | 阿里巴巴集团控股有限公司 | Multi-core processor supporting cache consistency, reading and writing methods and apparatuses as well as device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6832300B2 (en) * | 2002-03-20 | 2004-12-14 | Hewlett-Packard Development Company, L.P. | Methods and apparatus for control of asynchronous cache |
- 2016-07-28: application CN201610623529.8A filed in China; granted as CN106250492B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN106250492A (en) | 2016-12-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |