Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a convention should generally be interpreted in the sense in which one of skill in the art would understand it (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
An embodiment of the present disclosure provides a data query method, which comprises: receiving a query request, wherein the query request comprises at least attribute information of target message data; determining target index data from a plurality of index data based on the query request, wherein the target index data comprises a target file path associated with the attribute information of the target message data, and each index data in the plurality of index data comprises attribute information of historical message data and a file path of a file in which the historical message data is located; determining a target file from at least one file based on the target index data, wherein the file path of the target file is the target file path, and the at least one file is used for storing the historical message data; and acquiring the target message data from the target file.
An embodiment of the present disclosure also provides a data storage method for storing the historical message data, which comprises: obtaining a plurality of historical message data to be stored; parsing each historical message data in the plurality of historical message data to be stored to obtain attribute information of each historical message data; storing the plurality of historical message data to be stored in at least one file in a distributed file system; and, for each historical message data, storing the attribute information of the historical message data and the file path of the file in which the historical message data is located in a bitmap database in an associated manner, to obtain index data for each historical message data.
Fig. 1 schematically illustrates a system architecture of a data query method and a data storage method according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include forwarding devices 101, 102, 103, a network 104, and a server 105. Network 104 is the medium used to provide communication links between forwarding devices 101, 102, 103 and server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
Forwarding devices 101, 102, 103 interact with server 105 through network 104 to receive or send messages, etc. Forwarding devices 101, 102, 103 may include, but are not limited to, routers, switches, gateways, and the like.
The server 105 may be a server providing various services, such as providing storage functions (by way of example only) for message data from the forwarding devices 101, 102, 103. The server 105 may analyze the received query request and obtain target message data for the query request.
It should be noted that the data query method and the data storage method provided in the embodiments of the present disclosure may be generally executed by the server 105. Accordingly, the data querying device and the data storage device provided by the embodiments of the present disclosure may be generally disposed in the server 105. The data query method and the data storage method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the forwarding devices 101, 102, 103 and/or the server 105. Accordingly, the data querying means and the data storage means provided by the embodiments of the present disclosure may also be provided in a server or a server cluster different from the server 105 and capable of communicating with the forwarding devices 101, 102, 103 and/or the server 105.
It should be understood that the number of forwarding devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of forwarding devices, networks, and servers, as desired for implementation.
Fig. 2 is a schematic diagram of a data query method and a data storage method according to an embodiment of the present disclosure.
As shown in fig. 2, a plurality of historical message data 210 to be stored is obtained from, for example, a router, a switch, or the like. As an example, the plurality of historical message data 210 includes historical message data 211, historical message data 212, historical message data 213, and historical message data 214.
Data analysis is performed on each historical message data to obtain attribute information of each historical message data. For example, the attribute information of the historical message data 211 is "attribute A", the attribute information of the historical message data 212 is "attribute B", the attribute information of the historical message data 213 is "attribute C", and the attribute information of the historical message data 214 is "attribute D". The attribute information may include a four-tuple of the message, where the four-tuple includes a source IP address, a destination IP address, a source port, and a destination port.
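For illustration only, the following Python sketch shows one way such attribute information might be extracted from a raw IPv4 packet; the function name and the assumption that the buffer starts at the IPv4 header (no link-layer header, no IP options) are illustrative and not part of the embodiment.

```python
import socket
import struct

def parse_attribute_info(raw: bytes) -> dict:
    """Extract the four-tuple and protocol from a raw IPv4 packet.

    Simplifying assumption: the buffer starts at the IPv4 header and the
    transport-layer (TCP/UDP) header follows it immediately.
    """
    header_len = (raw[0] & 0x0F) * 4              # IHL field, in bytes
    protocol = raw[9]                             # 6 = TCP, 17 = UDP
    src_ip = socket.inet_ntoa(raw[12:16])
    dst_ip = socket.inet_ntoa(raw[16:20])
    src_port, dst_port = struct.unpack("!HH", raw[header_len:header_len + 4])
    return {"src_ip": src_ip, "dst_ip": dst_ip,
            "src_port": src_port, "dst_port": dst_port,
            "protocol": protocol}
```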
Next, the plurality of historical message data 210 is stored in a plurality of files 221 and 222, and the plurality of files 221 and 222 may be stored in a distributed file system. For example, the historical message data 211 and 212 are compressed and stored in the file 221, and the historical message data 213 and 214 are compressed and stored in the file 222. The file path of the file 221 is, for example, "path a", and the file path of the file 222 is, for example, "path b"; the corresponding file can be found through its file path.
The attribute information of each historical message data and the file path of the file in which it is stored are stored in association to obtain an index file 230, and the index file 230 is stored in, for example, a bitmap database. The index file 230 includes a plurality of index data in one-to-one correspondence with the plurality of historical message data. The plurality of index data includes, for example, association data of "attribute A" and "path a", association data of "attribute B" and "path a", association data of "attribute C" and "path b", and association data of "attribute D" and "path b".
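As a hedged illustration of the storage bookkeeping described above, the sketch below compresses a batch of message data into one file and records one index entry per message, with a plain Python list standing in for the bitmap database; the names (store_batch, index_entries) and the JSON-lines layout are assumptions of this sketch, not a required format of the embodiment.

```python
import gzip
import json

index_entries = []   # stand-in for the index data held in the bitmap database

def store_batch(messages, file_path):
    """Compress a batch of historical message data into one file and record
    one index entry (attribute information + file path) per message."""
    with gzip.open(file_path, "wt", encoding="utf-8") as f:
        for msg in messages:          # msg: {"attributes": {...}, "payload": "..."}
            f.write(json.dumps(msg) + "\n")
    for msg in messages:
        index_entries.append({"attributes": msg["attributes"],
                              "file_path": file_path})
```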
After the historical message data are stored in the plurality of files 221 and 222 and the index file 230 is generated, the target message data 260 may be obtained from the plurality of historical message data based on the received query request 240. Specifically, the query request 240 includes attribute information of the required target message data; for example, the attribute information included in the query request 240 is "attribute A".
Then, based on "attribute A" in the query request 240, the target index data 250 is determined from the plurality of index data in the index file 230; the target index data 250 is, for example, the association data of "attribute A" and "path a". Then, based on "path a" in the target index data 250, the file 221 whose file path is "path a" is determined as the target file from among the plurality of files 221 and 222. The file 221 is then decompressed, and the historical message data 211 whose attribute information is "attribute A" is acquired from the decompressed file 221 as the target message data 260, based on "attribute A" in the query request 240.
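Continuing the same illustrative sketch, the query flow of fig. 2 could then look roughly as follows; the index entry structure is the one assumed in the storage sketch above, not a prescribed format.

```python
import gzip
import json

def query(index_entries, attribute_key, attribute_value):
    """Look up matching index entries, then read only the referenced files
    and keep the messages whose attribute information matches the request."""
    target_paths = {entry["file_path"] for entry in index_entries
                    if entry["attributes"].get(attribute_key) == attribute_value}
    results = []
    for path in target_paths:          # e.g. only "path a" for "attribute A"
        with gzip.open(path, "rt", encoding="utf-8") as f:
            for line in f:
                msg = json.loads(line)
                if msg["attributes"].get(attribute_key) == attribute_value:
                    results.append(msg)
    return results
```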
The data query method and the data storage method of the embodiments of the present disclosure are described below in conjunction with the schematic diagram of fig. 2.
Fig. 3 schematically illustrates a flow chart of a data query method according to an embodiment of the disclosure.
As shown in fig. 3, the method may include, for example, the following operations S310 to S340.
In operation S310, a query request is received, where the query request includes at least attribute information of the target message data. The query request is used, for example, to query the target message data from a plurality of stored historical message data.
In operation S320, target index data is determined from among the plurality of index data based on the query request.
In the embodiment of the disclosure, the plurality of index data are in one-to-one correspondence with the plurality of historical message data; that is, each index data includes the attribute information of the corresponding historical message data and the file path of the file in which that historical message data is located. The plurality of index data are stored, for example, in a bitmap database, which has an advantage in terms of query efficiency.
Based on the attribute information in the query request, the index data having that attribute information is determined from the plurality of index data as the target index data; the target index data includes the target file path associated with the attribute information.
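As a rough, non-authoritative illustration of why a bitmap database suits such attribute lookups, the toy index below maps each (field, value) pair to an integer bitmap over index records and intersects bitmaps to evaluate combined predicates; a real system would typically use compressed (e.g., roaring) bitmaps, and all names here are illustrative.

```python
from collections import defaultdict

class ToyBitmapIndex:
    """Toy bitmap index: each (field, value) pair maps to an integer whose
    set bits are the positions of matching index records."""

    def __init__(self):
        self.bitmaps = defaultdict(int)
        self.records = []                  # position -> (attributes, file_path)

    def add(self, attributes, file_path):
        pos = len(self.records)
        self.records.append((attributes, file_path))
        for field, value in attributes.items():
            self.bitmaps[(field, value)] |= 1 << pos

    def lookup(self, **criteria):
        """AND all (field, value) predicates; return file paths of matches."""
        hits = ~0                          # all-ones mask
        for field, value in criteria.items():
            hits &= self.bitmaps.get((field, value), 0)
        return {self.records[pos][1]
                for pos in range(len(self.records)) if hits >> pos & 1}
```

With the data of the earlier example loaded, a call such as lookup(dst_port=31, protocol="Telnet") would return only the file paths that can contain matching messages, so only those files need to be opened.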
In operation S330, a target file is determined from the at least one file based on the target index data. The file path of the target file is the target file path, and the at least one file is used for storing the historical message data.
In an embodiment of the present disclosure, the at least one file is stored, for example, in a distributed file system; each file includes a plurality of historical message data stored in compressed form, and each file has a file path. Based on the target file path in the target index data, the file whose file path is consistent with the target file path is determined from the at least one file as the target file.
In operation S340, target message data is acquired from the target file.
In an embodiment of the present disclosure, the target file includes a plurality of historical message data. Based on the attribute information of the target message data in the query request, at least one historical message data whose attribute information matches the attribute information of the target message data can be determined from the plurality of historical message data stored in the target file as the target message data.
For example, after the target file is determined, since a plurality of historical message data are stored in the target file, the historical message data whose attribute information coincides with the attribute information in the query request can be acquired from the target file as the desired target message data, based on the attribute information in the query request.
In the embodiment of the disclosure, storing the historical message data in the distributed file system preserves the originality of the historical message data, so that when messages are later queried for forensics, the unprocessed original message data can be obtained from the distributed file system. In addition, after the historical message data are stored in the distributed file system, the file path and the attribute information of each message are stored in association in the bitmap database as index data. The index data can therefore be conveniently searched from the bitmap database based on the attribute information to obtain the target file path of the file storing the target message data, and the unprocessed target message data can then be obtained from that file based on the target file path, which improves the query speed and reduces the resource consumption of data query.
FIG. 4 is a schematic diagram of index data storage according to an embodiment of the present disclosure.
As shown in FIG. 4, a plurality of historical message data 410 includes, for example, historical message data 411 to 418. The historical message data 411 and 412 are stored in a file 421, the historical message data 413 and 414 are stored in a file 422, the historical message data 415 and 416 are stored in a file 423, and the historical message data 417 and 418 are stored in a file 424, for example. The file path of the file 421 is "path a", the file path of the file 422 is "path b", the file path of the file 423 is "path c", and the file path of the file 424 is "path d".
The plurality of index data in one-to-one correspondence with the plurality of historical message data are stored, for example, in a plurality of first databases. For example, the index data corresponding one-to-one to the historical message data 411 to 414 are stored in a first database 431, and the index data corresponding one-to-one to the historical message data 415 to 418 are stored in a first database 432. The database identification of the first database 431 is, for example, "first database P", and the database identification of the first database 432 is, for example, "first database Q". Both the first database 431 and the first database 432 include a bitmap database.
The attribute information of each historical message data comprises any one or more of a source IP address, a destination IP address, a source port, a destination port, and a data transmission protocol. The attribute information of each historical message data and the file path of the file in which it is stored are stored in association in a first database.
In addition, each historical message data has a timestamp that characterizes, for example, the time at which the message was generated. For ease of understanding, taking the timestamp of the historical message data 411 as "20200101" as an example, the historical message data 411 was generated on January 1, 2020. The timestamp of each historical message data may also be expressed down to a specific moment; for example, it may be expressed as "20200101163020", indicating that the historical message data was generated at 16:30:20 on January 1, 2020. The representation form of the timestamp is not particularly limited in the embodiments of the present disclosure.
The index data are stored in the first databases according to the timestamps of the historical message data. Each index data includes, for example, the timestamp of the historical message data, the attribute information, and the file path. For example, each first database may store 4 index data. The index data corresponding to the plurality of historical message data 411 to 418 are stored sequentially into the first database 431 in order of timestamp from small (early) to large (late), so that the first database 431 stores the index data corresponding to the historical message data 411 to 414. After the first database 431 is full with 4 index data, the index data corresponding to the remaining historical message data 415 to 418 are stored sequentially into the first database 432 according to their timestamps.
Next, the database identification of each first database is stored in association with an index data identification in a second database 440. The index data identification characterizes the index data stored in that first database; for example, the index data identification includes the timestamp range of the index data stored in the first database. The timestamp range is characterized, for example, by a minimum timestamp and a maximum timestamp.
Taking the first database 431 as an example, the minimum timestamp of the index data in the first database 431 is "20200101", and the maximum timestamp is "20200104". The database identification "first database P" of the first database 431 is stored in the second database 440 in association with the minimum timestamp "20200101" and the maximum timestamp "20200104". The procedure for the first database 432 is the same or similar and will not be described in detail here.
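A minimal sketch of this two-level layout, with plain in-memory dictionaries standing in for the first databases and the second database; the capacity of 4 records and the identifier scheme simply mirror the example above and are not prescriptive.

```python
FIRST_DB_CAPACITY = 4     # mirrors the example: 4 index data per first database

first_databases = {}      # database identification -> list of index records
second_database = {}      # database identification -> (min timestamp, max timestamp)

def store_index_records(records):
    """Fill first databases in timestamp order and register each database's
    timestamp range in the second database, as in FIG. 4."""
    records = sorted(records, key=lambda r: r["timestamp"])
    for start in range(0, len(records), FIRST_DB_CAPACITY):
        chunk = records[start:start + FIRST_DB_CAPACITY]
        db_id = f"first database {start // FIRST_DB_CAPACITY}"
        first_databases[db_id] = chunk
        second_database[db_id] = (chunk[0]["timestamp"], chunk[-1]["timestamp"])
```

Each record here would be a dictionary such as {"timestamp": "20200101", "attributes": {...}, "file_path": "path a"}, which is an assumed structure for this sketch.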
The process of determining target index data according to an embodiment of the present disclosure is described below in conjunction with the schematic diagram of fig. 4 and the flowchart of fig. 5.
Fig. 5 schematically illustrates a flow chart of determining target index data according to an embodiment of the disclosure.
As shown in fig. 5, determining the target index data from the plurality of index data based on the query request in operation S320 described above may include the following operations S521 to S524.
In operation S521, an index data identification indicated by the query request is determined from the second database based on the query request.
In an embodiment of the present disclosure, the query request further includes a target time range, and the timestamp of the target message data is within the target time range. For example, when the target message data needs to be queried, a target time range in which the target message data was generated may be specified; for example, the target time range is from January 2, 2020 to January 3, 2020, and the time at which the target message data was generated is within this target time range.
Then, based on the target time range in the query request, the index data identification indicated by the query request, for example, a minimum timestamp of "20200101" and a maximum timestamp of "20200104", is determined from the second database. It can be seen that the timestamp range (from January 1, 2020 to January 4, 2020) of the index data identification indicated by the query request includes the target time range (from January 2, 2020 to January 3, 2020).
At operation S522, at least one database identification associated with the indicated index data identification is determined from the second database based on the index data identification indicated by the query request.
For example, the database identification associated with the index data identification indicated by the query request (the minimum timestamp "20200101" and the maximum timestamp "20200104") is "first database P".
In operation S523, at least one first database corresponding to the at least one database identification is determined based on the at least one database identification. For example, a first database 431 corresponding to the database identification "first database P" is determined.
In operation S524, index data having attribute information matching attribute information of the target message data is determined as target index data from among the index data stored in the at least one first database.
In an example, when the attribute information included in the query request is "destination port 31", index data including "destination port 31" is determined from the first database 431 as target index data including, for example, a file path "path a" therein. Next, target message data is acquired from the file 421 corresponding to the "path a", for example, one or more pieces of history message data with the destination port "31" are acquired from the file 421 as target message data.
In another example, when the attribute information included in the query request is "Telnet protocol", a plurality of index data including "Telnet protocol" are determined from the first database 431 as target index data including, for example, file paths "path a" and "path b" therein. Next, target message data is acquired from the file 421 corresponding to the "path a" and the file 422 corresponding to the "path b", for example, one or more pieces of history message data having a data transfer protocol of "Telnet protocol" are acquired from the file 421, one or more pieces of history message data having a data transfer protocol of "Telnet protocol" are acquired from the file 422, and the acquired history message data is taken as the target message data.
In another example, when the attribute information included in the query request is "destination port 31 or 32" and "Telnet protocol", a piece of index data including "destination port 31 or 32" and "Telnet protocol" is determined from the first database 431 as the target index data, which includes, for example, the file path "path a". Next, the target message data is acquired from the file 421 corresponding to "path a"; for example, one or more pieces of history message data whose destination port is "31 or 32" and whose data transfer protocol is the "Telnet protocol" are acquired from the file 421 as the target message data.
In another example, when the target time range in the query request is from January 4, 2020 to January 5, 2020, the index data identifications indicated by the query request are, for example, "20200101" and "20200104", and "20200105" and "20200108". The first databases corresponding to these index data identifications include, for example, the first database 431 and the first database 432. Next, the target index data including the attribute information is determined from the first database 431 and the first database 432 based on the attribute information in the query request, and the target message data is acquired from the corresponding file based on the file path included in the target index data.
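Putting operations S521 to S524 together, a hedged sketch of the two-level lookup might read as follows, reusing the first_databases/second_database structures assumed in the sketch after the description of FIG. 4; the overlap test on timestamp ranges is a simplification of the "range includes the target time range" wording above.

```python
def find_target_index_data(second_database, first_databases,
                           start_ts, end_ts, **criteria):
    """Rough sketch of operations S521-S524: use the second database to pick
    the first databases whose timestamp range covers the target time range,
    then match attribute information only inside those databases."""
    candidate_ids = [db_id for db_id, (lo, hi) in second_database.items()
                     if lo <= end_ts and hi >= start_ts]       # S521/S522
    targets = []
    for db_id in candidate_ids:                                # S523
        for record in first_databases[db_id]:                  # S524
            if all(record["attributes"].get(k) == v
                   for k, v in criteria.items()):
                targets.append(record)
    return targets
```

In the example above, a call such as find_target_index_data(second_database, first_databases, "20200102", "20200103", dst_port=31) would consult only the first database identified as "first database P"; the attribute key names are assumptions of this sketch.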
It can be appreciated that, in the embodiment of the disclosure, by storing the index data of the historical message data in the plurality of first databases and establishing the second database for indexing the plurality of first databases, when the target message data is queried, the corresponding first database is first determined from the second database, and then the file path of the file in which the target message data is stored is determined from the determined first database, without traversing all of the first databases, thereby improving the query efficiency and reducing the computing resources consumed by the query.
In an embodiment of the disclosure, the at least one file is a file in a distributed file system, and the at least one file is in one-to-one correspondence with at least one preset time range; for each file in the at least one file, the message generation time of each historical message data stored in that file is within the preset time range corresponding to that file.
For example, the at least one file includes file 1, file 2, file 3, and the like, and the message generation times of the historical message data stored in each file fall, for example, within one hour. For example, the preset time range corresponding to file 1 is from 00:00:00 to 00:59:59 on January 1, 2020; the preset time range corresponding to file 2 is from 01:00:00 to 01:59:59 on January 1, 2020; and the preset time range corresponding to file 3 is from 02:00:00 to 02:59:59 on January 1, 2020. Taking the historical message data stored in file 1 as an example, the message generation time of each historical message data stored in file 1 is within the range from 00:00:00 to 00:59:59 on January 1, 2020.
For each of the at least one file, the plurality of historical message data stored in the file are compressed into a plurality of subfiles. Taking file 1 as an example, the plurality of historical message data stored in file 1 are compressed into, for example, a subfile 11, a subfile 12, a subfile 13, and the like. For example, 3000 pieces of historical message data are generated in sequence within the time range from 00:00:00 to 00:59:59 on January 1, 2020, and each subfile can store 1000 pieces of historical message data. In the process of sequentially generating the 3000 pieces of historical message data, the generated historical message data are stored in sequence in the subfile 11; after 1000 pieces of historical message data have been stored in the subfile 11, the subsequently generated historical message data are stored in sequence in the subfile 12 until 1000 pieces of historical message data have been stored in the subfile 12; and the messages generated after that are stored in sequence in the subfile 13.
For each subfile, the plurality of historical message data in the subfile are compressed in turn. Taking the subfile 11 as an example, the 1000 pieces of historical message data stored in the subfile 11 are, for example, compressed and stored sequentially.
In one embodiment, for the subfile 11, the received historical message data may be compressed at preset intervals, and the preset interval may be 1 minute. For example, the 200 pieces of historical message data received within one minute are compressed to obtain a preliminarily compressed subfile, and then the 300 pieces of historical message data newly received within the following minute are compressed into the preliminarily compressed subfile. The newly received 300 pieces of historical message data can be compressed into the preliminarily compressed subfile by a streaming compression technique, which has the ability to continuously compress new data into an already compressed file. Thus, the historical message data newly received in each minute can be continuously compressed into the previously compressed subfile until the subfile 11 holds 1000 pieces of historical message data, and the 1000 pieces of historical message data stored in the subfile 11 are finally compressed into one file.
In another embodiment, for the subfile 11, a compression process may be performed once for each preset amount of historical message data, for example, every 200 pieces. For example, the first 200 pieces of received historical message data are compressed to obtain a preliminarily compressed subfile, and then the next 200 pieces of newly received historical message data are compressed into the preliminarily compressed subfile. The newly received 200 pieces of historical message data can be compressed into the preliminarily compressed subfile by the streaming compression technique. Every 200 newly received pieces of historical message data can thus be continuously compressed into the previously compressed subfile until the subfile 11 holds 1000 pieces of historical message data, and the 1000 pieces of historical message data stored in the subfile 11 are finally compressed into one file.
In the embodiment of the disclosure, compressing the historical message data multiple times with the streaming compression technique to obtain the subfile can reduce the peak consumption of storage space. For example, if one waited to receive all 1000 pieces of historical message data before compressing them into the subfile 11, the received message data would occupy a larger storage space during the waiting period because they are not yet compressed. Compressing the historical message data multiple times with the streaming compression technique therefore reduces the storage space occupied by the historical message data.
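As one possible realization of the streaming compression described above (a sketch assuming zlib-style incremental compression; the class name, capacity, and file layout are illustrative), newly received message data can be appended to an already compressed subfile without recompressing what was written earlier:

```python
import zlib

class StreamingSubfile:
    """Append newly received message data to one compressed subfile without
    recompressing the previously written data (streaming compression)."""

    def __init__(self, path, capacity=1000):
        self.path = path
        self.capacity = capacity
        self.count = 0
        self._compressor = zlib.compressobj()
        self._out = open(path, "wb")

    def append_batch(self, messages):
        """Compress one batch, e.g. the messages received in the last minute."""
        for msg in messages:
            self._out.write(self._compressor.compress((msg + "\n").encode("utf-8")))
            self.count += 1
        # emit what has been compressed so far while keeping the stream open
        self._out.write(self._compressor.flush(zlib.Z_SYNC_FLUSH))
        if self.count >= self.capacity:
            self.close()                         # subfile is full: finish the stream

    def close(self):
        self._out.write(self._compressor.flush())   # Z_FINISH terminates the stream
        self._out.close()
```

Once the subfile has been closed, a single zlib.decompress of the file contents recovers all appended messages; before the stream is finished, a zlib.decompressobj would be needed to read the partial data.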
In embodiments of the present disclosure, since each file has multiple subfiles, each subfile also has, for example, a file name. The file path of each file may also include the file names of its subfiles, so that the target message data can be conveniently acquired from the corresponding subfile in the target file, thereby improving the data acquisition speed.
Fig. 6 schematically illustrates a flow chart of a data storage method according to an embodiment of the present disclosure.
As shown in fig. 6, the method may include, for example, the following operations S610 to S650.
In operation S610, history message data to be stored is acquired.
In operation S620, each history message data in the history message data to be stored is parsed, and attribute information of each history message data is obtained.
Wherein the attribute information includes any one or more of a source IP address, a destination IP address, a source port, a destination port, and a data transfer protocol.
In operation S630, the history message data to be stored is stored in at least one file in the distributed file system, and a file path of the file in which each history message data is located is recorded.
In operation S640, for each history message data, attribute information of the history message data and a file path of a file in which the history message data is located are determined as index information.
In operation S650, the index information is stored in the bitmap database in an associated manner.
In an embodiment of the present disclosure, the at least one file is in one-to-one correspondence with at least one preset time range. Storing the historical message data to be stored in the at least one file in the distributed file system comprises: for each historical message data, determining the message generation time of the historical message data, and storing the historical message data in one of the at least one file based on the message generation time and the at least one preset time range, wherein the message generation time is within the preset time range corresponding to the file in which the historical message data is stored.
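For illustration, the selection of the file by preset time range could be sketched as below, assuming one-hour ranges and a purely illustrative directory layout:

```python
from collections import defaultdict
from datetime import datetime

def bucket_by_preset_range(messages, root="/message-store"):
    """Group historical message data by the one-hour preset time range that
    contains each message's generation time."""
    buckets = defaultdict(list)
    for msg in messages:
        generated_at: datetime = msg["generated_at"]
        buckets[f"{root}/{generated_at:%Y%m%d%H}"].append(msg)
    return buckets    # file path -> historical message data to store in that file
```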
In an embodiment of the present disclosure, for each of the at least one file, the plurality of historical message data stored in the file are compressed into a plurality of subfiles. For each subfile, the received at least one historical message data are compressed to obtain a preliminarily compressed subfile, and the newly received at least one historical message data are then compressed into the preliminarily compressed subfile.
In the embodiment of the disclosure, storing the historical message data in the distributed file system preserves the originality of the historical message data, so that when messages are later queried for forensics, the unprocessed original message data can be obtained from the distributed file system. In addition, after the historical message data are stored in the distributed file system, the file path and the attribute information of each message are stored in association in the bitmap database as index data. The index data can therefore be conveniently searched from the bitmap database based on the attribute information to obtain the target file path of the file storing the target message data, and the unprocessed target message data can then be obtained from that file based on the target file path, which improves the query speed and reduces the resource consumption of data query.
Fig. 7 schematically illustrates a block diagram of a data querying device in accordance with an embodiment of the present disclosure.
As shown in fig. 7, the data query apparatus 700 may include a receiving module 710, a first determining module 720, a second determining module 730, and a first obtaining module 740.
The receiving module 710 may be configured to receive a query request, where the query request includes at least attribute information of the target message data. The receiving module 710 may, for example, perform operation S310 described above with reference to fig. 3 according to an embodiment of the present disclosure, which is not described herein.
The first determining module 720 may be configured to determine, based on the query request, target index data from a plurality of index data, where the target index data includes a target file path associated with attribute information of target message data, and each index data in the plurality of index data includes attribute information of historical message data and a file path of a file in which the historical message data is located. The first determining module 720 according to the embodiment of the present disclosure may, for example, perform the operation S320 described above with reference to fig. 3, which is not described herein.
The second determining module 730 may be configured to determine, based on the target index data, a target file from at least one file, where a file path of the target file is a target file path, and the at least one file is configured to store historical message data. The second determining module 730 may, for example, perform operation S330 described above with reference to fig. 3 according to an embodiment of the present disclosure, which is not described herein.
The first obtaining module 740 may be configured to obtain target message data from a target file. According to an embodiment of the present disclosure, the first obtaining module 740 may, for example, perform the operation S340 described above with reference to fig. 3, which is not described herein.
Fig. 8 schematically illustrates a block diagram of a data storage device according to an embodiment of the present disclosure.
As shown in fig. 8, the data storage device 800 may include a second acquisition module 810, a parsing module 820, a first storage module 830, a third determination module 840, and a second storage module 850.
The second obtaining module 810 may be configured to obtain historical message data to be stored. The second obtaining module 810 may, for example, perform operation S610 described above with reference to fig. 6 according to an embodiment of the present disclosure, which is not described herein.
The parsing module 820 may be configured to parse each of the historical message data to be stored to obtain attribute information of each of the historical message data. According to an embodiment of the present disclosure, the parsing module 820 may perform, for example, operation S620 described above with reference to fig. 6, which is not described herein.
The first storage module 830 may be configured to store a plurality of historical message data to be stored in at least one file in the distributed file system, and record a file path of the file in which each historical message data is located. According to an embodiment of the present disclosure, the first storage module 830 may perform, for example, operation S630 described above with reference to fig. 6, which is not described herein.
The third determining module 840 may be configured to determine, for each historical message data, attribute information of the historical message data and a file path of the file in which the historical message data is located as index information. According to an embodiment of the present disclosure, the third determining module 840 may perform, for example, operation S640 described above with reference to fig. 6, which is not described herein.
The second storage module 850 may be used to store the index information association to the bitmap database. The second storage module 850 may, for example, perform operation S650 described above with reference to fig. 6, which is not described herein.
Any number of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner that integrates or packages a circuit, or implemented in any one of, or a suitable combination of, the three implementation manners of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules, which, when executed, may perform the corresponding functions.
FIG. 9 schematically illustrates a block diagram of a computer system suitable for data querying and data storage in accordance with an embodiment of the present disclosure. The computer system illustrated in fig. 9 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 9, a computer system 900 according to an embodiment of the present disclosure includes a processor 901, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage portion 908 into a Random Access Memory (RAM) 903. The processor 901 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 901 may also include on-board memory for caching purposes. Processor 901 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 903, various programs and data necessary for the operation of the system 900 are stored. The processor 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. The processor 901 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 902 and/or the RAM 903. Note that the program may be stored in one or more memories other than the ROM 902 and the RAM 903. The processor 901 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the disclosure, the system 900 may also include an input/output (I/O) interface 905, which is also connected to the bus 904. The system 900 may also include one or more of the following components connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output portion 907 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 908 including a hard disk and the like; and a communication portion 909 including a network interface card such as a LAN card or a modem. The communication portion 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as needed, so that a computer program read therefrom is installed into the storage portion 908 as needed.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from the network via the communication portion 909 and/or installed from the removable medium 911. The above-described functions defined in the system of the embodiments of the present disclosure are performed when the computer program is executed by the processor 901. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be included in the apparatus/device/system described in the above embodiments, or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 902 and/or RAM 903 and/or one or more memories other than ROM 902 and RAM 903 described above.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The embodiments of the present disclosure are described above. These examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.