CN114741368A - Log data statistical method based on artificial intelligence and related equipment - Google Patents
Log data statistical method based on artificial intelligence and related equipment
- Publication number
- CN114741368A (application CN202210378426.5A)
- Authority
- CN
- China
- Prior art keywords
- data
- data set
- target
- log data
- log
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/1815 — Journaling file systems (G06F16/18 File system types; G06F16/10 File systems, file servers; G06F16/00 Information retrieval)
- G06F16/221 — Column-oriented storage; management thereof (G06F16/22 Indexing, data structures, storage structures; G06F16/20 Structured data, e.g. relational data)
- G06F16/2462 — Approximate or statistical queries (G06F16/2458 Special types of queries; G06F16/245 Query processing; G06F16/24 Querying)
- G06F16/27 — Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D Climate change mitigation technologies in ICT)
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present application proposes an artificial-intelligence-based log data statistics method, apparatus, electronic device, and storage medium. The method includes: receiving a log data statistics request via a search system, and verifying the statistics request; if the verification passes, searching, by the search system, the log data obtained from the server according to the statistics request to obtain a first target data set; dividing the first target data set according to a preset threshold to obtain a second target data set; compressing the second target data set according to preset logic to obtain a target index data set; and performing data statistics according to the target index data set to obtain target log data. The present application can use the constructed index values to quickly compute statistics over log data while saving log-data storage space, thereby improving the statistical efficiency of large-scale log data.
Description
Technical Field
The present application relates to the technical field of artificial intelligence, and in particular to an artificial-intelligence-based log data statistics method, apparatus, electronic device, and storage medium.
Background
Elasticsearch (ES) is a distributed full-text search server built on the underlying Lucene technology. Through mechanisms that improve data ingestion and filtering performance, it can achieve fast queries to a certain extent.
Log analysis and statistics are an important basis for the operation of a log system, and many log systems in the industry store their logs in an Elasticsearch cluster. However, when statistical analysis is performed on large-scale log data, the Elasticsearch cluster responds slowly or returns errors outright, which greatly reduces the statistical efficiency for large-scale log data.
发明内容SUMMARY OF THE INVENTION
鉴于以上内容,有必要提出一种基于人工智能的日志数据统计方法及相关设备,以解决如何提高大规模日志数据的统计效率这一技术问题,其中,相关设备包括基于人工智能的日志数据统计装置、电子设备及存储介质。In view of the above, it is necessary to propose an artificial intelligence-based log data statistical method and related equipment to solve the technical problem of how to improve the statistical efficiency of large-scale log data, wherein the related equipment includes an artificial intelligence-based log data statistical device , electronic equipment and storage media.
本申请提供一种基于人工智能的日志数据统计方法,所述方法包括:The present application provides a method for statistics of log data based on artificial intelligence, the method comprising:
依据搜索系统接收日志数据统计请求,并对所述统计请求进行验证;Receive log data statistics requests according to the search system, and verify the statistics requests;
若验证通过,则所述搜索系统依据所述统计请求对从服务端获取的日志数据进行搜索以获取第一目标数据集;If the verification is passed, the search system searches the log data obtained from the server according to the statistical request to obtain the first target data set;
依据预设阈值划分所述第一目标数据集以获取第二目标数据集;dividing the first target data set according to a preset threshold to obtain a second target data set;
依据预设逻辑压缩所述第二目标数据集以获取目标索引数据集;compressing the second target data set according to preset logic to obtain a target index data set;
依据所述目标索引数据集进行数据统计以获取目标日志数据。Data statistics are performed according to the target index data set to obtain target log data.
如此,通过对日志数据进行分类存储,然后依据预设逻辑对日志数据进行压缩后构建索引值,从而可以在节省日志数据的存储空间的基础上利用构建的索引值对日志数据进行快速统计,提高大规模日志数据的统计效率。In this way, by classifying and storing the log data, and then compressing the log data according to the preset logic, and then constructing an index value, the log data can be quickly counted by using the constructed index value on the basis of saving the storage space of the log data, thereby improving the performance of the log data. Statistical efficiency of large-scale log data.
In some embodiments, receiving the log data statistics request via the search system and verifying the statistics request includes:
setting encoding labels for log data of different data types in a preset manner;
judging, based on the encoding labels, whether the data type in the statistics request has a corresponding encoding label, so as to determine whether the statistics request is valid; if valid, the verification passes.
In this way, whether the statistics request is valid can be judged via the configured encoding labels, ensuring the accuracy of the user's statistics request and preventing the waste of system resources caused by abnormal statistics requests.
In some embodiments, searching, by the search system, the log data obtained from the server according to the statistics request to obtain the first target data set if the verification passes includes:
collecting, by the search system, the corresponding log data based on the data type, time range, and value range of the log data, and storing the collected log data in columnar form as the first target data set.
In this way, the search system can quickly obtain the corresponding log data from the server according to the statistics request given by the user, providing accurate data support for the subsequent steps.
In some embodiments, dividing the first target data set according to the preset threshold to obtain the second target data set includes:
judging the data volume of the first target data set against a preset threshold to obtain a judgment result;
partitioning the first target data set based on the judgment result to obtain a partitioned data set;
dividing the data of each partition in the partitioned data set into batches to obtain the second target data set.
In this way, by further dividing the data in the first target data set, multiple pieces of log data in the partitioned data set can be counted concurrently in subsequent steps, improving the statistical efficiency of the log data.
In some embodiments, partitioning the first target data set based on the judgment result to obtain the partitioned data set includes:
if the data volume of the first target data set is less than the preset threshold, using the first target data set as the partitioned data set;
if the data volume of the first target data set is greater than the preset threshold, dividing the first target data set in units of the preset threshold to obtain the partitioned data set.
In this way, when processing large batches of log data, partitioning the first target data set can effectively narrow the search range of the log data in subsequent steps, further improving statistical efficiency.
In some embodiments, dividing the data of each partition in the partitioned data set into batches to obtain the second target data set includes:
sorting the partition data in descending order of the data volume of each item in the same partition to obtain a sorted data table;
computing the cosine similarity between adjacent items in the sorted data table using a cosine similarity algorithm;
dividing the data of each partition in the partitioned data set into batches according to a custom clustering algorithm and the cosine similarity between adjacent items in the sorted data table, to obtain the second target data set.
In this way, data with high similarity are arranged together, making it easy to generate the corresponding index values in subsequent steps and to quickly count the related log data according to those index values, improving statistical efficiency.
In some embodiments, compressing the second target data set according to the preset logic to obtain the target index data set includes:
compressing the data in the second target data set according to preset logic to obtain a compressed data set;
converting the data in the compressed data set according to a compression algorithm to construct the target index data set.
In this way, the corresponding target index data set can be built while the log data are compressed to effectively reduce storage space, enabling fast statistics over the log data via the index.
An embodiment of the present application further provides an artificial-intelligence-based log data statistics apparatus, the apparatus comprising:
a verification unit, configured to receive a log data statistics request via a search system and verify the statistics request;
an acquisition unit, configured to, if the verification passes, cause the search system to search the log data obtained from the server according to the statistics request to obtain a first target data set;
a dividing unit, configured to divide the first target data set according to a preset threshold to obtain a second target data set;
a compression unit, configured to compress the second target data set according to preset logic to obtain a target index data set;
a statistics unit, configured to perform data statistics according to the target index data set to obtain target log data.
An embodiment of the present application further provides an electronic device, the electronic device comprising:
a memory storing at least one instruction; and
a processor executing the instructions stored in the memory to implement the artificial-intelligence-based log data statistics method.
An embodiment of the present application further provides a computer-readable storage medium storing at least one instruction, the at least one instruction being executed by a processor in an electronic device to implement the artificial-intelligence-based log data statistics method.
Brief Description of the Drawings
FIG. 1 is a flowchart of a preferred embodiment of the artificial-intelligence-based log data statistics method of the present application.
FIG. 2 is a flowchart of a preferred embodiment of dividing the first target data set according to a preset threshold to obtain a second target data set in the present application.
FIG. 3 is a functional block diagram of a preferred embodiment of the artificial-intelligence-based log data statistics apparatus of the present application.
FIG. 4 is a schematic structural diagram of an electronic device according to a preferred embodiment of the artificial-intelligence-based log data statistics method of the present application.
FIG. 5 is a schematic structural diagram of the global dictionary table and the batch dictionary tables in the present application.
FIG. 6 is a schematic structural diagram of the B-tree index in the present application.
Detailed Description
To make the purposes, features, and advantages of the present application clearer, the present application is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the present application and the features of the embodiments may be combined with each other provided there is no conflict. Many specific details are set forth in the following description to facilitate a full understanding of the present application; the described embodiments are only a part, not all, of the embodiments of the present application.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of said features. In the description of the present application, "plurality" means two or more, unless otherwise explicitly and specifically defined.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field to which this application belongs. The terms used herein in the specification of the application are for the purpose of describing specific embodiments only and are not intended to limit the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
An embodiment of the present application provides an artificial-intelligence-based log data statistics method, which can be applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The electronic device may be any electronic product capable of human-computer interaction with a user, for example, a personal computer, a tablet computer, a smartphone, a personal digital assistant (PDA), a game console, an Internet Protocol television (IPTV), a smart wearable device, and the like.
The electronic device may also include a network device and/or a client device. The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing.
The network where the electronic device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
FIG. 1 is a flowchart of a preferred embodiment of the artificial-intelligence-based log data statistics method of the present application. According to different requirements, the order of the steps in the flowchart may be changed, and some steps may be omitted.
S10: receive a log data statistics request via the search system, and verify the statistics request.
In an optional embodiment, the search system may use the ClickHouse system, a columnar database management system usable for online analytical processing (OLAP). OLAP is the main application of data warehouse systems; it supports complex analytical operations, focuses on decision support, and provides intuitive, easy-to-understand query results.
In this optional embodiment, unlike online transaction processing (OLTP) scenarios — such as adding items to a shopping cart, placing orders, and paying in e-commerce, which require large numbers of in-place insert, update, and delete operations — a data analysis (OLAP) scenario usually imports data in bulk and then performs flexible exploration along arbitrary dimensions, BI-tool insight, report production, and so on. After data is written once, it needs to be mined and analyzed from various angles until business value, business trends, and similar information are discovered. This is a process of repeated trial and error, continuous adjustment, and continuous optimization, in which data is read far more often than it is written, requiring the underlying database to be specifically designed for this characteristic.
In this optional embodiment, because ClickHouse is a columnar database, unlike the MySQL databases used online and locally, its query speed is very fast and it can store very large volumes of data: queries over billions of rows can return results within seconds, so using ClickHouse reflects the efficiency of the system. However, ClickHouse does not support modifying data, which makes it very suitable for storing users' log information, because log information is incremental data that never needs modification.
In an optional embodiment, receiving the log data statistics request via the search system and verifying the statistics request includes:
S101: set encoding labels for log data of different data types in a preset manner.
In an optional embodiment, encoding labels may be set for log data of different data types in a preset manner; the encoding labels may be numbers, symbols, or letters, which this solution does not restrict.
S102: judge, based on the encoding labels, whether the data type in the statistics request has a corresponding encoding label, so as to determine whether the statistics request is valid; if valid, the verification passes.
In this optional embodiment, after encoding labels have been set for the different types of log data, whether the data type in the statistics request has a corresponding encoding label can be judged based on those labels, thereby determining whether the current statistics request is valid. If it is valid, the verification passes and the search system accepts the statistics request; if not, the verification fails and the search system directly rejects the statistics request.
In this way, whether the statistics request is valid can be judged via the configured encoding labels, ensuring the accuracy of the user's statistics request and preventing the waste of system resources caused by abnormal statistics requests.
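For illustration, the following minimal Python sketch shows one way the S101/S102 verification could work. The label registry, the label values, and the request fields are assumptions introduced here for the example; the patent does not prescribe them.

```python
# Hypothetical sketch of S101/S102: every supported log data type gets a
# preset encoding label; a request is valid only if each requested type has
# a registered label. Registry contents and request fields are assumptions.
LABEL_REGISTRY = {
    "security_detection": "A1",
    "network_traffic": "A2",
    "protocol_audit": "A3",
    "third_party_input": "A4",
}

def verify_request(request: dict) -> bool:
    """Return True if every requested data type carries a registered encoding label."""
    data_types = request.get("data_types", [])
    if not data_types:
        return False  # an empty or malformed request is rejected outright
    return all(dt in LABEL_REGISTRY for dt in data_types)

request = {"data_types": ["network_traffic"],
           "time_range": ("2022-01-01", "2022-01-15"),
           "value_range": (0, 10_000)}
assert verify_request(request)                               # accepted
assert not verify_request({"data_types": ["unknown_type"]})  # rejected
```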
S11,若验证通过,则所述搜索系统依据所述统计请求对从服务端获取的日志数据进行搜索以获取第一目标数据集。S11, if the verification is passed, the search system searches the log data obtained from the server according to the statistics request to obtain a first target data set.
该可选的实施例中,用户可通过所述搜索系统的客户端指定需要统计的日志数据的数据类型、对应的时间范围和数据取值范围来生成所述统计请求后发送至所述搜索系统的服务端,从而初步确定所要统计的日志数据的整体范围和对应的数据量。In this optional embodiment, the user can specify the data type, corresponding time range and data value range of the log data to be counted through the client of the search system to generate the statistics request and send it to the search system The server side, so as to preliminarily determine the overall scope of the log data to be counted and the corresponding data volume.
该可选的实施例中,所述搜索系统收到用户的统计请求后,可根据用户请求中指定的数据类型、时间范围和数据范围,通过ClickHouse的Kafka(开源流处理平台),实时将服务端的日志数据从kafka接入ClickHouse进行列式存储以作为所述第一目标数据集。此外,ClickHouse也可以存储离线的日志数据,这部分日志数据流需以离线的方式接入,以保证Click House中存储有N天全量的日志数据,通常系统内定期限为N=15天。In this optional embodiment, after receiving the user's statistical request, the search system can, according to the data type, time range and data range specified in the user request, use ClickHouse's Kafka (open source stream processing platform) to real-time The log data of the terminal is connected to ClickHouse from kafka for columnar storage as the first target data set. In addition, ClickHouse can also store offline log data. This part of the log data stream needs to be accessed offline to ensure that ClickHouse stores the full amount of log data for N days. Usually, the default period of the system is N=15 days.
该可选的实施例中,Kafka是一个分布式、支持分区的、多副本的分布式消息系统,它的最大的特性就是可以实时的处理大量数据,具有高吞吐量、低延迟、可扩展性、持久性、可靠性、容错性、高并发的优点,以满足各种需求场景,如日志收集,用户活动跟踪,流式处理等。In this optional embodiment, Kafka is a distributed message system that supports partitioning and multiple copies. Its biggest feature is that it can process large amounts of data in real time, and has high throughput, low latency, and scalability. , durability, reliability, fault tolerance, high concurrency advantages to meet various demand scenarios, such as log collection, user activity tracking, streaming, etc.
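As an illustration of the ingestion path described above, the sketch below creates a ClickHouse Kafka engine table, a columnar MergeTree table, and a materialized view connecting them, issued from Python via the clickhouse-driver client. The table schema, topic name, broker address, and the mapping of N = 15 to a 15-day TTL are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: one plausible way to ingest a Kafka log stream into
# ClickHouse for columnar storage. Schema, topic, and broker are assumptions.
from clickhouse_driver import Client  # pip install clickhouse-driver

client = Client(host="localhost")

# Kafka engine table: reads the raw log stream from the 'logs' topic.
client.execute("""
    CREATE TABLE IF NOT EXISTS log_queue (
        ts DateTime, log_type String, message String
    ) ENGINE = Kafka
    SETTINGS kafka_broker_list = 'kafka:9092',
             kafka_topic_list = 'logs',
             kafka_group_name = 'clickhouse-logs',
             kafka_format = 'JSONEachRow'
""")

# MergeTree table: the columnar store holding the N days of log data.
client.execute("""
    CREATE TABLE IF NOT EXISTS logs (
        ts DateTime, log_type String, message String
    ) ENGINE = MergeTree
    PARTITION BY toDate(ts)
    ORDER BY (log_type, ts)
    TTL ts + INTERVAL 15 DAY
""")

# Materialized view: moves rows from the Kafka queue into columnar storage in real time.
client.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS logs_mv TO logs
    AS SELECT ts, log_type, message FROM log_queue
""")
```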
In this optional embodiment, the log data may be different types of log data generated by network security devices, such as security detection logs, network traffic logs, protocol audit logs, and logs input by third-party devices.
In this optional embodiment, the reasons for storing the obtained log data in columnar form are as follows:
In row storage mode, data is stored contiguously row by row and the data of all columns is stored in one block; columns that do not participate in a computation must all be read out during I/O as well, so read operations are severely amplified. In columnar mode, only the columns participating in the computation need to be read, which greatly reduces I/O cost and accelerates queries.
Data in the same column is of the same type, so the compression effect is significant. Columnar storage often achieves a compression ratio of up to ten times or even higher, saving a great deal of storage space and reducing storage costs. A higher compression ratio means a smaller data size, so reading the corresponding data from disk takes less time; it also means that memory of the same size can hold more data, so the system cache is more effective. Therefore, compared with row-based storage, ClickHouse is less affected by data scale when providing data query services, performs better when serving queries over large data volumes, and can improve query efficiency.
In this way, the search system can quickly obtain the corresponding log data from the server according to the statistics request given by the user, providing accurate data support for the subsequent steps.
S12: divide the first target data set according to a preset threshold to obtain a second target data set.
Referring to FIG. 2, in an optional embodiment, dividing the first target data set according to the preset threshold to obtain the second target data set includes:
S121: judge the data volume of the first target data set against a preset threshold to obtain a judgment result.
In this optional embodiment, the preset threshold may be set to 1 TB, and the judgment result is obtained by comparing the preset threshold with the data volume of the first target data set. If the data volume of the first target data set is greater than the preset threshold, the judgment result is "partition"; if the data volume of the first target data set is less than the preset threshold, the judgment result is "no partition".
S122: partition the first target data set based on the judgment result to obtain a partitioned data set.
In this optional embodiment, if the data volume of the first target data set is less than the preset threshold, the first target data set is used as the partitioned data set; if the data volume of the first target data set is greater than the preset threshold, the first target data set is divided in units of the preset threshold to obtain the partitioned data set.
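A minimal sketch of the S121/S122 threshold logic follows, assuming records are byte strings and using the 1 TB threshold suggested above; the greedy packing strategy is an illustrative assumption.

```python
# Minimal sketch of S121/S122: compare the data volume against a 1 TB preset
# threshold and, if exceeded, split the set into partitions of at most one
# threshold unit each.
THRESHOLD = 1 << 40  # 1 TB, the preset threshold suggested in the text

def partition(records: list[bytes], threshold: int = THRESHOLD) -> list[list[bytes]]:
    total = sum(len(r) for r in records)
    if total < threshold:
        return [records]                 # judgment result: no partitioning needed
    partitions, current, size = [], [], 0
    for r in records:
        if size + len(r) > threshold and current:
            partitions.append(current)   # close the current threshold-sized unit
            current, size = [], 0
        current.append(r)
        size += len(r)
    if current:
        partitions.append(current)
    return partitions
```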
S123: divide the data of each partition in the partitioned data set into batches to obtain the second target data set.
In this optional embodiment, the process of dividing the data of each partition into batches is as follows: sort the partition data in descending order of the data volume of each item in the same partition to obtain a sorted data table; compute the cosine similarity between adjacent items in the sorted data table using a cosine similarity algorithm; and then divide the data of each partition in the partitioned data set into batches according to a custom clustering algorithm and the cosine similarity between adjacent items in the sorted data table, to obtain the second target data set. Batches are usually used in bulk database operations to improve performance; for example, a batch size of 1000 means that each database interaction processes 1000 records.
In this optional embodiment, the main process of dividing the data of each partition into batches according to the custom clustering algorithm and the cosine similarity between adjacent items in the sorted data table to obtain the second target data set is as follows:
Among the log data in the same partition, take in turn any log data item that has not yet been visited as a center point, and expand from that center point according to a preset cosine similarity threshold, with an expansion step of 1. That is, for a log data item, if the cosine similarity between it and its adjacent log data is greater than the preset cosine similarity threshold, clustering starts with this log data point as the center; if a nearby log data point falls below the preset similarity threshold, it is first marked as a noise log data point. The preset cosine similarity threshold may be 0.6.
After clustering starts, compute the average cosine similarity between the neighbors of the log data points in the current cluster and all log data points in the current cluster, and judge whether this average is greater than the preset cosine similarity threshold. If it is, continue clustering outward with the same step size, and include in this cluster the log data points that are not below the preset cosine similarity threshold.
Repeat the above steps until all log data points have been visited; at this point every log data point is marked as belonging to a cluster or as a noise log data point. Treat all noise log data points as one cluster category, and divide the partition data into batches together with the other clusters already obtained, i.e., the data corresponding to each cluster category forms one batch, and all log data after batch division is taken as the second target data set.
For example, if there are 100 log data items in the current partition and, after custom clustering, 5 clusters and 10 noise log data points are obtained, the 10 noise log data points are assigned to one category; together with the 5 clusters obtained, there are 6 cluster categories in total, so the current partition is divided into 6 batches, and the second target data set is formed from the batches of all partitions after batch division.
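The following rough Python sketch interprets the batch-clustering step under stated assumptions: each log record has already been vectorized (the text does not specify how), the records are pre-sorted by descending size, expansion proceeds over adjacent items with step 1, and the similarity threshold is 0.6 as suggested above.

```python
# Rough, hedged sketch of the custom batch clustering (S123).
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def batch_cluster(vectors: list[np.ndarray], threshold: float = 0.6) -> list[list[int]]:
    """Group indices into batches; unclustered points are pooled as one noise batch."""
    n, visited = len(vectors), set()
    clusters, noise = [], []
    for i in range(n):
        if i in visited:
            continue
        visited.add(i)
        cluster = [i]
        j = i + 1                          # expand to the adjacent item, step 1
        while j < n and j not in visited:
            # average similarity of the candidate to every point already in the cluster
            avg = sum(cosine(vectors[j], vectors[k]) for k in cluster) / len(cluster)
            if avg <= threshold:
                break                      # stop expanding this cluster
            cluster.append(j)
            visited.add(j)
            j += 1
        if len(cluster) > 1:
            clusters.append(cluster)
        else:
            noise.append(i)                # isolated point, marked as noise for now
    if noise:
        clusters.append(noise)             # all noise points form one final batch
    return clusters                        # each returned cluster is one batch
```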
In this way, by further dividing the data in the first target data set, multiple pieces of log data in the partitioned data set can be counted concurrently in subsequent steps, improving the statistical efficiency of the log data.
S13: compress the second target data set according to preset logic to obtain a target index data set.
In an optional embodiment, compressing the second target data set according to the preset logic to obtain the target index data set includes:
S131: compress the data in the second target data set according to preset logic to obtain a compressed data set.
In this optional embodiment, because the data in the second target data set has already been divided into batches, duplicate log data may exist within each batch, and the string corresponding to each log data item has a corresponding global ID stored in a global dictionary table. FIG. 5 shows the global dictionary corresponding to the whole second target data set and the batch dictionary corresponding to each batch.
In this optional embodiment, a batch dictionary table may be created in each batch; this table stores the global IDs corresponding to all log data in the batch, and each global ID corresponds to a batch ID. With this two-level dictionary-table scheme, the string corresponding to a log data item can be mapped to a global ID via the global dictionary table, and then mapped to a batch ID via the batch dictionary table. Therefore, the batches no longer store the strings corresponding to the actual log data, but instead store the batch IDs corresponding to those strings, completing the compression of the second target data set; the compressed global dictionary table serves as the compressed data set. In this way, a column that stored log data strings is converted into a column storing 32-bit integer values, and the data space is greatly reduced.
For example, to look up the value actually represented by the second element of batch dictionary 0 in FIG. 5, use that element's value, 2, to query the batch dictionary table and obtain its corresponding global ID, 4; then use 4 to query the global dictionary table and obtain the string "ij" corresponding to 4, showing that the log data corresponding to the second element of batch dictionary 0 is "ij".
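A minimal sketch of the two-level dictionary compression follows, assuming log records are plain strings; small Python integers stand in for the 32-bit global and batch IDs described above.

```python
# Minimal sketch of the two-level dictionary compression (S131).
def build_dictionaries(batches: list[list[str]]):
    global_dict: dict[str, int] = {}      # string -> global ID
    encoded = []
    for batch in batches:
        batch_dict: list[int] = []        # batch ID (list index) -> global ID
        seen: dict[int, int] = {}         # global ID -> batch ID
        codes = []
        for s in batch:
            gid = global_dict.setdefault(s, len(global_dict))
            if gid not in seen:
                seen[gid] = len(batch_dict)
                batch_dict.append(gid)
            codes.append(seen[gid])       # the batch stores batch IDs, not strings
        encoded.append((batch_dict, codes))
    return global_dict, encoded

def decode(global_dict: dict[str, int], batch_dict: list[int], batch_id: int) -> str:
    """Batch ID -> global ID -> original string, as in the FIG. 5 example."""
    inverse = {gid: s for s, gid in global_dict.items()}
    return inverse[batch_dict[batch_id]]

gdict, encoded = build_dictionaries([["ab", "ij", "ab"], ["ij", "xy"]])
batch0_dict, batch0_codes = encoded[0]
assert decode(gdict, batch0_dict, batch0_codes[1]) == "ij"  # round-trips correctly
```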
S132: convert the data in the compressed data set according to a compression algorithm to construct the target index data set.
In this optional embodiment, the obtained compressed data set can be converted using the bit-vector encoding compression algorithm. Its core idea is to convert all occurrences of a given column attribute value into a two-tuple (column attribute value, a bitmap of the positions where that value appears in the column), which can be represented with a bitmap. Through bit-vector encoding, an entire column can be represented with a few simple two-tuples. With this algorithm, a column can be converted into multiple two-tuples, and the column can then be managed by building a B-tree index over these two-tuples.
For example, if a column of log data stored in batch 1 is (1000, 2000, 2000, 1000, 1000, 2000, 1000), then after conversion by the bit-vector encoding compression algorithm, the resulting two-tuples are (1000, 1001101) and (2000, 0110010).
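The bit-vector encoding above is simple enough to reproduce directly; the sketch below maps each distinct value to a bitmap of its positions and checks the worked example from the text.

```python
# Direct sketch of bit-vector encoding (S132): each distinct column value
# maps to a (value, bitmap) pair marking the positions where it appears.
def bit_vector_encode(column: list[int]) -> dict[int, str]:
    return {
        v: "".join("1" if x == v else "0" for x in column)
        for v in dict.fromkeys(column)   # distinct values in first-seen order
    }

col = [1000, 2000, 2000, 1000, 1000, 2000, 1000]
assert bit_vector_encode(col) == {1000: "1001101", 2000: "0110010"}
```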
In this optional embodiment, the B-tree index is the most common index structure; the index created by the search system by default is a B-tree index. A B-tree index is based on a balanced tree structure and has three basic components: a root node, branch nodes, and leaf nodes. The root node sits at the top of the index structure, the leaf nodes at the bottom, and the branch nodes in between. Leaf nodes contain entries that point directly to data rows in the table; branch nodes contain entries that point to other branch nodes or to leaf nodes; and the root node, of which there is only one per B-tree index, is effectively the branch node at the very top of the tree. The organizational structure of a B-tree index resembles a tree, with the main data concentrated in the leaf nodes; the leaf nodes contain the values of the index column and the physical address (ROWID) of the corresponding record row, as shown in FIG. 6. Through the physical address ROWID, the corresponding data converted by the compression algorithm can be obtained, and in turn the corresponding data in the compressed data set. In this solution, the constructed B-tree index serves as the target index data set.
In this way, the corresponding target index data set can be built while the log data are compressed to effectively reduce storage space, enabling fast statistics over the log data via the index.
S14: perform data statistics according to the target index data set to obtain target log data.
In this optional embodiment, the log data to be counted can be quickly queried according to the index values in the obtained target index data set, thereby completing the statistics.
For example, as shown in FIG. 6, if the index value of the log data currently to be counted is 1019, then at the root node compare 1019 with the values 1001 and 1013; since 1019 comes after 1013, 1019 must lie in the right child node. Then compare 1019 with 1013, 1017, and 1021 in the right child node and find that 1019 lies between 1017 and 1021 (the middle child node); then find the leaf node 1019 by comparing 1019 with 1017, 1018, and 1019 in the middle child node, and obtain the actual log data via the corresponding physical address ROWID.
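For illustration, here is a small Python sketch of the lookup just walked through. The node layout mirrors the FIG. 6 example rather than strict B-tree invariants, and the keys, filler nodes, and ROWID strings are assumptions introduced for the example.

```python
# Hedged sketch of the S14 index lookup described above.
import bisect
from dataclasses import dataclass, field

@dataclass
class Node:
    keys: list[int]
    children: list["Node"] = field(default_factory=list)  # empty => leaf node
    rowids: list[str] = field(default_factory=list)       # leaf-only payload

def search(node: Node, key: int) -> str | None:
    """Descend from the root; at a leaf, return the ROWID stored for the key."""
    if not node.children:                       # leaf: look up the key directly
        i = bisect.bisect_left(node.keys, key)
        if i < len(node.keys) and node.keys[i] == key:
            return node.rowids[i]
        return None
    i = bisect.bisect_right(node.keys, key)     # pick the branch to descend into
    return search(node.children[i], key)

leaf = Node(keys=[1017, 1018, 1019], rowids=["row17", "row18", "row19"])
right = Node(keys=[1013, 1017, 1021],
             children=[Node([1010]), Node([1014, 1015]), leaf, Node([1022])])
root = Node(keys=[1001, 1013],
            children=[Node([1000]), Node([1005]), right])
# 1019 > 1013 -> right child; 1017 < 1019 < 1021 -> middle leaf -> ROWID.
assert search(root, 1019) == "row19"
```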
In this way, when performing log data statistics, the log data can be quickly matched and located according to the indexes in the target index data set, thereby obtaining the corresponding log data.
Referring to FIG. 3, FIG. 3 is a functional block diagram of a preferred embodiment of the artificial-intelligence-based log data statistics apparatus of the present application. The artificial-intelligence-based log data statistics apparatus 11 includes a verification unit 110, an acquisition unit 111, a dividing unit 112, a compression unit 113, and a statistics unit 114. A module/unit referred to in the present application is a series of computer-readable instruction segments that can be executed by the processor 13, can perform fixed functions, and are stored in the memory 12. In this embodiment, the functions of each module/unit will be described in detail in the subsequent embodiments.
In an optional embodiment, the verification unit 110 is configured to receive a log data statistics request via the search system and verify the statistics request.
In an optional embodiment, receiving the log data statistics request via the search system and verifying the statistics request includes:
setting encoding labels for log data of different data types in a preset manner;
judging, based on the encoding labels, whether the data type in the statistics request has a corresponding encoding label, so as to determine whether the statistics request is valid; if valid, the verification passes.
In an optional embodiment, the search system may use the ClickHouse system, a columnar database management system usable for online analytical processing (OLAP). OLAP is the main application of data warehouse systems; it supports complex analytical operations, focuses on decision support, and provides intuitive, easy-to-understand query results.
In this optional embodiment, unlike online transaction processing (OLTP) scenarios — such as adding items to a shopping cart, placing orders, and paying in e-commerce, which require large numbers of in-place insert, update, and delete operations — a data analysis (OLAP) scenario usually imports data in bulk and then performs flexible exploration along arbitrary dimensions, BI-tool insight, report production, and so on. After data is written once, it needs to be mined and analyzed from various angles until business value, business trends, and similar information are discovered. This is a process of repeated trial and error, continuous adjustment, and continuous optimization, in which data is read far more often than it is written, requiring the underlying database to be specifically designed for this characteristic.
In this optional embodiment, because ClickHouse is a columnar database, unlike the MySQL databases used online and locally, its query speed is very fast and it can store very large volumes of data: queries over billions of rows can return results within seconds, so using ClickHouse reflects the efficiency of the system. However, ClickHouse does not support modifying data, which makes it very suitable for storing users' log information, because log information is incremental data that never needs modification.
In an optional embodiment, encoding labels may be set for log data of different data types in a preset manner; the encoding labels may be numbers, symbols, or letters, which this solution does not restrict.
In this optional embodiment, after encoding labels have been set for the different types of log data, whether the data type in the statistics request has a corresponding encoding label can be judged based on those labels, thereby determining whether the current statistics request is valid. If it is valid, the verification passes and the search system accepts the statistics request; if not, the verification fails and the search system directly rejects the statistics request.
In an optional embodiment, the acquisition unit 111 is configured to, if the verification passes, cause the search system to search the log data obtained from the server according to the statistics request to obtain the first target data set.
In this optional embodiment, the user may specify, via the client of the search system, the data type, corresponding time range, and value range of the log data to be counted, generate the statistics request, and send it to the server of the search system, thereby initially determining the overall scope and corresponding volume of the log data to be counted.
In this optional embodiment, after the search system receives the user's statistics request, it can, according to the data type, time range, and data range specified in the request, ingest the server-side log data from Kafka (an open-source stream-processing platform) into ClickHouse in real time via ClickHouse's Kafka engine, storing it in columnar form as the first target data set. In addition, ClickHouse can also store offline log data; this part of the log data stream needs to be ingested offline to ensure that ClickHouse stores the full N days of log data, where the system default is N = 15 days.
In this optional embodiment, Kafka is a distributed, partition-supporting, multi-replica distributed messaging system. Its greatest characteristic is that it can process large amounts of data in real time, with the advantages of high throughput, low latency, scalability, durability, reliability, fault tolerance, and high concurrency, meeting various scenarios such as log collection, user-activity tracking, and stream processing.
In this optional embodiment, the log data may be different types of log data generated by network security devices, such as security detection logs, network traffic logs, protocol audit logs, and logs input by third-party devices.
In this optional embodiment, the reasons for storing the obtained log data in columnar form are as follows:
In row storage mode, data is stored contiguously row by row and the data of all columns is stored in one block; columns that do not participate in a computation must all be read out during I/O as well, so read operations are severely amplified. In columnar mode, only the columns participating in the computation need to be read, which greatly reduces I/O cost and accelerates queries.
Data in the same column is of the same type, so the compression effect is significant. Columnar storage often achieves a compression ratio of up to ten times or even higher, saving a great deal of storage space and reducing storage costs. A higher compression ratio means a smaller data size, so reading the corresponding data from disk takes less time; it also means that memory of the same size can hold more data, so the system cache is more effective. Therefore, compared with row-based storage, ClickHouse is less affected by data scale when providing data query services, performs better when serving queries over large data volumes, and can improve query efficiency.
In an optional embodiment, the dividing unit 112 is configured to divide the first target data set according to a preset threshold to obtain the second target data set.
In an optional embodiment, dividing the first target data set according to the preset threshold to obtain the second target data set includes:
judging the data volume of the first target data set against a preset threshold to obtain a judgment result;
partitioning the first target data set based on the judgment result to obtain a partitioned data set;
dividing the data of each partition in the partitioned data set into batches to obtain the second target data set.
In this optional embodiment, the preset threshold may be set to 1 TB, and the judgment result is obtained by comparing the preset threshold with the data volume of the first target data set. If the data volume of the first target data set is greater than the preset threshold, the judgment result is "partition"; if the data volume of the first target data set is less than the preset threshold, the judgment result is "no partition".
In this optional embodiment, if the data volume of the first target data set is less than the preset threshold, the first target data set is used as the partitioned data set; if the data volume of the first target data set is greater than the preset threshold, the first target data set is divided in units of the preset threshold to obtain the partitioned data set.
该可选的实施例中,对所述分区数据集中的各分区数据进行批次划分的过程为:依据同一分区中各数据的数据量由大到小对所述分区数据进行排序以获取排序数据表,并依据余弦相似度算法计算所述排序数据表中各相邻数据之间的余弦相似度,然后依据自定义聚类算法和所述排序数据表中各相邻数据之间的余弦相似度对所述分区数据集中的各分区数据进行批次划分以获取所述第二目标数据集。其中,批次通常是用在数据库的批量操作里面,为了提高性能,比如:批次大小为1000,就是每次数据库交互处理1000条数据。In this optional embodiment, the process of performing batch division on each partition data in the partition data set is as follows: sorting the partition data according to the data volume of each data in the same partition from large to small to obtain the sorted data table, and calculate the cosine similarity between adjacent data in the sorted data table according to the cosine similarity algorithm, and then according to the custom clustering algorithm and the cosine similarity between adjacent data in the sorted data table The partition data in the partition data set is divided into batches to obtain the second target data set. Among them, the batch is usually used in the batch operation of the database, in order to improve the performance, for example: the batch size is 1000, that is, 1000 pieces of data are processed each time the database interacts.
该可选的实施例中,依据自定义聚类算法和所述排序数据表中各相邻数据之间的余弦相似度对所述分区数据集中的各分区数据进行批次划分以获取所述第二目标数据集的主要过程为:In this optional embodiment, each partition data in the partition data set is divided into batches according to a custom clustering algorithm and the cosine similarity between adjacent data in the sorted data table to obtain the first The main process of the two-target dataset is:
在同一分区内的日志数据中,依次以任何尚未访问过的日志数据为中心点,并依据预设的余弦相似度阈值对该中心点进行扩充,其中扩充的步长为1。即对一个日志数据,如果与其相邻的日志数据之间的余弦相似度大于预设的余弦相似度阈值,则以此日志数据点为中心开始聚类,如果附近的日志数据点小于预设的相似度阈值,则将其先标记为噪声日志数据点,预设的余弦相似度阈值可以为0.6;In the log data in the same partition, take any log data that has not been accessed as the center point in turn, and expand the center point according to the preset cosine similarity threshold, where the expansion step is 1. That is, for a log data, if the cosine similarity between its adjacent log data is greater than the preset cosine similarity threshold, the log data point will be used as the center to start clustering. If the nearby log data points are smaller than the preset cosine similarity threshold Similarity threshold, mark it as a noise log data point first, and the preset cosine similarity threshold can be 0.6;
After clustering starts, compute the average cosine similarity between each log data point adjacent to the current cluster and all log data points already in the cluster, and check whether this average exceeds the preset cosine similarity threshold; if it does, continue clustering outward with the same step size, adding every log data point that satisfies the threshold condition to the cluster.
Repeat the above steps until all log data points have been visited, at which point every log data point is marked either as belonging to a cluster or as a noise log data point. Treat all noise log data points together as one cluster category and, together with the clusters already obtained, divide the partition data into batches: the data corresponding to each cluster category forms one batch, and all log data after batch division constitutes the second target data set.
For example, suppose the current partition contains 100 log data entries and the custom clustering yields 5 clusters and 10 noise log data points. The 10 noise points are assigned to a single category which, together with the 5 clusters obtained, gives 6 cluster categories, so the current partition is divided into 6 batches; the batches of all partitions after batch division together form the second target data set.
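As a rough illustration, here is a simplified Python sketch of the custom clustering just described; how log entries are vectorized for the cosine similarity computation, and the exact expansion rule, are assumptions made for the sketch rather than details fixed by the description.

```python
import numpy as np

SIM_THRESHOLD = 0.6  # the preset cosine similarity threshold from the text

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def batch_partition(vectors):
    """Divide one partition's log entries into batches (lists of indices).

    `vectors` are feature vectors already sorted by data volume in
    descending order, i.e. the sorted data table.
    """
    labels = [None] * len(vectors)       # cluster id, or "noise"
    next_id = 0
    for i in range(len(vectors)):
        if labels[i] is not None:
            continue                     # already visited
        j = i + 1                        # step size 1: the adjacent entry
        if j < len(vectors) and cosine(vectors[i], vectors[j]) > SIM_THRESHOLD:
            labels[i], members = next_id, [i]
            # expand while the average similarity to the cluster stays high
            while j < len(vectors) and labels[j] is None:
                avg = sum(cosine(vectors[j], vectors[m]) for m in members) / len(members)
                if avg <= SIM_THRESHOLD:
                    break
                labels[j] = next_id
                members.append(j)
                j += 1
            next_id += 1
        else:
            labels[i] = "noise"          # all noise points form one extra batch
    batches = {}
    for idx, lab in enumerate(labels):
        batches.setdefault(lab, []).append(idx)
    return list(batches.values())
```

In the worked example above, 100 entries yielding 5 clusters plus one noise category would come back from this sketch as 6 index lists, i.e. 6 batches.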
In an optional embodiment, the compression unit 113 is configured to compress the second target data set according to preset logic to obtain the target index data set.
In an optional embodiment, compressing the second target data set according to the preset logic to obtain the target index data set includes:
compressing the data in the second target data set according to the preset logic to obtain a compressed data set;
converting the data in the compressed data set according to a compression algorithm to construct the target index data set.
In this optional embodiment, since the data in the second target data set has already been divided into batches, duplicate log data may exist within each batch, and the string corresponding to each log data entry has a corresponding global ID stored in a global dictionary table. FIG. 5 shows the global dictionary corresponding to the entire second target data set and the batch dictionary corresponding to each batch.
In this optional embodiment, a batch dictionary table may be created for each batch. This table stores the global IDs of all log data in the batch, and each global ID corresponds to a batch ID. With this two-level dictionary scheme, the string of a log data entry is first mapped to a global ID via the global dictionary table and then to a batch ID via the batch dictionary table. Each batch therefore no longer stores the actual log data strings but only the batch IDs corresponding to those strings, which completes the compression of the second target data set; the compressed global dictionary table serves as the compressed data set. In this way, a column that stored log data strings becomes a column that stores 32-bit integer values, greatly reducing the data space.
For example, to look up the value actually represented by the second element of batch dictionary 0 in FIG. 5, the element's value 2 is used to query the batch dictionary table, yielding the corresponding global ID 4; querying the global dictionary table with 4 then yields the string "ij", so the log data corresponding to the second element of batch dictionary 0 is "ij".
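A minimal Python sketch of this two-level dictionary encoding follows; the plain-dict data layout and every identifier in it are illustrative assumptions, chosen only to make the mapping string → global ID → batch ID concrete.

```python
def build_dictionaries(batches):
    """Encode string log entries as small per-batch integer IDs."""
    global_dict = {}                    # string -> global ID
    batch_dicts, encoded_batches = [], []
    for batch in batches:
        batch_dict = []                 # batch ID (list index) -> global ID
        local = {}                      # global ID -> batch ID, reused within the batch
        encoded = []
        for s in batch:
            gid = global_dict.setdefault(s, len(global_dict))
            if gid not in local:
                local[gid] = len(batch_dict)
                batch_dict.append(gid)
            encoded.append(local[gid])  # store the batch ID instead of the string
        batch_dicts.append(batch_dict)
        encoded_batches.append(encoded)
    return global_dict, batch_dicts, encoded_batches

def decode(global_dict, batch_dict, batch_id):
    """Reverse lookup as in the example: batch ID -> global ID -> string."""
    inverse = {gid: s for s, gid in global_dict.items()}
    return inverse[batch_dict[batch_id]]
```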
In this optional embodiment, the obtained compressed data set may be converted using the Bit-Vector Encoding compression algorithm. Its core idea is to convert all occurrences of the same attribute value in a column into a two-tuple (column attribute value, bitmap of the positions where that value appears in the column), so that a value's occurrences can be represented by a bitmap. With Bit-Vector Encoding, an entire column can be represented by a few simple two-tuples; using this algorithm, a column is converted into multiple two-tuples, and the column can then be managed by building a B-tree (B-Tree) index over these two-tuples.
For example, if a column of log data stored in batch 1 is (1000, 2000, 2000, 1000, 1000, 2000, 1000), then after conversion by the Bit-Vector Encoding compression algorithm, the resulting two-tuples are (1000, 1001101) and (2000, 0110010).
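The following Python sketch reproduces this Bit-Vector Encoding example; it is a toy illustration using string bitmaps, not an implementation drawn from the patent or from any particular database engine.

```python
def bit_vector_encode(column):
    """Produce one (value, bitmap) two-tuple per distinct column value."""
    bitmaps = {}
    for pos, value in enumerate(column):
        bitmaps.setdefault(value, ["0"] * len(column))[pos] = "1"
    return [(value, "".join(bits)) for value, bits in bitmaps.items()]

# Matches the example above: [(1000, '1001101'), (2000, '0110010')]
print(bit_vector_encode([1000, 2000, 2000, 1000, 1000, 2000, 1000]))
```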
In this optional embodiment, the B-Tree index is the most common index structure, and the index the search system creates by default is a B-Tree index. A B-tree index is a balanced tree structure with three basic components: a root node, branch nodes, and leaf nodes. The root node sits at the top of the index structure, the leaf nodes at the bottom, and the branch nodes in between. Leaf nodes contain entries that point directly to data rows in the table; branch nodes contain entries that point to other branch nodes or to leaf nodes; and the root node, of which a B-tree index has exactly one, is in effect the branch node at the very top of the tree. A B-tree index is organized like a tree, with the main data concentrated in the leaf nodes, which contain the index column values and the physical addresses (ROWIDs) of the corresponding record rows, as shown in FIG. 6. Through the physical address ROWID, the corresponding data converted by the compression algorithm can be obtained, and from it the corresponding data in the compressed data set. In this scheme, the constructed B-tree index serves as the target index data set.
In an optional embodiment, the statistics unit 114 is configured to perform data statistics according to the target index data set to obtain the target log data.
In this optional embodiment, the log data to be counted can be quickly queried according to the index values in the obtained target index data set, thereby completing the statistics.
For example, as shown in FIG. 6, suppose the index value of the log data to be counted is 1019. At the root node, 1019 is compared with the values 1001 and 1013; since 1019 comes after 1013, the search descends to the right child node. 1019 is then compared with the right child node's values 1013, 1017, and 1021 and found to lie between 1017 and 1021, so the search descends to the middle child node. Finally, comparing 1019 with the middle child node's values 1017, 1018, and 1019 locates the leaf node 1019, from which the actual log data is obtained via the corresponding physical address ROWID.
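As a rough sketch of this descent, the following simplified Python lookup mirrors the comparisons traced above; the node layout (sorted keys, child pointers on branch nodes, a key-to-ROWID map on leaves) is an assumption made for the sketch.

```python
class BTreeNode:
    def __init__(self, keys, children=None, rowids=None):
        self.keys = keys          # sorted key values in this node
        self.children = children  # child nodes; None for a leaf
        self.rowids = rowids      # {key: ROWID}; leaf nodes only

def btree_search(node, key):
    """Return the ROWID for `key`, or None if the key is absent."""
    while node.children is not None:        # descend through branch nodes
        i = 0
        while i < len(node.keys) and key >= node.keys[i]:
            i += 1                          # e.g. 1019 >= 1013, so go right
        node = node.children[i]
    return node.rowids.get(key)             # leaf: resolve the key to its ROWID
```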
As the above technical solutions show, the present application classifies and stores log data and then compresses the log data according to preset logic before building index values, so that the constructed index values can be used to count log data quickly while saving log data storage space, improving the statistical efficiency for large-scale log data.
Please refer to FIG. 4, which is a schematic structural diagram of an electronic device provided by an embodiment of the present application. The electronic device 1 includes a memory 12 and a processor 13. The memory 12 is configured to store computer-readable instructions, and the processor 13 is configured to execute the computer-readable instructions stored in the memory to implement the artificial-intelligence-based log data statistics method described in any of the above embodiments.
In an optional embodiment, the electronic device 1 further includes a bus and a computer program stored in the memory 12 and executable on the processor 13, for example an artificial-intelligence-based log data statistics program.
FIG. 4 shows only an electronic device 1 with the memory 12 and the processor 13. Those skilled in the art will understand that the structure shown in FIG. 4 does not limit the electronic device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
With reference to FIG. 1, the memory 12 in the electronic device 1 stores a plurality of computer-readable instructions to implement an artificial-intelligence-based log data statistics method, and the processor 13 can execute the plurality of instructions to implement the following (a minimal pipeline sketch follows the list):
receiving a log data statistics request via the search system, and verifying the statistics request;
if the verification passes, searching, by the search system, the log data obtained from the server according to the statistics request, to obtain a first target data set;
dividing the first target data set according to a preset threshold to obtain a second target data set;
compressing the second target data set according to preset logic to obtain a target index data set;
performing data statistics according to the target index data set to obtain target log data.
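Tying the steps together, here is a minimal end-to-end pipeline sketch; every helper it defines (verify_request, search_logs, split_into_batches, and so on) is a hypothetical stand-in for the units described above, not a real API, and compress reuses the build_dictionaries sketch given earlier.

```python
# Hypothetical stand-ins so the sketch runs end to end.
def verify_request(request):        return bool(request.get("token"))
def search_logs(server, request):   return [r for r in server if request["query"] in r]
def split_into_batches(records):    return [records[i:i + 2] for i in range(0, len(records), 2)]
def compress(batches):              return build_dictionaries(batches)  # earlier sketch
def build_index(compressed):        return compressed                   # indexing elided
def run_statistics(index, request): return sum(len(b) for b in index[2])

def log_statistics(request, server):
    if not verify_request(request):                   # step 1: verification
        raise PermissionError("statistics request failed verification")
    first_target = search_logs(server, request)       # step 2: search
    second_target = split_into_batches(first_target)  # step 3: partition + batch
    index = build_index(compress(second_target))      # step 4: compress + index
    return run_statistics(index, request)             # step 5: statistics
```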
Specifically, for how the processor 13 implements the above instructions, reference may be made to the description of the relevant steps in the embodiment corresponding to FIG. 1, which is not repeated here.
Those skilled in the art will understand that the schematic diagram is merely an example of the electronic device 1 and does not limit it. The electronic device 1 may have a bus-type or star-type structure, may include more or less hardware or software than shown, or a different arrangement of components; for example, the electronic device 1 may further include input/output devices, network access devices, and the like.
It should be noted that the electronic device 1 is merely an example; other existing or future electronic products that can be adapted to the present application shall also fall within the protection scope of the present application and are incorporated herein by reference.
The memory 12 includes at least one type of readable storage medium, which may be non-volatile or volatile. Readable storage media include flash memory, removable hard disks, multimedia cards, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disks, optical disks, and the like. In some embodiments, the memory 12 may be an internal storage unit of the electronic device 1, for example a removable hard disk of the electronic device 1. In other embodiments, the memory 12 may be an external storage device of the electronic device 1, for example a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1. The memory 12 can be used not only to store application software installed on the electronic device 1 and various types of data, such as the code of the artificial-intelligence-based log data statistics program, but also to temporarily store data that has been or will be output.
In some embodiments, the processor 13 may be composed of integrated circuits, for example a single packaged integrated circuit, or multiple packaged integrated circuits with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 13 is the control unit of the electronic device 1: it connects the components of the entire electronic device 1 through various interfaces and lines, and performs the various functions of the electronic device 1 and processes data by running or executing programs or modules stored in the memory 12 (for example, executing the artificial-intelligence-based log data statistics program) and by invoking data stored in the memory 12.
The processor 13 executes the operating system of the electronic device 1 and the various installed applications. The processor 13 executes the applications to implement the steps in each of the above embodiments of the artificial-intelligence-based log data statistics method, for example the steps shown in FIG. 1 to FIG. 2.
Exemplarily, the computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to complete the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, the instruction segments describing the execution process of the computer program in the electronic device 1. For example, the computer program may be divided into a verification unit 110, an acquisition unit 111, a division unit 112, a compression unit 113, and a statistics unit 114.
The integrated units implemented in the form of software functional modules described above may be stored in a computer-readable storage medium. The software functional modules are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a computer device, a network device, or the like) or a processor to execute parts of the artificial-intelligence-based log data statistics method described in the various embodiments of the present application.
If the modules/units integrated in the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes of the above method embodiments by instructing the relevant hardware devices through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program can implement the steps of each of the above method embodiments.
The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory, and other memories.
Further, the computer-readable storage medium may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function, and the like, and the data storage area may store data created according to the use of blockchain nodes, and the like.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association with one another using cryptographic methods, each data block containing the information of one batch of network transactions, used to verify the validity of that information (anti-counterfeiting) and to generate the next block. A blockchain may include an underlying blockchain platform, a platform product service layer, an application service layer, and so on.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one arrow is shown in FIG. 4, but this does not mean there is only one bus or one type of bus. The bus is arranged to enable connection and communication between the memory 12, the at least one processor 13, and the like.
An embodiment of the present application further provides a computer-readable storage medium (not shown) storing computer-readable instructions, which are executed by a processor in an electronic device to implement the artificial-intelligence-based log data statistics method described in any of the above embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the modules, for instance, is only a division by logical function, and other divisions are possible in actual implementation.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the various embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated units described above may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or apparatuses stated in the specification may also be implemented by a single unit or apparatus through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present application may be modified or equivalently substituted without departing from the spirit and scope of those technical solutions.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210378426.5A CN114741368A (en) | 2022-04-12 | 2022-04-12 | Log data statistical method based on artificial intelligence and related equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210378426.5A CN114741368A (en) | 2022-04-12 | 2022-04-12 | Log data statistical method based on artificial intelligence and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114741368A true CN114741368A (en) | 2022-07-12 |
Family ID: 82280804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210378426.5A Pending CN114741368A (en) | 2022-04-12 | 2022-04-12 | Log data statistical method based on artificial intelligence and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114741368A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113590556A (en) * | 2021-07-30 | 2021-11-02 | 中国工商银行股份有限公司 | Database-based log processing method, device and equipment |
CN114090529A (en) * | 2021-10-29 | 2022-02-25 | 青岛海尔科技有限公司 | A log management method, device, system and storage medium |
CN114036117A (en) * | 2021-11-15 | 2022-02-11 | 平安普惠企业管理有限公司 | Log viewing method and device, computer equipment and storage medium |
Non-Patent Citations (3)
Title
---
ANSWER_BALL: "ClickHouse Learning Path (2): Principles of Partitioning and Sharding" (in Chinese), pages 1-3, retrieved from the Internet: https://blog.csdn.net/BIackMamba/article/details/119424507 *
BITCARMANLEE: "Data Compression Algorithms Commonly Used in Column Stores" (in Chinese), pages 1-4, retrieved from the Internet: https://blog.csdn.net/bitcarmanlee/article/details/50938970 *
TONG, Weiqin et al.: "Data-Intensive Computing and Models" (in Chinese), vol. 1, Shanghai Scientific & Technical Publishers, 31 January 2015, pages 29-30 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115392838A (en) * | 2022-09-06 | 2022-11-25 | 珠海格力电器股份有限公司 | Warehouse cargo entry and exit control method, device and storage system |
CN115455088A (en) * | 2022-10-24 | 2022-12-09 | 建信金融科技有限责任公司 | Data statistical method, device, equipment and storage medium |
CN117078139A (en) * | 2023-10-16 | 2023-11-17 | 国家邮政局邮政业安全中心 | Cross-border express supervision method, system, electronic equipment and storage medium |
CN117078139B (en) * | 2023-10-16 | 2024-02-09 | 国家邮政局邮政业安全中心 | Cross-border express supervision method, system, electronic equipment and storage medium |
CN118278901A (en) * | 2024-06-04 | 2024-07-02 | 太行城乡建设集团有限公司 | Engineering auditing method and device based on blockchain, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |