Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
The invention aims to solve the above problems by providing a hot event identification method and system based on multi-level clustering, which can accurately identify hot events in real time and provide characteristic words capable of representing the hot events so as to accurately describe hot public sentiment, thereby improving the efficiency with which users read hot topics.
The technical scheme of the invention is as follows: the invention discloses a hot event identification method based on multi-level clustering, which comprises the following steps:
step 1: preprocessing a text and dividing the text content into a plurality of phrases;
step 2: performing text vectorization processing on the phrase-segmented text to form a vectorized event set;
step 3: aggregating the vectorized event set by adopting an unsupervised clustering algorithm to form hot event clusters;
step 4: performing vectorization processing on each event cluster by adopting a deep learning algorithm and performing aggregation with an unsupervised clustering algorithm again.
According to an embodiment of the hot spot event identification method based on multi-level clustering of the present invention, step 1 further includes:
step 1-1: importing a professional lexicon and a stop word list to assist the Chinese word segmentation module;
step 1-2: identifying the major organizations and person names appearing in the text using named entity recognition techniques;
step 1-3: adopting a Chinese word segmentation module to segment the text into a plurality of phrases.
According to an embodiment of the hot spot event identification method based on multi-level clustering of the present invention, step 2 further includes:
step 2-1: calculating the frequency with which each word appears in the text, namely the word frequency, and normalizing it;
step 2-2: calculating the inverse document frequency;
step 2-3: vectorizing each piece of news in the text by adopting the term frequency-inverse document frequency (TF-IDF) algorithm.
According to an embodiment of the hot spot event identification method based on multi-level clustering of the present invention, step 3 further includes:
step 3-1: inputting a news set D = {d1, d2, ..., dn} and a minimum threshold θ;
step 3-2: taking one news item as the initial clustering center and calculating its content similarity with the other news items;
step 3-3: comparing each calculated content similarity with the minimum threshold θ; if all the content similarities are smaller than θ, creating a new cluster with d1 as its center, otherwise assigning d1 to the cluster with the maximum similarity;
step 3-4: aggregating the news set into a plurality of event clusters according to the clustering result, and outputting the category numbers of the event clusters.
According to an embodiment of the hot spot event identification method based on multi-level clustering of the present invention, step 4 further includes:
step 4-1: taking each event cluster as a long text, performing word segmentation, and inputting the result into a skip-gram algorithm, wherein the skip-gram algorithm calculates, through the probabilistic model p(w_{i+1}, w_{i-1} | w_i, u_j), the probabilities of the two words adjacent to the current word w_i and selects the word with the highest probability in the dictionary as output, the event cluster vector u_j obtained in the last iteration also being input into the skip-gram algorithm;
step 4-2: taking the difference between the word calculated through p(w_{i+1}, w_{i-1} | w_i, u_j) and the real adjacent word to obtain a loss term, propagating the loss term back to p(w_{i+1}, w_{i-1} | w_i, u_j) through a back propagation algorithm, and then updating the corresponding event cluster vector value u_j;
step 4-3: repeating steps 4-1 to 4-2 until the vector value of u_j approaches stability or the remaining text of the event cluster has been trained;
step 4-4: integrating the vectorization results of all event clusters together as the input of a single-pass algorithm, carrying out secondary clustering, and defining the results as topic clusters.
According to an embodiment of the hot event identification method based on multi-level clustering of the present invention, the method further comprises:
step 5: generating a topic cluster description by using a new word discovery algorithm.
According to an embodiment of the hot spot event identification method based on multi-level clustering of the present invention, step 5 further includes:
step 5-1: gathering all news in each topic cluster together, passing it through the Chinese word segmentation module, using the segmented result as input, and respectively calculating three indexes: word frequency, aggregation degree and degree of freedom;
step 5-2: taking the product of word frequency, aggregation degree and degree of freedom as the ranking index, and generating representative words as the topic description.
The invention also discloses a hot spot event recognition system based on multi-level clustering, which comprises:
the phrase segmentation module is configured to preprocess the text and segment the text content into a plurality of phrases;
the vectorization module is configured to perform text vectorization processing on the text subjected to the phrase segmentation to form a vectorized event set;
the event cluster acquisition module is configured to aggregate the vectorized event sets by adopting an unsupervised clustering algorithm to form hot event clusters;
and the aggregation module is configured to vectorize each event cluster by adopting a deep learning algorithm and aggregate with an unsupervised clustering algorithm again.
According to an embodiment of the hot event identification system based on multi-level clustering of the present invention, the phrase segmentation module is further configured to: import a professional lexicon and a stop word list to assist the Chinese word segmentation module; identify the major organizations and person names appearing in the text using named entity recognition techniques; and adopt a Chinese word segmentation module to segment the text into a plurality of phrases.
According to an embodiment of the hot event identification system based on multi-level clustering of the present invention, the vectorization module is further configured to: calculate the frequency with which each word appears in the text, namely the word frequency, and normalize it; calculate the inverse document frequency; and vectorize each piece of news in the text by adopting the term frequency-inverse document frequency algorithm.
According to an embodiment of the hot spot event recognition system based on multi-level clustering of the present invention, the event cluster acquisition module is further configured to: input a news set D = {d1, d2, ..., dn} and a minimum threshold θ; take one news item as the initial clustering center and calculate its content similarity with the other news items; compare each calculated content similarity with the minimum threshold θ, and if all the content similarities are smaller than θ, create a new cluster with d1 as its center, otherwise assign d1 to the cluster with the maximum similarity; and aggregate the news set into a plurality of event clusters according to the clustering result and output the category numbers of the event clusters.
According to an embodiment of the hot event identification system based on multi-level clustering of the present invention, the aggregation module is further configured to: take each event cluster as a long text, perform word segmentation, and input the result into a skip-gram algorithm, wherein the skip-gram algorithm calculates, through the probabilistic model p(w_{i+1}, w_{i-1} | w_i, u_j), the probabilities of the two words adjacent to the current word w_i and selects the word with the highest probability in the dictionary as output, the event cluster vector u_j obtained in the last iteration also being input into the skip-gram algorithm; take the difference between the word calculated through p(w_{i+1}, w_{i-1} | w_i, u_j) and the real adjacent word to obtain a loss term, propagate the loss term back to p(w_{i+1}, w_{i-1} | w_i, u_j) through a back propagation algorithm, and then update the corresponding event cluster vector value u_j; repeat the above two steps until the vector value of u_j approaches stability or the remaining text of the event cluster has been trained; and integrate the vectorization results of all event clusters together as the input of a single-pass algorithm, carry out secondary clustering, and define the results as topic clusters.
According to an embodiment of the hot spot event identification system based on multi-level clustering of the present invention, the system further includes:
a topic cluster description generation module that generates topic cluster descriptions by using a new word discovery algorithm.
According to an embodiment of the hot event identification system based on multi-level clustering of the present invention, the topic cluster description generation module is further configured to: gather all news in each topic cluster together, pass it through the Chinese word segmentation module, use the segmented result as input, and respectively calculate three indexes, namely word frequency, aggregation degree and degree of freedom; and take the product of word frequency, aggregation degree and degree of freedom as the ranking index and generate representative words as the topic description.
Compared with the prior art, the invention has the following beneficial effects:
First, the overall architecture of the method is original. Traditional technical processes cannot solve the multi-level text clustering problem without labeled data and manual intervention; the method is the first to combine deep learning with traditional TF-IDF vectorization to solve the text representation problem, laying a foundation for multi-level text clustering.
Second, for highly specialized fields with few labels (such as the financial field), the invention adopts a financial professional lexicon and an entity recognition algorithm to increase the effectiveness of Chinese word segmentation and improve the effect of the news hotspot discovery algorithm.
Third, compared with existing hot spot discovery technology, the method can accurately identify the characteristic words representing each event through hot word discovery, forming an accurate description of hot public sentiment and improving the efficiency with which users read hot topics.
Fourth, through topic description the method can intelligently identify recent hot words, automatically improving the effect of the algorithm and enhancing the real-time performance of hot spot discovery.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. It is noted that the aspects described below in connection with the figures and the specific embodiments are only exemplary and should not be construed as imposing any limitation on the scope of the present invention.
Fig. 1 shows a flow of an embodiment of a hot event identification method based on multi-level clustering according to the present invention. Referring to fig. 1, the implementation steps of the method of the embodiment are described in detail below, and the embodiment takes the identification of the hot event of the news text in the financial field as an example, and the invention can be extended to other similar application fields.
Step 1: preprocessing the text and dividing the text content into a plurality of phrases.
In this embodiment, the preprocessing is performed on news texts related to the financial field. The specific preprocessing steps are as follows:
Step 1-1: a professional lexicon (such as a financial professional lexicon) and a stop word list are imported to assist the Chinese word segmentation module.
Step 1-2: the major organizations and person names appearing in the text are identified using named entity recognition techniques, for example a named entity recognition technique based on the pre-trained language model BERT with large-scale financial annotation samples.
Step 1-3: a Chinese word segmentation module is adopted to segment the news text into a plurality of phrases.
Step 2: performing text vectorization processing on the phrase-segmented text.
The specific processing procedure of this step is as follows.
Step 2-1: the number of times each word appears in the news text, namely the word frequency (term frequency), is calculated and normalized:

tf_i = f_ij / N_j

wherein f_ij indicates the number of times word i occurs in news d_j, N_j represents the total number of words in news d_j, and tf_i indicates the normalized frequency with which word i appears in the news.
Step 2-2: the inverse document frequency is calculated:

idf_i = log(N / N_i)

wherein N_i indicates the number of news items containing word i, N represents the total number of news items in the news set, and idf_i represents the inverse document frequency: the total number of news items is divided by the number of news items containing the word, and the logarithm of the quotient is taken.
Step 2-3: each piece of news is vectorized by the TF-IDF (term frequency-inverse document frequency) algorithm as:

d = {(t_1, w_1), (t_2, w_2), ..., (t_i, w_i), ..., (t_n, w_n)}

wherein t_i is a feature item of the text, w_i is the weight of that feature item, and d represents the vectorized result of the news. A TF-IDF model is first trained on a large-scale corpus, and each piece of news is then vectorized with this model.
In addition, the method of vectorizing news according to the present invention is not limited to the TF-IDF method of the present embodiment, and other vectorizing methods may be used instead.
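As an illustration only (not the patented implementation; the corpus, function and variable names below are hypothetical), Steps 2-1 to 2-3 can be sketched over a pre-segmented news corpus as follows:

```python
import math

def tfidf_vectors(docs):
    """Vectorize pre-segmented documents with the TF-IDF scheme described above.

    docs: list of token lists (the output of a word-segmentation step).
    Returns one {term: weight} mapping per document, where
    tf = count(term in doc) / len(doc) and idf = log(N / N_term).
    A term occurring in every document receives idf = 0 and thus weight 0.
    """
    n_docs = len(docs)
    # document frequency: number of documents containing each term
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)    # normalized word frequency
            idf = math.log(n_docs / df[term])  # inverse document frequency
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors
```

As the description above notes, a production system would train the TF-IDF model once on a large-scale corpus and reuse it, rather than refit it on every batch of news.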
Step 3: the vectorized news set is aggregated by adopting an unsupervised clustering algorithm (such as the single-pass clustering algorithm) to form hot news clusters.
The specific processing procedure of this step is as follows, please refer to fig. 2.
Step 3-1: a news set D = {d1, d2, ..., dn} and a minimum threshold θ are input.
Step 3-2: and taking one news as an initial clustering center, and calculating the content similarity of the news and other news.
In the present embodiment, news d1 is used as the initial clustering center, and the content similarity between each remaining news item and d1 is calculated by the cosine similarity algorithm:

sim(d, T) = cos(d, T) = a

In the above formula, T represents the news item being compared against, a represents the cosine similarity value, and the specific calculation multiplies the weights w_i of the same feature item t_i across all n feature items of the two news items.
Step 3-3: each calculated content similarity is compared with the minimum threshold θ; if all the content similarities are smaller than θ, a new cluster is created with d1 as its center, otherwise d1 is assigned to the cluster with the maximum similarity.
Step 3-4: the news set is aggregated into a plurality of event clusters according to the clustering result, the category numbers of the event clusters are output, and each cluster is defined as an event cluster with similar report contents.
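The single-pass procedure of Steps 3-1 to 3-4 can be sketched as follows. This is a simplified illustration under stated assumptions (cluster centers are frozen at the first member's vector, and all names are hypothetical), not the patent's exact implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def single_pass(vectors, theta):
    """Single-pass clustering: each incoming news vector either joins the
    most similar existing cluster (similarity >= theta) or starts a new
    cluster with itself as the center. Returns one cluster label per vector."""
    centers, labels = [], []
    for vec in vectors:
        sims = [cosine(vec, c) for c in centers]
        if sims and max(sims) >= theta:
            labels.append(sims.index(max(sims)))   # join the closest cluster
        else:
            centers.append(vec)                    # vec becomes a new cluster center
            labels.append(len(centers) - 1)
    return labels
```

The returned labels correspond to the category numbers of the event clusters output in Step 3-4.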
Step 4: vectorizing each event cluster by adopting a deep learning algorithm (such as the skip-gram algorithm) and aggregating by an unsupervised clustering algorithm (such as the single-pass algorithm).
The specific processing procedure of this step is as follows.
Step 4-1: each event cluster is taken as a long text, word segmentation is performed, and the result is input into the skip-gram algorithm. Through the probabilistic model p(w_{i+1}, w_{i-1} | w_i, u_j) (in this model, the parameter w_i represents the current word, the parameters w_{i+1} and w_{i-1} represent the two words adjacent to the current word, and the parameter u_j represents the event cluster vector obtained in the last iteration, randomly generated for the first iteration), the probabilities of the two words adjacent to the current word w_i are calculated, and the word with the highest probability in the dictionary is selected as output. At the same time, the event cluster vector u_j obtained in the last iteration is input into the skip-gram algorithm.
Step 4-2: the difference between the word obtained through p(w_{i+1}, w_{i-1} | w_i, u_j) and the real adjacent word is taken to obtain a loss term, the loss term is propagated back to p(w_{i+1}, w_{i-1} | w_i, u_j) through the back propagation algorithm, and the corresponding event cluster vector value u_j is then updated.
Step 4-3: steps 4-1 to 4-2 are repeated until the vector value of u_j approaches stability or the remaining text of the event cluster has been trained.
Step 4-4: the vectorization results of all event clusters are integrated together as the input of the single-pass algorithm, secondary clustering is carried out, and the results are defined as topic clusters.
In addition, the invention is not limited to the secondary clustering to form topic clusters in the embodiment, and multi-layer clustering can be performed by using the same method. The neural network structure used for vectorization in this step may be replaced with another network structure.
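A toy reconstruction of the training loop of Steps 4-1 to 4-3 follows. It is a sketch under assumptions (a full-softmax skip-gram objective in which the event cluster vector u_j is added to the current word's vector and updated by back-propagation, in the spirit of paragraph-vector models); the patent does not specify these details, and all names are illustrative:

```python
import numpy as np

def train_cluster_vector(tokens, dim=16, epochs=40, lr=0.1, seed=0):
    """Jointly learn word vectors and one event-cluster vector u_j.

    At each position i the model scores every vocabulary word from
    h = v(w_i) + u_j and is trained (softmax + cross-entropy, with
    back-propagation) to predict the real neighbours w_{i-1} and w_{i+1};
    the gradient on h also updates u_j. Returns (u_j, per-epoch losses)."""
    rng = np.random.default_rng(seed)
    vocab = sorted(set(tokens))
    index = {w: k for k, w in enumerate(vocab)}
    ids = [index[w] for w in tokens]
    V = len(vocab)
    W_in = rng.normal(0.0, 0.1, (V, dim))   # input word vectors v(w)
    W_out = rng.normal(0.0, 0.1, (V, dim))  # output (prediction) vectors
    u = rng.normal(0.0, 0.1, dim)           # event-cluster vector, random at first
    losses = []
    for _ in range(epochs):
        total = 0.0
        for i in range(1, len(ids) - 1):
            h = W_in[ids[i]] + u                 # current word + cluster context
            scores = W_out @ h
            scores -= scores.max()               # numerical stability
            p = np.exp(scores)
            p /= p.sum()
            left, right = ids[i - 1], ids[i + 1]
            total -= np.log(p[left] + 1e-12) + np.log(p[right] + 1e-12)
            grad = 2.0 * p                       # dLoss/dscores for both targets
            grad[left] -= 1.0
            grad[right] -= 1.0
            gh = W_out.T @ grad                  # back-propagate to h
            W_out -= lr * np.outer(grad, h)
            W_in[ids[i]] -= lr * gh
            u -= lr * gh                         # update the event-cluster vector
        losses.append(total)
    return u, losses
```

In practice each event cluster j would be trained this way to obtain its vector u_j, and the resulting vectors would then be fed to the single-pass algorithm of Step 4-4.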
Preferably, the method of this embodiment further includes step 5: generating a topic cluster description by using a new word discovery algorithm.
The specific processing procedure of this step is as follows, please refer to fig. 3 at the same time.
Step 5-1: all news in each topic cluster is collected together and passed through the Chinese word segmentation module; with the segmented result as input, three indexes are calculated respectively: word frequency, aggregation degree and degree of freedom, as shown in fig. 3. The specific calculation is as follows:
(1) Calculating word frequency: regular expressions are used to match one-, two-, three-, four- and five-character candidate words, and the word frequency of each is calculated.
(2) Calculating the aggregation degree: assuming the candidate word is S, the probability P(S) of its occurrence is first calculated; then all possible two-way splits of S are tried, i.e., the word is divided into a left part sl and a right part sr, and P(sl) and P(sr) are calculated (a two-character word has one such split, and a three-character word has two). Over all split schemes, the minimum value of P(S)/(P(sl)×P(sr)) is found; after taking the logarithm, this minimum serves as the measure of the aggregation degree, which is calculated for all candidate words.
(3) Calculating the degree of freedom: assuming a word appears N times in total, with n distinct Chinese characters appearing on its left, occurring N1, N2, ..., Nn times respectively so that N = N1 + N2 + ... + Nn, the probability of each character appearing on the left of the word can be calculated, and the left-adjacent entropy can then be computed with the entropy formula. The smaller the entropy, the lower the degree of freedom; the smaller of the left-adjacent and right-adjacent entropies of a word is taken as its final degree of freedom.
Step 5-2: the product of word frequency, aggregation degree and degree of freedom is taken as the ranking index, and representative words are generated as the topic description.
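The three indexes of Step 5-1 and the ranking product of Step 5-2 can be sketched for a single candidate as follows. This is an illustrative character-level implementation with hypothetical names, not the patented code (which matches candidates with regular expressions over Chinese text):

```python
import math
from collections import Counter

def candidate_score(text, word):
    """Score one candidate word (length >= 2) inside `text`, which stands in
    for the concatenated news of one topic cluster. The score is the product
    of word frequency, aggregation degree (minimum log P(S)/(P(sl)*P(sr))
    over all two-way splits) and degree of freedom (the smaller of the left-
    and right-neighbour entropies)."""
    n = len(text)

    def prob(s):
        windows = n - len(s) + 1
        count = sum(1 for i in range(windows) if text[i:i + len(s)] == s)
        return count / windows

    freq = prob(word)
    # aggregation degree: minimum over all two-way splits sl | sr
    cohesion = min(
        math.log(prob(word) / (prob(word[:k]) * prob(word[k:])))
        for k in range(1, len(word))
    )

    def entropy(neighbours):
        total = sum(neighbours.values())
        return -sum(c / total * math.log(c / total) for c in neighbours.values())

    occurrences = [i for i in range(n - len(word) + 1) if text[i:i + len(word)] == word]
    left = Counter(text[i - 1] for i in occurrences if i > 0)
    right = Counter(text[i + len(word)] for i in occurrences if i + len(word) < n)
    freedom = min(entropy(left) if left else 0.0,
                  entropy(right) if right else 0.0)
    return freq * cohesion * freedom
```

Ranking all candidates by this score and keeping the top ones yields the representative words used as the topic description.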
FIG. 4 illustrates the principle of an embodiment of the hot spot event identification system based on multi-level clustering according to the present invention. Referring to fig. 4, the system of the present embodiment includes: the system comprises a phrase segmentation module, a vectorization module, an event cluster acquisition module and an aggregation module. Preferably, the system further comprises a topic cluster description generation module.
The phrase segmentation module is configured to preprocess the text and segment the text content into a plurality of phrases.
The phrase segmentation module is further configured to process the following:
a professional lexicon (such as a financial professional lexicon) and a stop word list are imported to assist the Chinese word segmentation module;
the major organizations and person names appearing in the text are identified using named entity recognition techniques, such as a named entity recognition technique based on the pre-trained language model BERT with large-scale financial annotation samples;
a Chinese word segmentation module is adopted to segment the text into a plurality of phrases.
The vectorization module is configured to perform text vectorization processing on the phrase-segmented text to form a vectorized event set.
The vectorization module is further configured to process the following:
the frequency with which each word appears in the text, namely the word frequency, is calculated and normalized:

tf_i = f_ij / N_j

wherein f_ij indicates the number of times word i occurs in news d_j, N_j represents the total number of words in news d_j, and tf_i indicates the normalized frequency with which word i appears in the news;
the inverse document frequency is calculated:

idf_i = log(N / N_i)

wherein N_i indicates the number of news items containing word i, N represents the total number of news items in the news set, and idf_i represents the inverse document frequency: the total number of news items is divided by the number of news items containing the word, and the logarithm of the quotient is taken;
each piece of news in the text is vectorized by the term frequency-inverse document frequency algorithm as:

d = {(t_1, w_1), (t_2, w_2), ..., (t_i, w_i), ..., (t_n, w_n)}

wherein t_i is a feature item of the text, w_i is the weight of that feature item, and d represents the vectorized result of the news; a TF-IDF model is first trained on a large-scale corpus, and each piece of news is then vectorized with this model.
The event cluster acquisition module is configured to aggregate the quantified event sets by adopting an unsupervised clustering algorithm to form event clusters of hot spots.
The event cluster acquisition module is further configured to process the following:
the news set to be processed D = {d1, d2, ..., dn} and a minimum threshold θ are input;
one news item is taken as the initial clustering center and its content similarity with the other news items is calculated; with news d1 as the initial clustering center, the content similarity between each remaining news item and d1 is calculated by the cosine similarity algorithm:

sim(d, T) = cos(d, T) = a

in the above formula, T represents the news item being compared against and a represents the cosine similarity value;
each calculated content similarity is compared with the minimum threshold θ; if all the content similarities are smaller than θ, a new cluster is created with d1 as its center, otherwise d1 is assigned to the cluster with the maximum similarity;
the news set is aggregated into a plurality of event clusters according to the clustering result, the category numbers of the event clusters are output, and each cluster is defined as an event cluster with similar report contents.
The aggregation module is configured to conduct vectorization processing on each event cluster by adopting a deep learning algorithm and conduct aggregation by using an unsupervised clustering algorithm again.
The aggregation module is further configured to process the following:
each event cluster is taken as a long text, word segmentation is performed, and the result is input into the skip-gram algorithm, wherein through the probabilistic model p(w_{i+1}, w_{i-1} | w_i, u_j) (in this model, the parameter w_i represents the current word, the parameters w_{i+1} and w_{i-1} represent the two words adjacent to the current word, and the parameter u_j represents the event cluster vector obtained in the last iteration, randomly generated for the first iteration), the probabilities of the two words adjacent to the current word w_i are calculated, the word with the highest probability in the dictionary is selected as output, and the event cluster vector u_j obtained in the last iteration is input into the skip-gram algorithm;
the difference between the word obtained through p(w_{i+1}, w_{i-1} | w_i, u_j) and the real adjacent word is taken to obtain a loss term, the loss term is propagated back to p(w_{i+1}, w_{i-1} | w_i, u_j) through a back propagation algorithm, and the corresponding event cluster vector value u_j is then updated;
the above two steps are repeated until the vector value of u_j approaches stability or the remaining text of the event cluster has been trained;
the vectorization results of all event clusters are integrated together as the input of the single-pass algorithm, secondary clustering is carried out, and the results are defined as topic clusters.
The topic cluster description generation module is configured to generate a topic cluster description using a new word discovery algorithm.
The topic cluster description generation module is further configured to process the following:
all news in each topic cluster is gathered together and passed through the Chinese word segmentation module; with the segmented result as input, three indexes are calculated respectively, namely word frequency, aggregation degree and degree of freedom, in the following specific manner:
(1) Calculating word frequency: regular expressions are used to match one-, two-, three-, four- and five-character candidate words, and the word frequency of each is calculated.
(2) Calculating the aggregation degree: assuming the candidate word is S, the probability P(S) of its occurrence is first calculated; then all possible two-way splits of S are tried, i.e., the word is divided into a left part sl and a right part sr, and P(sl) and P(sr) are calculated (a two-character word has one such split, and a three-character word has two). Over all split schemes, the minimum value of P(S)/(P(sl)×P(sr)) is found; after taking the logarithm, this minimum serves as the measure of the aggregation degree, which is calculated for all candidate words.
(3) Calculating the degree of freedom: assuming a word appears N times in total, with n distinct Chinese characters appearing on its left, occurring N1, N2, ..., Nn times respectively so that N = N1 + N2 + ... + Nn, the probability of each character appearing on the left of the word can be calculated, and the left-adjacent entropy can then be computed with the entropy formula. The smaller the entropy, the lower the degree of freedom; the smaller of the left-adjacent and right-adjacent entropies of a word is taken as its final degree of freedom.
The product of word frequency, aggregation degree and degree of freedom is taken as the ranking index, and representative words are generated as the topic description.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood by one skilled in the art.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk (disk) and disc (disc), as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and blu-ray disc where disks (disks) usually reproduce data magnetically, while discs (discs) reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.