
WO2005048185A1 - Transductive neuro-fuzzy inference method for personalised modelling - Google Patents

Transductive neuro-fuzzy inference method for personalised modelling

Info

Publication number
WO2005048185A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
output
input
predicting
rationalised
Prior art date
Application number
PCT/NZ2004/000290
Other languages
English (en)
Inventor
Nikola Kirilov Kasabov
Qun Song
Original Assignee
Auckland University Of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Auckland University Of Technology filed Critical Auckland University Of Technology
Publication of WO2005048185A1 publication Critical patent/WO2005048185A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/043Architecture, e.g. interconnection topology based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]

Definitions

  • the invention relates to a Transductive Neuro Fuzzy Inference Method and Uses for Personalised Modelling.
  • in connectionist and fuzzy inference systems, a global model is learned that consists of many local models (e.g., rules representing clusters of data) that collectively cover the whole problem space and are adjusted incrementally on new data.
  • the output for a new vector is calculated based on the activation of one or several neighbouring local models (rules).
  • the inductive learning and inference approach is useful when a global model ("the big picture") of the problem is needed even in its very approximate form.
  • some models, e.g. ECOS (Evolving Connectionist Systems), adjust their local models incrementally as new data arrive.
  • the inductive global learning process is less suitable where personalised modelling is required, for example, in clinical and medical applications of learning systems. This problem is particularly acute in determining individual outcomes, diagnoses and treatment regimes for medical decision support systems. In such applications, the focus is not on the global model, but on the individual patient. It is not so important what the global error of a global model over the whole problem space is, but rather - the accuracy of prediction for an individual patient.
  • Transductive inference systems and methods have been devised to address this problem by estimation of a function at a single point of the search space only.
  • the closest examples that form a data subset are derived from an existing data set or/and generated from an existing model.
  • a new model is dynamically created from this subset to approximate the function at the new input vector.
  • An example of these models is the k-nearest neighbour method, where for every new input vector v the closest k vectors from a training (existing) data set are chosen, and the predicted output for the new vector is calculated based on the outputs of these k examples.
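For illustration, the k-nearest neighbour prediction just described can be sketched in a few lines. The inverse-distance weighting of the neighbours' outputs is one common choice made here, not a detail prescribed by the patent:

```python
import math

def knn_predict(train_x, train_y, v, k=3):
    """Transductive k-NN: predict the output for a new vector v from
    the outputs of its k closest training examples, weighted by
    inverse distance."""
    scored = sorted(
        ((math.dist(x, v), y) for x, y in zip(train_x, train_y)),
        key=lambda t: t[0],
    )[:k]
    eps = 1e-9  # avoid division by zero when v coincides with a sample
    weights = [1.0 / (d + eps) for d, _ in scored]
    return sum(w * y for w, (_, y) in zip(weights, scored)) / sum(weights)
```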
  • Unfortunately, currently available transductive inference methods and systems have one or more of the following disadvantages:
  • the models do not estimate the importance factors for the input variables in every part of the problem space, where a new vector is located; it is known that for different groups of patients, for example old, versus young; male versus female, some input variables are more important than others, and if this is taken into account, a more accurate output value would be calculated for the new input vector.
  • the present invention provides a method for predicting an output from a test input, comprising the steps of: receiving a set of input data having expected output data; applying a transformation to at least some of the input data to obtain a set of normalised data; applying a rationalising function to the set of normalised data to obtain a set of rationalised input data and rationalised expected output data; applying a clustering function to the set of rationalised data; applying a transformation to a set of rules based at least partly on the results of the clustering function; evaluating the accuracy of the rationalised expected output data; and generating output data.
  • the present invention comprises a prediction system configured to predict an output from a test input, the system comprising a data transformation module configured to transform at least some of the input data to obtain a set of normalised data; a rationalising module configured to apply a rationalising function to the set of normalised data to obtain a set of rationalised input data and rationalised expected output data; a clustering module configured to apply a clustering function to the set of rationalised data; a set of rules maintained in computer memory; an optimiser module configured to apply a transformation to the rules based at least partly on the results of the clustering function; a decoder configured to transform a series of outputs; and an output layer configured to display a set of outputs.
  • the present invention also extends in a still further aspect to a neural network module for carrying out the steps in the first aspect.
  • the present invention also provides a method for predicting an output from a test input x comprising at least the following steps: a) provide a set D of known global inputs and expected outputs of the used variables; b) select relevant input variables and initialise importance factors (local importance factors) for the input variables for the new input vector x; c) perform a transformation of the problem space into a reduced and normalised variable space based on weighted variable normalisation that reflects the local importance of the input variables for the area of the new input vector, thus producing a normalised data set D'; d) rationalise the said set D' to produce a new rationalised local set D'x of inputs and expected outputs that are closely related to the test input x in the variable importance space; e) cluster and partition the rationalised set D'x in the weighted variable normalisation problem space using a clustering algorithm; f) set the initial parameters of the classification/prediction model based on fuzzy rules according to the results of the clustering and the partitioning in the preceding steps
  • the present invention also provides a system for predicting an output from a test input comprising at least the following: a) an input device for receiving a test input x; b) a storage and retrieval medium to provide a set D of known global and previously stored inputs and expected outputs; c) a variable selection and data transformation module that transforms data from the original space to a weighted variable normalisation space by performing a transformation of the problem space into a normalised variable space based on weighted variable normalisation that reflects the local importance of the input variables for the area of the new input vector, thus producing a normalised data set D'; d) a rationalising module that produces a new rationalised set D' x of inputs and expected outputs that are closely related to the test input x from set D' in the weighted variable normalisation space; e) a clustering module comprising a clustering algorithm for clustering and partitioning the rationalised set D ' x ; f) a fuzzy rule creation module that creates fuzzy rules and sets initial parameters
  • Figure 1 shows a schematic diagram of the main components of an embodiment of the invention.
  • Figure 2 shows case study data associated with the invention.
  • FIG. 1 shows a schematic diagram of preferred aspects of one form of the invention.
  • the system 100 includes an input layer to which known inputs and expected outputs are passed.
  • the set of known inputs and expected outputs are preferably stored in a database maintained in computer memory.
  • the outputs are membership classes.
  • the membership class may, for example, be a class of patient permitting a classification of a patient or a condition.
  • the output is one or more data values or vectors.
  • typical input values are clinical and/or gene patient-specific data and outputs are preferably selected from the group consisting of membership of a group of patients, a risk of an event, a clinical variable not easily directly measured e.g. glomerular filtration rate, a prognostic outcome, a diagnostic outcome, a suggested treatment or treatment regime.
  • typical input variables and their corresponding output variables would be: records of applicants for a bank loan and the decision (grant, or don't grant the loan); a set of economic variables and a predicted economic state; a set of financial indexes and a predicted value for an index; etc.
  • Transformation could be performed on the input data to obtain a set of normalised data, shown as normalised inputs 110.
  • the system could assign an importance factor, or set of importance factors to one or more of the inputs.
  • the importance factors are normalisation weights.
  • the variable importance space is weighted variable normalisation space. The inputs having normalisation weights exceeding a threshold importance factor are selected for subsequent transformation and rationalisation.
  • Normalised variable space is also known as variable importance space.
  • Variable normalisation weights are also known as local importance factors.
  • importance factors in local space may be initialised by assuming they are equal to importance factors determined using a global model, such as an inductive fuzzy neural network. In an alternate embodiment, the importance factors are all initialised to be equal to 1.
  • the model produced from the optimisation is a set of rules.
  • a rationalising function could then be applied to the normalised inputs 110 to obtain a set of rationalised data, shown as rationalised inputs 115.
  • the rationalisation process may be in the form of human selection of relevant data based on experience.
  • the rationalisation process may be a computational method as suggested in this invention.
  • a simple example of such a computational method is the k-nearest neighbour (k-NN) transductive decision method described in Mitchell, T.M. (1997). "Machine Learning." McGraw-Hill, and Vapnik, V. (1998). "Statistical Learning Theory." John Wiley & Sons, Inc.
  • the rationalised set of inputs and expected outputs that are closely related to the test input represents a sub-set of the original data set that is more closely related to the test input than the original data set according to a measured similarity. If desired, the rationalised set can be selected after the initial data set is transformed into the variable importance space through a normalisation procedure and the distance is measured between the new input vector and all data vectors. One embodiment selects the data by choosing a minimum number N of vectors closest to the new vector, but also including all samples that differ from each other by less than 10%.
  • the rationalised set can also be selected based on the distance between the new input vector and the data samples in the original input space, after which importance factors are calculated with the use of methods from the art, such as correlation analysis (for prediction models) and signal-to-noise ratio (for classification models).
  • the rationalised set is then transformed into a set in the new space - the variable importance space.
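A minimal sketch of this rationalisation step, assuming a weighted Euclidean distance in the variable importance space; the 10%-similarity inclusion rule described above is omitted for brevity, and the function name is illustrative:

```python
import math

def rationalise(data_x, data_y, v, weights, n_min=3):
    """Select the rationalised subset D'_x: the n_min samples closest
    to the new input v, with each variable's difference scaled by its
    importance weight before measuring distance."""
    def wdist(x):
        return math.sqrt(sum((w * (a - b)) ** 2
                             for w, a, b in zip(weights, x, v)))
    order = sorted(range(len(data_x)), key=lambda i: wdist(data_x[i]))
    keep = order[:n_min]
    return [data_x[i] for i in keep], [data_y[i] for i in keep]
```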
  • a clustering function could then be applied to create a set of clustered inputs 120.
  • the clustering and partitioning of the rationalised set may be accomplished by using any suitable clustering algorithm in the art, but in a preferred embodiment, clustering and partitioning is performed in the local weighted variable normalisation problem space based on the importance factors.
  • the currently preferred algorithm is ECM, described in Kasabov, N. and Q. Song (2002). "DENFIS: Dynamic evolving neural-fuzzy inference system and its application for time-series prediction." IEEE Trans. on Fuzzy Systems 10(2): 144-154, which is hereby incorporated by reference.
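The flavour of such a one-pass evolving clustering step can be sketched as below. This is a simplified illustration in the spirit of ECM, with a distance threshold `dthr`, and is not the published algorithm:

```python
import math

def ecm(samples, dthr=0.5):
    """Simplified one-pass evolving clustering: a sample inside an
    existing cluster's radius changes nothing; a sample farther than
    dthr from every centre seeds a new cluster; otherwise the nearest
    cluster's centre is moved toward the sample and its radius grows."""
    centres, radii = [], []
    for s in samples:
        if not centres:
            centres.append(list(s))
            radii.append(0.0)
            continue
        d = [math.dist(s, c) for c in centres]
        j = min(range(len(d)), key=d.__getitem__)
        if d[j] <= radii[j]:
            continue                      # inside an existing cluster
        if d[j] > dthr:                   # too far from everything: new cluster
            centres.append(list(s))
            radii.append(0.0)
        else:                             # update the nearest cluster
            new_r = (d[j] + radii[j]) / 2
            ratio = (new_r - radii[j]) / d[j]
            centres[j] = [c + ratio * (a - c) for c, a in zip(centres[j], s)]
            radii[j] = new_r
    return centres, radii
```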
  • the system also maintains a set of rules, shown as fuzzy rules 105.
  • The process of creating fuzzy rules 105 may be undertaken separately from the clustering and partitioning or may be undertaken in the same process.
  • the currently preferred ECM algorithm provides the requisite process as part of the partitioning and clustering process.
  • the system includes an optimiser 130 that is configured to apply a transformation, for example an optimising transformation, to the rules based at least partly on the clustering function described above.
  • the parameters of the fuzzy rules may be optimised by any objective evaluation method that determines the fitness of the data.
  • the currently preferred method is to determine an overall error for the fitness of the data.
  • Optimisation occurs by: (1) changing the weighted normalisation intervals (importance) for the input variables; and (2) changing the parameters of the fuzzy rules, using in both cases error minimising algorithms in the art.
  • the currently preferred algorithm for optimisation is a steepest descent algorithm. However, there are other well-established algorithms available in the art suitable for application in the practice of the present invention. Following optimisation, control could be passed back to obtain normalised inputs 110, rationalised inputs 115 and clustered inputs 120.
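The steepest-descent principle can be illustrated on the simplest possible case: tuning the centre m and width s of one Gaussian membership function to minimise a squared error. The gradient formulas follow directly from differentiating the Gaussian; the method in the text applies the same idea to the full rule parameter set, so this is an illustrative sketch only:

```python
import math

def gaussian(x, m, s):
    return math.exp(-((x - m) ** 2) / (2 * s * s))

def sq_error(xs, ts, m, s):
    return 0.5 * sum((gaussian(x, m, s) - t) ** 2 for x, t in zip(xs, ts))

def fit_gaussian(xs, ts, m, s, lr=0.05, epochs=1000):
    """Steepest descent on E = 1/2 * sum((mu(x) - t)^2), using
    d(mu)/dm = mu * (x - m) / s^2  and  d(mu)/ds = mu * (x - m)^2 / s^3."""
    for _ in range(epochs):
        gm = gs = 0.0
        for x, t in zip(xs, ts):
            mu = gaussian(x, m, s)
            err = mu - t
            gm += err * mu * (x - m) / (s * s)
            gs += err * mu * (x - m) ** 2 / (s ** 3)
        m -= lr * gm
        s = max(s - lr * gs, 1e-3)    # keep the width positive
    return m, s
```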
  • the output from the system and method is calculated using fuzzy decoding algorithms in the art specific for the fuzzy rules used.
  • a decoder 135 applies a fuzzy decoding algorithm. Outputs are then passed to an output layer 140.
  • the problem space transformation module based on weighted variable normalisation, the clustering module, the fuzzy rule creation module, the optimising module, and fuzzy decoder form part of a computer implemented neural network 145 comprising an input transformation layer comprising one or more input nodes arranged to receive and normalise input data; a rule base layer comprising one or more rule nodes; an output layer comprising one or more output nodes; and an adaptive component arranged to aggregate selected two or more rule nodes in the rule base layer based on the input data.
  • the system and the method are preferably dynamic multi-input multi-output neural-fuzzy inference systems and methods respectively with a local generalization, in which a fuzzy inference engine is used, for example the Zadeh-Mamdani engine described in Zadeh, L.A. (1965). "Fuzzy Sets." Information and Control 8: 338-353, or the Takagi-Sugeno engine described in Takagi, T. and M. Sugeno (1985). "Fuzzy identification of systems and its applications to modeling and control." IEEE Trans. on Systems, Man, and Cybernetics 15: 116-132.
  • the local generalization means that in a sub-space of the whole problem space (local area) a model is created that performs generalization in this area.
  • Gaussian fuzzy membership functions may be applied in each fuzzy rule for both the antecedent and the consequent parts or for the antecedent part only.
  • a BP (Back-Propagation) learning algorithm may be used for optimizing the parameters of the fuzzy membership functions.
  • other learning algorithms may be employed.
  • An additional learning function may be derived for use in the model.
  • the distance between vectors x and y is preferably measured in the weighted variable normalisation space as the normalised Euclidean distance, using the variable importance factors (normalisation weights) wj, defined as follows:

d(x, y) = sqrt( (1/P) * Σj=1..P ( wj (xj - yj) )² )
  • an importance weight for an input variable can be 0 or close to 0, which indicates that this variable is not selected in the local model to calculate the output value for the input variable.
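A sketch of such a weighted, normalised Euclidean distance, assuming each coordinate difference is scaled by its importance weight and the sum is normalised by the number of variables P:

```python
import math

def weighted_distance(x, y, w):
    """Normalised Euclidean distance in the weighted variable space.
    Zero-weight variables drop out of the calculation, mirroring the
    remark that a weight of 0 deselects the variable in the local
    model."""
    p = len(x)
    return math.sqrt(sum((wj * (xj - yj)) ** 2
                         for wj, xj, yj in zip(w, x, y)) / p)
```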
  • ECM (Evolving Clustering Method), described in Kasabov, N. and Q. Song (2002). "DENFIS: Dynamic evolving neural-fuzzy inference system and its application for time-series prediction." IEEE Trans. on Fuzzy Systems 10(2): 144-154, is used, and the cluster centres and cluster radii are respectively taken as initial values of the centres and the widths of the Gaussian membership functions.
  • the data in a cluster may be used for creating a linear output function.
  • TNFIP Transductive Neuro-Fuzzy Inference Method for Prediction System
  • initial variable normalisation weighting functions f1, f2, ..., fp for the input variables x1, x2, ..., xp to represent their importance for the new input vector x.
  • Ni can be pre-defined based on experience, or - optimised through the application of an optimization procedure. Here we assume the former approach.
  • ECM or another clustering algorithm
  • Fij are fuzzy sets defined by the following Gaussian type membership function: F(x) = exp( -(x - m)² / (2σ²) ), where m is the centre and σ is the width of the function.
  • the steepest descent algorithm (BP) is then used to obtain the formulas for the optimization of the parameters (the centres, widths and weights of the antecedent and consequent Gaussian functions) of the Zadeh-Mamdani type TNFI such that the error function E from Eq. (12) is minimized (Eq. 13).
  • xl has a membership degree of 0.68 to a Gaussian function with a center at 0.7 and a standard deviation of 0.2
  • x2 has a membership degree to a Gaussian function with a center at 0.5 and standard deviation of 0.12 (x2 has an importance factor of 0.3)
  • THEN y has a membership degree of 0.9 to a Gaussian function with a center at 0.8 and a standard deviation of 0.18, with 15 vectors being in this cluster
  • xl has a membership degree of 0.68 to a Gaussian function with a center at 0.7 and a standard deviation of 0.2 (xl has an importance factor of 0.9) and x2 has a membership degree to a Gaussian function with a center at 0.5 and standard deviation of 0.12 (x2 has an importance factor of 0.3)
  • the TNFIP modelling method is used here as part of a methodology for modelling and predicting the future values of time series.
  • the methodology is presented through a case study problem of building transductive models for the prediction of the Mackey-Glass (MG) time series data set. This has been used as a benchmark problem in the areas of neural networks, fuzzy systems and hybrid systems.
  • This time series is created with the use of the MG time-delay differential equation defined below:

dx(t)/dt = 0.2 x(t - τ) / (1 + x¹⁰(t - τ)) - 0.1 x(t)    (24)
  • the fourth-order Runge-Kutta method was used to find the numerical solution to the above MG equation.
  • the task is to predict the values x(t + 85) from input vectors [x(t — 18), x(t — 12), x(t - 6), x(t)] for any value of the time t.
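The benchmark series can be generated as sketched below. The delay τ = 17 and initial value x(0) = 1.2 are the usual benchmark settings and are assumed here (the text does not state them); the delayed term is held fixed within each Runge-Kutta step, a common simplification:

```python
def mackey_glass(n, tau=17.0, x0=1.2, h=0.1):
    """Integrate dx/dt = 0.2 x(t-tau)/(1 + x(t-tau)^10) - 0.1 x(t)
    with 4th-order Runge-Kutta steps of size h, using a constant
    pre-history x(t <= 0) = x0. Returns n samples of x taken at unit
    time intervals."""
    d = int(round(tau / h))        # delay expressed in integration steps
    hist = [x0] * (d + 1)          # constant pre-history
    out = []
    per_unit = int(round(1.0 / h))
    for i in range(n * per_unit):
        x = hist[-1]
        xd = hist[-d - 1]          # x(t - tau), held fixed over the step

        def f(u):
            return 0.2 * xd / (1.0 + xd ** 10) - 0.1 * u

        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        hist.append(x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
        if (i + 1) % per_unit == 0:
            out.append(hist[-1])
    return out
```

Input/output pairs for the prediction task are then formed by taking [x(t-18), x(t-12), x(t-6), x(t)] as inputs and x(t+85) as the target.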
  • the results are compared with connectionist models applied for inductive inference on the same task: MLP and DENFIS.
  • Table 1 lists the prediction results obtained by using the TNFIP method and two other popular methods - MLP (multilayer perceptron) and DENFIS (Dynamic Neuro-Fuzzy Inference System) - in terms of RMSE (root mean square error) and MAE (mean absolute error) on the simulation data, as well as the number Rn of rules, rule nodes or neurons used in each model.
  • MLP Multilayer perceptron
  • DENFIS Dynamic Neuro- Fuzzy Inference System
  • RMSE root mean square error
  • MAE mean absolute error
  • MLP Number of neurons in the hidden layer: 16; Learning algorithm: Levenberg-Marquardt;
  • DENFIS Dthr (distance threshold): 0.15; MofN: 4; Learning epochs: 60;
  • TNFIP Nq: 32; Dthr: 0.20; Learning epochs for weight and parameter optimisation for each input vector: 60.
  • the TNFIP transductive reasoning system performs better than the other inductive reasoning models. This is a result of the fine tuning of each local model in TNFIP for each simulated example, derived according to the TNFIP learning procedure. The finely tuned local models achieve a better local generalisation.
  • a GA is run on a population of TNFI models for different values of weights, over several generations.
  • as a fitness function, the root mean square error (RMSE) of a trained model on the training data or on validation data is used.
  • the GA runs over generations of populations and standard operations are applied such as binary encoding of the genes (weights); roulette wheel selection criterion; multi-point crossover operation for crossover.
  • the model with the least error is selected as the best one, and its chromosome - the vector of weights [q1, q2, ..., qp] - defines the optimum normalisation range for the input variables.
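A minimal GA of this shape is sketched below, assuming 4-bit genes mapped onto a fixed range, roulette-wheel selection on inverse error, a single crossover point, and bit-flip mutation. The `fitness` callback stands in for training and evaluating a TNFI model with the candidate weights; all names and parameter defaults here are illustrative:

```python
import random

def ga_optimise_weights(fitness, p, bits=4, pop=12, gens=30,
                        lo=0.16, hi=1.0, mut=0.001, seed=0):
    """Minimal GA over normalisation weights [q1..qp]. `fitness`
    returns an error to minimise; the best decoded weight vector
    ever evaluated is returned together with its error."""
    random.seed(seed)
    glen = bits * p

    def decode(ch):
        ws = []
        for i in range(p):
            v = int("".join(map(str, ch[i * bits:(i + 1) * bits])), 2)
            ws.append(lo + (hi - lo) * v / (2 ** bits - 1))
        return ws

    popn = [[random.randint(0, 1) for _ in range(glen)] for _ in range(pop)]
    best, best_err = None, float("inf")
    for _ in range(gens):
        errs = [fitness(decode(c)) for c in popn]
        for c, e in zip(popn, errs):
            if e < best_err:
                best, best_err = decode(c), e
        fits = [1.0 / (1e-9 + e) for e in errs]     # roulette weights
        total = sum(fits)

        def pick():
            r, acc = random.uniform(0, total), 0.0
            for c, f in zip(popn, fits):
                acc += f
                if acc >= r:
                    return c
            return popn[-1]

        nxt = []
        while len(nxt) < pop:
            a, b = pick(), pick()
            cut = random.randrange(1, glen)          # one crossover point
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut) for g in child]
            nxt.append(child)
        popn = nxt
    return best, best_err
```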
  • TNFIP is applied on the Mackey-Glass (MG) time series prediction task.
  • the following GA parameter values are used: for each input variable, the values from 0.16 to 1 are mapped onto a 4-bit string; the number of individuals in a population is 12; the mutation rate is 0.001; the termination criterion (the maximum number of GA generations) is 100 generations; the root mean square error (RMSE) on the training data is used as the fitness function.
  • the resulting weight values, the training RMSE and testing RMSE are shown in Table 2.
  • TNFIP results with the same parameters, the same training data and testing data, but without optimisation of the normalisation weights are also shown in Table 2.
  • Zadeh-Mamdani rules e.g.:
  • xl has a membership degree of 0.68 to a Gaussian function with a center at 0.7 and a standard deviation of 0.2
  • x2 has a membership degree to a Gaussian function with a center at 0.5 and standard deviation of 0.12
  • x3 has a membership degree of 0.68 to a Gaussian function with a center at 0.14 and a standard deviation of 0.02
  • x4 has a membership degree to a Gaussian function with a center at 0.87 and standard deviation of 0.2
  • THEN y has a membership degree of 0.78 to a Gaussian function with a center at 0.83 and a standard deviation of 0.18, with 10 vectors being in this cluster
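Rules of this form can be evaluated with centre-average defuzzification, sketched below. Product aggregation of the antecedent degrees and centre-average decoding are standard choices made here; the patent's exact decoding algorithm may differ:

```python
import math

def gauss(x, m, s):
    return math.exp(-((x - m) ** 2) / (2 * s * s))

def mamdani_infer(rules, x):
    """Zadeh-Mamdani style inference with Gaussian membership
    functions: each rule fires with the product of its antecedent
    membership degrees, and the crisp output is the firing-strength-
    weighted average of the consequent Gaussian centres.
    rules: list of (antecedents, (out_centre, out_sigma)), where
    antecedents is a list of (centre, sigma) pairs, one per input."""
    num = den = 0.0
    for ants, (out_centre, _out_sigma) in rules:
        w = 1.0
        for xi, (m, s) in zip(x, ants):
            w *= gauss(xi, m, s)
        num += w * out_centre
        den += w
    return num / den if den else 0.0
```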
  • the TNFIP is used to develop an application oriented methodology for medical decision support systems. It is presented here through a case example - personalised (individualised) modelling for the evaluation of a renal function of patients in a renal clinic. Real data is used and the developed TNFIP system is currently considered for use in a clinical environment.
  • the accurate evaluation of renal function is fundamental to sound nephrology practice.
  • the early detection of renal impairment will allow for the institution of appropriate diagnostic and therapeutic measures, and potentially maximise preservation of intact nephrons.
  • GFR Glomerular filtration rate
  • Screat (serum creatinine) is expected to be filtered by the kidneys, with the residual released into the blood. The creatinine level in the serum is determined by the rate at which it is removed by the kidney and is therefore also a measure of kidney function.
  • Surea is a substance produced in the liver as a means of disposing of ammonia from protein metabolism. It is filtered by the kidney and can be reabsorbed to the bloodstream.
  • Salb (serum albumin) is the protein of the highest concentration in plasma. Decreased serum albumin may result from kidney disease, which allows albumin to escape into the urine. Decreased albumin may also be explained by malnutrition or liver disease.
  • the TNFIP method is applied for the prediction of the GFR of each new patient, where a modified Takagi-Sugeno type of fuzzy rule is used in which the output function is of the MDRD type, but the coefficients are calculated for every individual patient (personalised model) with the use of the TNFIP method.
  • results produced by the MDRD formula (a global regression model), the MLP (a globally trained connectionist model) and DENFIS (a global model that is a set of adaptive local models), all - inductive reasoning systems, along with the results produced by using the transductive WKNN method, are also listed in the table.
  • the leave-one-out training-simulating tests were performed for each model on the data set, and Table 3 lists the results, including RMSE (root mean square error), MAE (mean absolute error) and Rn (the number of rules, nodes or neurons) used in each model.
  • RMSE root mean square error
  • MAE mean absolute error
  • Rn the number of rules, nodes or neurons
  • MLP Number of neurons in the hidden layer: 10; Learning algorithm: Levenberg-Marquardt;
  • DENFIS Dthr (distance threshold): 0.15; MofN: 6; Learning epochs: 60;
  • the TNFIP system gives the best accuracy of the GFR evaluation for each individual patient and overall - for the whole data set. There was no optimisation of the variable normalisation weights applied (the transformation functions were assumed constant).
  • Variant 2 Using weighted normalisation for the input variables
  • a personalised model for each patient is derived and the input variables are weighted for their importance for the prediction of the output for this patient. This is illustrated in table 6 for a randomly selected single patient (one sample from the GFR data).
  • Fuzzy rules are extracted from this personalized model (six rules) as shown in Table 7 that best describe the prediction rules for the area of the problem space where the new input vector is located. Table 7. The fuzzy rules extracted from the personalised model for the person's data from fig. 6.
  • TNFIC Transductive Neuro-Fuzzy Inference Method for Classification
  • the TNFIC classifies a data set into a number of classes in the n-dimensional input space.
  • the system is a multi-input multi-output type fuzzy inference system optimized by a steepest descent algorithm (BP).
  • the fuzzy rules that constitute the system can be of Zadeh- Mamdani type, of Takagi-Sugeno type or any non-linear function.
  • initial variable normalisation weighting functions f1, f2, ..., fp for the input variables x1, x2, ..., xp to represent their importance for the new input vector xq.
  • search the training data set in the input space to find the Nq training examples that are closest to xq.
  • the value for N q can be pre-defined based on experience, or - optimized through the application of an optimization procedure. Here we assume the former approach.
  • the l-th rule has the form of:
  • the steepest descent algorithm (BP) is then used to obtain the formulas for the optimization of the parameters of the TNFIC such that the value of E from Eq. (29) is minimized:
  • Input variables: j = 1, 2, ..., P;
  • Example 1 TNFIC for the Classification of Iris data set with Optimisation of the Variable Normalisation Weights
  • TNFIC classification results with the same parameters, the same training data and testing data, but without variable weight normalisation, are also shown in Table 8. From the results we can see that the weight of the first variable is much smaller than the weights of the other variables. The weights show the importance of the variables, and the least important variables can be removed from the input for some particular new input vectors. The same experiment is repeated without the first input variable (the least important) and the results improve, as shown in Table 8. If another variable is removed, so that the total number of input variables is 2, the test error increases; it can therefore be assumed that for the particular ECMC model the optimum number of input variables is 3. For different new input vectors, the normalisation weights of the input variables will differ, pointing to the different importance of these variables for the classification (or prediction) of every new input vector located in a particular part of the problem space.
  • Zadeh-Mamdani rules e.g.: IF x2 has a membership degree of 0.68 to a Gaussian function with a center at 0.7 and a standard deviation of 0.2 (x2 has an importance factor of 0.5) and x3 has a membership degree to a Gaussian function with a center at 0.5 and standard deviation of 0.12 (x3 has an importance factor of 0.92) and x4 has a membership degree of 0.68 to a Gaussian function with a center at 0.14 and a standard deviation of 0.02 (x4 has an importance factor of 1) THEN y has a membership degree of 0.78 to belong to a class 2 defined by a Gaussian function with a center at 0.83 and a standard deviation of 0.18, with 10 vectors being in this cluster.
  • the problem used here is mortgage approval for applicants defined by 8 input variables - character (0 - doubtful; 1 - good); total asset; equity; mortgage loan; budget surplus; gross income; debt servicing ratio; term of loan - and one output variable, decision (0 - disapprove; 1 - approve).
  • TNFIC models are created in a leave-one-out mode for every single sample in the data set of 91 samples and results are presented in Table 9. The results are compared with the results obtained with the use of ECF and MLP as inductive methods.
  • a personalised decision support model is developed for every applicant that best makes the decision for them, and the input variables are also weighted, showing the importance of the variables for this applicant's personalised model. This is illustrated in Table 10, where two of the rules that comprise the personalised decision model are shown:
  • Table 10. A personalised decision model for an applicant for a loan, the weighted input variables in this model through TNFI, and two of the Zadeh-Mamdani fuzzy rules that comprise the model.
  • Input vector of a randomly selected person comprising the expression of the 11 genes selected by M. Shipp: [341 275 20 20 725 237 314 20 20 62.6 192]
  • Correctly predicted outcome by the personalised TNFIC model: Class 2 (died within 5 years)
  • Transductive reasoning is not practical in the case of large data sets D (e.g. millions of data samples) and large numbers of variables (e.g. thousands).
  • a large data set D* given on a large number of variables V* is transformed into several clusters of data samples, each cluster defining its own list of variables, so that for every new vector xj only the data from the cluster to which xj belongs is used as the data set D (see the general TNFI method), on a much smaller number of variables.
  • the method consists of the following steps:


Abstract

The present invention relates to a prediction system (100) configured to predict an output from a test input. The system comprises a data transformation module configured to transform at least some of the input data in order to obtain a set of normalised data (110). A rationalising module is configured to apply a rationalising function to the set of normalised data to obtain a set of rationalised input data (115) and rationalised expected output data. A clustering module is configured to apply a clustering function to the set of rationalised data (115). A set of rules (125) is maintained in computer memory. An optimiser module (130) is configured to apply a transformation to the rules (125) based at least partly on the results of the clustering function. A decoder (135) is configured to transform a series of outputs, and an output layer (140) is configured to display a set of outputs.
PCT/NZ2004/000290 2003-11-17 2004-11-17 Methode d'inference neuro-floue transductive pour la modelisation personnalisee WO2005048185A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NZ52957003 2003-11-17
NZ529570 2003-11-17

Publications (1)

Publication Number Publication Date
WO2005048185A1 true WO2005048185A1 (fr) 2005-05-26

Family

ID=34588198

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NZ2004/000290 WO2005048185A1 (fr) 2003-11-17 2004-11-17 Methode d'inference neuro-floue transductive pour la modelisation personnalisee

Country Status (1)

Country Link
WO (1) WO2005048185A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001078003A1 (fr) * 2000-04-10 2001-10-18 University Of Otago Systeme et technique d'apprentissage adaptatif
WO2003040949A1 (fr) * 2001-11-07 2003-05-15 Biowulf Technologies, Llc Classement de caracteristiques pretraitees pour une machine a vecteur de support

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KASABOV N. ET AL.: "Evolving connectionist systems", Retrieved from the Internet <URL:http://www.aut.ac.nz/reserach_showcase/research_activity_areas/kedri/books.shtml> *
KASABOV N. ET AL.: "Evolving fuzzy neural networks for supervised/unsupervised on-line, knowledge-based learning", IEEE TRANSACTIONS ON SYSTEMS, MAN AND CYBERNETICS, PART B - CYBERNETICS, vol. 31, no. 6, December 2001 (2001-12-01), Retrieved from the Internet <URL:http://www.aut.ac.nz/reserach_showcase/research_activity_areas/kedri/downloads/pdf/kas-smc-2001.pdf> *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2582341B1 (fr) 2010-06-16 2016-04-20 Fred Bergman Healthcare Pty Ltd Procédé d'analyse d'événements provenant de données de capteur par optimisation
EP3064179A1 (fr) * 2010-06-16 2016-09-07 Fred Bergman Healthcare Pty Ltd Appareil et procédé permettant d'analyser des événements à partir des données de capteur par optimisation
CN103106535A (zh) * 2013-02-21 2013-05-15 电子科技大学 一种基于神经网络解决协同过滤推荐数据稀疏性的方法
CN103106535B (zh) * 2013-02-21 2015-05-13 电子科技大学 一种基于神经网络解决协同过滤推荐数据稀疏性的方法
WO2018137203A1 (fr) * 2017-01-25 2018-08-02 深圳华大基因研究院 Procédé de détermination d'un ensemble d'indicateurs biologiques d'échantillon de population et de prédiction de l'âge biologique et utilisation associée
US12243624B2 (en) 2017-07-18 2025-03-04 Analytics For Life Inc. Discovering novel features to use in machine learning techniques, such as machine learning techniques for diagnosing medical conditions
US11062792B2 (en) 2017-07-18 2021-07-13 Analytics For Life Inc. Discovering genomes to use in machine learning techniques
US11139048B2 (en) 2017-07-18 2021-10-05 Analytics For Life Inc. Discovering novel features to use in machine learning techniques, such as machine learning techniques for diagnosing medical conditions
CN110991478A (zh) * 2019-10-29 2020-04-10 西安建筑科技大学 热舒适感模型建立方法和用户偏好温度的设定方法及系统
CN114981733A (zh) * 2020-01-30 2022-08-30 奥普塔姆软件股份有限公司 从动态物理模型自动生成复杂工程系统的控制决策逻辑
CN111898628B (zh) * 2020-06-01 2023-10-03 淮阴工学院 一种新型t-s模糊模型辨识方法
CN111898628A (zh) * 2020-06-01 2020-11-06 淮阴工学院 一种新型t-s模糊模型辨识方法
US20230196095A1 (en) * 2021-04-20 2023-06-22 Shanghaitech University Pure integer quantization method for lightweight neural network (lnn)
US11934954B2 (en) * 2021-04-20 2024-03-19 Shanghaitech University Pure integer quantization method for lightweight neural network (LNN)
CN117707141A (zh) * 2023-11-20 2024-03-15 辽宁工业大学 面向水陆两栖车的航行变权重动态协调控制方法和装置

Similar Documents

Publication Publication Date Title
El-Shafiey et al. A hybrid GA and PSO optimized approach for heart-disease prediction based on random forest
EP1534122B1 (fr) Systemes de support de decision medicale utilisant l&#39;expression genique ainsi que des informations cliniques, et procedes d&#39;utilisation correspondants
Urso et al. Data mining: Classification and prediction
Sugumar Rough set theory-based feature selection and FGA-NN classifier for medical data classification
Dhar An adaptive intelligent diagnostic system to predict early stage of parkinson's disease using two-stage dimension reduction with genetically optimized lightgbm algorithm
Khan et al. Use of classification algorithms in health care
Balasubramanian et al. Rough set theory-based feature selection and FGA-NN classifier for medical data classification
Moturi et al. Grey wolf assisted dragonfly-based weighted rule generation for predicting heart disease and breast cancer
Welchowski et al. A framework for parameter estimation and model selection in kernel deep stacking networks
WO2005048185A1 (fr) Methode d'inference neuro-floue transductive pour la modelisation personnalisee
Fadhil et al. Multiple efficient data mining algorithms with genetic selection for prediction of SARS-CoV2
Liang et al. Evolving personalized modeling system for integrated feature, neighborhood and parameter optimization utilizing gravitational search algorithm
Shinde et al. A genetic algorithm, information gain and artificial neural network based approach for hypertension diagnosis
Di Nuovo et al. Psychology with soft computing: An integrated approach and its applications
Bouslah et al. A new Parkinson detection system based on evolutionary fast learning networks and voice measurements
Zhang et al. Fairness-aware multiobjective evolutionary learning
Mattas Brain stroke prediction using machine learning
Swetha et al. A hybrid multiple indefinite kernel learning framework for disease classification from gene expression data
Srivastava et al. A taxonomy on machine learning based techniques to identify the heart disease
Joly et al. Permutation feature importance-based cardiovascular disease (CVD) prediction using ANN
Saha et al. A lightweight CNN-based ensemble approach for early detecting Parkinson’s disease with enhanced features
de Oliveira Using machine learning to predict mobility improvement of patients after therapy: a case study on rare diseases
Kasabov et al. Integrating local and personalised modelling with global ontology knowledge bases for biomedical and bioinformatics decision support
Gergerli et al. An Approach Using in Communication Network Apply in Healthcare System Based on the Deep Learning Autoencoder Classification Optimization Metaheuristic Method
Sabeena et al. Ensemble feature selection and ensemble deep learning (edl) classifier for parkinson’s

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase