Abstract
In this paper we propose an approach to inductive machine learning based on a consistent integration of generalization-based techniques (such as inductive learning from examples) and metric-based techniques (such as agglomerative clustering). The approach stems from the natural idea, formally studied within lattice theory, of estimating the similarity between two objects in a hierarchical structure by their distances to their closest common parent. The hierarchies used are the subsumption lattices induced by the generalization operations (e.g. lgg) commonly used in inductive learning. Using basic results from this theory, the paper shows how the corresponding machine learning techniques can be combined and extended into a unified framework for solving some of the basic inductive learning tasks. An algorithm for this purpose is proposed and its performance is illustrated by examples.
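The central idea, estimating the similarity of two objects by their distances to their closest common parent in a subsumption lattice, can be illustrated with a small sketch. The Python code below is a hypothetical illustration, not the algorithm from the paper: it computes Plotkin's lgg (least general generalization) of two first-order terms encoded as nested tuples, and derives a distance from the number of symbols each term adds below their lgg.

```python
# A minimal sketch, assuming terms are encoded as nested tuples
# ('f', arg1, ...) for compound terms, plain strings for constants,
# and Var objects for variables. Not the authors' implementation.

from itertools import count

class Var:
    """A fresh logical variable."""
    _ids = count()
    def __init__(self):
        self.id = next(Var._ids)
    def __repr__(self):
        return f"X{self.id}"

def lgg(s, t, table=None):
    """Plotkin-style anti-unification: least general generalization of s and t."""
    if table is None:
        table = {}
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    # Incompatible subterm pairs map to the same variable on every occurrence,
    # which is what makes the result the *least* general generalization.
    if (s, t) not in table:
        table[(s, t)] = Var()
    return table[(s, t)]

def size(t):
    """Number of function and constant symbols in a term (variables count 0)."""
    if isinstance(t, Var):
        return 0
    if isinstance(t, tuple):
        return 1 + sum(size(a) for a in t[1:])
    return 1

def distance(s, t):
    """Distance between s and t through their closest common parent lgg(s, t)."""
    g = lgg(s, t)
    return (size(s) - size(g)) + (size(t) - size(g))
```

For example, for the atoms p(f(a), a) and p(f(b), b) the lgg is p(f(X), X); each atom lies two symbols below it, so `distance(('p', ('f', 'a'), 'a'), ('p', ('f', 'b'), 'b'))` returns 4, while the distance of a term to itself is 0.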
Copyright information
© 1998 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Markov, Z., Pelov, N. (1998). A framework for inductive learning based on subsumption lattices. In: Giunchiglia, F. (eds) Artificial Intelligence: Methodology, Systems, and Applications. AIMSA 1998. Lecture Notes in Computer Science, vol 1480. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0057457
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64993-9
Online ISBN: 978-3-540-49793-6