ML UNIT 5
Uploaded by kishore5783
UNIT - V : Analytical Learning

5.1 : Introduction to Analytical Learning

Q.1 What is analytical learning ? [JNTU : Dec-17, Marks 2]
Ans. : In analytical learning, the input to the learner includes the same hypothesis space H and training examples D as for inductive learning. In addition, the learner is provided an additional input: a domain theory B consisting of background knowledge that can be used to explain observed training examples. The desired output of the learner is a hypothesis h from H that is consistent with both the training examples D and the domain theory B.

Q.2 What is inductive learning ?
Ans. : In inductive learning, the learner is given a hypothesis space H from which it must select an output hypothesis, and a set of training examples D = {(x1, f(x1)), ..., (xn, f(xn))} where f(xi) is the target value for the instance xi. The desired output of the learner is a hypothesis h from H that is consistent with these training examples.

Q.3 What is the difference between inductive and analytical learning methods ?
Ans. :

Parameter     | Inductive learning              | Analytical learning
Goal          | Hypothesis fits data            | Hypothesis fits domain theory
Justification | Statistical inference           | Deductive inference
Merit         | Requires little prior knowledge | Learns from scarce data
Demerit       | Scarce data, incorrect bias     | Imperfect domain theory

Q.4 What is domain theory ?
Ans. : A domain theory is said to be correct if each of its assertions is a truthful statement about the world. A domain theory is said to be complete with respect to a given target concept and instance space if the domain theory covers every positive example in the instance space.

5.2 : Learning with Perfect Domain Theories : PROLOG-EBG

Q.5 What is Prolog ?
Ans. : * Prolog is a logic programming language associated with artificial intelligence and computational linguistics.
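As a hedged illustration of this fact/rule/query style of computation, the sketch below mimics a Prolog query in Python rather than Prolog itself; the parent/grandparent relations and names are invented for the example.

```python
# Minimal sketch of Prolog-style deduction over facts and one rule
# (illustrative only; relations and constants are invented).

facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def infer(facts):
    """Forward-chain one invented rule:
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
    derived = set(facts)
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

db = infer(facts)
# The "query" grandparent(tom, ann) succeeds against the derived facts.
print(("grandparent", "tom", "ann") in db)  # True
```

A real Prolog system answers such queries by backward chaining with unification; the forward-chaining loop here is only meant to show computation over relations.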
* Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.

Q.6 What are the main properties of the PROLOG-EBG algorithm ? Is it deductive or inductive ? Justify your answer. [JNTU : Dec-17, Marks 5]
Ans. :
* PROLOG-EBG is a sequential covering algorithm.
* PROLOG-EBG computes the most general rule that can be justified by the explanation, by computing the weakest preimage of the explanation.
* PROLOG-EBG constructs intermediate features after analyzing examples. It is a deductive learning system, which assumes that domain knowledge is correct and complete.
* PROLOG-EBG produces justified general hypotheses by using prior knowledge to analyze individual examples.
* PROLOG-EBG implicitly assumes that the domain theory is correct and complete. If the domain theory is incorrect or incomplete, the resulting learned concept may also be incorrect.
* The generality of the learned Horn clauses will depend on the formulation of the domain theory and on the sequence in which training examples are considered.
* In its pure form, PROLOG-EBG is a deductive, rather than inductive, learning process. That is, by calculating the weakest preimage of the explanation it produces a hypothesis h that follows deductively from the domain theory B, while covering the training data D.

Q.7 Discuss the PROLOG-EBG algorithm. [JNTU : Dec-16, Marks 5]
Ans. :
PROLOG-EBG(TargetConcept, TrainingExamples, DomainTheory)
  LearnedRules <- { }
  Pos <- the positive examples from TrainingExamples
  for each PositiveExample in Pos that is not covered by LearnedRules, do
    1. Explain : Explanation <- an explanation (proof) in terms of the DomainTheory that PositiveExample satisfies the TargetConcept
    2. Analyze : SufficientConditions <- the most general set of features of PositiveExample sufficient to satisfy the TargetConcept according to the Explanation
    3. Refine : LearnedRules <- LearnedRules + NewHornClause, where NewHornClause is of the form
       TargetConcept <- SufficientConditions
  Return LearnedRules

TECHNICAL PUBLICATIONS - An up thrust for knowledge

5.3 : Remarks on Explanation Based Learning

Q.8 What is explanation-based learning ?
Ans. :
* Explanation-based learning is a form of analytical learning in which the learner processes each novel training example by (1) explaining the observed target value for this example in terms of the domain theory, (2) analyzing this explanation to determine the general conditions under which the explanation holds, and (3) refining its hypothesis to incorporate these general conditions.
* An Explanation-Based Learning (EBL) system accepts an example (i.e. a training example) and explains what it learns from the example. The EBL system takes only the relevant aspects of the training example. This explanation is translated into a particular form that a problem-solving program can understand. The explanation is generalized so that it can be used to solve other problems.
* The EBL module uses the results from the problem-solving trace (i.e. the steps in solving problems) that were generated by the central problem solver.
* It constructs explanations using an axiomatized theory that describes both the domain and the architecture of the problem solver. The results are then translated as control rules and added to the knowledge base. The control knowledge that contains control rules is used to guide the search process effectively.

Q.9 List and explain the Explanation-based Learning phases.
Ans. : EBL phases are as follows :
1. Problem solving
* From the example of the concept and the domain theory, a solution that explains the concept is obtained.
* From this resolution we are interested in all the actions performed.
* These actions will be the trace of the resolution, and will be used during the generalization process.
2. Resolution trace analysis and filtering
* The domain determines the operational criteria that tell which are the primitive actions for the problem.
* The relevance criteria will also be defined; these allow deciding which parts of the resolution are important.
* Using these two criteria, the parts of the resolution trace that need to be generalized will be determined.
* The filtered resolution trace will be the explanation of the example.
3. Generalization of the explanation
* The generalization of the explanation requires the substitution of constants by variables in a way that preserves the original explanation.
* The usual mechanism for the generalization is the goal regression algorithm.
* This algorithm consists of the variabilization of the goal and the propagation of the substitution in the explanation.
4. Building the new knowledge
* The explanation has to be expressed using the primitive predicates of the domain.
* The knowledge has to be translated to the representation formalism of the domain theory.
* This knowledge can be new definitions of the domain predicates, or control rules that represent how the knowledge has to be used to solve new problems.
5. Incorporating the new knowledge
* Sometimes it is not enough to add the knowledge to the domain theory.
* If we do not want the theory to degrade, the new knowledge has to be transformed so it can be used efficiently.
* Its use can also be evaluated, so it can be eliminated if it is not used frequently enough.

Q.10 Explain the elements of Explanation-based Learning.
Ans. : EBL elements are as follows :
1. Domain theory : Information about the specific domain of the problem.
2. Goal concept : The concept for which we want to obtain an operational definition.
3. Example : A positive example of the concept we want to learn.
4. New domain theory : The initial theory plus the new definition learned for the goal concept from the example.

Q.11 "Explanation determines feature relevance." Substantiate this statement with respect to explanation-based learning. [JNTU : Dec-16, Marks 5]
Ans. :
* Choosing good features to represent objects can be crucial to the success of supervised machine learning algorithms.
* Explanation-based learning (EBL) is a method of dynamically incorporating prior domain knowledge into the learning process by explaining training examples.
* In classical EBL, an "explanation" is a logical proof that shows how the class label of a particular labeled example can be derived from the observed inputs.
* Unlike inductive methods, PROLOG-EBG produces justified general hypotheses by using prior knowledge to analyze individual examples.
* The explanation of how the example satisfies the target concept determines which example attributes are relevant: those mentioned by the explanation. Further analysis of the explanation (regressing the target concept to determine its weakest preimage with respect to the explanation) allows deriving more general constraints on the values of the relevant features.

Q.12 What is knowledge level learning ? Explain.
Ans. :
* The knowledge level is a level of description for computer systems.
* An example of knowledge-level analytical learning is provided by considering a type of assertion known as determinations.
* Determinations assert that some attribute of the instance is fully determined by certain other attributes, without specifying the exact nature of the dependence.
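How a determination licenses generalization from a single example can be sketched as follows; the attribute names, helper function, and data are invented for illustration, assuming a hypothetical "nationality determines language" determination.

```python
# Sketch: a determination "A determines B" turns one observed example
# into a rule covering every instance with the same value of A.
# (Attributes, values, and the helper are invented for this example.)

def generalize(example, determiner, target):
    """From one example, return a rule mapping the determiner's
    observed value to the target's observed value."""
    return {example[determiner]: example[target]}

observed = {"name": "Jon", "nationality": "US", "language": "Hindi"}
rule = generalize(observed, "nationality", "language")

# The rule now classifies any instance sharing the nationality value.
new_instance = {"name": "Sue", "nationality": "US"}
print(rule.get(new_instance["nationality"]))  # prints Hindi
```

Note that the determination alone classifies nothing; only after one positive example is observed does it yield a rule, which matches the knowledge-level behaviour described above.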
* For example, consider learning the target concept "people who speak Hindi," and imagine we are given as a domain theory the single determination assertion "the language spoken by a person is determined by their nationality." Taken alone, this domain theory does not enable us to classify any instances as positive or negative.
* However, if we observe that "Jon, a 23-year-old left-handed US citizen, speaks Hindi," then we can conclude from this positive example and the domain theory that "all US citizens speak Hindi."

5.4 : Using Prior Knowledge to Alter the Search Objective

Q.13 What is prior knowledge ?
Ans. : Prior knowledge refers to all information about the problem available in addition to the training data.

Q.14 Describe the TANGENTPROP algorithm.
Ans. :
* Tangent propagation is a learning technique for an artificial neural network (ANN) which enforces soft constraints on first-order partial derivatives of the output vector.
* It accommodates domain knowledge expressed as derivatives of the target function with respect to transformations of its inputs.
* The TANGENTPROP algorithm assumes that various training derivatives of the target function are also provided. For example, if each instance xi is described by a single real value, then each training example may be of the form (xi, f(xi), df(x)/dx|xi). Here df(x)/dx|xi denotes the derivative of the target function f with respect to x, evaluated at the point x = xi.
* To develop an intuition for the benefits of providing training derivatives as well as training values during learning, consider the simple learning task depicted in Fig. Q.14.1.

Fig. Q.14.1 : [Figure: the target function f with three training examples (leftmost plot), and the functions learned from them (remaining plots).]

* The task is to learn the target function f shown in the leftmost plot of the figure, based on the three training examples shown : (x1, f(x1)), (x2, f(x2)), and (x3, f(x3)).
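The way such derivative-augmented training examples enter the error function can be sketched with a one-parameter linear model f(x) = w * x; the data, the model, and the weighting constant mu are invented for illustration, not TANGENTPROP's actual network.

```python
# Sketch of a TANGENTPROP-style error for a one-parameter model f(x) = w*x.
# Each example carries a target value AND a target derivative.
# (Data, model, and the constant mu are invented for this sketch.)

def tangent_prop_error(w, examples, mu=1.0):
    """Squared value errors plus mu times squared derivative errors."""
    total = 0.0
    for x, f_x, df_dx in examples:
        value_err = (f_x - w * x) ** 2   # the usual fit-the-values term
        deriv_err = (df_dx - w) ** 2     # d(w*x)/dx = w for this model
        total += value_err + mu * deriv_err
    return total

# Target function f(x) = 2x: value and derivative given at three points.
examples = [(1.0, 2.0, 2.0), (2.0, 4.0, 2.0), (3.0, 6.0, 2.0)]
print(tangent_prop_error(2.0, examples))  # exact fit -> 0.0
print(tangent_prop_error(1.0, examples))  # mismatched slope is penalized
```

Minimizing this augmented error (by gradient descent in the real algorithm) prefers hypotheses whose derivatives, not just values, match the training information.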
* Given these three training examples, the BACKPROPAGATION algorithm can be expected to hypothesize a smooth function, such as the function g depicted in the middle plot of the figure.
* In TANGENTPROP an additional term is added to the error function to penalize discrepancies between the training derivatives and the actual derivatives of the learned neural network function.

5.5 : Using Prior Knowledge to Augment Search Operators

Q.15 What is the difference between First Order Inductive Learner (FOIL) and First Order Combined Learner (FOCL) ?
Ans. : FOIL generates each candidate specialization by adding a single new literal to the clause preconditions. FOCL uses this same method for producing candidate specializations, but also generates additional specializations based on the domain theory.

Q.16 What is FOCL ? Explain in detail.
Ans. :
* FOCL uses the domain theory to increase the number of candidate specializations considered at each step of the search for a single Horn clause.
* FOCL expands its current hypothesis h using the following two operators :
1. For each operational literal that is not part of h, create a specialization of h by adding this single literal to the preconditions.
2. Create an operational, logically sufficient condition for the target concept according to the domain theory. Add this set of literals to the current preconditions of h.
* FOCL first selects one of the domain theory clauses whose head (postcondition) matches the target concept.
* If there are several such clauses, it selects the clause whose body (preconditions) has the highest information gain relative to the training examples of the target concept.
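The information-gain comparison used to rank candidates can be sketched with FOIL's gain measure (the formulation below follows FOIL; the coverage counts are invented for illustration).

```python
from math import log2

# Sketch of FOIL's information-gain measure, which FOCL can use to
# rank candidate specializations (coverage counts are invented).

def foil_gain(p0, n0, p1, n1, t):
    """p0/n0: positives/negatives covered before the specialization,
    p1/n1: after; t: positive bindings still covered afterwards."""
    return t * (log2(p1 / (p1 + n1)) - log2(p0 / (p0 + n0)))

# A specialization that keeps 6 positives while cutting negatives
# from 10 to 2 has positive gain, so it would be preferred.
gain = foil_gain(p0=8, n0=10, p1=6, n1=2, t=6)
print(gain > 0)  # True: the specialization improves clause purity
```

Candidates generated syntactically and candidates generated from the domain theory are scored on the same footing, which is why the final choice rests on empirical support over the training data.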
* FOCL learns Horn clauses of the form c <- oi AND ob AND of, where c is the target concept, oi is an initial conjunction of operational literals added one at a time by the first syntactic operator, ob is a conjunction of operational literals added in a single step based on the domain theory, and of is a final conjunction of operational literals added one at a time by the first syntactic operator.
* Any of these three sets of literals may be empty.
* FOCL uses both a syntactic generation of candidate specializations and a domain-theory-driven generation of candidate specializations at each step in the search. The algorithm chooses among these candidates based solely on their empirical support over the training data.
* Thus, the domain theory is used in a fashion that biases the learner, but leaves final search choices to be made based on performance over the training data.

5.6 : Combining Inductive and Analytical Learning

Q.17 Write specific properties of a learning method.
Ans. : Properties include :
a. Given no domain theory, it should learn at least as effectively as purely inductive methods.
b. Given a perfect domain theory, it should learn at least as effectively as purely analytical methods.
c. Given an imperfect domain theory and imperfect training data, it should combine the two to outperform either purely inductive or purely analytical methods.
d. It should accommodate an unknown level of error in the training data.
e. It should accommodate an unknown level of error in the domain theory.

5.7 : Using Prior Knowledge to Initialize the Hypothesis

Q.18 Write the KBANN algorithm to explain the usage of prior knowledge to reduce complexity. [JNTU : Dec-17, Marks 5]
Ans. :
* KBANN(Domain-Theory, Training-Examples)
* Domain-Theory : Set of propositional, nonrecursive Horn clauses.
* Training-Examples : Set of (input, output) pairs of the target function.
* Analytical step : Create an initial network equivalent to the domain theory.
1. For each instance attribute create a network input.
2. For each Horn clause in the Domain-Theory, create a network unit as follows :
* Connect the inputs of this unit to the attributes tested by the clause antecedents.
* For each non-negated antecedent of the clause, assign a weight of W to the corresponding sigmoid unit input.
* For each negated antecedent of the clause, assign a weight of -W to the corresponding sigmoid unit input.
* Set the threshold weight w0 for this unit to -(n - 0.5)W, where n is the number of non-negated antecedents of the clause.
3. Add additional connections among the network units, connecting each network unit at depth i from the input layer to all network units at depth i + 1. Assign random near-zero weights to these additional connections.
* Inductive step : Refine the initial network.
4. Apply the BACKPROPAGATION algorithm to adjust the initial network weights to fit the Training-Examples.

Fill in the Blanks for Mid Term Exam

Q.1 PROLOG-EBG is a ___ covering algorithm.
Q.2 KBANN stands for ___.
Q.3 SOAR uses a variant of explanation-based learning called ___ to extract the general conditions under which the same explanation applies.
Q.4 In its pure form, PROLOG-EBG is a deductive, rather than ___, learning process.
Q.5 PROLOG-EBG computes the weakest preimage of the target concept with respect to the explanation, using a general procedure called ___.
Q.6 A domain theory is said to be ___ if each of its assertions is a truthful statement about the world.
Q.7 A ___ is said to be complete with respect to a given target concept and instance space.
Q.8 Analytical learning uses ___ and deductive reasoning to augment the information provided by the training examples, so that it is not subject to these same bounds.
Q.9 In ___ crossover, offspring are created by substituting intermediate segments of one parent into the middle of the second parent string.
Q.10 The general-to-specific search suggested above for the ___ algorithm is a greedy depth-first search with no backtracking.
Q.11 ___ Horn clauses may also refer to variables in the preconditions that do not occur in the postconditions.
Q.12 In explanation-based learning, ___ is used to analyse, or explain, how each observed training example satisfies the target concept.
Q.13 Decision tree learning, neural network learning, inductive logic programming, and genetic algorithms are all examples of methods that operate in ___ fashion.
Q.14 Explanation-based learning is a form of ___ learning in which the learner processes each novel training example.
Q.15 PROLOG-EBG is an explanation-based learning algorithm that uses first-order Horn clauses to represent both its ___ and its learned hypotheses.

Multiple Choice Questions for Mid Term Exam

Q.1 In ___ the learner must output a hypothesis that is consistent with both the training data and the domain theory.
[a] analytical learning  [b] inductive learning  [c] deductive learning  [d] none of these
Q.2 ___ crossover combines bits sampled uniformly from the two parents.
[a] single point  [b] two point  [c] uniform  [d] all of these
Q.3 A domain theory B consists of ___ that can be used to explain observed training examples.
[a] prior knowledge  [b] background knowledge  [c] training examples  [d] none of these
Q.4 PROLOG-EBG is an explanation-based learning algorithm that uses ___ Horn clauses to represent both its domain theory and its learned hypotheses.
[a] first order  [b] second order  [c] no order  [d] all of these
Q.5 PCA stands for ___.
[a] Poor component analysis  [b] Prior component analysis  [c] Principal code analysis  [d] Principal component analysis
Q.6 FOCL means ___.
[a] First Order Combined Learner  [b] First Order Close Learner  [c] First Order Combined List  [d] all of these

Answer Key for Fill in the Blanks
Q.1 sequential  Q.2 Knowledge-Based Artificial Neural Network  Q.3 chunking  Q.4 inductive  Q.5 regression  Q.6 correct  Q.7 domain theory  Q.8 prior knowledge  Q.9 two-point  Q.10 LEARN-ONE-RULE  Q.11 First-order  Q.12 prior knowledge  Q.13 inductive  Q.14 analytical  Q.15 domain theory

Answer Key for Multiple Choice Questions
Q.1 a  Q.2 c  Q.3 b  Q.4 a  Q.5 d  Q.6 a

END...