Tuesday, January 10, 2017

Inductive Learning

In inductive learning (or prediction) we are given examples of a function as pairs (X, F(X)), and the task is to predict F(X) for new examples X. Depending on the nature of F(X), we use (a short code sketch follows the list below):

i) Classification, when F(X) is discrete.

ii) Regression, when F(X) is continuous.

iii) Probability estimation, when F(X) = Probability(X).
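
As a minimal sketch of the idea (assuming scikit-learn is installed; the toy numbers below are invented purely for illustration), the same kind of example pairs (X, F(X)) can be fed to a classifier when F(X) is discrete and to a regressor when F(X) is continuous:

# Minimal sketch: learn F(X) from example pairs (X, F(X)).
# The toy data are made up for illustration only.
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X = [[1.0], [2.0], [3.0], [4.0]]            # example inputs X

# i) Classification: F(X) is discrete (a class label)
y_discrete = [0, 0, 1, 1]
clf = DecisionTreeClassifier().fit(X, y_discrete)
print(clf.predict([[2.5]]))                 # predicted label for a new X

# ii) Regression: F(X) is continuous (a real value)
y_continuous = [1.1, 2.0, 2.9, 4.2]
reg = DecisionTreeRegressor().fit(X, y_continuous)
print(reg.predict([[2.5]]))                 # predicted value for a new X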


A candidate prediction of F(X) is known as a hypothesis, and there can be more than one hypothesis for a given data set. The set of all such hypotheses is known as the hypothesis space.


We can think of supervised learning as a device that explores a hypothesis space. Here each setting of the parameters is a different hypothesis about the function that maps input vectors to output vectors.


There are various terms associated with inductive learning:

Example (x, y): Instance x with label y.
Training Data S: Collection of examples observed by the learning algorithm.
Instance Space X: Set of all possible objects describable by features.
Concept c: Subset of objects from X (c is unknown).


Target Function f: Maps each instance x ∈ X to a target label y ∈ Y.
Note:
If there are N Boolean input features (e.g. N = 4), there are 2^(2^N) possible Boolean functions, i.e. 2^16 = 65,536 for N = 4.
We cannot figure out which one is correct unless we see every possible input-output pair.
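
Just to make the count concrete, here is a small calculation in plain Python (nothing assumed beyond the standard library):

# Number of distinct Boolean functions over N Boolean input features.
# Each function assigns 0 or 1 to each of the 2**N possible inputs,
# so there are 2**(2**N) functions in total.
N = 4
num_inputs = 2 ** N              # 16 possible input combinations
num_functions = 2 ** num_inputs
print(num_functions)             # 65536 candidate functions for N = 4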





INDUCTIVE BIAS

In machine learning, our aim is to construct algorithms that are able to learn to
predict a certain target output. To achieve this, the learning algorithm is presented with some
training examples which demonstrate the intended relation between input and output values.
The learner is then supposed to approximate the correct output, even for examples that
have not been shown during training. Without any additional assumptions, this problem
cannot be solved exactly, since unseen situations might have an arbitrary output value.
This is why an inductive bias is needed.

It is of two types:

Restriction: limit the hypothesis space.

Preference: impose an ordering on the hypothesis space.

Common examples of preference biases include:

Minimum description length: when forming a hypothesis, attempt to minimize the length of the description of the hypothesis.

Maximum margin: when drawing a boundary between two classes, attempt to maximize the width of the boundary, as in SVMs (a small code sketch follows below).

Occam's Razor is a classical example of an inductive bias.
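
As a rough illustration of the maximum-margin preference bias (a sketch assuming scikit-learn; the four points are made up), a linear SVM chooses, among all the separating lines consistent with the data, the one with the widest margin:

# Preference bias sketch: among all hypotheses (separating lines)
# consistent with the data, a linear SVM prefers the maximum-margin one.
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [4, 4], [5, 5]]          # toy 2-D points
y = [0, 0, 1, 1]                              # two classes

svm = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates a hard margin
print(svm.coef_, svm.intercept_)              # the chosen boundary
print(svm.support_vectors_)                   # the points that fix the margin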

   
INDUCTIVE LEARNING

In inductive learning we induce a general function from training examples: we construct a hypothesis h that agrees with the target concept c on the training examples.
A hypothesis is consistent if it agrees with all training examples.
A hypothesis is said to generalize well if it correctly predicts the value of y for novel examples.

This is an ill-posed problem: unless we see all possible examples, the data are not sufficient for an inductive learning algorithm to find a unique solution, as the small example below shows.
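
To make "consistent" concrete, here is a small sketch in plain Python (the two-feature training set is invented): several different hypotheses can agree with the same training data, which is exactly why the problem is ill posed.

# A hypothesis is consistent if it agrees with every training example.
# With only a few examples, many different hypotheses remain consistent.
S = [((0, 0), 0), ((1, 1), 1)]           # toy training examples (x, y)

h1 = lambda x: x[0] and x[1]             # hypothesis: AND of the two features
h2 = lambda x: x[0] or x[1]              # hypothesis: OR of the two features

def is_consistent(h, examples):
    return all(h(x) == y for x, y in examples)

print(is_consistent(h1, S), is_consistent(h2, S))   # True True: both fit S
print(h1((0, 1)), h2((0, 1)))            # 0 1: they disagree on a novel x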



The inductive learning hypothesis states that any hypothesis h found to
approximate the target function well over a sufficiently large
set of training examples D will also approximate the
target function well over other, unobserved examples.

Concept learning is a good way to work with the hypothesis space: we search a hypothesis space of possible representations, looking for the representation(s) that best fit the data, given the bias. The tendency to prefer one hypothesis over another is called a bias.

Given a representation, data, and a bias, the problem of
learning can be reduced to one of search, as sketched below.
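
Here is a tiny sketch of learning as search (plain Python; the data and the "shortest description first" bias are only illustrative): enumerate a small hypothesis space, keep the hypotheses consistent with the data, and let the bias choose among them.

# Learning reduced to search: given a representation (simple Boolean
# formulas over two features), data, and a bias (prefer shorter
# descriptions), search the hypothesis space for a preferred consistent
# hypothesis.
S = [((0, 0), 0), ((1, 1), 1), ((1, 0), 1)]    # toy training data

# Hypothesis space: (description, function); the description length
# stands in for a crude minimum-description-length preference bias.
H = [
    ("x0",        lambda x: x[0]),
    ("x1",        lambda x: x[1]),
    ("x0 and x1", lambda x: int(x[0] and x[1])),
    ("x0 or x1",  lambda x: int(x[0] or x[1])),
]

consistent = [(name, h) for name, h in H
              if all(h(x) == y for x, y in S)]
best_name, _ = min(consistent, key=lambda nh: len(nh[0]))
print([name for name, _ in consistent])   # hypotheses that fit the data
print(best_name)                          # the hypothesis the bias prefers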


There are various open issues in machine learning, such as:
What are good hypothesis spaces?
What algorithms work with those hypothesis spaces?
How do we optimize accuracy over future data points (i.e. avoid overfitting)?
How can we have confidence in the result? (How much training data do we need? This is a statistical question.)
Are some learning problems computationally intractable?

Given below is the video in which I explain inductive learning:

Hope you have enjoyed reading this article. In the next article I will discuss evaluation and cross-validation in machine learning. Till then, enjoy learning!
