Machine Learning Syllabus
Course outcomes:
At the end of the course the student will be able to:
CO1. Choose appropriate learning techniques using basic knowledge of machine learning.
CO2. Apply genetic algorithms effectively for appropriate applications.
CO3. Apply Bayesian techniques and derive learning rules effectively.
CO4. Choose and differentiate between Clustering & Unsupervised Learning, and Language Learning.
Question paper pattern:
The SEE question paper will be set for 100 marks and the marks scored will be proportionately reduced to 60.
The question paper will have ten full questions carrying equal marks.
Each full question is for 20 marks.
There will be two full questions (with a maximum of four sub questions) from each module.
Each full question will have sub-questions covering all the topics under a module.
The students will have to answer five full questions, selecting one full question from each module.
Textbooks
1. Tom M. Mitchell, "Machine Learning", McGraw Hill, 1997.
2. Christopher M. Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.
3. Rajjan Shinghal, "Pattern Recognition", Oxford Press, 2006.
Reference Books
1. Ethem Alpaydin, "Introduction to machine learning", PHI learning, 2008.
2. Hastie, Tibshirani, Friedman, "The Elements of Statistical Learning", Springer 2001.
3. R.O. Duda, P.E. Hart and D.G. Stork, "Pattern Classification", Wiley-Interscience, 2nd Edition, 2000.
4. T. Hastie, R. Tibshirani and J. Friedman, "The Elements of Statistical Learning: Data Mining, Inference and Prediction", Springer, 2nd Edition, 2009.
Module-1
Introduction
Definition of learning systems.
Goals and applications of machine learning.
Aspects of developing a learning system
Training data
Concept representation
Function approximation
Inductive Classification
The concept learning task.
Concept learning as search through a hypothesis space.
General-to-specific ordering of hypotheses.
Finding maximally specific hypotheses (see the sketch after this topic list).
Version spaces and the candidate elimination algorithm.
Learning conjunctive concepts.
The importance of inductive bias.
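As a concrete illustration of finding maximally specific hypotheses, the following is a minimal sketch of the Find-S algorithm in Python; the attribute values and the toy EnjoySport-style examples are illustrative assumptions, not part of the syllabus.

def find_s(examples):
    """Return the most specific hypothesis covering all positive examples.

    Each example is (attribute_tuple, label); '?' in the hypothesis means
    any value is acceptable for that attribute.
    """
    hypothesis = None
    for attributes, label in examples:
        if label != "yes":           # Find-S ignores negative examples
            continue
        if hypothesis is None:       # first positive example: copy it verbatim
            hypothesis = list(attributes)
            continue
        for i, value in enumerate(attributes):
            if hypothesis[i] != value:
                hypothesis[i] = "?"  # generalize the attribute that differs
    return hypothesis

if __name__ == "__main__":
    data = [
        (("sunny", "warm", "normal", "strong"), "yes"),
        (("sunny", "warm", "high",   "strong"), "yes"),
        (("rainy", "cold", "high",   "strong"), "no"),
    ]
    print(find_s(data))   # ['sunny', 'warm', '?', 'strong']

The candidate elimination algorithm extends this idea by also maintaining a set of maximally general hypotheses, so that the version space is bounded from both the specific and the general side.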
Decision Tree Learning
Representing concepts as decision trees.
Recursive induction of decision trees.
Picking the best splitting attribute: entropy and information gain (see the sketch at the end of this module).
Searching for simple trees and computational complexity.
Occam's razor.
Overfitting, noisy data, and pruning.
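A minimal sketch of the entropy and information-gain computation used to pick the best splitting attribute, in the spirit of ID3; the toy weather-style rows, labels, and attribute indices below are illustrative assumptions, not drawn from the syllabus.

import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(rows, labels, attribute_index):
    """Entropy reduction obtained by splitting on one attribute."""
    base = entropy(labels)
    # Partition the labels by the value of the chosen attribute.
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attribute_index], []).append(label)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in partitions.values())
    return base - remainder

if __name__ == "__main__":
    rows = [("sunny", "hot"), ("sunny", "mild"),
            ("rain", "mild"), ("rain", "cool")]
    labels = ["no", "no", "yes", "yes"]
    # Splitting on attribute 0 separates the classes perfectly, so its gain
    # equals the full dataset entropy (1 bit here); attribute 1 gains less.
    print(information_gain(rows, labels, 0))  # 1.0
    print(information_gain(rows, labels, 1))  # 0.5

Choosing the attribute with the highest gain and recursing on each resulting partition gives the recursive induction of decision trees listed above.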