Machine Learning Syllabus


Group No.: 2
Subject Code: 20MMD242
Subject Title: MACHINE LEARNING for Mechanical Engineering
Exam Duration: 03 Hours
Exam Marks: 100
Module-1
Introduction: Definition of learning systems. Goals and applications of machine learning. Aspects of developing a
learning system: training data, concept representation, function approximation. Inductive Classification: The concept
learning task. Concept learning as search through a hypothesis space. General-to-specific ordering of hypotheses.
Finding maximally specific hypotheses. Version spaces and the candidate elimination algorithm. Learning
conjunctive concepts. The importance of inductive bias - Decision Tree Learning: Representing concepts as decision
trees. Recursive induction of decision trees. Picking the best splitting attribute: entropy and information gain.
Searching for simple trees and computational complexity. Occam's razor. Overfitting, noisy data, and pruning.
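To make the entropy and information-gain splitting criterion above concrete, here is a minimal, self-contained Python sketch (illustrative only, not part of the prescribed syllabus; the toy "outlook" data is hypothetical):

from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute, label_key="label"):
    """Reduction in label entropy from splitting the examples on one attribute."""
    labels = [e[label_key] for e in examples]
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e[label_key] for e in examples if e[attribute] == value]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(labels) - remainder

# Hypothetical toy data: how well does "outlook" predict the label?
data = [
    {"outlook": "sunny", "label": "no"},
    {"outlook": "sunny", "label": "no"},
    {"outlook": "overcast", "label": "yes"},
    {"outlook": "rain", "label": "yes"},
]
print(information_gain(data, "outlook"))  # 1.0 bit: the split is perfect here

A decision-tree learner would compute this gain for every candidate attribute at each node and split on the attribute with the highest value.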
Module-2
Ensemble Learning: Using committees of multiple hypotheses. Bagging, boosting, and DECORATE. Active
learning with ensembles - Experimental Evaluation of Learning Algorithms: Measuring the accuracy of learned
hypotheses. Comparing learning algorithms: cross-validation, learning curves, and statistical hypothesis testing -
Computational Learning Theory: Models of learnability: learning in the limit; probably approximately correct (PAC)
learning. Sample complexity: quantifying the number of examples needed to PAC learn. Computational complexity
of training. Sample complexity for finite hypothesis spaces. PAC results for learning conjunctions, k-DNF, and
k-CNF. Sample complexity for infinite hypothesis spaces: the Vapnik-Chervonenkis dimension.
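A minimal sketch of k-fold cross-validation, the evaluation method named above (illustrative only; the random labels and the majority-class baseline standing in for a real learner are hypothetical):

import random
from collections import Counter

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]  # k interleaved folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def majority_class(labels):
    """Trivial 'learner': always predict the most common training label."""
    return Counter(labels).most_common(1)[0][0]

random.seed(0)
labels = [random.choice(["pos", "neg", "neg"]) for _ in range(30)]
accuracies = []
for train, test in k_fold_indices(len(labels), 5):
    prediction = majority_class([labels[i] for i in train])
    accuracies.append(sum(labels[i] == prediction for i in test) / len(test))
print(sum(accuracies) / len(accuracies))  # cross-validated accuracy estimate

Averaging accuracy over the k held-out folds gives a less biased estimate of generalization accuracy than a single train/test split, which is why it is the standard protocol for comparing learning algorithms.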
Module-3
Rule Learning (Propositional and First-Order): Translating decision trees into rules. Heuristic rule induction using
separate-and-conquer and information gain. First-order Horn-clause induction (Inductive Logic Programming) and
FOIL. Learning recursive rules. Inverse resolution, Golem, and Progol - Support Vector Machines: Maximum margin
linear separators. Quadratic programming solution to finding maximum margin separators. Kernels for learning non-
linear functions.
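The kernel idea above can be shown in a few lines: a degree-2 polynomial kernel returns the same value as an inner product in an explicitly expanded feature space, without ever constructing that space (a minimal sketch; the example vectors are arbitrary):

from math import sqrt, isclose

def poly_kernel(x, z):
    """Degree-2 polynomial kernel K(x, z) = (x . z + 1)^2."""
    return (sum(a * b for a, b in zip(x, z)) + 1) ** 2

def phi(x):
    """Explicit degree-2 feature map for 2-D inputs; the kernel avoids this."""
    x1, x2 = x
    return [x1 * x1, x2 * x2, sqrt(2) * x1 * x2, sqrt(2) * x1, sqrt(2) * x2, 1.0]

x, z = [1.0, 2.0], [3.0, -1.0]
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))
assert isclose(poly_kernel(x, z), explicit)  # same value, no explicit expansion
print(poly_kernel(x, z))  # 4.0

Because an SVM's quadratic program touches the training data only through inner products, swapping in such a kernel lets a maximum-margin linear separator learn a non-linear decision boundary.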
Module-4
Bayesian Learning: Probability theory and Bayes rule. Naive Bayes learning algorithm. Parameter smoothing.
Generative vs. discriminative training. Logistic regression. Bayes nets and Markov nets for representing
dependencies - Instance-Based Learning: Constructing explicit generalizations versus comparing to past specific
examples. K-nearest-neighbor algorithm. Case-based learning - Text Classification: Bag-of-words representation.
Vector space model and cosine similarity. Relevance feedback and Rocchio algorithm. Versions of nearest
neighbor and Naive Bayes for text.
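A minimal sketch of a multinomial Naive Bayes text classifier with the add-one (Laplace) parameter smoothing described above (illustrative only; the toy corpus is hypothetical):

from collections import Counter, defaultdict
from math import log

def train_nb(docs):
    """docs: list of (word_list, label). Returns counts needed for prediction."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def predict(model, words):
    """Pick the class maximizing log P(c) + sum of smoothed log P(w|c)."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for c, n_c in class_counts.items():
        lp = log(n_c / total)  # log prior
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in words:
            lp += log((word_counts[c][w] + 1) / denom)  # add-one smoothing
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical toy corpus
docs = [(["great", "film"], "pos"), (["boring", "film"], "neg"),
        (["great", "plot"], "pos")]
model = train_nb(docs)
print(predict(model, ["great", "film"]))  # pos

The add-one count in the numerator (and vocabulary size in the denominator) keeps an unseen word from assigning zero probability to an otherwise well-matched class.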
Module-5
Clustering and Unsupervised Learning: Learning from unclassified data. Clustering. Hierarchical Agglomerative
Clustering. K-means partitional clustering. Expectation maximization (EM) for soft clustering. Semi-supervised
learning with EM using labeled and unlabeled data - Language Learning: Classification problems in language: word-
sense disambiguation, sequence labeling. Hidden Markov models (HMMs). Viterbi algorithm for determining
most-probable state sequences. Forward-backward EM algorithm for training the parameters of HMMs. Use of
HMMs for speech recognition, part-of-speech tagging, and information extraction. Conditional random fields
(CRFs). Probabilistic context-free grammars (PCFGs). Parsing and learning with PCFGs. Lexicalized PCFGs.
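A minimal sketch of the Viterbi algorithm for finding the most-probable HMM state sequence, as listed above (the two-state tagger-style model and all of its probabilities are hypothetical):

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most-probable hidden state sequence for an HMM given observations."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace the best path backwards from the best final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = back[t][last]
        path.insert(0, last)
    return path

# Hypothetical two-state part-of-speech-style HMM
states = ["N", "V"]
start = {"N": 0.7, "V": 0.3}
trans = {"N": {"N": 0.4, "V": 0.6}, "V": {"N": 0.8, "V": 0.2}}
emit = {"N": {"fish": 0.6, "swim": 0.4}, "V": {"fish": 0.3, "swim": 0.7}}
print(viterbi(["fish", "swim"], states, start, trans, emit))  # ['N', 'V']

Dynamic programming keeps only the best path into each state at each time step, so decoding costs O(T * |states|^2) instead of enumerating all |states|^T sequences.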

Course outcomes:
At the end of the course the student will be able to:
CO1. Choose appropriate learning techniques using the basic knowledge of machine learning.
CO2. Apply genetic algorithms effectively for appropriate applications.
CO3. Apply Bayesian techniques and effectively derive learning rules.
CO4. Choose and differentiate among clustering and unsupervised learning, and language learning techniques.
Question paper pattern:
The SEE question paper will be set for 100 marks and the marks scored will be proportionately reduced to 60.
The question paper will have ten full questions carrying equal marks.
Each full question is for 20 marks.
There will be two full questions (with a maximum of four sub questions) from each module.
Each full question will have sub question covering all the topics under a module.
The students will have to answer five full questions, selecting one full question from each module.

Textbooks
1. Tom M. Mitchell, "Machine Learning", McGraw Hill, 1997.
2. Christopher M. Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.
3. Rajjan Shinghal, "Pattern Recognition", Oxford Press, 2006.
Reference Books
1. Ethem Alpaydin, "Introduction to Machine Learning", PHI Learning, 2008.
2. R. O. Duda, P. E. Hart and D. G. Stork, "Pattern Classification", Wiley-Interscience, 2nd Edition, 2000.
3. T. Hastie, R. Tibshirani and J. Friedman, "The Elements of Statistical Learning: Data Mining, Inference and Prediction", Springer, 2nd Edition, 2009.
Module-1
Introduction
 Definition of learning systems.
 Goals and applications of machine learning.
Aspects of developing a learning system
 Training data
 Concept representation
 Function approximation
Inductive Classification
 The concept learning task.
 Concept learning as search through a hypothesis space.
 General-to-specific ordering of hypotheses.
 Finding maximally specific hypotheses.
 Version spaces and the candidate elimination algorithm.
 Learning conjunctive concepts.
 The importance of inductive bias.
Decision Tree Learning
 Representing concepts as decision trees.
 Recursive induction of decision trees.
Picking the best splitting attribute
 Entropy and information gain.
 Searching for simple trees and computational complexity.
 Occam's razor.
 Overfitting, noisy data, and pruning.
