Optimization
Credits (L:T:P) (Lecture : Tutorial : Practical)
Hours component: Lecture (1 hr = 1 credit), Tutorial (1 hr = 1 credit), Practical (2 hrs = 1 credit)
L:T:P = 3-1-0; Total Credits = 4
Grading Scheme (choose by placing X against the appropriate box):
[X] 4-point scale (A, A-, B+, B, B-, C+, C, D, F)
[ ] Satisfactory / Unsatisfactory (S / X)
Area of Specialization (if applicable)
(Choose by placing X in box against not more than two areas from the list)
[ ] Theory and Systems for Computing and Data
[ ] Networking and Communication
[ ] Artificial Intelligence and Machine Learning
[ ] Digital Society
[ ] VLSI Systems
[ ] Cyber Security
[X] General Elective
Programme / Branch: Course is restricted to the following programmes / branch(es):
(Place X appropriately. More than one is okay)
Programme:           Branch:
[X] iMTech           [X] CSE
[X] M.Tech           [X] ECE
[ ] M.Sc.            [ ] Digital Society
Focus Area                                  Yes/No   Details
Direct focus on employability               No
Focus on skill development                  Yes      Mathematical / algorithmic / implementation skills required for formulating and solving real-life optimization problems.
Focus on entrepreneurship                   No
Provides value added / life skills          No
(language, writing, communication, etc.)
Scope: Optimization is a very broad term covering a wide range of algorithms and
techniques, and it is not practical to do even passing justice to all of them in a single
self-contained semester course. This edition of the course focuses largely on optimization
in the ‘continuous’ world. Discrete optimization would include most of the well-known
NP-Hard problems that have been studied extensively in computer science --- problems on
graphs, network flow problems, the SAT family of problems on Boolean expressions, knapsack
and bin-packing, etc. This course excludes all of these problems, though many of these
problems are
Course Outcomes
Id   Course Outcome   PO/PSO   CL   KC   Class (Hrs)
Course Content
[Provide list-wise topics]
Instruction Schedule
[Provide session-wise schedule]
Lectures 2-4: Recap of Relevant Mathematical Concepts --- Vector Spaces & Matrices, (Rank,
Independence, Transformations, Projections, Eigenvalues and Vectors, Norms, ...), Geometry
(Hyperplanes, Convexity, Polytopes), Calculus (Gradients --- uni and multi dimensional).
Gradient Computation --- Numerical Techniques, Automatic / Symbolic Differentiation
(automatic differentiation is the backbone of all modern deep learning libraries)
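To make the gradient-computation discussion concrete, here is a minimal sketch (in Python with NumPy) of a central-difference numerical gradient checked against an analytic gradient; the test function and the step size h are illustrative choices, not course material:

    import numpy as np

    def numerical_gradient(f, x, h=1e-6):
        """Approximate the gradient of f at x by central differences."""
        grad = np.zeros_like(x, dtype=float)
        for i in range(len(x)):
            e = np.zeros_like(x, dtype=float)
            e[i] = h
            grad[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return grad

    # Example: f(x) = x^T A x, whose analytic gradient is (A + A^T) x.
    A = np.array([[3.0, 1.0], [0.0, 2.0]])
    f = lambda x: x @ A @ x
    x0 = np.array([1.0, -2.0])
    print(numerical_gradient(f, x0))  # central-difference estimate
    print((A + A.T) @ x0)             # analytic gradient, for comparison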
Unconstrained Optimization
Lectures 8-10: Descent Strategies --- Line Search, Trust Region Methods; Termination Criteria
Lecture 11: Newton’s Method and the Levenberg-Marquardt Variant
Lecture 12: Conjugate Gradient Methods
Lecture 13: Quasi-Newton Methods
Lectures 14-15: Momentum, Nesterov Variant; Popular Optimization Algorithms in Deep Learning:
RMSProp, Adagrad, Adam, Hypergradient Descent
(Illustrative implementation sketches for several of these methods follow the schedule.)
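For the Lectures 8-10 material, a minimal sketch of steepest descent with a backtracking (Armijo) line search and a gradient-norm termination criterion; the constants c and rho and the convex quadratic test problem are illustrative assumptions:

    import numpy as np

    def gradient_descent_backtracking(f, grad, x, tol=1e-8, max_iter=1000,
                                      c=1e-4, rho=0.5):
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:  # termination criterion
                break
            t = 1.0
            # Shrink the step until the Armijo sufficient-decrease condition holds.
            while f(x - t * g) > f(x) - c * t * (g @ g):
                t *= rho
            x = x - t * g
        return x

    # Example on the convex quadratic f(x) = 0.5 x^T Q x - b^T x.
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    f = lambda x: 0.5 * x @ Q @ x - b @ x
    grad = lambda x: Q @ x - b
    print(gradient_descent_backtracking(f, grad, np.zeros(2)))
    print(np.linalg.solve(Q, b))  # closed-form minimizer, for comparison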
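For Lecture 11, a minimal sketch of Newton’s method with Levenberg-Marquardt style damping of the Hessian; the fixed damping parameter and the Rosenbrock test function (minimum at (1, 1)) are illustrative choices:

    import numpy as np

    def damped_newton(grad, hess, x, lam=1e-3, tol=1e-10, max_iter=100):
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            H = hess(x)
            # Damping blends a pure Newton step (lam -> 0) with a small
            # gradient step (lam large) and keeps the system well posed.
            x = x - np.linalg.solve(H + lam * np.eye(len(x)), g)
        return x

    # Rosenbrock function: f(x) = (1 - x0)^2 + 100 (x1 - x0^2)^2.
    def grad(x):
        return np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                         200 * (x[1] - x[0]**2)])

    def hess(x):
        return np.array([[1200 * x[0]**2 - 400 * x[1] + 2, -400 * x[0]],
                         [-400 * x[0], 200.0]])

    print(damped_newton(grad, hess, np.array([-1.2, 1.0])))  # approaches (1, 1)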
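For Lecture 12, a minimal sketch of the linear conjugate gradient method, which minimizes the convex quadratic f(x) = 0.5 x^T Q x - b^T x (equivalently, solves Qx = b for symmetric positive-definite Q) in at most n steps in exact arithmetic; Q and b are illustrative:

    import numpy as np

    def conjugate_gradient(Q, b, tol=1e-10):
        x = np.zeros_like(b)
        r = b - Q @ x          # residual = negative gradient
        d = r.copy()           # first search direction
        for _ in range(len(b)):
            if np.linalg.norm(r) < tol:
                break
            alpha = (r @ r) / (d @ Q @ d)      # exact minimizing step along d
            x = x + alpha * d
            r_new = r - alpha * (Q @ d)
            beta = (r_new @ r_new) / (r @ r)   # makes the next d Q-conjugate
            d = r_new + beta * d
            r = r_new
        return x

    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(Q, b))
    print(np.linalg.solve(Q, b))  # should agree after at most n = 2 steps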
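For Lecture 13, a minimal sketch of the BFGS quasi-Newton method, which maintains an inverse-Hessian approximation built from gradient differences alone; the Armijo backtracking here is a simplification (a serious implementation would use a Wolfe line search), and the quadratic test problem is illustrative:

    import numpy as np

    def bfgs(f, grad, x, tol=1e-8, max_iter=200):
        n = len(x)
        H = np.eye(n)                  # inverse-Hessian approximation
        g = grad(x)
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            p = -H @ g                 # quasi-Newton search direction
            t = 1.0
            while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):  # Armijo backtracking
                t *= 0.5
            s = t * p
            g_new = grad(x + s)
            y = g_new - g
            if s @ y > 1e-12:          # curvature condition; skip update otherwise
                rho = 1.0 / (s @ y)
                I = np.eye(n)
                H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                    + rho * np.outer(s, s)
            x, g = x + s, g_new
        return x

    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(bfgs(lambda x: 0.5 * x @ Q @ x - b @ x, lambda x: Q @ x - b, np.zeros(2)))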
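For Lectures 14-15, a minimal sketch of Adam with bias-corrected first and second moment estimates; the hyperparameters are the commonly used defaults, and the quadratic test problem is an illustrative assumption:

    import numpy as np

    def adam(grad, x, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
        m = np.zeros_like(x)   # first-moment (mean of gradients) estimate
        v = np.zeros_like(x)   # second-moment (uncentered variance) estimate
        for t in range(1, steps + 1):
            g = grad(x)
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g * g
            m_hat = m / (1 - beta1**t)   # bias correction for zero initialization
            v_hat = v / (1 - beta2**t)
            x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
        return x

    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(adam(lambda x: Q @ x - b, np.zeros(2)))
    print(np.linalg.solve(Q, b))  # target minimizer, for comparison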
Learning Resources
[Mention text books, reference books and other learning resources required as part of the course]
Evaluation Pattern
30%: Project
30%: Implementation Assignments
20%: Mid-Term Exam (Theory)
20%: End-Term Exam (Theory)
Assignments / Projects
[List exact number of assignments or projects included (provide generic description)]
These are intended to be naive implementations of the algorithms discussed in the course, to give students
a more hands-on feel for the algorithms. They will also give students an appreciation of what it takes for
implementations to be ‘industry grade’ like the ones in the standard libraries (commercial or otherwise)
that are available. The focus will be on ground-up implementations with minimal use of ready-made
libraries, and on the correctness / asymptotic behavior of the implementations. Treatment of edge cases,
numerical stability issues, and performance considerations will be minimal, if present at all.
Projects and assignments are expected to be done in small groups (preferably pairs).
Barring a medically approved excuse, late submissions will not be considered for grading.
Accommodation of Divyangs
[State any enabling mechanisms for accommodating learners with special needs]