
Loss Functions: An Algorithm-wise Comprehensive Summary

Loss Functions: An Algorithm-wise Comprehensive Summary
Loss functions of common ML algorithms depicted in a single frame.
AVI CHAWLA
MAY 9



Loss functions are a key component of ML algorithms.

They specify the objective an algorithm should aim to optimize during its
training. In other words, loss functions tell the algorithm what it should be
trying to minimize or maximize in order to improve its performance.


[Figure: Model training with an objective function]

Therefore, knowing which loss functions are best suited to specific ML
algorithms is crucial.

The visual above depicts the loss functions most commonly used with
various ML algorithms.

1. Linear Regression: Mean Squared Error (MSE). This can be used with
or without regularization, depending on the situation (sketch after this list).

2. Logistic Regression: Cross-Entropy Loss or Log Loss, with or without
regularization (sketch after this list).

3. Decision Tree and Random Forest:

a. Classifier: Gini impurity or information gain (sketch after this list).

b. Regressor: Mean Squared Error (MSE).

4. Support Vector Machines (SVMs): Hinge loss (sketch after this list). Read more: Wikipedia.

5. k-Nearest Neighbors (kNN): No loss function. kNN is a non-parametric,
lazy learning algorithm. It works by retrieving instances from the training
data and making predictions based on the k nearest neighbors to the
test instance.

6. Naive Bayes: No loss function. Can you guess why? Let me know in
the comments if you need help.

7. Neural Networks: They can use a variety of loss functions depending
on the type of problem (sketch after this list). The most common are:

a. Regression: Mean Squared Error (MSE).

b. Classification: Cross-Entropy Loss.


8. AdaBoost: Exponential loss (sketch after this list). AdaBoost is an
ensemble learning algorithm that combines multiple weak classifiers to
form a strong classifier. In each iteration, AdaBoost increases the weights
of the instances misclassified in the previous iteration, then trains a new
weak classifier by minimizing the weighted exponential loss.

9. Other Boosting Algorithms:

a. Regression: Mean Squared Error (MSE).

b. Classification: Cross-Entropy Loss.

10. KMeans Clustering: No label-based loss function, since KMeans is an
unsupervised algorithm. It does, however, minimize an internal objective:
the within-cluster sum of squared distances, also known as inertia
(sketch after this list).
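
To make these concrete, below are a few minimal Python/NumPy sketches of the losses named above. They are illustrative formulas only, not the code any library actually runs, and the function names and sample numbers are my own. First, the mean squared error used by linear regression (item 1):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: the average of the squared residuals."""
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])
print(mse(y_true, y_pred))  # 0.375
```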
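Binary cross-entropy (log loss) for logistic regression (item 2); clipping the predicted probabilities is a common guard against log(0):

```python
import numpy as np

def log_loss(y_true, p_pred, eps=1e-15):
    """Binary cross-entropy: -mean(y*log(p) + (1-y)*log(1-p))."""
    p = np.clip(p_pred, eps, 1 - eps)  # keep probabilities away from 0 and 1
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1, 0, 1, 1])
p_pred = np.array([0.9, 0.1, 0.8, 0.6])
print(log_loss(y_true, p_pred))  # ~0.236
```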
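Gini impurity, the split criterion for decision-tree and random-forest classifiers (item 3a):

```python
import numpy as np

def gini_impurity(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini_impurity([0, 0, 1, 1]))  # 0.5 (maximally mixed node)
print(gini_impurity([0, 0, 0, 0]))  # 0.0 (pure node)
```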
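The hinge loss behind SVMs (item 4), assuming labels encoded as -1/+1 and raw decision scores:

```python
import numpy as np

def hinge_loss(y_true, scores):
    """Hinge loss: mean of max(0, 1 - y * f(x)) over samples."""
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

y_true = np.array([1, -1, 1, -1])        # labels in {-1, +1}
scores = np.array([2.0, -0.5, 0.3, 1.0])  # raw decision values f(x)
print(hinge_loss(y_true, scores))  # 0.8 (last sample is misclassified)
```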
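For neural networks (item 7), deep-learning frameworks ship these losses ready-made; a minimal illustration, assuming PyTorch is available:

```python
import torch
import torch.nn as nn

# Regression: MSE on continuous targets
mse = nn.MSELoss()
pred = torch.tensor([2.5, 0.0, 2.0])
target = torch.tensor([3.0, -0.5, 2.0])
print(mse(pred, target))  # tensor(0.1667)

# Classification: cross-entropy on raw logits and integer class labels
ce = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.5, -1.0]])  # one sample, three classes
label = torch.tensor([0])                  # true class index
print(ce(logits, label))
```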
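AdaBoost's exponential loss (item 8), again with -1/+1 labels; the scores here stand in for the ensemble's weighted votes:

```python
import numpy as np

def exponential_loss(y_true, scores):
    """Exponential loss: mean of exp(-y * f(x))."""
    return np.mean(np.exp(-y_true * scores))

y_true = np.array([1, -1, 1])
scores = np.array([1.5, -2.0, -0.5])  # ensemble margins
print(exponential_loss(y_true, scores))  # ~0.669
```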
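And the inertia that KMeans minimizes (item 10), i.e. the within-cluster sum of squared distances; the centroids and assignments below are made-up values:

```python
import numpy as np

def inertia(X, centroids, assignments):
    """Within-cluster sum of squared distances to assigned centroids."""
    return np.sum((X - centroids[assignments]) ** 2)

X = np.array([[1.0, 1.0], [1.5, 2.0], [5.0, 8.0]])
centroids = np.array([[1.25, 1.5], [5.0, 8.0]])
assignments = np.array([0, 0, 1])  # cluster index per point
print(inertia(X, centroids, assignments))  # 0.625
```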

Over to you: Which algorithms have I missed?

👉 Read what others are saying about this post on LinkedIn and Twitter.

👉 If you liked this post, don’t forget to leave a like ❤️. It helps more
people discover this newsletter on Substack and tells me that you
appreciate reading these daily insights. The button is located towards
the bottom of this email.

👉 If you love reading this newsletter, feel free to share it with friends!

Share Daily Dose of Data Science

Find the code for my tips here: GitHub.

I like to explore, experiment and write about data science concepts and tools.
You can read my articles on Medium. Also, you can connect with me on
LinkedIn and Twitter.



© 2023 Avi Chawla


