
UNIT-2

INTRODUCTION TO MACHINE LEARNING TECHNIQUES:


COMPARISON: SUPERVISED, SEMI-SUPERVISED, AND UNSUPERVISED
LEARNING

| Aspect | Supervised Learning (SL) | Semi-Supervised Learning (SSL) | Unsupervised Learning (USL) |
|---|---|---|---|
| Definition | Learning from a fully labeled dataset. | Learning from a mix of labeled and unlabeled data. | Learning from an entirely unlabeled dataset. |
| Input Data | Labeled data (input-output pairs). | Mostly unlabeled, with a small portion labeled. | Only unlabeled data. |
| Output | Predicts output labels. | Predicts output labels or clusters. | Finds patterns or clusters. |
| Examples | Classification (spam detection); regression (price prediction). | Text classification with limited labels; image recognition with few labeled images. | Clustering (customer segmentation); dimensionality reduction (PCA). |
| Algorithms | Linear Regression, Decision Trees, SVM. | Self-training, Label Propagation, Co-training. | K-Means, DBSCAN, PCA. |
| Training Complexity | Requires a large amount of labeled data. | Balances labeled and unlabeled data for efficiency. | Easier to find data, but harder to label outputs. |
| Use Cases | Fraud detection, medical diagnosis. | Speech recognition, image classification. | Market basket analysis, anomaly detection. |
| Strengths | High accuracy when enough labeled data is present. | Cost-effective with limited labels. | Reveals hidden patterns and structures. |
| Weaknesses | Expensive and time-consuming to label data. | Still requires a small labeled set for training. | Hard to evaluate performance. |

Key Differences:

 SL: Needs labeled data for both training and testing.

 SSL: Uses a small set of labeled data and a large set of unlabeled data to improve learning.

 USL: Focuses only on uncovering patterns in unlabeled data without specific guidance.
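
To make the contrast concrete, here is a minimal sketch (scikit-learn assumed, with synthetic data standing in for a real dataset) that fits a supervised classifier on labeled points and an unsupervised clusterer on the same points with the labels withheld:

```python
# Minimal SL vs. USL contrast on synthetic 2-D data (scikit-learn assumed).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=2, random_state=0)  # two groups of points

# Supervised: the labels y guide training, so we can predict labels for new inputs.
clf = LogisticRegression().fit(X, y)
print("SL predictions:", clf.predict(X[:5]))

# Unsupervised: no labels are given; the algorithm only discovers groupings.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("USL cluster ids:", km.labels_[:5])
```
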
REINFORCEMENT LEARNING
Reinforcement Learning is a type of machine learning where an agent learns to make decisions by
interacting with an environment. It receives feedback in the form of rewards or penalties based on its
actions and seeks to maximize cumulative rewards over time.

Reinforcement Learning is a feedback-based machine learning technique in which an agent learns to behave in an environment by performing actions and observing their results. For each good action the agent receives positive feedback, and for each bad action it receives negative feedback or a penalty.

In Reinforcement Learning, the agent learns automatically from this feedback, without any labeled data, unlike supervised learning. Since there is no labeled data, the agent must learn from its own experience.

RL addresses problems where decision making is sequential and the goal is long-term, such as game playing and robotics.

The agent interacts with the environment and explores it by itself. Its primary goal is to improve performance by accumulating the maximum positive reward.

Key Components:

1. Agent: The learner or decision-maker.

2. Environment: Everything the agent interacts with.


3. State (S): A specific situation or configuration of the environment.

4. Action (A): Possible moves the agent can take in a given state.

5. Reward (R): Feedback from the environment (positive or negative).

6. Policy (π): The strategy that the agent follows to choose actions.

7. Value Function (V): The expected cumulative reward from a state.

8. Q-Function (Q): The expected cumulative reward from taking an action in a given state.

How RL Works:

1. The agent observes the current state of the environment.

2. It chooses an action based on its policy.

3. The environment responds by moving to a new state and provides a reward.

4. The agent updates its policy based on the received reward and the new state.
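
This loop translates almost line-for-line into code. Below is a minimal, self-contained Q-learning sketch (the 5-state chain environment and all constants are invented for illustration) using the standard update Q(s,a) ← Q(s,a) + α[r + γ·max Q(s',·) − Q(s,a)]:

```python
# Toy Q-learning loop: a 5-state chain where the agent moves left/right
# and earns +1 for reaching the rightmost state. Hypothetical environment.
import random

N_STATES, ACTIONS = 5, [0, 1]            # action 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.9, 0.1        # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1     # next state, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        # 1-2. Observe the state, choose an action (epsilon-greedy policy).
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[s][x])
        # 3. The environment responds with a new state and a reward.
        s2, r, done = step(s, a)
        # 4. Update the policy (here, a Q-table) from the reward and new state.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print("Learned Q-values:", [[round(q, 2) for q in row] for row in Q])
```
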

Approaches to implement Reinforcement Learning:

 Model-Free RL: The agent learns from experience without a model of the environment.

o Example: Q-Learning, SARSA.

o Value-based:
The value-based approach seeks the optimal value function, i.e., the maximum value attainable at a state under any policy. The agent, in effect, estimates the long-term return from each state s when following policy π.

o Policy-based:
The policy-based approach searches directly for the optimal policy that maximizes future reward, without using a value function. The agent tries to follow a policy such that the action taken at each step maximizes the future reward. There are two main types of policy:

 Deterministic: The policy (π) always produces the same action in a given state.
 Stochastic: The policy assigns probabilities to actions, and the action taken is sampled from that distribution.
 Model-Based RL: The agent uses a model of the environment to plan actions.

o Example: Dynamic Programming.

o In the model-based approach, a model of the environment is built (or learned), and the agent uses it to simulate and plan actions. There is no single algorithm for this approach, because the model representation differs from one environment to another.

Common Algorithms:
 Q-Learning: A model-free algorithm that learns the value of actions.

 Deep Q-Networks (DQN): Combines Q-Learning with deep neural networks.

 Policy Gradient Methods: Directly optimize the policy, rather than the value function.

 Actor-Critic Methods: Combine policy-based and value-based methods.

Applications:

 Games: AlphaGo, Chess, and video games.

 Robotics: Training robots to perform tasks autonomously.

 Autonomous Vehicles: Decision-making in dynamic environments.

 Finance: Portfolio management and trading strategies.

 Healthcare: Personalized treatment recommendations.

Strengths and Challenges:

| Strengths | Challenges |
|---|---|
| Learns from interactions with the environment. | Requires a lot of data and exploration. |
| Can solve complex decision-making problems. | Balancing exploration and exploitation. |
| Adapts to dynamic environments. | Reward design can be tricky. |

Example: Self-Driving Cars

Overview:

Self-driving cars use RL to learn optimal driving policies by interacting with a simulated or real-world
environment. The goal is to maximize safety, efficiency, and passenger comfort while minimizing
accidents and fuel consumption.

Key Components:

| Component | Explanation |
|---|---|
| Agent | The self-driving car's decision-making system. |
| Environment | Roads, traffic, pedestrians, weather, etc. |
| State (S) | Current sensor readings (e.g., location, speed, distance from other cars). |
| Actions (A) | Steering, acceleration, braking, lane changes, etc. |
| Reward (R) | Positive rewards for safe driving and reaching destinations efficiently; negative rewards for collisions, traffic violations, and discomfort. |

How RL Works in Self-Driving Cars:

1. Observation: The car observes its surroundings using sensors like LiDAR, cameras, and GPS.

2. Decision-Making: Based on its current state, the car chooses an action (e.g., accelerate,
brake, turn).

3. Feedback: The environment responds with a reward:

o Positive: Staying in the lane, avoiding obstacles, following traffic rules.

o Negative: Collisions, sudden braking, crossing lanes without signaling.

4. Learning: The car updates its policy to maximize cumulative rewards using algorithms like
Deep Q-Networks (DQN) or Policy Gradient methods.

Example: Lane Keeping Task

 State: The car detects its position within the lane.

 Action: Adjusts the steering angle.

 Reward:

o +1 for staying centered in the lane.

o -10 for crossing lane boundaries.

 Goal: Learn to maintain lane position without continuous corrections.
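
A rough sketch of how this task could be encoded (the states, dynamics, and reward magnitudes below are simplified stand-ins for a real simulator):

```python
# Hypothetical discretized lane-keeping task: the state is the car's lateral
# offset from the lane center, in {-2,...,2}; beyond that it has left the lane.
def reward(offset):
    if abs(offset) > 2:
        return -10.0                     # crossed a lane boundary
    return 1.0 if offset == 0 else 0.0   # +1 only when centered

def step(offset, action):
    """action in {-1, 0, +1}: steer left / straight / right."""
    drift = 1                            # toy disturbance pushing the car rightward
    new_offset = offset + action + drift
    return new_offset, reward(new_offset)

s = 0
s, r = step(s, -1)                       # counter-steer against the drift
print(s, r)                              # 0 1.0 -> stayed centered
```

Plugged into a Q-learning loop like the one sketched earlier, the agent would learn to counter-steer against the drift on its own.
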

Key Differences Between RL, Supervised Learning, and Unsupervised Learning

| Aspect | Reinforcement Learning (RL) | Supervised Learning | Unsupervised Learning |
|---|---|---|---|
| Goal | Maximize cumulative rewards through actions | Learn a mapping from input to output labels | Discover patterns or clusters in data |
| Data | No labeled data; feedback comes from rewards and penalties | Requires labeled data (input-output pairs) | Uses only input data (no labels) |
| Learning Process | Trial-and-error interaction with an environment | Directly learns from labeled examples | Learns underlying structure from data |
| Output | Policy or action strategy | Prediction or classification | Clusters or latent representations |
| Examples | Game playing, robotics, autonomous driving | Image classification, fraud detection | Customer segmentation, anomaly detection |
EXAMPLES OF SUPERVISED, SEMI-SUPERVISED, AND UNSUPERVISED
LEARNING:
1. Supervised Learning (SL)

Definition: The model is trained on labeled data where each input has a corresponding output
(label).

Examples:

 Spam Detection:

o Input: Email text.

o Output: Spam or not spam (label).

o Algorithm: Logistic Regression, Support Vector Machines (SVM).

 Medical Diagnosis:

o Input: Patient data (symptoms, test results).

o Output: Disease classification.

o Algorithm: Decision Trees, Random Forests.

 House Price Prediction:

o Input: Features like square footage, location, and number of rooms.

o Output: Predicted price.


o Algorithm: Linear Regression.
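
A compact sketch of the spam-detection example (four toy phrases stand in for a real email corpus; scikit-learn assumed):

```python
# Tiny spam classifier: labeled texts -> bag-of-words -> logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "meeting at 10am tomorrow",
         "free cash click here", "project report attached"]
labels = [1, 0, 1, 0]                    # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)                 # supervised: inputs paired with labels
print(model.predict(["claim your free prize"]))   # likely [1] (spam)
```
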

2. Semi-Supervised Learning (SSL)

Definition: The model is trained on a small amount of labeled data and a large amount of unlabeled
data.

Examples:

 Image Classification (with limited labels):

o Input: Thousands of images, but only a few are labeled.

o Output: Class labels (e.g., dog, cat, car).

o Algorithm: Self-training, Label Propagation.

 Speech Recognition:

o Input: Audio recordings (few transcriptions available).

o Output: Text transcription.

o Algorithm: Deep Semi-Supervised Learning techniques.

 Customer Segmentation:

o Input: Transaction data (only some customers labeled with categories).

o Output: Group customers into meaningful segments.

o Algorithm: Semi-Supervised Clustering.
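
scikit-learn's self-training wrapper mirrors the pattern in these examples; below is a minimal sketch on synthetic data, with most labels masked as -1 (scikit-learn's marker for "unlabeled"):

```python
# Self-training: fit on the few labels, then pseudo-label the rest iteratively.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
y_partial = y.copy()
rng = np.random.RandomState(0)
y_partial[rng.rand(len(y)) < 0.9] = -1   # hide ~90% of the labels

base = SVC(probability=True)             # base learner must output probabilities
model = SelfTrainingClassifier(base).fit(X, y_partial)
print("accuracy on all data:", model.score(X, y))
```
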

3. Unsupervised Learning (USL)

Definition: The model is trained on completely unlabeled data to discover patterns or structures.

Examples:

 Clustering (Customer Segmentation):

o Input: Customer transaction data (no labels).

o Output: Group customers with similar behaviors.

o Algorithm: K-Means, DBSCAN.

 Anomaly Detection:

o Input: Network traffic data.

o Output: Identify unusual patterns (potential intrusions).

o Algorithm: Isolation Forest, One-Class SVM.

 Dimensionality Reduction (PCA):


o Input: High-dimensional data (e.g., gene expression data).

o Output: Reduced feature space for visualization.

o Algorithm: Principal Component Analysis (PCA), t-SNE.

 Market Basket Analysis:

o Input: Transaction data from a store.

o Output: Discover associations (e.g., "people who buy bread also buy butter").

o Algorithm: Apriori, FP-Growth.
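
A minimal sketch covering the clustering and dimensionality-reduction examples (synthetic data; scikit-learn assumed):

```python
# K-Means on unlabeled data: the algorithm only sees features, never labels.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=300, centers=3, n_features=5, random_state=42)

clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print("cluster sizes:", [list(clusters).count(c) for c in set(clusters)])

# Dimensionality reduction for visualization: 5 features -> 2 components.
X2 = PCA(n_components=2).fit_transform(X)
print("reduced shape:", X2.shape)        # (300, 2)
```
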

HOW TO CHOOSE A MACHINE LEARNING TECHNIQUE:

Selecting an appropriate machine learning (ML) technique depends on the problem type, the data
available, and the desired outcomes. Here’s a step-by-step guide to help you make the right choice:
Step 1: Define the Problem Type

1. Supervised Learning (Predictive Modeling)

o Use when: You have labeled data and need to predict outcomes.

o Problem types:

 Classification: Predict categories (e.g., spam detection).

 Regression: Predict continuous values (e.g., house prices).

2. Unsupervised Learning (Pattern Discovery)

o Use when: Data is unlabeled, and you want to find hidden patterns.

o Problem types:

 Clustering: Group similar data points (e.g., customer segmentation).

 Dimensionality Reduction: Reduce data complexity (e.g., PCA for visualization).

3. Semi-Supervised Learning

o Use when: You have a small labeled dataset and a large unlabeled dataset.

o Example: Image classification with limited labeled data.

4. Reinforcement Learning

o Use when: An agent interacts with an environment to maximize cumulative rewards.

o Example: Self-driving cars, game playing (e.g., AlphaGo).

Step 2: Understand the Data Characteristics

1. Size of Data:

o Large datasets may benefit from deep learning models.

o Small datasets might need simpler models (e.g., SVM, Decision Trees).

2. Type of Data:

o Structured data (tabular): Use algorithms like Random Forest, Gradient Boosting.

o Unstructured data (images, text): Use deep learning (CNNs for images,
RNNs/Transformers for text).

3. Label Availability:

o Fully labeled: Supervised learning.

o Partially labeled: Semi-supervised learning.

o Unlabeled: Unsupervised learning.
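
This rule of thumb is easy to codify; the helper below is purely illustrative (the 50% threshold is an invented cutoff, not a standard):

```python
# Illustrative rule of thumb: pick a learning paradigm from label coverage.
def choose_paradigm(n_labeled: int, n_total: int) -> str:
    if n_labeled == 0:
        return "unsupervised learning"
    if n_labeled < 0.5 * n_total:        # threshold is a judgment call
        return "semi-supervised learning"
    return "supervised learning"

print(choose_paradigm(0, 1000))          # unsupervised learning
print(choose_paradigm(50, 1000))         # semi-supervised learning
print(choose_paradigm(900, 1000))        # supervised learning
```
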


Step 3: Consider the Desired Output

1. Predictions:

o Use classification or regression models depending on output type.

2. Clustering:

o Use algorithms like K-Means, DBSCAN, or Hierarchical Clustering.

3. Anomaly Detection:

o Use Isolation Forest, One-Class SVM, or Autoencoders.

4. Decision-Making:

o Use Reinforcement Learning algorithms like Q-Learning or DQN.

Step 4: Evaluate Complexity and Interpretability Needs

1. Simple and Interpretable Models:

o Use Logistic Regression, Decision Trees, or Naive Bayes.

2. High Accuracy and Complex Patterns:

o Use Ensemble Methods (Random Forest, XGBoost) or Deep Learning.

Step 5: Check Computational Resources

 Limited resources: Use simpler models (Logistic Regression, Decision Trees).

 Ample resources: Use complex models (Neural Networks, Deep Learning).

Step 6: Match Algorithms to Problem Types

| Problem Type | Recommended Techniques |
|---|---|
| Binary Classification | Logistic Regression, SVM, Random Forest |
| Multi-Class Classification | Decision Trees, Gradient Boosting, CNN (images) |
| Regression | Linear Regression, XGBoost, Neural Networks |
| Clustering | K-Means, DBSCAN, Hierarchical Clustering |
| Dimensionality Reduction | PCA, t-SNE, Autoencoders |
| Anomaly Detection | Isolation Forest, One-Class SVM, Autoencoders |
| Reinforcement Learning | Q-Learning, Deep Q-Network (DQN), Policy Gradient |


MACHINE LEARNING MODELS

1. Linear-Based Models

These models assume a linear relationship between input features and the output.

Key Characteristics:

 Simple to interpret and implement.

 Assumes a straight-line relationship between input and output.

Examples:

 Linear Regression:

o Predicts continuous output based on a weighted sum of inputs.

o Equation: y = w1·x1 + w2·x2 + … + wn·xn + b (see the combined sketch after this list).

 Logistic Regression:

o Predicts probabilities for classification tasks (binary/multi-class).

o Uses a sigmoid function to map predictions to probabilities.

o Equation: P(y=1|x) = 1 / (1 + e^−(w·x + b))

 Support Vector Machines (SVM) (Linear Kernel):

o Finds a hyperplane that separates data into classes with maximum margin.
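
As referenced above, the three linear models can be fit side by side; here is a minimal sketch on synthetic data (scikit-learn assumed; the true weights w = (3, −2) and bias b = 1 are chosen arbitrarily so the regression can recover them):

```python
# Three linear-based models side by side on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y_cont = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.0   # y = w.x + b, noiseless
y_cls = (y_cont > 0).astype(int)               # threshold -> binary labels

lin = LinearRegression().fit(X, y_cont)
print("weights ~ [3, -2], bias ~ 1:", lin.coef_, lin.intercept_)

log = LogisticRegression().fit(X, y_cls)       # sigmoid over w.x + b
svm = LinearSVC().fit(X, y_cls)                # max-margin hyperplane
print("agree on sample 0:", log.predict(X[:1]), svm.predict(X[:1]))
```
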

2. Logic-Based and Algebraic Models

These models use logical rules, decision trees, and algebraic formulations to map inputs to outputs.

Key Characteristics:

 Often used in rule-based systems.

 Handle both categorical and numerical data well.

Examples:

 Decision Trees:

o Makes predictions by splitting data based on feature conditions (see the sketch after this list).


o Example:

 If age < 18: Class = Child

 Else: Class = Adult.

 Rule-Based Models:

o Uses sets of if-then rules to classify or predict outcomes.

o Example: Expert systems in medical diagnosis.

 Fuzzy Logic Models:

o Handle uncertainty and imprecision in data.

o Example: Control systems (e.g., washing machines with fuzzy logic).

 Linear Programming Models:

o Solve optimization problems using linear constraints and objectives.

o Example: Resource allocation in logistics.
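
As referenced above, here is a minimal sketch of a decision tree recovering the age rule, which doubles as a one-rule if-then system (the toy data is invented for illustration; scikit-learn assumed):

```python
# A decision tree learning the "if age < 18: Child else: Adult" rule.
from sklearn.tree import DecisionTreeClassifier, export_text

ages = [[5], [12], [17], [18], [25], [40]]
labels = ["Child", "Child", "Child", "Adult", "Adult", "Adult"]

tree = DecisionTreeClassifier(max_depth=1).fit(ages, labels)
print(export_text(tree, feature_names=["age"]))
# Prints a single split near age <= 17.5, i.e. the if-then rule above.
```
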

3. Probabilistic Models

These models predict outcomes based on probability distributions and likelihoods.

Key Characteristics:

 Handle uncertainty and make predictions based on probabilistic reasoning.

 Useful for modeling real-world scenarios with inherent randomness.

Examples:

 Naive Bayes Classifier:

o Assumes feature independence and uses Bayes' theorem for classification (a code sketch follows this list).

o Equation: P(C|X) = P(X|C) · P(C) / P(X)

 Hidden Markov Models (HMM):

o Used for sequential data (e.g., speech recognition).

o Example: Predicting weather based on previous states.

 Bayesian Networks:

o Graphical models representing probabilistic relationships between variables.

o Example: Medical diagnosis systems.

 Gaussian Mixture Models (GMM):

o Represents data as a mixture of multiple Gaussian distributions.

o Example: Clustering tasks with overlapping clusters.
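
As referenced above, a minimal Naive Bayes sketch (synthetic data; scikit-learn assumed):

```python
# Gaussian Naive Bayes: class probabilities P(C|X) via Bayes' theorem,
# assuming the features are independent given the class.
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, n_features=4, random_state=1)
nb = GaussianNB().fit(X, y)

# predict_proba returns P(C|X) for each class C, normalized over classes.
print(nb.predict_proba(X[:3]).round(3))
print(nb.predict(X[:3]), y[:3])
```
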


Comparison Table:

| Category | Key Idea | Examples | Use Cases |
|---|---|---|---|
| Linear-Based Models | Assumes linear relationships. | Linear Regression, Logistic Regression, SVM | Predicting prices, binary classification. |
| Logic-Based Models | Uses rules or decision trees. | Decision Trees, Rule-Based Systems | Rule-based classification, expert systems. |
| Probabilistic Models | Uses probability distributions. | Naive Bayes, HMM, Bayesian Networks | Spam detection, time-series analysis. |

Models can also be grouped by learning paradigm:

1. Supervised Learning Models

Models trained on labeled data, where each input has a corresponding output label.

Types:

 Classification: Predicts categorical labels.

o Examples:

 Logistic Regression

 Decision Trees

 Support Vector Machines (SVM)

 Random Forest

 k-Nearest Neighbors (k-NN)

 Neural Networks (for complex tasks)

 Regression: Predicts continuous values.

o Examples:

 Linear Regression

 Polynomial Regression

 Ridge/Lasso Regression

 Support Vector Regression (SVR)

 Gradient Boosting Machines (GBM)

 Neural Networks (for complex tasks)

2. Unsupervised Learning Models


Models trained on unlabeled data to find patterns or structures.

Types:

 Clustering: Groups similar data points.

o Examples:

 K-Means

 DBSCAN (Density-Based Spatial Clustering)

 Hierarchical Clustering

 Gaussian Mixture Models (GMM)

 Dimensionality Reduction: Reduces data complexity while preserving important features.

o Examples:

 Principal Component Analysis (PCA)

 t-SNE (t-Distributed Stochastic Neighbor Embedding)

 Autoencoders (deep learning-based)

 Anomaly Detection: Identifies outliers or unusual data points.

o Examples:

 Isolation Forest

 One-Class SVM

 Autoencoders (for deep learning-based anomaly detection)

3. Semi-Supervised Learning Models

Models trained on a combination of labeled and unlabeled data.

Examples:

 Self-training (bootstrapping)

 Label Propagation

 Label Spreading

 Generative models (e.g., Variational Autoencoders for SSL)

4. Reinforcement Learning Models

Models learn by interacting with an environment to maximize cumulative rewards.

Examples:

 Q-Learning
 Deep Q-Networks (DQN)

 Policy Gradient Methods

 Actor-Critic Models (A3C, DDPG)

 Monte Carlo Methods

5. Ensemble Models

Combine predictions from multiple models to improve performance.

Types:

 Bagging: Reduces variance by training multiple models in parallel.

o Examples: Random Forest, Bagged Decision Trees.

 Boosting: Reduces bias by training models sequentially, focusing on correcting errors.

o Examples: Gradient Boosting, AdaBoost, XGBoost, LightGBM, CatBoost.

 Stacking: Combines multiple models by training a meta-model to aggregate their outputs.
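
A minimal sketch contrasting bagging and boosting on the same synthetic task (scikit-learn assumed; the scores will vary with the data):

```python
# Bagging vs. boosting on one synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

bagged = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
boosted = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)

print("bagging (random forest):", bagged.score(Xte, yte))
print("boosting (gradient boosting):", boosted.score(Xte, yte))
```
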

6. Deep Learning Models

Neural network-based models, effective for large datasets and complex tasks.

Types:

 Convolutional Neural Networks (CNNs): For image data.

 Recurrent Neural Networks (RNNs): For sequential data (e.g., time series, text).

 Transformers: For natural language processing (e.g., BERT, GPT).

 Generative Adversarial Networks (GANs): For generating new data.

 Autoencoders: For unsupervised tasks like dimensionality reduction and anomaly detection.
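
As one concrete instance, a minimal CNN skeleton for 28×28 grayscale images might look like this (PyTorch assumed; the layer sizes are arbitrary choices and training is omitted):

```python
# Minimal CNN skeleton: convolution -> pooling -> linear classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 1x28x28 -> 8x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                            # 8x28x28 -> 8x14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # 10 class scores
)

x = torch.randn(4, 1, 28, 28)                   # a batch of 4 fake images
print(model(x).shape)                           # torch.Size([4, 10])
```
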

Summary of Model Types and Their Use Cases:

| Model Type | Common Use Cases |
|---|---|
| Supervised Learning | Spam detection, stock price prediction, image classification. |
| Unsupervised Learning | Customer segmentation, anomaly detection, data visualization. |
| Semi-Supervised Learning | Text classification, image labeling with limited data. |
| Reinforcement Learning | Self-driving cars, robotics, game playing (e.g., AlphaGo). |
| Ensemble Models | Fraud detection, recommendation systems. |
| Deep Learning | Image recognition, natural language processing, speech recognition. |
