[AI6322 / Processes of Intelligent Data Analysis]

MODULE 04 - FEATURE ENGINEERING

Module Objectives

At the end of this module, you are expected to:

1. Explain how feature engineering improves predictive modeling by crafting new features from existing data.
2. Describe how techniques like transformation and aggregation create tailored variables.
3. Apply methods such as one-hot encoding to enrich datasets and enhance model performance.
4. Evaluate feature engineering's impact on model accuracy.
5. Develop innovative solutions for unique modeling tasks.
6. Assess feature engineering's practicality and effectiveness in real-world scenarios.

4.1 Introduction to Feature Engineering

4.1.1 Definition of Feature Engineering:

Feature engineering is a crucial step in the process of preparing data for predictive modeling and machine learning. It involves creating, selecting, or transforming variables (features) from the raw data to improve the performance and accuracy of a predictive model. The goal of feature engineering is to extract meaningful information from the data and represent it in a way that can be effectively utilized by machine learning algorithms. This process often requires domain knowledge, creativity, and a deep understanding of the data to identify and create relevant features.

Feature engineering encompasses various tasks, including:

1. Feature Creation: Generating new features from existing ones, such as combining or transforming variables to capture important relationships in the data.
2. Feature Selection: Choosing the most relevant features while discarding irrelevant or redundant ones to reduce dimensionality and improve model efficiency.
3. Feature Scaling: Scaling or normalizing features to ensure that they have similar magnitudes, preventing some features from dominating others in the modeling process.
4. Encoding Categorical Variables: Converting categorical data (e.g., text or labels) into numerical representations that machine learning algorithms can work with.
5. Handling Missing Data: Dealing with missing values in a dataset, either by imputing values or removing instances with missing data.
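
A concrete illustration of several of these tasks, as a minimal sketch using pandas and scikit-learn on a small invented dataset (all column names and values here are hypothetical):

    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    # Hypothetical raw data with a categorical column and a missing value.
    df = pd.DataFrame({
        "age":    [25.0, 32.0, None, 51.0],
        "income": [40_000, 65_000, 52_000, 88_000],
        "plan":   ["basic", "premium", "basic", "premium"],
    })

    # Handling missing data: impute the missing age with the median.
    df["age"] = df["age"].fillna(df["age"].median())

    # Feature creation: derive a new variable from existing ones.
    df["income_per_year_of_age"] = df["income"] / df["age"]

    # Encoding categorical variables: one-hot encode "plan".
    df = pd.get_dummies(df, columns=["plan"])

    # Feature scaling: standardize numeric columns to zero mean, unit variance.
    num_cols = ["age", "income", "income_per_year_of_age"]
    df[num_cols] = StandardScaler().fit_transform(df[num_cols])
    print(df)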

4.1.2 Role of Feature Engineering in Predictive Modeling:

Feature engineering is a critical and often underestimated component of predictive modeling. Its importance can be summarized as follows:

• Improved Model Performance: Well-engineered features can lead to a significant improvement in the accuracy and generalization of predictive models. They can help the model capture complex relationships and patterns within the data, resulting in better predictions.
• Dimensionality Reduction: Effective feature engineering can reduce the number of features, which is particularly important when dealing with high-dimensional datasets. Reducing dimensionality can improve model training speed and reduce the risk of overfitting.
• Domain Knowledge Utilization: Feature engineering allows domain experts to inject their knowledge and understanding of the data into the modeling process. This can help create features that are highly relevant to the specific problem being solved.
• Handling Non-Numeric Data: Many machine learning algorithms require numeric input, so feature engineering is essential for converting categorical variables into a format that can be used in modeling.
• Model Interpretability: Carefully engineered features can enhance the interpretability of the model. When the features are created to represent meaningful aspects of the data, it becomes easier to understand the model's decision-making process.

In summary, feature engineering plays a crucial role in predictive modeling by transforming raw data into a more suitable format, enhancing model performance, reducing dimensionality, and allowing domain knowledge to be leveraged for better results. It is an iterative and creative process that can make or break the success of a machine learning project.

4.1.3 Methods for Creating New Features and Variables:

A. Feature Selection:

Feature selection is the process of choosing a subset of the most relevant features from the original dataset. This can improve model performance by reducing dimensionality and eliminating irrelevant or redundant features.

Example:

Suppose you're working on a classification problem to predict whether a customer will churn from a subscription service. Your dataset includes various features like customer ID, age, income, and customer satisfaction score. You may use feature selection techniques to identify that the customer ID is not relevant for predicting churn and can be safely removed, which reduces the dimensionality of your dataset.
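
A minimal sketch of this scenario, assuming a hypothetical churn dataframe with the columns described above (mutual information is used here as one of several possible relevance scores):

    import pandas as pd
    from sklearn.feature_selection import mutual_info_classif

    # Hypothetical churn data; "customer_id" carries no predictive signal.
    df = pd.DataFrame({
        "customer_id":  [101, 102, 103, 104, 105, 106],
        "age":          [23, 45, 31, 52, 29, 40],
        "income":       [35_000, 82_000, 54_000, 91_000, 47_000, 60_000],
        "satisfaction": [2, 9, 5, 8, 3, 7],
        "churned":      [1, 0, 1, 0, 1, 0],
    })

    X = df.drop(columns=["customer_id", "churned"])  # drop the irrelevant ID
    y = df["churned"]

    # Univariate relevance scores: higher means more informative about churn.
    scores = mutual_info_classif(X, y, random_state=0)
    print(dict(zip(X.columns, scores.round(3))))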

B. Feature Extraction:

Feature extraction involves creating new features by applying mathematical or statistical transformations to the existing variables, capturing essential information.

Example:

In image processing, you can use techniques like Principal Component Analysis (PCA) to reduce the dimensionality of pixel values while preserving critical information. PCA identifies the principal components (new features) in the images, which are linear combinations of the original pixel values. These components represent the main patterns in the images, such as edges, textures, or shapes.
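
A minimal sketch of PCA-based feature extraction, using scikit-learn's built-in 8x8 digit images (64 raw pixel features) in place of a real image corpus:

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    X, _ = load_digits(return_X_y=True)  # 1797 images, 64 pixel features each
    pca = PCA(n_components=10)           # keep the 10 strongest components
    X_reduced = pca.fit_transform(X)

    print(X.shape, "->", X_reduced.shape)  # (1797, 64) -> (1797, 10)
    print("variance explained:", pca.explained_variance_ratio_.sum().round(3))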

C. Feature Transformation:

Feature transformation involves modifying existing features to make them more suitable for modeling.

Example:

Suppose you have a dataset with income values that span a wide range. Applying a log transformation to the income feature can compress the range and make its distribution closer to normal. This transformation can help linear models perform better, since they work best when relationships are roughly linear and residuals are approximately normally distributed.
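
A minimal sketch of the transformation with NumPy (log1p, i.e. log(1 + x), is used so that zero incomes are handled gracefully; the values are invented):

    import numpy as np
    import pandas as pd

    # Hypothetical incomes spanning several orders of magnitude.
    df = pd.DataFrame({"income": [12_000, 35_000, 60_000, 250_000, 4_000_000]})

    # log1p compresses the long right tail of the distribution.
    df["log_income"] = np.log1p(df["income"])
    print(df)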

D. Interaction Features:

Interaction features are created by combining two or more existing features to capture relationships or interactions between them.

Example:

In a recommendation system, you might create an interaction feature by multiplying a user's rating and an item's popularity to represent the weighted user-item interaction. This feature can help your recommendation algorithm understand which items are more likely to be preferred by users.
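
A minimal sketch of this interaction feature with pandas, on invented rating and popularity values:

    import pandas as pd

    # Hypothetical user-item pairs with a rating and an item popularity score.
    df = pd.DataFrame({
        "user_rating":     [4.0, 2.5, 5.0],
        "item_popularity": [0.9, 0.3, 0.6],
    })

    # Interaction feature: elementwise product of the two signals.
    df["weighted_interaction"] = df["user_rating"] * df["item_popularity"]
    print(df)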

E. Dimensionality Reduction:

Dimensionality reduction techniques aim to reduce the number of features while preserving essential information.

Example:

Consider a dataset with a large number of features, such as gene expression data for cancer classification. You can use Principal Component Analysis (PCA) to reduce the dimensionality while retaining the most critical genes. PCA transforms the data into a lower-dimensional space where the new features (principal components) capture the primary sources of variation, making it easier to build a classification model with a reduced risk of overfitting.

These methods for creating new features are essential in feature engineering because they help tailor the dataset to the specific modeling task, improve model performance, and reduce the risk of overfitting. The choice of which method to use depends on the characteristics of the data and the goals of the machine learning or data analysis project.

4.1.4 Implementation of feature engineering in machine learning:

A. Steps in implementing feature engineering:

Feature engineering is a critical step in the machine learning pipeline, as it involves creating new features or transforming existing ones to improve the performance of your model. Here's a high-level overview of how you can implement feature engineering in machine learning:

1. Data Understanding: Before you can perform feature engineering, you need a deep understanding of your data. This includes exploring the dataset, understanding the domain, and identifying the problem you want to solve.
2. Feature Selection: The first step is often to remove irrelevant or redundant features that do not contribute to the model's performance. Feature selection methods, like correlation analysis or feature importance scores, can help in this process.

3. Feature Creation/Transformation: Feature engineering involves creating new features or transforming existing ones to provide more meaningful information to the model. Here are some common techniques (a short sketch of binning and time-based features appears after this list):
   • Encoding Categorical Variables: Convert categorical variables into numerical form using techniques like one-hot encoding, label encoding, or target encoding.
   • Feature Scaling: Scale numerical features to have similar ranges. Common methods include min-max scaling or z-score normalization.
   • Feature Extraction: Use dimensionality reduction techniques like Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA) to create new features from existing ones.
   • Binning and Discretization: Divide continuous variables into bins or intervals to capture non-linear relationships or reduce noise.
   • Feature Crosses: Combine multiple features to create new interactions. For example, combining "age" and "income" to create a feature representing "wealth."
   • Time-Based Features: Extract features from timestamps, such as day of the week, time of day, or time elapsed since a specific event.

4. Feature Engineering Iteration: Feature engineering is often an iterative process. You create or modify features, train your model, and evaluate its performance. If the model isn't performing well, you may need to go back and refine your features.
5. Domain Knowledge: Incorporate domain knowledge to engineer features that are specific to the problem you're solving. Domain expertise can lead to creative and effective feature engineering.
6. Automated Feature Engineering: You can use automated feature engineering tools like Featuretools or TPOT to help generate new features or find relevant feature combinations.
7. Validation: Always validate your feature engineering choices using cross-validation to ensure that they generalize well and don't introduce overfitting.
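
As a sketch of two of the techniques from step 3 (binning and time-based features), using pandas on invented data:

    import pandas as pd

    df = pd.DataFrame({
        "age": [22, 37, 45, 61, 78],
        "purchased": pd.to_datetime([
            "2023-01-02 09:15", "2023-01-06 18:40", "2023-01-07 02:05",
            "2023-01-09 13:30", "2023-01-14 22:10",
        ]),
    })

    # Binning: discretize a continuous variable into labeled intervals.
    df["age_group"] = pd.cut(df["age"], bins=[0, 30, 50, 120],
                             labels=["young", "middle", "senior"])

    # Time-based features extracted from the timestamp.
    df["day_of_week"] = df["purchased"].dt.dayofweek  # Monday=0 ... Sunday=6
    df["hour_of_day"] = df["purchased"].dt.hour
    print(df)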

B. Case studies and examples of feature engineering:

1. Text Classification:

• In Natural Language Processing (NLP), feature engineering can involve techniques like TF-IDF (Term Frequency-Inverse Document Frequency) or word embeddings (e.g., Word2Vec or GloVe) to convert text data into numerical features.
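
A minimal sketch of TF-IDF features with scikit-learn, on three invented documents:

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "the service was excellent and fast",
        "slow delivery, poor service",
        "excellent product, will buy again",
    ]

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(docs)  # sparse document-term matrix

    print(X.shape)  # (3 documents, vocabulary size)
    print(vectorizer.get_feature_names_out())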

2. Image Classification:

• For image data, you can use techniques like data augmentation (rotations, flips, zooms), pre-trained deep neural network features (transfer learning), or edge detection to extract relevant features from images.

3. Time Series Forecasting:

• In time series data, features can include lag values, rolling statistics, and seasonal decomposition to capture temporal patterns and trends.
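
A minimal sketch of lag and rolling-window features with pandas, on an invented daily sales series:

    import pandas as pd

    # Hypothetical daily sales.
    df = pd.DataFrame(
        {"sales": [100, 120, 90, 130, 150, 110]},
        index=pd.date_range("2023-01-01", periods=6, freq="D"),
    )

    # Lag feature: yesterday's value as a predictor for today.
    df["lag_1"] = df["sales"].shift(1)

    # Rolling statistics: 3-day moving average and standard deviation.
    df["roll_mean_3"] = df["sales"].rolling(window=3).mean()
    df["roll_std_3"] = df["sales"].rolling(window=3).std()
    print(df)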

4. Recommendation Systems:

• Feature engineering in recommendation systems can involve creating user-item interaction features, user and item embeddings, and collaborative filtering techniques.

5. Healthcare Predictive Modeling:

• In healthcare, features might include patient demographics, medical history, lab results, and their temporal evolution to predict disease outcomes.

6. Financial Predictive Modeling:

• Features for financial data can include technical indicators, historical price movements, and sentiment analysis scores from news articles.

7. E-commerce:

• In e-commerce, you can engineer features related to user behavior, product popularity, and time-sensitive promotions to improve recommendation systems and demand forecasting.

Feature engineering is a crucial skill in machine learning, as the quality of your features can significantly impact the model's performance. It requires a balance of domain knowledge, creativity, and a deep understanding of the data.

4.1.5 Comparative analysis of different feature engineering approaches:

Comparing different feature engineering approaches involves assessing the performance of machine learning models with and without various feature engineering techniques. Here's how you can conduct a comparative analysis:

1. Baseline Model: Start with a baseline model using the raw dataset, without any feature engineering. This serves as a reference point for comparison.
2. Feature Engineering Variations: Apply different feature engineering techniques or combinations of techniques to create multiple feature sets. Common techniques include encoding categorical variables, feature scaling, dimensionality reduction, and creating new features, as described in the previous section.

3. Model Selection: Choose a suitable machine learning algorithm for your problem, whether it's regression, classification, or clustering. Ensure consistency in model selection across all feature engineering variations.
4. Cross-Validation: Use cross-validation to train and evaluate each model with different feature sets. Cross-validation helps ensure the results are robust and minimize the risk of overfitting.
5. Evaluation Metrics: Select appropriate evaluation metrics for your problem. Common metrics include accuracy, precision, recall, F1-score, RMSE (Root Mean Squared Error), and MAE (Mean Absolute Error), among others.
6. Comparative Analysis: Compare the model performance metrics for each feature engineering variation with the baseline model. Analyze how feature engineering impacts the model's predictive power.
7. Statistical Tests: Conduct statistical tests to determine if the performance improvements are statistically significant. Techniques like paired t-tests or ANOVA can be used for this purpose.
8. Visualizations: Create visualizations to better understand the impact of feature engineering. Visualizations such as ROC curves, precision-recall curves, or scatter plots can help assess model performance.
9. Ablation Studies: In some cases, conduct ablation studies by removing certain engineered features to understand their individual contributions to the model's performance.

10. Interpretability: Consider the interpretability of the engineered features. Complex feature engineering may make it more challenging to interpret and explain the model's predictions.
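
A minimal sketch of steps 1-6 on synthetic data: the same model and cross-validation protocol are applied to the raw features (baseline) and to a simple engineered variation, and the mean scores are compared. The squared-feature expansion is just an illustrative choice:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    model = LogisticRegression(max_iter=1000)

    # Baseline: raw features only.
    base = cross_val_score(model, X, y, cv=5, scoring="accuracy")

    # Variation: raw features plus squares of each column.
    X_eng = np.hstack([X, X ** 2])
    eng = cross_val_score(model, X_eng, y, cv=5, scoring="accuracy")

    print(f"baseline:   {base.mean():.3f} +/- {base.std():.3f}")
    print(f"engineered: {eng.mean():.3f} +/- {eng.std():.3f}")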

4.1.6 Measuring the impact of feature engineering on predictive modeling:

Measuring the impact of feature engineering on predictive modeling is essential to determine if the effort invested in creating new features or modifying existing ones leads to better model performance. Here are some ways to measure this impact:

1. Performance Metrics: Compare the model's performance metrics (e.g., accuracy, precision, recall, RMSE) before and after feature engineering. If the metrics improve significantly, it indicates a positive impact.
2. Cross-Validation Scores: Use cross-validation to obtain stable performance scores for models with and without feature engineering. Compare the mean scores and standard deviations to assess the consistency of improvements.
3. Feature Importance: If your model supports feature importance scores (e.g., random forests, gradient boosting), examine these scores to see which features contributed the most to the model's predictive power. Feature engineering should ideally enhance the importance of relevant features (a sketch follows this list).

4. Model Complexity: Consider the complexity of the model. Feature engineering may allow you to build simpler models that generalize better to unseen data, resulting in a more interpretable and reliable solution.
5. Overfitting: Watch out for signs of overfitting. Sometimes, aggressive feature engineering can lead to overfitting if the model captures noise in the data. Monitor the performance on validation data and use regularization techniques to mitigate overfitting.
6. Computational Efficiency: Assess the computational cost of feature engineering. Some complex feature engineering techniques may significantly increase training and prediction time, which can be a drawback in real-time applications.
7. Domain Knowledge: Integrate domain knowledge and business understanding into the evaluation. Feature engineering should align with the problem domain, and its impact on business outcomes should be considered.
8. A/B Testing: In some cases, you can conduct A/B testing to measure the real-world impact of feature engineering on key performance indicators. This is particularly relevant for applications like recommendation systems and e-commerce.

9. Qualitative Assessment: Beyond quantitative metrics, consider qualitative aspects such as the interpretability, ease of implementation, and robustness of the feature engineering techniques.
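
As a sketch of point 3 above, impurity-based importance scores from a tree ensemble can be inspected directly; scikit-learn's built-in breast cancer dataset is used purely for convenience:

    import pandas as pd
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Rank features by the importance the forest assigns them.
    importances = pd.Series(forest.feature_importances_, index=X.columns)
    print(importances.sort_values(ascending=False).head(5))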

Remember that the impact of feature engineering can vary from one problem to another, and there's no one-size-fits-all approach. It requires experimentation, careful analysis, and a deep understanding of the data and the problem domain.

4.1.7 Developing innovative feature engineering techniques:

Innovative feature engineering involves thinking creatively and coming up with new ways to create features that can enhance the performance of machine learning models. Here are some strategies for developing innovative feature engineering techniques:

1. Feature Crosses: Create interaction features by combining two or more existing features. For example, in a real estate prediction model, you can combine the "number of bedrooms" and "square footage" to create a "size per bedroom" feature.
2. Polynomial Features: Consider raising numerical features to higher powers (e.g., squared, cubed) to capture nonlinear relationships. This is particularly useful for regression tasks. (A sketch of these two techniques, together with statistical moments, follows this list.)

3. Time-Based Features: When working with time series data, generate time-based features such as day of the week, time of day, or holidays. These can help capture temporal patterns.
4. Text Mining Features: For text data, perform advanced text mining techniques like sentiment analysis, topic modeling, or named entity recognition to extract meaningful features from textual content.
5. Graph Features: If your data has a graph structure, engineer features based on network properties, such as node centrality, shortest path lengths, or community detection metrics.
6. Embeddings: Use techniques like Word2Vec or Doc2Vec for text data or graph embeddings for graph data to create dense vector representations of entities, which can be used as features.
7. Autoencoders: Implement autoencoder neural networks to learn compact representations of data, which can serve as novel features for various tasks.
8. Fuzzy Matching: In data with text or string attributes, apply fuzzy matching algorithms to find similarities between records or entities, which can be transformed into features.
9. Geospatial Features: Utilize geospatial data to engineer features such as distances to important landmarks, density of certain businesses or facilities, or geographic clusters.

10. Statistical Moments: Calculate statistical moments (mean, variance, skewness, kurtosis) for numerical features to capture distribution characteristics.
11. Stacking and Ensembling: Create features by stacking predictions from multiple base models. These predictions can serve as features for a meta-model.
12. Custom Transforms: Design custom feature transformations based on domain knowledge. For instance, in the context of manufacturing, you might engineer features related to machine operating conditions.
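
A minimal sketch of techniques 1-2 and 10 from this list, on invented data (pandas, SciPy, and scikit-learn assumed available):

    import numpy as np
    import pandas as pd
    from scipy.stats import kurtosis, skew
    from sklearn.preprocessing import PolynomialFeatures

    # Feature crosses and polynomial features (techniques 1-2), on
    # hypothetical housing data: columns are [bedrooms, square_footage].
    X = np.array([[3.0, 1500.0], [4.0, 2400.0], [2.0, 900.0]])
    poly = PolynomialFeatures(degree=2, include_bias=False)
    X_poly = poly.fit_transform(X)
    print(poly.get_feature_names_out(["bedrooms", "sqft"]))
    # -> ['bedrooms' 'sqft' 'bedrooms^2' 'bedrooms sqft' 'sqft^2']
    size_per_bedroom = X[:, 1] / X[:, 0]  # the "size per bedroom" cross

    # Statistical moments (technique 10): per-machine summary features
    # from hypothetical sensor readings.
    readings = pd.DataFrame({
        "machine": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "reading": [1.0, 1.2, 0.9, 1.1, 3.0, 2.1, 4.5, 3.8],
    })
    moments = readings.groupby("machine")["reading"].agg(
        mean="mean", variance="var", skewness=skew, kurt=kurtosis)
    print(moments)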

4.1.8 Tailoring feature engineering to specific predictive modeling tasks:

Feature engineering should be tailored to the specific requirements and characteristics of the predictive modeling task. Here's how you can adapt your feature engineering to different types of modeling tasks:

1. Classification: When working on classification tasks, focus on creating features that help discriminate between different classes. Use techniques like one-hot encoding for categorical variables and consider feature scaling.
2. Regression: In regression tasks, engineer features that capture relationships and trends within the data. Consider feature transformations, interaction terms, and normalization as needed.

3. Time Series Forecasting: Emphasize lagged features, rolling statistics, and seasonal components for time series forecasting. Include time-related information and external factors that might affect the time series.
4. Recommendation Systems: Create features that represent user-item interactions, user preferences, and collaborative filtering signals. Matrix factorization and embedding techniques are valuable in recommendation tasks.
5. Clustering and Segmentation: Tailor features to capture patterns that distinguish different clusters or segments within the data. Focus on within-cluster similarity and between-cluster dissimilarity.
6. Anomaly Detection: Design features that highlight deviations from normal behavior. Use statistical measures and data distribution properties to create anomaly-detection-specific features.
7. Natural Language Processing (NLP): For NLP tasks, engineer features that capture semantic and syntactic information, such as TF-IDF, word embeddings, and named entity recognition.
8. Reinforcement Learning: In reinforcement learning, create state representations that simplify the environment while preserving critical information. Feature engineering for reinforcement learning often requires a deep understanding of the problem.
9. Multimodal Tasks: When dealing with tasks involving multiple data modalities (e.g., text and images), create features that effectively integrate information from each modality, ensuring cross-modal consistency.

10. Streaming Data: Adapt feature engineering to streaming data by considering real-time feature extraction, sliding windows, and continuous updates to models.
11. Interpretable Models: Focus on creating features that facilitate the interpretability of models, especially when working with models where interpretability is crucial, like linear regression or decision trees.
12. High-Dimensional Data: When dealing with high-dimensional data, explore dimensionality reduction techniques like PCA or LDA to reduce feature dimensionality while retaining essential information.

Tailoring feature engineering to the specific modeling task ensures that the engineered features are relevant, informative, and can lead to improved model performance. It often involves a deep understanding of the problem domain and iterative experimentation to find the most effective feature engineering strategies.

4.1.9 Real-world implications of feature engineering:

A. Key real-world implications:

Feature engineering has significant real-world implications in predictive modeling across various domains and applications. Here are some of the key implications:

1. Improved Model Performance: Effective feature engineering can substantially enhance the predictive power of machine learning models. By creating relevant, informative features, models can better capture underlying patterns in the data.
2. Domain Knowledge Utilization: It enables the incorporation of domain-specific knowledge into the modeling process. Experts can design features that reflect their understanding of the problem, leading to more interpretable and accurate models.
3. Reduced Data Dimensionality: Feature engineering can reduce the dimensionality of high-dimensional datasets, making modeling more efficient and allowing for better visualization and interpretation of the results.
4. Interpretability: Carefully engineered features often lead to more interpretable models. Simple, meaningful features can be easier to explain to stakeholders and regulators.
5. Feature Importance Insights: The process of feature engineering provides insights into which features are most relevant for a particular task. This information is valuable for understanding the driving factors behind predictions.
6. Efficient Model Training: Well-engineered features can lead to faster model training times and improved efficiency, especially when dealing with large datasets.

7. Robustness and Generalization: Properly engineered features can lead to more robust models that generalize well to unseen data. They help the model focus on meaningful patterns and reduce sensitivity to noise.

B. Limitations and challenges in practical applications:

While feature engineering is a powerful technique, it also comes with several limitations and challenges in practical applications:

1. Data Quality: Feature engineering heavily relies on the quality of the underlying data. No amount of feature engineering can compensate for fundamentally flawed or biased data.
2. Overfitting: Aggressive feature engineering can lead to overfitting, where the model learns to fit the training data but performs poorly on new, unseen data. Balancing complexity is crucial.
3. Computational Cost: Some feature engineering techniques can significantly increase computational costs, particularly when dealing with high-dimensional data or complex transformations.
4. Expertise Required: Effective feature engineering often demands domain expertise. Not all problems can be solved by automated or generic feature engineering techniques.
5. Data Shift: Feature engineering can introduce data shift if the features are not consistent between training and deployment environments. This can degrade model performance.

6. Bias and Fairness: Feature engineering may inadvertently introduce bias if not done carefully. It's important to consider fairness and ethical considerations in feature design.
7. Dimensionality Reduction Challenges: Reducing data dimensionality using feature engineering can be challenging, and it may lead to information loss if not done thoughtfully.
8. Interpretability Trade-off: More complex feature engineering can make models less interpretable, especially when creating high-level abstractions from raw data.
9. Feature Selection Complexity: Choosing the right features from a large set can be challenging. Automated feature selection techniques may not always yield the best results.

4.1.10 Efficiency and trade-offs in using feature engineering in predictive modeling:

Efficiency and trade-offs in feature engineering are crucial considerations in practical predictive modeling:

1. Efficiency vs. Performance: There is often a trade-off between feature engineering complexity and model performance. Complex features can lead to better predictive accuracy but may require more computational resources.
2. Computational Efficiency: Some feature engineering techniques, such as dimensionality reduction or advanced text processing, can significantly increase computational demands. Balancing computational efficiency with feature quality is important.
3. Feature Selection: Deciding which features to include is a trade-off between dimensionality and predictive power. Feature selection techniques help in finding the right balance.

4. Automation vs. Manual Engineering: Automated feature engineering tools can be efficient, but they may not capture domain-specific nuances. Manual feature engineering requires expertise but can yield more tailored results.
5. Data Shift Considerations: When deploying models, it's essential to consider the potential shift in data distribution, especially if the feature engineering is performed on historical data that may differ from real-time or future data.
6. Model Interpretability: Complex feature engineering may sacrifice model interpretability, which can be a critical concern in applications where transparency is necessary.
7. Resource Constraints: In resource-constrained environments (e.g., edge devices), feature engineering should be optimized for efficient model inference, which may limit the complexity of features.
8. Evaluation of Trade-offs: The trade-offs between efficiency and feature quality should be evaluated through cross-validation and domain-specific knowledge to make informed decisions.

In practice, finding the right balance between feature engineering complexity and model performance depends on the specific problem, the available resources, and the project's goals. Careful evaluation and experimentation are essential to make informed decisions regarding feature engineering trade-offs.
