
Introduction to Machine Learning

PT MORA TELEMATIKA INDONESIA


Grha 9 6th Floor Jalan Penataran No. 9 Jakarta Pusat 10320
Phone: +62 21 3199 8600 Fax: +62 21 314 2882
Machine Learning
Introductory Example:
When to play golf?

Collect data:
• Consulting experts, e.g., golf players
• Watching players
• Collecting weather data



Machine Learning
Introductory Example:
When to play golf?

• Create a model using one or several classifiers
– e.g., decision trees
• Evaluate the model
– e.g., classification error (a minimal sketch of both steps follows below)
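
As an illustration, the sketch below builds a decision tree with scikit-learn on a tiny, hypothetical weather table (the column names and values are invented for the example) and reports its classification error:

```python
# Hypothetical golf dataset: features and labels are made up for illustration;
# real data would come from experts, observation, and weather records.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

data = pd.DataFrame({
    "outlook": ["sunny", "sunny", "overcast", "rain", "rain", "overcast", "sunny", "rain"],
    "windy":   [False, True, False, False, True, True, False, True],
    "play":    ["no", "no", "yes", "yes", "no", "yes", "yes", "no"],
})

# Create the model: one-hot encode the weather features and fit a decision tree.
X = pd.get_dummies(data[["outlook", "windy"]])
y = data["play"]
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Evaluate the model: classification error on the (tiny) training set.
error = 1 - accuracy_score(y, model.predict(X))
print(f"classification error: {error:.2f}")
```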



Machine Learning



What is Machine Learning

Machine learning teaches computers to do what comes naturally to humans and animals: learn from experience. Machine learning algorithms use computational methods to “learn” information directly from data without relying on a predetermined equation as a model. The algorithms adaptively improve their performance as the number of samples available for learning increases.

The term “machine learning” was first introduced by Arthur Samuel, a computer scientist from the USA, in 1959. Roughly, he described machine learning as a branch of computer science that examines how a machine can solve a problem without being explicitly programmed.



What is Machine Learning

Ray Solomonoff (1926–2009, USA)

Widely considered the father of machine learning for his 1956 report “An Inductive Inference Machine”.

Arthur Samuel (1901–1990, USA)

His Samuel Checkers-playing Program is considered the first self-learning program (developed from the 1950s until the mid-1970s).



What is Machine Learning

• A branch of Artificial Intelligence (AI)
• Machines learn from experience
• Information is extracted directly from data without relying on a predefined equation
• Model accuracy depends on the amount of data and the iteration process
• A multidisciplinary field, drawing on artificial intelligence, probability and statistics, computational complexity theory, control theory, information theory, philosophy, psychology, neurobiology and other fields
• Machine learning can provide precise analysis or conclusions using relatively simple algorithms and limited labeled data, such as lookup-style functions
• Features (inputs) & labels (outputs)



Machine Learning vs Traditional Programming

Traditional Programming
Traditional programming is a manual process: a person (the programmer) creates the program by explicitly formulating and coding the rules and logic.

Machine Learning
The rules are learned from data, so the learned model takes the place of the hand-written program (Program = Model).



Other Terminologies

• Statistical inference (SI) is a branch of applied mathematics that uses data analysis to deduce properties of an underlying probability distribution of a population.
• Pattern recognition (PR) is a branch of computer science focused on the automated recognition of patterns and regularities in data (image processing, speech recognition, etc.).
• Data mining is a field of database engineering that focuses on the extraction of implicit, previously unknown, and potentially useful information from data. The idea is to build computer programs that sift through databases automatically, seeking regularities or patterns.



Branch of Machine Learning
• Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction.
• Machine reasoning (MR) systems generate conclusions from previously acquired knowledge by applying logical techniques such as deduction and induction.
• Natural language processing (NLP) is a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, in its pursuit to bridge the gap between human communication and computer understanding.
• Deep learning is part of a broader family of machine learning methods based on the structure and function of artificial neural networks; it uses multiple layers to progressively extract higher-level features from the raw input. It is widely used in image processing.



Branch of Machine Learning
• Deep learning involves a more complex learning process in studying, analyzing and classifying data.
• Deep learning is suitable for complex data or data without a limited set of labels, such as language, voice or images.



Why Machine Learning

Machine learning methodologies have proven to be of great practical value in a variety of application domains, in situations where it is impractical to manually extract information from data:
• Automatic or semi-automatic techniques are more adequate
• Data volumes and varieties are increasing dramatically, while computational processing becomes cheaper and more powerful and storage more affordable
Data mining and Bayesian analysis have become more popular than ever.
Machine learning is going to have huge effects on the economy and on living in general. Entire work tasks and industries can be automated, and the job market will be changed forever.
Machine learning accelerates the growth of Industrial Revolution 4.0, in which data and information are the most important assets.



When Machine Learning Used

Consider using machine learning when you have a complex task or problem involving a
large amount of data and lots of variables, but no existing formula or equation



Types of Machine Learning

• Supervised learning: classification, regression
• Unsupervised learning: clustering
• Reinforcement learning: Q-learning, SARSA (State-Action-Reward-State-Action)



Supervised Learning
The aim of supervised machine learning is to build a model
that makes predictions based on evidence in the presence of
uncertainty.
A supervised learning algorithm takes a known set of input
data and known responses to the data (output) and trains a
model to generate reasonable predictions for the response to
new data.

• Classification techniques predict discrete responses; for example, whether an email is genuine or spam, or whether a tumor is cancerous or benign. Classification models classify input data into categories. Typical applications include medical imaging, speech recognition, and credit scoring.
• Regression techniques predict continuous responses; for example, changes in temperature or fluctuations in power demand. Typical applications include electricity load forecasting and algorithmic trading (a minimal sketch of both follows below).
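
A hedged sketch of both flavours on synthetic data with scikit-learn (the dataset sizes and the particular models are placeholder choices):

```python
# Supervised learning sketch: one classifier (discrete response) and one
# regressor (continuous response), each trained on labeled examples.
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

# Classification: predict a discrete category (e.g., spam vs. genuine).
Xc, yc = make_classification(n_samples=200, n_features=5, random_state=0)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=0)
clf = LogisticRegression().fit(Xc_tr, yc_tr)
print("classification accuracy:", clf.score(Xc_te, yc_te))

# Regression: predict a continuous value (e.g., power demand).
Xr, yr = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
reg = LinearRegression().fit(Xr_tr, yr_tr)
print("regression R^2:", reg.score(Xr_te, yr_te))
```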



Unsupervised Learning

Unsupervised learning finds hidden patterns or intrinsic structures in data. It is used to draw inferences from datasets consisting of input data without labeled responses.

Clustering is the most common unsupervised learning technique. It is used for exploratory data analysis to find hidden patterns or groupings in data. Applications for clustering include gene sequence analysis, market research, and object recognition.

Association allows you to establish associations amongst data objects inside large databases. This unsupervised technique is about discovering interesting relationships between variables in large databases. For example, people who buy a new home are more likely to buy new furniture.
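
As an illustrative sketch (synthetic data, parameters chosen arbitrarily), clustering with k-means needs no labels at all:

```python
# Unsupervised learning sketch: group unlabeled points into clusters with k-means.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # true labels are discarded
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])
```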



Unsupervised Learning



Unsupervised Learning

• Let's take the case of a baby and her family dog.
• She knows and identifies this dog.
• She learns that it has two ears and two eyes and walks on four legs.

• A few weeks later a family friend brings along a dog and tries to play with the baby.
• The baby has not seen this dog before, but she recognizes many familiar features (two ears, eyes, walking on four legs) that are like her dog's.
• She identifies the new animal as a dog.



Unsupervised Learning

This is unsupervised learning, where you are not taught but you learn from the data (in this case, data about a dog).

Had this been supervised learning, the family friend would have told the baby that it's a dog.



Supervised vs Unsupervised Learning

Reasons for using unsupervised learning:
• It finds all kinds of unknown patterns in data.
• It helps you find features that can be useful for categorization.
• It can take place in real time, so all the input data is analyzed and labeled in the presence of learners.
• It is easier to get unlabeled data from a computer than labeled data, which needs manual intervention.
Semi-supervised Learning

Semi-supervised learning is a class of machine learning tasks and techniques that also make use of unlabeled data for training, typically a small amount of labeled data combined with a large amount of unlabeled data.
Many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce considerable improvement in learning accuracy.

An example of the influence of unlabeled data in semi-supervised learning: the top panel shows a decision boundary we might adopt after seeing only one positive (white circle) and one negative (black circle) example. The bottom panel shows a decision boundary we might adopt if, in addition to the two labeled examples, we were given a collection of unlabeled data (gray circles). This could be viewed as performing clustering and then labeling the clusters with the labeled data, pushing the decision boundary away from high-density regions, or learning an underlying one-dimensional manifold where the data reside.
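
A minimal sketch of the idea with scikit-learn's self-training wrapper, assuming synthetic data where roughly 90% of the labels are hidden (-1 is the library's convention for "unlabeled"):

```python
# Semi-supervised sketch: a base classifier is iteratively retrained on its own
# confident predictions for the unlabeled samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.9] = -1          # hide ~90% of the labels

model = SelfTrainingClassifier(SVC(probability=True, gamma="auto"))
model.fit(X, y_partial)
print("accuracy against the full (true) labels:", model.score(X, y))
```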



Reinforcement Learning
Reinforcement Learning (RL) is a type of machine learning technique that enables an agent to learn in an interactive environment by trial and error, using feedback from its own actions and experiences.

Comparison of supervised, reinforcement and unsupervised learning:
• Mapping input to output: supervised learning uses a correct set of actions (labels) as feedback to the model; reinforcement learning uses rewards and punishments as positive and negative feedback to the model (agent); in unsupervised learning this mapping is unavailable.
• Goals: supervised learning finds predictions based on evidence in the presence of uncertainty; reinforcement learning finds a suitable action model that maximizes the model's total cumulative reward; unsupervised learning finds similarities and differences within the data set.
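
To make the reward-feedback idea concrete, here is an illustrative sketch of the tabular Q-learning update (the states, actions, and the single transition below are hypothetical):

```python
# Q-learning sketch: the agent keeps a table Q[state, action] and nudges it
# toward reward + discounted best future value after every transition.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9          # learning rate and discount factor

def q_update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

# One hypothetical transition: action 1 in state 0 yields reward +1, lands in state 3.
q_update(state=0, action=1, reward=1.0, next_state=3)
print(Q[0])
```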



More on ML Types



Which Algorithm to Use

There is no best method or one-size-fits-all. Finding the right algorithm is partly just trial and error; even highly experienced data scientists can't tell whether an algorithm will work without trying it out.

The algorithm selection also depends on the size and type of data you're working with, the insights you want to get from the data, and how those insights will be used.



How Machine Learning Works

A typical machine learning workflow cycles through five stages:
Data Collection → Feature Extraction → Feature Selection → Model Learning → Model Evaluation


How Machine Learning Works

Data Collection



How Machine Learning Works
Data Collection
Goals
 First requirement: having good data
• Get meaningful, representative examples of each concept to capture, balanced across classes, etc.
• Get accurate annotations; e.g., songs with accurate emotion tags might be hard to obtain, as emotion is naturally ambiguous.
 Questions that can be used for guidance
• What problem do you want to solve?
• Why do you want to solve it?
• Has anybody done it before?
• What is the domain of your problem? Is it related to computer vision, natural language processing, sensor data, or something else?



How Machine Learning Works
Data Collection
Additional Points
It is important to plan ahead for how much data you may acquire. You cannot just save it to a hard disk in a few directories and assume you are ready to go.
A lot of effort goes into data storage, organization, annotation and pre-processing. Data privacy is an important part if individual people's personal information is to be stored.
Some data can be stored in simple text files, but for other data you may want to use a database (or a lightweight version) for faster access. If the data is too big to fit in memory, then big data techniques may need to be adopted (e.g., the Hadoop framework).



How Machine Learning Works

Feature Extraction



How Machine Learning Works
Feature Extraction
Goals
 Obtaining meaningful, accurate features.
 If the number of features becomes similar to (or even bigger than!) the number of observations stored in a dataset, this will most likely lead to a machine learning model that suffers from overfitting.
 In order to avoid this type of problem, it is necessary to apply either regularization or dimensionality reduction techniques (feature extraction).
 This brings advantages such as:
 Accuracy improvements.
 Overfitting risk reduction.
 Speed-up in training.
 Improved data visualization.
 Increase in explainability of our model.



How Machine Learning Works
Feature Extraction

 Feature extraction aims to reduce the number of features in a dataset by creating new features from the existing ones (and then discarding the original features). This new, reduced set of features should then be able to summarize most of the information contained in the original set of features. In this way, a summarized version of the original features can be created from a combination of the original set.



How Machine Learning Works
Feature Extraction - Techniques :
Principal Component Analysis (PCA) – Unsupervised LM
 PCA is one of the most widely used linear dimensionality reduction techniques.
 We take our original data as input and try to find a combination of the input features that best summarizes the original data distribution, so as to reduce its original dimensions.
 In PCA, the original data is projected onto a set of orthogonal axes, and each of the axes is ranked in order of importance.
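
A minimal sketch with scikit-learn (the Iris dataset is used only as a stand-in for "our original data"):

```python
# PCA sketch: project 4-dimensional data onto 2 orthogonal axes ranked by
# how much of the original variance each axis explains.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
```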



How Machine Learning Works
Feature Extraction - Techniques :
Independent Component Analysis (ICA) – Unsupervised LM
 ICA is a linear dimensionality reduction technique that takes as input a mixture of independent components and aims to correctly identify each of them (deleting all the unnecessary noise).

 As a simple example of an ICA application, consider an audio recording in which two different people are talking. Using ICA we could, for example, try to identify the two different independent components in the recording (the two different people). In this way, our unsupervised learning algorithm could distinguish between the different speakers in the conversation.
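
A minimal sketch of the same idea on synthetic signals (the two "sources" and the mixing matrix below are invented; a real case would use recorded audio):

```python
# ICA sketch: two independent source signals are linearly mixed, then FastICA
# recovers estimates of the original independent components.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1, s2 = np.sin(2 * t), np.sign(np.sin(3 * t))        # two independent "speakers"
S = np.c_[s1, s2] + 0.1 * rng.standard_normal((2000, 2))
A = np.array([[1.0, 0.5], [0.5, 2.0]])                 # mixing matrix ("microphones")
X = S @ A.T                                            # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_estimated = ica.fit_transform(X)                     # estimated independent components
print(S_estimated.shape)                               # (2000, 2)
```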



How Machine Learning Works
Feature Extraction - Techniques :
Linear Discriminant Analysis (LDA) – Supervised LM
 LDA aims to maximize the distance between the means of the classes and minimize the spread within each class.
 This is a good choice because maximizing the distance between the means of the classes when projecting the data into a lower-dimensional space can lead to better classification results (thanks to the reduced overlap between the different classes).
 When using LDA, it is assumed that the input data follows a Gaussian distribution; applying LDA to non-Gaussian data can therefore lead to poor classification results.
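
A minimal sketch with scikit-learn (Iris again serves as placeholder labeled data):

```python
# LDA sketch: a supervised projection that uses the class labels to maximize
# between-class separation while minimizing within-class spread.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit_transform(X, y)      # unlike PCA, the labels y are required
print(X_reduced.shape)                   # (150, 2)
```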
How Machine Learning Works
Feature Extraction - Techniques :
Locally Linear Embedding (LLE) – Supervised/Unsupervised LM
 Locally Linear Embedding is a dimensionality reduction technique based on manifold learning. A manifold is an object of D dimensions that is embedded in a higher-dimensional space.
 Manifold learning then aims to make this object representable in its original D dimensions, instead of being represented in an unnecessarily larger space.

 A typical example used to explain manifold learning is the Swiss Roll manifold: we are given input data whose distribution resembles a roll (in a 3D space), and we can then unroll it so as to reduce the data to a two-dimensional space.
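
A minimal sketch of exactly that Swiss Roll example with scikit-learn (sample size and neighbor count are arbitrary choices):

```python
# LLE sketch: "unroll" the 3-D Swiss Roll manifold into 2 dimensions.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)       # roll-shaped 3-D data
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
X_unrolled = lle.fit_transform(X)
print(X_unrolled.shape)                                      # (1000, 2)
```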
How Machine Learning Works
Feature Extraction - Techniques :
t-distributed Stochastic Neighbor Embedding (t-SNE) – Supervised/Unsupervised LM
 t-SNE is a non-linear dimensionality reduction technique that is typically used to visualize high-dimensional datasets. Some of the main applications of t-SNE are Natural Language Processing (NLP), speech processing, etc.
 t-SNE works by minimizing the divergence between a distribution constituted by the pairwise probability similarities of the input features in the original high-dimensional space and its equivalent in the reduced low-dimensional space. t-SNE uses the Kullback-Leibler (KL) divergence to measure the dissimilarity of the two distributions, and the KL divergence is then minimized using gradient descent.
 When using t-SNE, the higher-dimensional space is modelled using a Gaussian distribution, while the lower-dimensional space is modelled using a Student's t-distribution.
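
A minimal sketch (the digits dataset stands in for any high-dimensional data; the perplexity value is an arbitrary but typical choice):

```python
# t-SNE sketch: embed 64-dimensional digit images into 2-D for visualization.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
X_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(X_2d.shape)                # (1797, 2) -- ready to scatter-plot, colored by y
```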



How Machine Learning Works
Feature Extraction - Techniques :
Autoencoders – Supervised/Unsupervised LM
 The main difference between autoencoders and other dimensionality reduction techniques is that autoencoders use non-linear transformations to project data from a high dimension to a lower one.
 There exist different types of autoencoders, such as: denoising autoencoders, variational autoencoders, convolutional autoencoders, and sparse autoencoders.
 The basic architecture of an autoencoder:
• Encoder: takes the input data and compresses it, so as to remove all the possible noise and unhelpful information. The output of the encoder stage is usually called the bottleneck or latent space.
• Decoder: takes as input the encoded latent space and tries to reproduce the original autoencoder input using just its compressed form (the encoded latent space).
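
A hedged sketch of that encoder/bottleneck/decoder architecture in Keras (TensorFlow is among the libraries recommended at the end of this deck); the layer sizes and the random placeholder data are illustrative only:

```python
# Autoencoder sketch: non-linear encoder to a small latent space, then a decoder
# trained to reconstruct the original input from that compressed form.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim, latent_dim = 64, 8
encoder = keras.Sequential([
    keras.Input(shape=(input_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(latent_dim, activation="relu"),      # bottleneck / latent space
])
decoder = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"),    # reconstruction of the input
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, input_dim).astype("float32")       # placeholder data in [0, 1]
autoencoder.fit(X, X, epochs=2, batch_size=32, verbose=0)   # target equals input
```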



How Machine Learning Works

Feature Selection



How Machine Learning Works
Feature Selection
Goals
 Removal of redundancies -> eliminate irrelevant or redundant features
• e.g., Bayesian models assume independence between features -> redundant features decrease accuracy
 Dimensionality reduction; simpler, faster, more interpretable models
Why we need it
1. Curse of dimensionality / overfitting: with more columns than rows in the data, the model won't generalize to new samples, and we learn absolutely nothing.
2. We want our models to be simple and explainable; we lose explainability when we have a lot of features.
3. Garbage in, garbage out.



How Machine Learning Works
Feature Selection

Feature selection methods can be divided into three groups:

Filter-based: we specify some metric and filter features based on it. An example of such a metric could be correlation or chi-square.
Wrapper-based: wrapper methods consider the selection of a set of features as a search problem. Example: Recursive Feature Elimination.
Embedded: embedded methods use algorithms that have built-in feature selection. For instance, Lasso and random forests have their own feature selection methods.



How Machine Learning Works
Feature Selection
Five feature selection methods :
1. Pearson Correlation

This is a filter-based method. We check the absolute value of the Pearson correlation between the target and the numerical features in our dataset, and keep the top n features based on this criterion.
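
A minimal sketch with pandas (the diabetes dataset and n = 3 are placeholder choices):

```python
# Filter-based selection sketch: rank features by |Pearson correlation| with the
# target and keep the top n.
import pandas as pd
from sklearn.datasets import load_diabetes

data = load_diabetes()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)

n = 3
correlations = X.corrwith(y).abs().sort_values(ascending=False)
print("selected features:", list(correlations.head(n).index))
```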



How Machine Learning Works
Feature Selection
Five feature selection methods :
2. Chi-Squared

This is a filter-based method. In this method, we calculate the chi-square metric between the target and the numerical variables and select only the variables with the maximum chi-squared values.
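
A minimal sketch with scikit-learn's SelectKBest; note that the chi-squared test requires non-negative inputs, so the features are min-max scaled first (the dataset and k are placeholders):

```python
# Chi-squared selection sketch: score each feature against the target and keep
# the k features with the highest chi-squared statistic.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

X, y = load_iris(return_X_y=True)
X_scaled = MinMaxScaler().fit_transform(X)       # chi2 needs non-negative values
selector = SelectKBest(chi2, k=2).fit(X_scaled, y)
print("kept feature indices:", selector.get_support(indices=True))
```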



How Machine Learning Works
Feature Selection
Five feature selection methods :
3. Recursive Feature Elimination

This is a wrapper-based method. We can use any estimator with this method; with LogisticRegression, for example, RFE uses the coef_ attribute of the LogisticRegression object to rank and eliminate features.
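
A minimal sketch of RFE wrapped around LogisticRegression (the dataset and the number of features to keep are placeholder choices):

```python
# Wrapper-based selection sketch: RFE repeatedly fits the estimator and drops the
# weakest features, using the estimator's coef_ attribute to rank them.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=10).fit(X, y)
print("selected feature mask:", rfe.support_)
```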
How Machine Learning Works
Feature Selection
Five feature selection methods :
4. Lasso: SelectFromModel

This is an embedded method.
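
A minimal sketch of the Lasso/SelectFromModel combination (the dataset, scaling, and alpha value are placeholder choices):

```python
# Embedded selection sketch: SelectFromModel keeps the features whose Lasso
# coefficients remain non-zero after the L1 penalty shrinks the rest to zero.
from sklearn.datasets import load_diabetes
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
X = StandardScaler().fit_transform(X)
selector = SelectFromModel(Lasso(alpha=1.0)).fit(X, y)
print("kept feature indices:", selector.get_support(indices=True))
```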



How Machine Learning Works
Feature Selection
Five feature selection methods :
5. Tree-based: SelectFromModel

This is an embedded method.
• We can also use a random forest to select features based on feature importance. We calculate feature importance using node impurities in each decision tree. In a random forest, the final feature importance is the average over all decision trees' feature importances.
• We could also use a LightGBM or XGBoost object, as long as it has a feature_importances_ attribute.
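
A minimal sketch with a random forest (the dataset and forest size are placeholder choices; a LightGBM or XGBoost model could be dropped in the same way):

```python
# Embedded selection sketch: SelectFromModel keeps the features whose
# feature_importances_ exceed the (default: mean) importance threshold.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, random_state=0)
selector = SelectFromModel(forest).fit(X, y)
print("number of features kept:", selector.transform(X).shape[1])
```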



How Machine Learning Works

Model Learning



How Machine Learning Works
Model Learning
 Several different learning problems…
• Classification, regression, association, clustering, …
 … and learning paradigms
• Supervised, unsupervised, reinforcement learning, …
 Goals
• Tackle the respective learning problem by creating a good model from data
• This often requires
 Defining the train and test sets
 Comparing different models
 Parameter tuning



How Machine Learning Works
Model Learning

Examples of learning algorithms

 Classification: decision trees (e.g., C4.5), Support Vector Machines, K-Nearest Neighbours, …
 Regression: Support Vector Regression, Linear Regression, Logistic Regression, …
 Association: Apriori, FP-Growth, …
 Clustering: K-means clustering, Expectation-Maximization, Hierarchical Clustering, …



How Machine Learning Works



How Machine Learning Works

Model Evaluation



How Machine Learning Works
Model Evaluation
 Goals
Evaluate how the model will perform on unseen data, i.e., model generalization
capability
 Examples of evaluation metrics
• Classification: precision/recall, f-measure, confusion matrices
• Regression: root mean squared error, R2 statistics
 Examples of model evaluation strategies
• Hold-out
• K-fold cross validation
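
A minimal sketch of both strategies with scikit-learn (the dataset, model, split size, and k are placeholder choices):

```python
# Model evaluation sketch: a hold-out split and 5-fold cross-validation,
# both scored with classification accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

# Hold-out: train on one part of the data, score on the unseen remainder.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print("hold-out accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

# K-fold cross-validation: average the score over k different train/test splits.
scores = cross_val_score(model, X, y, cv=5)
print("5-fold CV accuracy:", scores.mean())
```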



Implementation Fields



Real World Implementation
Machine learning algorithms find natural patterns in data that generate insight and help you make better decisions and predictions. With the rise of big data, machine learning has become particularly important for solving problems in many areas:

• Computational finance, for credit scoring and algorithmic trading
• Image processing and computer vision, for face recognition, motion detection, and object detection
• Computational biology, for tumor detection, drug discovery, and DNA sequencing
• Energy production, for price and load forecasting
• Automotive, aerospace, and manufacturing, for predictive maintenance
• Natural language processing



Real World Examples
Creating Algorithms that Can Analyze Works of Art
• Researchers at the Art and Artificial Intelligence Laboratory at Rutgers
University wanted to see whether a computer algorithm could classify
paintings by style, genre, and artist as easily as a human.
• They began by identifying visual features for classifying a painting’s style
• The researchers hypothesized that visual features useful for style
classification (a supervised learning problem) could also be used to
determine artistic influences (an unsupervised problem).
• They used classification algorithms trained on Google images to identify
specific objects. They tested the algorithms on more than 1,700
paintings from 66 different artists working over a span of 550 years.
• The algorithms they developed classified the styles of paintings in the database with 60% accuracy, outperforming typical non-expert humans. They also readily identified connected works, including the influence of Diego Velazquez's “Portrait of Pope Innocent X” on Francis Bacon's “Study After Velazquez's Portrait of Pope Innocent X.”



Real World Examples
Optimizing HVAC Energy Usage in Large Buildings
• The heating, ventilation, and air-conditioning (HVAC) systems in office buildings, hospitals, and other large-scale commercial buildings are often inefficient because they do not take into account changing weather patterns, variable energy costs, or the building's thermal properties.
• BuildingIQ's cloud-based software platform addresses this problem. The platform uses advanced algorithms and machine learning methods to continuously process gigabytes of information from power meters, thermometers, and HVAC pressure sensors, as well as weather and energy cost data.
• In particular, machine learning is used to segment data and determine the relative contributions of gas, electric, steam, and solar power to heating and cooling processes.
• The BuildingIQ platform reduces HVAC energy consumption in large-scale commercial buildings by 10% to 25% during normal operation.



Real World Examples

Detecting Low-Speed Car Crashes
• With more than 8 million members, the RAC is one of the UK's largest motoring organizations, providing roadside assistance, insurance, and other services to private and business motorists.
• To enable rapid response to roadside incidents, reduce crashes, and mitigate insurance costs, the RAC developed an onboard crash-sensing system that uses advanced machine learning algorithms to detect low-speed collisions and distinguish these events from more common driving events, such as driving over speed bumps or potholes.
• Independent tests showed the RAC system to be 92% accurate in detecting test crashes.



Video Machine Learning



Next Lesson Recommended
 Knowledge Base
https://developers.google.com/machine-learning/crash-course/ml-intro?hl=id
https://towardsdatascience.com/machine-learning/home
 Python Developer
PANDAS : https://github.com/pandas-dev/pandas
TensorFlow : https://github.com/tensorflow/tensorflow

 Java Developer
Advanced Data mining And Machine learning System (ADAMS) : https://adams.cms.waikato.ac.nz/
Deeplearning4j : https://deeplearning4j.org/
 PHP Developer
PHP-ML : https://php-ml.readthedocs.io/en/v0.1.0/
nlp-tools : https://libraries.io/packagist/nlp-tools%2Fnlp-tools
 Note that at the moment PHP is not the best choice for machine learning but maybe this will change
...