“Automatically learning multiple levels of representations of the underlying distribution of the data to be modelled”
Deep learning algorithms have shown superior learning and classification performance in areas such as transfer learning, speech and handwritten character recognition, and face recognition, among others.
(I have drawn on many articles and on experimental results provided by Stanford University.)
Transfer Learning and Fine-tuning Deep Neural Networks (PyData)
This document outlines Anusua Trivedi's talk on transfer learning and fine-tuning deep neural networks. The talk covers traditional machine learning versus deep learning, using deep convolutional neural networks (DCNNs) for image analysis, transfer learning and fine-tuning DCNNs, recurrent neural networks (RNNs), and case studies applying these techniques to diabetic retinopathy prediction and fashion image caption generation.
Convolutional neural networks (CNNs) learn multi-level features and perform classification jointly, and they outperform traditional approaches on image classification and segmentation problems. CNNs have four main components: convolution, nonlinearity, pooling, and fully connected layers. Convolution extracts features from the input image using filters. A nonlinear activation (such as ReLU) lets the network model non-linear relationships. Pooling reduces dimensionality while retaining the most important information. The fully connected layer uses the resulting high-level features for classification. CNNs are trained end-to-end using backpropagation, updating the weights to minimize output errors.
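To make those four components concrete, here is a minimal Keras sketch; the input shape, layer sizes, and 10-class output are illustrative assumptions, not details from the talk:

import tensorflow as tf

# Convolution -> ReLU nonlinearity -> pooling -> fully connected classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),                       # e.g. grayscale digits
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),   # convolution + nonlinearity
    tf.keras.layers.MaxPooling2D((2, 2)),                    # pooling keeps salient info
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),         # fully connected classifier
])

# End-to-end training: backpropagation updates weights to reduce output error.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

Calling model.fit on labeled images would then run the backpropagation loop described above.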
A fast-paced introduction to deep learning concepts, such as activation functions, cost functions, and backpropagation, followed by a quick dive into CNNs. Basic knowledge of vectors, matrices, and derivatives is helpful in order to derive the maximum benefit from this session.
The document provides an overview of Long Short Term Memory (LSTM) networks. It discusses:
1) The vanishing gradient problem in traditional RNNs and how LSTMs address it through gated cells that allow information to persist without decay.
2) The key components of LSTMs - forget gates, input gates, output gates and cell states - and how they control the flow of information.
3) Common variations of LSTMs, including peephole connections, coupled forget/input gates, and Gated Recurrent Units (GRUs). Applications of LSTMs in areas like speech recognition and machine translation are also mentioned. (A gate-level sketch follows below.)
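As a rough picture of how those gates interact, here is a minimal NumPy sketch of a single LSTM time step; stacking the four gate parameter blocks into one matrix is a common convention, assumed here for brevity:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # W, U, b stack the parameters for the forget (f), input (i),
    # output (o) gates and the candidate cell update (g).
    z = W @ x + U @ h_prev + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)
    g = np.tanh(g)
    c = f * c_prev + i * g        # forget old state, write gated new input
    h = o * np.tanh(c)            # expose a gated view of the cell state
    return h, c

The cell state c is updated additively, which is what lets information (and gradients) persist without the multiplicative decay of a plain RNN.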
The document discusses convolutional neural networks (CNNs). It begins with an introduction and overview of CNN components like convolution, ReLU, and pooling layers. Convolution layers apply filters to input images to extract features, ReLU introduces non-linearity, and pooling layers reduce dimensionality. CNNs are well-suited for image data since they can incorporate spatial relationships. The document provides an example of building a CNN using TensorFlow to classify handwritten digits from the MNIST dataset.
The document provides an overview of LSTM (Long Short-Term Memory) networks. It first reviews RNNs (Recurrent Neural Networks) and their limitations in capturing long-term dependencies. It then introduces LSTM networks, which address this issue using forget, input, and output gates that allow the network to retain information for longer. Code examples are provided to demonstrate how LSTM remembers information over many time steps. Resources for further reading on LSTMs and RNNs are listed at the end.
This document provides an agenda for a presentation on deep learning, neural networks, convolutional neural networks, and interesting applications. The presentation will include introductions to deep learning and how it differs from traditional machine learning by learning feature representations from data. It will cover the history of neural networks and breakthroughs that enabled training of deeper models. Convolutional neural network architectures will be overviewed, including convolutional, pooling, and dense layers. Applications like recommendation systems, natural language processing, and computer vision will also be discussed. There will be a question and answer section.
Introduction to Recurrent Neural Network (Knoldus Inc.)
The document provides an introduction to recurrent neural networks (RNNs). It discusses how RNNs differ from feedforward neural networks in that they have internal memory and can use their output from the previous time step as input. This allows RNNs to process sequential data like time series. The document outlines some common RNN types and explains the vanishing gradient problem that can occur in RNNs due to multiplication of small gradient values over many time steps. It discusses solutions to this problem like LSTMs and techniques like weight initialization and gradient clipping.
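Gradient clipping, mentioned above as a remedy for exploding gradients, fits in a few lines; this NumPy sketch shows norm clipping, with an arbitrary threshold chosen for illustration:

import numpy as np

def clip_by_norm(grad, max_norm=1.0):
    # Rescale the gradient if its norm exceeds max_norm, so a single
    # update step cannot blow up even when gradients explode.
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad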
Learn the fundamentals of Deep Learning, Machine Learning, and AI, how they've impacted everyday technology, and what's coming next in Artificial Intelligence technology.
This document provides an overview and introduction to deep learning including: what deep learning is, how it utilizes neural networks to learn patterns from large amounts of data, a brief history of deep learning, the differences between machine learning and deep learning, common deep learning architectures like artificial neural networks and deep neural networks, applications of deep learning like computer vision and natural language processing, and some of the major companies utilizing deep learning.
This document provides an introduction to deep learning. It defines artificial intelligence, machine learning, data science, and deep learning. Machine learning is a subfield of AI that gives machines the ability to improve their performance over time without being explicitly programmed. Deep learning is a subfield of machine learning that builds artificial neural networks with multiple hidden layers, loosely modeled on the human brain. Popular deep learning techniques include convolutional neural networks, recurrent neural networks, and autoencoders. The document discusses key components and hyperparameters of deep learning models.
This presentation on Recurrent Neural Networks will help you understand what a neural network is, which neural networks are popular, why we need recurrent neural networks, what a recurrent neural network is, how an RNN works, what the vanishing and exploding gradient problems are, and what LSTM is; you will also see a use-case implementation of LSTM (Long Short-Term Memory). Neural networks used in deep learning consist of different layers connected to each other, modeled on the structure and functions of the human brain. They learn from huge volumes of data and use complex algorithms to train the net. A recurrent neural network works on the principle of saving the output of a layer and feeding it back to the input in order to predict the layer's output. Now let's dive into this presentation and understand what an RNN is and how it actually works.
Below topics are explained in this recurrent neural networks tutorial:
1. What is a neural network?
2. Popular neural networks?
3. Why recurrent neural network?
4. What is a recurrent neural network?
5. How does an RNN work?
6. Vanishing and exploding gradient problem
7. Long short term memory (LSTM)
8. Use case implementation of LSTM
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning & deep neural network research. With our deep learning course, you'll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks and traverse layers of data abstraction to understand the power of data and prepare you for your new role as deep learning scientist.
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this Tensorflow course, you’ll build expertise in deep learning models, learn to operate TensorFlow to manage neural networks and interpret the results.
And according to payscale.com, the median salary for engineers with deep learning skills tops $120,000 per year.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
Learn more at: https://www.simplilearn.com/
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S... (Simplilearn)
A Convolutional Neural Network (CNN) is a type of neural network that can process grid-like data like images. It works by applying filters to the input image to extract features at different levels of abstraction. The CNN takes the pixel values of an input image as the input layer. Hidden layers like the convolution layer, ReLU layer and pooling layer are applied to extract features from the image. The fully connected layer at the end identifies the object in the image based on the extracted features. CNNs use the convolution operation with small filter matrices that are convolved across the width and height of the input volume to compute feature maps.
It has been about 30 years since AI was not only a topic for science-fiction writers but also a major research field surrounded by huge hopes and investments. But the over-inflated expectations ended in a crash, followed by a period of absent funding and interest: the so-called AI winter. The last three years, however, changed everything again. Deep learning, a machine learning technique inspired by the human brain, successfully crushed one benchmark after another, and tech companies like Google, Facebook and Microsoft started to invest billions in AI research. "The pace of progress in artificial general intelligence is incredibly fast" (Elon Musk, CEO of Tesla & SpaceX), leading to an AI that "would be either the best or the worst thing ever to happen to humanity" (Stephen Hawking, physicist).
What sparked this new hype? How is deep learning different from previous approaches? Are the advancing AI technologies really a threat to humanity? Let's look behind the curtain and unravel the reality. This talk will explore why Sundar Pichai (CEO of Google) recently announced that "machine learning is a core transformative way by which Google is rethinking everything they are doing" and explain why "Deep Learning is probably one of the most exciting things that is happening in the computer industry" (Jen-Hsun Huang, CEO of NVIDIA).
Either a new AI "winter is coming" (Ned Stark, House Stark) or this new wave of innovation might turn out to be the "last invention humans ever need to make" (Nick Bostrom, AI philosopher). Or maybe it's just another great technology helping humans achieve more.
This document provides an introduction to machine learning. It discusses how machine learning allows computers to learn from experience to improve their performance on tasks. Supervised learning is described, where the goal is to learn a function that maps inputs to outputs from a labeled dataset. Cross-validation techniques like the test set method, leave-one-out cross-validation, and k-fold cross-validation are introduced to evaluate model performance without overfitting. Applications of machine learning like medical diagnosis, recommendation systems, and autonomous driving are briefly outlined.
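As a small illustration of the k-fold technique mentioned there, here is a scikit-learn sketch; the dataset and model are stand-ins chosen for brevity:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: train on four folds, score on the held-out fold,
# and rotate, so every example is used for evaluation exactly once.
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean(), scores.std())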
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state of the art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
Notes from Coursera Deep Learning courses by Andrew Ng (dataHacker.rs)
Deep learning uses neural networks to process data and create patterns in a way that imitates the human brain. It has transformed industries like web search and advertising by enabling tasks like image recognition. This document discusses neural networks, deep learning, and their various applications. It also explains how recent advances in algorithms and increased data availability have driven the rise of deep learning by allowing neural networks to train on larger datasets and overcome performance plateaus.
The document discusses transfer learning and building complex models using Keras and TensorFlow. It provides examples of using the functional API to build models with multiple inputs and outputs. It also discusses reusing pretrained layers from models like ResNet, Xception, and VGG to perform transfer learning for new tasks with limited labeled data. Freezing pretrained layers initially and then training the entire model is recommended for transfer learning.
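A minimal sketch of that freeze-then-fine-tune recipe in Keras might look as follows; the five-class head and the choice of Xception are assumptions for illustration:

import tensorflow as tf

# Reuse a pretrained convolutional base and freeze it at first.
base = tf.keras.applications.Xception(weights="imagenet",
                                      include_top=False, pooling="avg")
base.trainable = False                           # freeze pretrained layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # new task head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# After training the head, set base.trainable = True and recompile with
# a low learning rate to fine-tune the whole network.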
Deep Learning - Speaker Verification, Sound Event Detection (Sai Kiran Kadam)
The document discusses several papers on using deep learning techniques for speaker recognition and identification. It describes using convolutional neural networks with spectrograms as input to identify speakers and cluster them without prior identity knowledge. It also discusses using BLSTM recurrent neural networks for polyphonic sound event detection and spoofing detection. An end-to-end attention model with CNNs and temporal pooling is presented for text-dependent speaker verification. Embeddings from deep neural networks are investigated as an alternative to i-vectors for text-independent speaker verification. Related research applying CNNs, DNNs, and BLSTM RNNs to speaker recognition tasks is also cited.
Big Data Malaysia - A Primer on Deep Learning (Poo Kuan Hoong)
This document provides an overview of deep learning, including a brief history of machine learning and neural networks. It discusses various deep learning models such as deep belief networks, convolutional neural networks, and recurrent neural networks. Applications of deep learning in areas like computer vision, natural language processing, and robotics are also covered. Finally, popular platforms, frameworks and libraries for developing deep learning systems are mentioned.
This document provides an introduction to deep learning. It discusses the history of machine learning and how neural networks work. Specifically, it describes different types of neural networks like deep belief networks, convolutional neural networks, and recurrent neural networks. It also covers applications of deep learning, as well as popular platforms, frameworks and libraries used for deep learning development. Finally, it demonstrates an example of using the Nvidia DIGITS tool to train a convolutional neural network for image classification of car park images.
MDEC Data Matters Series: Machine Learning and Deep Learning, A Primer (Poo Kuan Hoong)
The document provides an overview of machine learning and deep learning. It discusses the history and development of neural networks, including deep belief networks, convolutional neural networks, and recurrent neural networks. Applications of deep learning in areas like computer vision, natural language processing, and robotics are also covered. Finally, popular platforms, frameworks and libraries for developing deep learning models are presented, along with examples of pre-trained models that are available.
Deep learning is introduced along with its applications and key players in the field. The document discusses the problem space of inputs and outputs for deep learning systems. It describes what deep learning is, providing definitions and explaining the rise of neural networks. Key deep learning architectures like convolutional neural networks are overviewed along with a brief history and motivations for deep learning.
Deep Learning for NLP: An Introduction to Neural Word Embeddings (Roelof Pieters)
Deep learning uses neural networks with multiple layers to learn representations of data with multiple levels of abstraction. Word embeddings represent words as dense vectors in a vector space such that words with similar meanings have similar vectors. Recursive neural tensor networks learn compositional distributed representations of phrases and sentences according to the parse tree by combining the vector representations of constituent words according to the tree structure. This allows modeling the meaning of complex expressions based on the meanings of their parts and the rules for combining them.
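The "similar meanings, similar vectors" property is usually measured with cosine similarity; this toy NumPy sketch uses made-up 3-dimensional vectors (real embeddings are learned and typically have hundreds of dimensions):

import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical toy embeddings, not learned values.
emb = {
    "king":  np.array([0.8, 0.3, 0.1]),
    "queen": np.array([0.7, 0.4, 0.1]),
    "apple": np.array([0.1, 0.1, 0.9]),
}
print(cosine(emb["king"], emb["queen"]))  # high: related words
print(cosine(emb["king"], emb["apple"]))  # low: unrelated words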
Deep learning: the future of recommendations (Balázs Hidasi)
An informative talk about deep learning and its potential uses in recommender systems. Presented at the Budapest Startup Safary, 21 April, 2016.
The breakthroughs of the last decade in neural network research and the rapid increase in computational power resulted in the revival of deep neural networks and of the field focusing on their training: deep learning. Deep learning methods have succeeded at complex tasks where other machine learning methods have failed, such as computer vision and natural language processing. Recently, deep learning has begun to gain ground in recommender systems as well. This talk introduces deep learning and its applications, with emphasis on how deep learning methods can solve long-standing recommendation problems.
https://www.youtube.com/watch?v=5ZUlVlumIQo&list=PLqJzTtkUiq54DDEEZvzisPlSGp_BadhNJ&index=10
Over the last few years, deep learning has advanced rapidly, with impressive results obtained in several areas including computer vision, machine translation, and speech recognition. Deep learning attempts to learn complex functions by learning hierarchical representations of data. A deep learning model is composed of non-linear modules, each of which transforms the representation from a lower layer into a higher, more abstract one. Very complex functions can be learned by composing enough of these non-linear modules. Furthermore, the need for manual feature engineering can be obviated by learning the features themselves through representation learning. In this talk, we first explain how deep learning architectures in particular, and neural networks in general, are loosely inspired by the mammalian visual cortex and nervous system respectively. We also discuss the reasons for the big and successful comeback of neural networks in the form of deep learning models. Finally, we give a brief introduction to various deep structures and their applications to several domains.
References:
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444.
Socher, Richard, Yoshua Bengio, and Chris Manning. "Deep learning for NLP." Tutorial at the Association for Computational Linguistics (ACL), 2012, and the North American Chapter of the Association for Computational Linguistics (NAACL), 2013.
Lee, Honglak. "Tutorial on deep learning and applications." NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. 2010.
LeCun, Yann, and M. Ranzato. "Deep learning tutorial." Tutorials in International Conference on Machine Learning (ICML’13). 2013.
Socher, Richard, et al. "Recursive deep models for semantic compositionality over a sentiment treebank." Proceedings of the conference on empirical methods in natural language processing (EMNLP). Vol. 1631. 2013.
https://www.youtube.com/channel/UC9OeZkIwhzfv-_Cb7fCikLQ
https://www.udacity.com/course/deep-learning--ud730
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
Robust Feature Learning with Deep Neural Networks
http://snu-primo.hosted.exlibrisgroup.com/primo_library/libweb/action/display.do?tabs=viewOnlineTab&doc=82SNU_INST21557911060002591
Deep generative and discriminative models for speech recognition. The document outlines the history of speech recognition models including early neural networks, hidden dynamic models, and deep belief networks. It describes how deep learning entered speech recognition around 2009 through the collaboration of Microsoft Research and academics. This led to replacing generative models with discriminative deep neural networks which achieved large error reductions. The talk outlines further innovations in deep learning for speech including context-dependent models and better optimization techniques.
This document provides a summary of topics covered in a deep neural networks tutorial, including:
- A brief introduction to artificial intelligence, machine learning, and artificial neural networks.
- An overview of common deep neural network architectures like convolutional neural networks, recurrent neural networks, autoencoders, and their applications in areas like computer vision and natural language processing.
- Advanced techniques for training deep neural networks like greedy layer-wise training, regularization methods like dropout, and unsupervised pre-training.
- Applications of deep learning beyond traditional discriminative models, including image synthesis, style transfer, and generative adversarial networks.
Deep neural networks learn hierarchical representations of data through multiple layers of feature extraction. Lower layers identify low-level features like edges while higher layers integrate these into more complex patterns and objects. Deep learning models are trained on large labeled datasets by presenting examples, calculating errors, and adjusting weights to minimize errors over many iterations. Deep learning has achieved human-level performance on tasks like image recognition due to its ability to leverage large amounts of training data and learn representations automatically rather than relying on manually designed features.
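The present-calculate-adjust loop described above is just gradient descent; this self-contained NumPy sketch shows it for a single-layer model on synthetic data (sizes and learning rate are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                   # toy inputs
y = (X @ np.array([1.0, -2.0, 0.5]) > 0) * 1.0  # toy labels
w = np.zeros(3)

for _ in range(1000):                 # many iterations
    p = 1 / (1 + np.exp(-(X @ w)))    # present examples (forward pass)
    grad = X.T @ (p - y) / len(y)     # measure the error's gradient
    w -= 0.1 * grad                   # adjust weights to reduce the error

Deep networks repeat the same idea layer by layer, with backpropagation supplying each layer's gradient.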
Recognizing Facial Expression Through Frequency Neural Network.pptx (srajece)
The document discusses recognizing facial expressions through a frequency neural network. It aims to analyze emotional expressions using fast Fourier transform. The proposed system uses a frequency neural network that utilizes multiplication layers and summarization layers to construct a basic network based on fast Fourier transform features. This achieves 95% accuracy in recognizing facial expressions, compared to 75% for existing systems. Key aspects of the proposed system include data transformation using FFT, training a machine learning model, and predicting expressions on test frames.
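The FFT feature-extraction step described there can be sketched in a few lines of NumPy; treating the log-magnitude spectrum as the feature vector is an assumption on our part, since the paper's exact recipe is not given here:

import numpy as np

def fft_features(frame):
    # frame: 2-D grayscale image array.
    spectrum = np.fft.fft2(frame)
    magnitude = np.abs(np.fft.fftshift(spectrum))  # low frequencies centered
    return np.log1p(magnitude).ravel()             # compressed, flattened features

These vectors would then feed the multiplication and summarization layers of the proposed network.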
Deep learning is a type of machine learning that uses neural networks with multiple layers between the input and output layers. It allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. Deep learning has achieved great success in computer vision, speech recognition, and natural language processing due to recent advances in algorithms, computing power, and the availability of large datasets. Deep learning models can learn complex patterns directly from large amounts of unlabeled data without relying on human-engineered features.
The document discusses two NSF-funded research projects on intelligence and security informatics:
1. A project to filter and monitor message streams to detect "new events" and changes in topics or activity levels. It describes the technical challenges and components of automatic message processing.
2. A project called HITIQA to develop high-quality interactive question answering. It describes the team members and key research issues like question semantics, human-computer dialogue, and information quality metrics.
3. INTRODUCTION
• What is Deep Learning?
• Some success stories
• Examples of Deep Learning
• Learning and training of objects
• Conclusion & future scope
4. What is Deep Learning?
• "Automatically learning multiple levels of representations of the underlying distribution of the data to be modelled"
• Deep learning algorithms have shown superior learning and classification performance in areas such as transfer learning, speech and handwritten character recognition, and face recognition, among others.
5. • A deep learning algorithm automatically extracts the low- and high-level features necessary for classification.
• By high-level features, one means features that hierarchically depend on other, lower-level features.
• "Automatic representation learning" is the key point of interest in this kind of approach, since the need for potentially time-consuming handcrafted feature design is eliminated. (See the sketch below.)
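One way to see those automatically learned low- and high-level features is to read activations out of a pretrained CNN. In this small Keras sketch the layer names follow the stock Keras VGG16 implementation, and a random tensor stands in for real images:

import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
features = tf.keras.Model(
    inputs=base.input,
    outputs=[base.get_layer("block1_conv1").output,   # low level: edges, colors
             base.get_layer("block5_conv3").output])  # high level: object parts

images = tf.random.uniform((1, 224, 224, 3))  # stand-in for real input images
low, high = features(images)
print(low.shape, high.shape)

No feature here was designed by hand: both levels fall out of training.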
7. Hierarchies in Vision
• Lampert et al., CVPR'09: learn attributes, then classes as combinations of attributes
8. What we can do? (with the right dataset)
• Recognize faces
• Categorize scenes
• Detect, segment and track objects
• 3D from multiple images or stereo
• Classify actions
9. What we can do...
• [Figure] Detect and localize objects; categorize scenes (e.g., BEACH); face detection and recognition
10. Why Deep Learning?
• Data mining: using historical data to improve decisions
– medical records ⇒ medical knowledge
– log data to model users
• Software applications we can't program by hand
– autonomous driving
– speech recognition
• Self-customizing programs
– a newsreader that learns user interests
11. Some success stories
• Data Mining
• Analysis of astronomical data
• Human Speech Recognition
• Handwriting recognition
• Face recognition
• Fraudulent Use of Credit Cards
• Drive Autonomous Vehicles
• Predict Stock Rates
• Intelligent Elevator Control
• DNA Classification
14. Probabilistic max pooling
• Convolutional neural net: a pooling unit computes max{x1, x2, x3, x4} over its block, where the xi are real numbers.
• Convolutional DBN: the xi are binary {0, 1} and mutually exclusive, so a 2x2 block has only five possible cases: all units off, or exactly one xi = 1.
• Collapsing the 2^n configurations into n + 1 configurations permits both bottom-up and top-down inference. (See the sketch below.)
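A numeric sketch of that n + 1 = 5 case distribution, assuming a 2x2 block and made-up bottom-up inputs:

import numpy as np

def prob_max_pool(bottom_up):
    # bottom_up: inputs to the four mutually exclusive detection units.
    # Configuration scores: "all off" scores 0, else exactly one unit is on.
    scores = np.concatenate([[0.0], bottom_up])
    p = np.exp(scores - scores.max())   # numerically stable softmax
    p /= p.sum()
    return p  # p[0] = block off; p[i] = unit i active (pooling unit on)

print(prob_max_pool(np.array([1.0, 0.5, -0.2, 0.3])))

The returned distribution covers exactly the five allowed configurations, which is what makes both bottom-up inference and top-down sampling tractable.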
15. Convolutional DBN for audio
• [Figure] One CDBN layer consists of detection units followed by max pooling; a second CDBN layer of detection units and max pooling is stacked on top.
16. Convolutional DBN for images
• [Figure] Input data V is convolved with shared "filter" weights W^k to give a detection layer H of binary hidden nodes, which feeds a max-pooling layer P of binary max-pooling nodes.
17. Convolutional DBN on face images
• [Figure] Learned feature hierarchy: pixels → edges → object parts (combinations of edges) → object models
18. Learning of object parts
• [Figure] Examples of learned object parts from four object categories: faces, cars, elephants, and chairs
19. Training on multiple objects
• [Figure] Plot of H(class | neuron active), trained on 4 classes (cars, faces, motorbikes, airplanes)
• Second layer: shared features and object-specific features
• Third layer: more specific features
22. State-of-the-art task performance

Audio
• TIMIT phone classification: prior art (Clarkson et al., 1999) 79.6%; Stanford feature learning 80.3%
• TIMIT speaker identification: prior art (Reynolds, 1995) 99.7%; Stanford feature learning 100.0%

Images
• CIFAR object classification: prior art (Yu and Zhang, 2010) 74.5%; Stanford feature learning 75.5%
• NORB object classification: prior art (Ranzato et al., 2009) 94.4%; Stanford feature learning 96.2%

Multimodal (audio/video)
• AVLetters lip reading: prior art (Zhao et al., 2009) 58.9%; Stanford feature learning 63.1%

Video
• UCF activity classification: prior art (Kalser et al., 2008) 86%; Stanford feature learning 87%
• Hollywood2 classification: prior art (Laptev, 2004) 47%; Stanford feature learning 50%
23. Fig. 1: DeSTIN hierarchy for the MNIST dataset studies.
• Four layers are used, with 64, 16, 4, and 1 nodes per layer, arranged in a hierarchical manner.
• At each node, the output belief b(s) at each temporal step is fed to a parent node.
• At each temporal step, the parent receives input beliefs from four child nodes to generate its own belief (fed to its parent) and an advice value a, which is fed back to the child nodes. (The wiring is sketched below.)
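Purely as a picture of that wiring (not of DeSTIN's actual belief update, which involves learned clustering), here is a toy sketch of the 64-16-4-1 hierarchy:

import numpy as np

def combine(child_beliefs):
    # Placeholder parent update; averaging only illustrates the data flow.
    return np.mean(child_beliefs, axis=0)

beliefs = [np.random.rand(8) for _ in range(64)]  # layer-0 beliefs b(s)
for size in (16, 4, 1):
    beliefs = [combine(beliefs[4 * i:4 * i + 4]) for i in range(size)]
print(beliefs[0])  # top-node belief summarizing the whole input

The advice signal a flowing back down to the child nodes is omitted here.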
25. Named-entity recognition (NER)
• Also known as entity identification and entity extraction, NER is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, and locations, and expressions of times, quantities, monetary values, percentages, etc.
• Most research on NER systems has been structured as taking an unannotated block of text, such as this one:
"Jim bought 300 shares of Acme Corp. in 2006."
26. • ...and producing an annotated block of text, such as this one:
<ENAMEX TYPE="PERSON">Jim</ENAMEX> bought <NUMEX TYPE="QUANTITY">300</NUMEX> shares of <ENAMEX TYPE="ORGANIZATION">Acme Corp.</ENAMEX> in <TIMEX TYPE="DATE">2006</TIMEX>
• State-of-the-art NER systems for English produce near-human performance. For example, the best system entering MUC-7 scored an F-measure of 93.39%, while human annotators scored 97.60% and 96.95%. (A modern library example follows below.)
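For comparison with the MUC-style markup above, here is how a present-day library tags the same sentence. spaCy's label set (PERSON, ORG, DATE, CARDINAL) differs from the ENAMEX/NUMEX/TIMEX scheme, and the model must first be installed with: python -m spacy download en_core_web_sm

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Jim bought 300 shares of Acme Corp. in 2006.")
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Jim PERSON, 300 CARDINAL,
                                  # Acme Corp. ORG, 2006 DATE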
27. CONCLUSION & FUTURE WORK
• Test results show that a deep learning approach allows better classification than popular classifiers trained on the handcrafted features chosen in this work.
• This is a significant advantage over the typical classification approach, which requires careful (and possibly time-consuming) selection of features.
• Instead of hand-tuning features, use unsupervised feature learning.
• Advanced topics:
o Self-taught learning
o Scaling up
28. • More practical implementations remain to be done.
• Research on these approaches is ongoing at Stanford University.
29. REFERENCES
• [1] D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio, "Why does unsupervised pre-training help deep learning?," Journal of Machine Learning Research, vol. 11, pp. 625-660, Feb. 2010.
• [2] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," Journal of Machine Learning Research, vol. 11, 2010.
• [3] G. Hinton, S. Osindero, and Y. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.
• [4] D. Keysers, "Comparison and combination of state-of-the-art techniques for handwritten character recognition: Topping the MNIST benchmark," arXiv preprint arXiv:0710.2231, 2007.
• [5] H. Lee, Y. Largman, P. Pham, and A. Ng, "Unsupervised feature learning for audio classification using convolutional deep belief networks," in Advances in Neural Information Processing Systems, vol. 22, pp. 1096-1104, 2009.
• [6] F. Q. Lauzon, "An introduction to deep learning," in Proc. 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA), pp. 1438-1439, 2012.