10. More Complex Languages Than English
• German: Donaudampfschiffahrtsgesellschaftskapitän (“Danube steamship company captain”: 5 “words” in one)
• Chinese: 50,000 different characters (2,000–3,000 needed to read a newspaper)
• Japanese: 3 writing systems
• Thai: Ambiguous word and sentence boundaries
• Slavic languages: Different word forms depending on gender, case, and tense
11. Write Traditional “If-Then-Else” Rules?
BIG NOPE!
Leads to very large and complex codebases.
Still struggles to capture cases that are trivial for a human.
12. Better Approach: Machine Learning
“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”
— Tom M. Mitchell
14. Before We Begin: Disclaimer
• This will be a very quick description of ML. By no means exhaustive.
• Only the essential background for what we’ll have in Part 2.
• To fit everything into a small timeframe, I’ll simplify some aspects.
• I encourage you to read ML books or watch videos to dig deeper.
15. Common ML Tasks
1. Supervised Learning
• Regression
• Classification (Binary or Multi-Class)
2. Unsupervised Learning
• Clustering
• Anomaly Detection
• Latent Variable Models (Dimensionality Reduction, EM, …)
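A tiny scikit-learn sketch of the two families (the dataset and models are arbitrary illustrative choices, not ones prescribed by the slides):

```python
# Supervised vs. unsupervised in a few lines of scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Supervised: labels y are given, learn a mapping X -> y (binary classification).
clf = LogisticRegression().fit(X, y)
print("train accuracy:", clf.score(X, y))

# Unsupervised: ignore y, look for structure in X alone (clustering).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [(km.labels_ == c).sum() for c in range(2)])
```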
34. Development & Troubleshooting
• Picking the right metric: MAE, RMSE, AUC, Cross-Entropy, Log-Loss
• Training Set / Validation Set / Test Set split
• Picking hyperparameters against Validation Set
• Regularization to prevent overfitting
• Plotting learning curves to check for underfitting / overfitting (see the sketch below)
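A minimal sketch of this workflow, assuming scikit-learn; the dataset, model, metrics and hyperparameter grid are illustrative choices:

```python
# Train / validation / test split, hyperparameter selection on the validation
# set only, and a final metric on the held-out test set (all values illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 60% train, 20% validation, 20% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Pick the regularization strength C against the validation set (log-loss metric).
best_C, best_loss = None, np.inf
for C in [0.01, 0.1, 1.0, 10.0]:
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    loss = log_loss(y_val, model.predict_proba(X_val))
    if loss < best_loss:
        best_C, best_loss = C, loss

# Report the final metric once, on the test set (here: AUC).
final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print("best C:", best_C, "test AUC:", roc_auc_score(y_test, final.predict_proba(X_test)[:, 1]))
```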
35. Deep Learning
• Core idea: instead of hand-crafting complex features, use increased computing
capacity and build a deep computation graph that tries to learn feature
representations on its own (see the sketch after this list).
End-to-end learning rather than a cascade of separate applications.
• Works best with lots of homogeneous, spatially related features
(image pixels, character sequences, audio signal measurements).
Usually works poorly otherwise.
• State-of-the-art and/or superhuman performance on many tasks.
• Typically requires massive amounts of data and training resources.
• But: a very young field. Theories not strongly established, views change.
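To make the “deep computation graph” idea concrete, here is a minimal sketch in PyTorch (my choice of framework; the architecture, sizes and fake data are purely illustrative):

```python
# A small deep computation graph: raw inputs go in, intermediate representations
# are learned by the stacked layers, and gradients flow end-to-end.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # first layer learns low-level features
    nn.Linear(256, 64), nn.ReLU(),    # second layer learns higher-level features
    nn.Linear(64, 10),                # output layer produces class scores
)

x = torch.randn(32, 784)              # a batch of 32 "raw" inputs (e.g. flattened pixels)
y = torch.randint(0, 10, (32,))       # fake labels, just for the sketch

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                       # backpropagation through the whole graph
```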
43. “Classical” way: Training a NER Tagger
Task: Predict whether the word is a PERSON, LOCATION, DATE or OTHER.
Could be more than 3 NER tags (e.g. MUC-7 contains 7 tags).
Features:
1. Current word.
2. Previous and next word (context).
3. POS tags of the current word and nearby words.
4. NER label of the previous word.
5. Word substrings (e.g. ends in “burg”, contains “oxa”, etc.).
6. Word shape (internal capitalization, numerals, dashes, etc.).
7. …on and on and on…
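A sketch of what such a hand-crafted feature extractor might look like in Python; the helper names and the exact feature set are hypothetical, not taken from a specific tagger:

```python
# Hand-crafted NER features for the i-th token of a sentence (illustrative).
def word_shape(w):
    """Collapse a word to a shape string, e.g. 'Hamburg' -> 'Xx', 'DC-10' -> 'X-d'."""
    shape = []
    for ch in w:
        c = "X" if ch.isupper() else "x" if ch.islower() else "d" if ch.isdigit() else ch
        if not shape or shape[-1] != c:
            shape.append(c)
    return "".join(shape)

def ner_features(tokens, pos_tags, prev_label, i):
    w = tokens[i]
    return {
        "word": w.lower(),
        "prev_word": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
        "pos": pos_tags[i],                 # POS tag of the current word
        "prev_label": prev_label,           # NER label of the previous word
        "suffix3": w[-3:].lower(),          # substring feature, e.g. ends in "urg"
        "shape": word_shape(w),             # capitalization / numerals / dashes
    }

# Example:
# ner_features(["Angela", "Merkel", "visited", "Hamburg"],
#              ["NNP", "NNP", "VBD", "NNP"], prev_label="PERSON", i=3)
```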
44. Feature Representation: Bag of Words
A single word is a one-hot encoded vector with the size of the dictionary :(
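A minimal illustration of what that means in code (the toy vocabulary is my own, purely for demonstration):

```python
# One-hot / bag-of-words view: each word is a vector as long as the vocabulary,
# with a single 1 and no notion of similarity between words.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    v = np.zeros(len(vocab))
    v[index[word]] = 1.0
    return v

print(one_hot("cat"))   # [0. 1. 0. 0. 0.]
print(one_hot("mat"))   # [0. 0. 0. 0. 1.]  -- "cat" and "mat" look equally unrelated
```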
45. Problem
• Manually designed features are often over-specified or incomplete,
and take a long time to design and validate.
• Often requires PhD-level knowledge of the domain.
• Researchers have spent literally decades hand-crafting features.
• The bag-of-words model is very high-dimensional and sparse,
and cannot capture semantics or morphology.
Maybe Deep Learning can help?
46. Deep Learning for NLP
• Core enabling idea: represent words as dense vectors
[0 1 0 0 0 0 0 0 0] → [0.315 0.136 0.831]
• Try to capture semantic and morphological similarity so that the features
for “similar” words are “similar”
(e.g. closer in Euclidean space).
• Natural language is context dependent: use context for learning.
• Straightforward (but slow) way: build a co-occurrence matrix and SVD it.
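The co-occurrence + SVD route mentioned above, as a small NumPy sketch (the toy corpus, window size and vector dimension are illustrative assumptions):

```python
# Build a word-word co-occurrence matrix with a window of 1, then take a
# truncated SVD to obtain dense word vectors.
import numpy as np

corpus = [["I", "like", "deep", "learning"],
          ["I", "like", "NLP"],
          ["I", "enjoy", "flying"]]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

M = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):              # symmetric window of size 1
            if 0 <= j < len(sent):
                M[idx[w], idx[sent[j]]] += 1

U, S, Vt = np.linalg.svd(M)
k = 2                                          # keep 2 dimensions per word
embeddings = U[:, :k] * S[:k]                  # dense vector for each vocabulary word
print(dict(zip(vocab, np.round(embeddings, 3))))
```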
48. Benefits
• Learns features of each word on its own, given a text corpus.
• No heavy preprocessing is required, just a corpus.
• Word vectors can be used as features for lots of supervised
learning applications: POS, NER, chunking, semantic role labeling.
All with pretty much the same network architecture.
• Similarities and linear relationships between word vectors.
• A bit more modern representation: GloVe, but requires more RAM.
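A quick way to see those similarities and linear relationships is to load pre-trained vectors; the sketch below uses GloVe vectors through gensim's downloader (my choice of tooling, and it fetches ~66 MB of data on first use):

```python
# Nearest neighbours and the classic "king - man + woman ≈ queen" analogy,
# using pre-trained 50-dimensional GloVe vectors via gensim.
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")   # word -> 50-dim dense vector

# Similar words end up close to each other in the vector space...
print(wv.most_similar("frog", topn=3))

# ...and some relationships are roughly linear.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```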
53. Deep Learning Way: Recurrent NN (RNN)
Can use past information without restricting the size of the context.
But: in practice, it can't recall information from far back in the sequence.
54. Long Short Term Memory Network (LSTM)
Contains gates that control forgetting, adding, updating and outputting information.
Surprisingly strong performance on language tasks compared to a vanilla RNN.
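For concreteness, here is a minimal LSTM sketch in PyTorch (vocabulary size, dimensions and the random data are illustrative assumptions; the gating is handled internally by nn.LSTM):

```python
# An LSTM over token sequences: the cell state carries long-range "memory",
# and the gates decide what to forget, add and output at each step.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128

embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
readout = nn.Linear(hidden_dim, vocab_size)        # e.g. predict the next token

tokens = torch.randint(0, vocab_size, (8, 20))     # batch of 8 sequences, length 20
outputs, (h_n, c_n) = lstm(embed(tokens))          # c_n is the cell ("memory") state
logits = readout(outputs)                          # one prediction per position
print(logits.shape)                                # torch.Size([8, 20, 1000])
```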
55. Tackling Hard Tasks
Deep Learning enables end-to-end learning for Machine Translation, Image Captioning, Text Generation and Summarization: NLP tasks which are inherently very hard!
RNN for Machine Translation
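As a rough illustration of the encoder-decoder pattern behind RNN-based machine translation (all vocabularies, dimensions and data below are made up, and a real system would add attention, beam search, etc.):

```python
# Encoder-decoder sketch: compress the source sentence into a state, then
# generate the target sentence conditioned on that state.
import torch
import torch.nn as nn

src_vocab, tgt_vocab, dim = 1000, 1200, 128

src_embed = nn.Embedding(src_vocab, dim)
tgt_embed = nn.Embedding(tgt_vocab, dim)
encoder = nn.GRU(dim, dim, batch_first=True)
decoder = nn.GRU(dim, dim, batch_first=True)
generator = nn.Linear(dim, tgt_vocab)

src = torch.randint(0, src_vocab, (4, 12))     # 4 source sentences, 12 tokens each
tgt = torch.randint(0, tgt_vocab, (4, 10))     # their (shifted) target sentences

_, state = encoder(src_embed(src))             # encode the source into a hidden state
out, _ = decoder(tgt_embed(tgt), state)        # decode conditioned on that state
logits = generator(out)                        # scores over the target vocabulary
print(logits.shape)                            # torch.Size([4, 10, 1200])
```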