A fast-paced introduction to TensorFlow 2 that covers important new features (such as generators and the @tf.function decorator) as well as TF 1.x functionality that has been removed from TF 2 (yes, tf.Session() has retired).
Some concise code samples are presented to illustrate how to use new features of TensorFlow 2.
2. Highlights/Overview
What is TensorFlow 2?
Major Changes in TF 2
Working with strings/arrays/tensors
Working with TF 2 @tf.function decorator
Working with TF 2 generators
Working with TF 2 tf.data.Dataset
Datasets in TF 1.x versus TF 2
Working with TF 2 tf.keras
CNNs, RNNs, LSTMs, Bidirectional LSTMs
Reinforcement Learning/TF Agents
3. Highlights/Overview
This session does not cover:
“the new vision” of TF 2
“the common practices” of TF 2
“lessons learned” from TF 1.x
a detailed “roadmap” from TF 1.x to TF 2
4. What is TensorFlow 2?
An open source framework for ML and DL
Created by Google (TF 1.x released 11/2015)
Evolved from Google Brain
Linux and Mac OS X support (VM for Windows)
TF home page: https://www.tensorflow.org/
5. Status of TensorFlow 2
Alpha release: around March/2019
Other “Nightly” builds:
https://pypi.org/project/tf-nightly-2.0-preview/2.0.0.dev20190417/#files
Production release: TBD (later this year)
6. TF 2 Platform Support
Tested and supported on 64-bit systems
Ubuntu 16.04 or later
Windows 7 or later
macOS 10.12.6 (Sierra) or later (no GPU support)
Raspbian 9.0 or later
7. TF 2 Docker Images
https://hub.docker.com/r/tensorflow/tensorflow/
Start a CPU-only container with Python 2 and a bash shell:
$ docker run -it --rm tensorflow/tensorflow bash
Start a GPU container with Python 3 & Python interpreter:
$ docker run -it --rm --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3 python
Start a Jupyter notebook server:
$ docker run -it --rm -v $(realpath ~/notebooks):/tf/notebooks -p 8888:8888 tensorflow/tensorflow:latest-py3-jupyter
8. Status of TensorFlow 1.x
1.13.1: current stable release
1.14: next point release
After TF 2 is in production:
Only security-related updates to TF 1.x
TF 1.x support for one year after TF 2 production
9. What is TensorFlow 2?
Support for Python, Java, C++
Desktop, Server, Mobile, Web
CPU/GPU/TPU support
Visualization via TensorBoard
Can be embedded in Python scripts
Installation: pip install tensorflow
Ex: pip install tensorflow==2.0.0-alpha0
11. Major Changes in TF 2
TF 2 Eager Execution: default mode
@tf.function decorator: instead of tf.Session()
AutoGraph: a graph is generated from the Python code inside a @tf.function
Generators: used with tf.data.Dataset (TF 2)
tf.GradientTape: records operations for automatic differentiation (i.e., computing gradients)
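A minimal sketch (not from the slides) illustrating both features: @tf.function traces a Python function into a graph via AutoGraph, and tf.GradientTape records operations so gradients can be computed:
import tensorflow as tf

# AutoGraph generates a graph from this Python function:
@tf.function
def compute(a, b):
  return tf.reduce_sum(a * b)

x = tf.constant([2.0, 3.0])
y = tf.constant([4.0, 5.0])
print(compute(x, y))           # tf.Tensor(23.0, shape=(), dtype=float32)

# tf.GradientTape records operations for automatic differentiation:
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
  loss = w * w
grad = tape.gradient(loss, w)  # d(w^2)/dw = 2w = 6.0
print(grad)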
12. Removed from TensorFlow 2
tf.Session()
tf.placeholder()
tf.global_variables_initializer()
feed_dict
Variable scopes
tf.contrib code
tf.flags (moved to absl.flags)
Global variables
=> TF 1.x functions moved to tf.compat.v1
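Legacy code can still run unmodified via the compat.v1 shim; a minimal sketch (assuming a TF 2 installation):
# run unconverted TF 1.x code on a TF 2 installation:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# TF 1.x idioms (Session, placeholder, feed_dict) work again:
a = tf.placeholder(tf.float32, shape=())
b = a * 2
with tf.Session() as sess:
  print(sess.run(b, feed_dict={a: 3.0}))  # 6.0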
13. Upgrading Files/Directories to TF 2
The TF 2 upgrade script:
https://www.tensorflow.org/alpha/guide/upgrade
Script name: tf_upgrade_v2
install TF to get the upgrade script:
pip install tensorflow (NOT pip3)
14. Upgrading Files/Directories to TF 2
1) upgrade one file:
tf_upgrade_v2 --infile oldtf.py --outfile newtf.py
The preceding command creates report.txt
2) upgrade an entire directory:
tf_upgrade_v2 --intree mydir --outtree mydir_v2 --copyotherfiles False
NB: do NOT make manual upgrade changes
15. Migrating to TensorFlow 2
1) do not manually update parts of your code before running the script
2) the script assumes this import statement: "import tensorflow as tf"
3) the script does not reorder arguments; instead, it adds keyword
arguments to functions that have their arguments reordered
Upgrade Jupyter notebooks and Python files in a GitHub repo:
http://tf2up.ml/
=> Replace the prefix https://github.com/ with http://tf2up.ml/
16. What is a Tensor?
TF tensors are n-dimensional arrays
TF tensors are very similar to numpy ndarrays
scalar number: a zeroth-order tensor
vector: a first-order tensor
matrix: a second-order tensor
3-dimensional array: a 3rd order tensor
https://dzone.com/articles/tensorflow-simplified-examples
18. TensorFlow Eager Execution
integration with Python tools
Supports dynamic models + Python control flow
support for custom and higher-order gradients
=> Default mode in TensorFlow 2.0
19. TensorFlow Eager Execution
import tensorflow as tf
x = [[2.]]
# m = tf.matmul(x, x) <= deprecated
m = tf.linalg.matmul(x, x)
print("m:",m)
print("m:",m.numpy())
#Output:
#m: tf.Tensor([[4.]], shape=(1,1),dtype=float32)
#m: [[4.]]
20. Basic TF 2 Operations
Working with Constants
Working with Strings
Working with Tensors
Working with Operators
Arrays in TF 2
Multiplying Two Arrays
Convert Python Arrays to TF Arrays
Conflicting Types
21. TensorFlow “primitive types”
tf.constant:
> initialized immediately and immutable
import tensorflow as tf
aconst = tf.constant(3.0)
print(aconst)
# output for TF 2:
#tf.Tensor(3.0, shape=(), dtype=float32)
# output for TF 1.x:
#Tensor("Const:0", shape=(), dtype=float32)
22. TensorFlow “primitive types”
tf.Variable (a class):
+ initial value is required
+ updated during training
+ in-memory buffer (saved/restored from disk)
+ can be shared in a distributed environment
+ they hold learned parameters of a model
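A minimal sketch (not from the slides) of creating and updating a tf.Variable, and of a variable being watched automatically by tf.GradientTape:
import tensorflow as tf

v = tf.Variable(2.0)    # initial value is required
v.assign(5.0)           # update the variable in place
v.assign_add(1.0)       # v is now 6.0
print(v.numpy())        # 6.0

# variables hold the learned parameters during training:
with tf.GradientTape() as tape:
  loss = (v - 10.0) ** 2
grad = tape.gradient(loss, v)  # 2*(v-10) = -8.0
print(grad.numpy())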
23. TensorFlow Arithmetic
import tensorflow as tf # arith1.py
a = tf.add(4, 2)
b = tf.subtract(8, 6)
c = tf.multiply(a, 3)
d = tf.math.divide(a, 6) # tf.div() was removed in TF 2
print(a) # 6
print(b) # 2
print(c) # 18
print(d) # 1.0
print("a:",a.numpy())
print("b:",b.numpy())
print("c:",c.numpy())
print("d:",d.numpy())
28. A “while” Loop Example
import tensorflow as tf
a = tf.constant(12)
while not tf.equal(a, 1):
  if tf.equal(a % 2, 0):
    a = a / 2
  else:
    a = 3 * a + 1
  print(a)
31. Some TF 2 “lazy operators”
map()
filter()
flat_map()
batch()
take()
zip()
repeat()
Combined via “method chaining” (see the example after slide 33)
32. TF 2 “lazy operators”
filter():
uses Boolean logic to "filter" the elements in an array to
determine which elements satisfy the Boolean condition
map(): a projection
this operator "applies" a lambda expression to each input
element
flat_map():
maps a single element of the input dataset to a Dataset of
elements
33. TF 2 “lazy operators”
batch(n):
processes a "batch" of n elements during each
iteration
repeat(n):
repeats its input values n times
take(n):
operator "takes" n input values
34. TF 2 tf.data.Dataset
TF “wrapper” class around data sources
Located in the tf.data.Dataset namespace
Supports lambda expressions
Supports lazy operators
TF 2 Datasets use generators instead of the iterators used in TF 1.x
35. tf.data.Dataset.from_tensors()
import tensorflow as tf
#combine the input into one element
t1 = tf.constant([[1, 2], [3, 4]])
ds1 = tf.data.Dataset.from_tensors(t1)
# output: [[1, 2], [3, 4]]
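For contrast, a short sketch (not from the slides) showing how from_tensors() differs from from_tensor_slices() on the same 2x2 tensor:
import tensorflow as tf

t1 = tf.constant([[1, 2], [3, 4]])

# from_tensors(): the whole tensor is ONE element
ds1 = tf.data.Dataset.from_tensors(t1)
for v in ds1:
  print(v.numpy())   # [[1 2] [3 4]] (a single 2x2 element)

# from_tensor_slices(): slices along the first axis
ds2 = tf.data.Dataset.from_tensor_slices(t1)
for v in ds2:
  print(v.numpy())   # [1 2], then [3 4]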
37. TF 2 Datasets: code sample
import tensorflow as tf # tf2-dataset.py
import numpy as np
x = np.arange(0, 10)
# create a dataset from a Numpy array
ds = tf.data.Dataset.from_tensor_slices(x)
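In TF 2 (eager mode) no iterator object is needed: the dataset is directly iterable. A short sketch repeating the setup above:
import tensorflow as tf
import numpy as np

x = np.arange(0, 10)
ds = tf.data.Dataset.from_tensor_slices(x)

# in TF 2 a Dataset is directly iterable:
for value in ds:
  print("value:", value.numpy())   # 0, 1, ..., 9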
38. TF 1.x Datasets: iterator
import tensorflow as tf # tf1x-plusone.py
import numpy as np
x = np.arange(0, 10)
# create dataset object from Numpy array
ds = tf.data.Dataset.from_tensor_slices(x)
ds = ds.map(lambda x: x + 1) # map() returns a new dataset: assign the result
# create a one-shot iterator <= TF 1.x
iterator = ds.make_one_shot_iterator()
for i in range(10):
  val = iterator.get_next()
  print("val:",val)
39. TF 2 Datasets: generator
import tensorflow as tf # tf2-plusone.py
import numpy as np
x = np.arange(0, 5) # 0, 1, 2, 3, 4
def gener():
  for i in x:
    yield (3*i)
ds = tf.data.Dataset.from_generator(gener, (tf.int64))
for value in ds.take(len(x)):
  print("1value:",value)
for value in ds.take(2*len(x)):
  print("2value:",value)
44. TF 2 map() and take()
import tensorflow as tf
import numpy as np
x = np.array([[1],[2],[3],[4]])
# make a ds from a numpy array
ds = tf.data.Dataset.from_tensor_slices(x)
ds = ds.map(lambda x: x*2).map(lambda x: x+1).map(lambda x: x**3)
for value in ds.take(4):
  print("value:",value)
46. TF 2 batch(): EXTRA CREDIT
import tensorflow as tf
import numpy as np
x = np.arange(0, 12)
def gener():
  i = 0
  while i < len(x):
    yield (i, i+1, i+2) # three integers at a time
    i += 3
ds = tf.data.Dataset.from_generator(gener, (tf.int64,tf.int64,tf.int64))
third = int(len(x)/3)
for value in ds.take(third):
  print("value:",value)
47. TF 2 batch() & zip(): EXTRA CREDIT
import tensorflow as tf
ds1 = tf.data.Dataset.range(100)
ds2 = tf.data.Dataset.range(0, -100, -1)
ds3 = tf.data.Dataset.zip((ds1, ds2))
ds4 = ds3.batch(4)
for value in ds4.take(10):
  print("value:",value)
48. TF 2: Working with tf.keras
tf.keras.layers
tf.keras.models
tf.keras.optimizers
tf.keras.utils
tf.keras.regularizers
49. TF 2: tf.keras.models
tf.keras.models.Sequential(): most common
clone_model(...): Clone any Model instance
load_model(...): Loads a model saved via save_model
model_from_config(...): Instantiates a Keras model from its config
model_from_json(...): Parses a JSON model config file and returns a model instance
model_from_yaml(...): Parses a YAML model config file and returns a model instance
save_model(...): Saves a model to an HDF5 file
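A minimal end-to-end sketch (layer sizes and file name are illustrative, not from the slides): build a Sequential model, then round-trip it through save/load:
import tensorflow as tf

# a minimal Sequential model:
model = tf.keras.models.Sequential([
  tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
  tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

model.save('mymodel.h5')                           # saves to an HDF5 file
model2 = tf.keras.models.load_model('mymodel.h5')  # loads the saved model
model2.summary()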
57. Other TF 2 Namespaces
tf.keras.regularizers (L1 and L2)
tf.keras.utils (to_categorical)
tf.keras.preprocessing.text.Tokenizer
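A one-line example (not from the slides) of to_categorical(), which one-hot encodes integer class labels:
import tensorflow as tf

labels = [0, 2, 1, 2]
one_hot = tf.keras.utils.to_categorical(labels, num_classes=3)
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]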
58. TF 1.x Model APIs
tf.core (“reduced” in TF 2)
tf.layers (deprecated in TF 2)
tf.keras (the primary API in TF 2)
tf.estimator.Estimator (implementation in tf.keras)
tf.contrib.learn.Estimator (Deprecated in TF 2)
59. Other TF 2 Features
TF Probability (Linear Regression)
TF Agents (Reinforcement Learning)
TF Text (NLP)
TF Federated
TF Privacy
Tensor2Tensor
TF Ranking
TFX
60. TF 2 (Keras), DL, and RL
The remaining slides briefly discuss TF 2 and:
CNNs (Convolutional Neural Networks)
RNNs (Recurrent Neural Networks)
LSTMs (Long Short Term Memory)
Autoencoders
Variational Autoencoders
Reinforcement Learning
Some Keras-based code blocks
Some useful links
62. Transition Between Layers
W = weight matrix between adjacent layers
x = an input vector of numbers
b = a bias vector of numbers
F = activation function (sigmoid, tanh, etc)
Y = output vector of numbers
Y = F(x*W + b)
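A small numeric sketch of this transition (sizes are illustrative: 3 inputs, 2 outputs, sigmoid as F):
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0]])        # input vector (1 x 3)
W = tf.constant([[0.1, 0.2],
                 [0.3, 0.4],
                 [0.5, 0.6]])             # weight matrix (3 x 2)
b = tf.constant([0.7, 0.8])               # bias vector (2,)
Y = tf.math.sigmoid(tf.matmul(x, W) + b)  # Y = F(x*W + b)
print(Y.numpy())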
67. What are RNNs?
RNNs = Recurrent Neural Networks
RNNs capture/retain events in time
FF networks do not learn from the past
=> RNNs DO learn from the past
RNNs contain layers of recurrent neurons
68. Common Types of RNNs
1) Simple RNNs (minimal structure)
2) LSTMs (input, forget, output gates)
3) Gated Recurrent Unit (GRU)
NB: RNNs can be used with MNIST data
69. Some Use Cases for RNNs
1) Time Series Data
2) NLP (Natural Language Processing):
Processing documents
“analyzing” Trump’s tweets
3) RNNs and MNIST dataset
74. RNN and MNIST Dataset
Sample initialization values for RNN:
n_steps = 28 # number of time steps
n_inputs = 28 # number of rows
n_neurons = 150 # 150 neurons in one layer
n_outputs = 10 # 10 digit classes (0->9)
Remember that:
one layer = one memory cell
MNIST class count = size of output layer
75. TF 2/Keras and RNNs
import tensorflow as tf
...
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.SimpleRNN(20,
          input_shape=(input_length, input_dim),
          return_sequences=False))
...
76. Problems with RNNs
lengthy training time
Unrolled RNN will be very deep
Propagating gradients through many layers
Prone to vanishing gradient
Prone to exploding gradient
77. What are LSTMs?
Long short-term memory (LSTM):
a model of short-term memory that can last for a long period of time
Contains more state than a “regular” RNN
can forget and remember things selectively
Proposed by Hochreiter & Schmidhuber in 1997
improved in 2000 by Felix Gers' team
set accuracy records in multiple apps
78. Features of LSTMs
Used in Google speech recognition + Alpha Go
they avoid the vanishing gradient problem
Can track 1000s of discrete time steps
Used by international competition winners
79. Use Cases for LSTMs
Connected handwriting recognition
Speech recognition
Forecasting
Anomaly detection
Pattern recognition
80. Advantages of LSTMs
well-suited to classify/process/predict time series
More powerful: increased memory (=state)
Can explicitly add long-term or short-term state, even with time lags of unknown size/duration between events
81. The Primary LSTM Gates
Input, Forget, and Output gates (FIO)
The main operational block of LSTMs
an "information filter"
a multivariate input gate
some inputs are blocked
some inputs "go through"
"remember" necessary information
FIO gates use the sigmoid activation function
The cell state uses the tanh activation function
82. LSTM Gates and Cell State
1) the input gate controls:
the extent to which a new value flows into the cell state
2) the forget gate controls:
the extent to which a value remains in the cell state
3) the output gate controls:
The portion of the cell state that’s part of the output
4) the cell state maintains long-term memory
=> LSTMs store more memory than basic RNN cells
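For reference, the standard LSTM update equations (sigmoid gates, tanh cell activation), consistent with the gate descriptions above:
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \quad &\text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \quad &\text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \quad &\text{(output gate)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c) \quad &\text{(cell state)}\\
h_t &= o_t \odot \tanh(c_t) \quad &\text{(output)}
\end{aligned}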
83. Common LSTM Variables
# LSTM unrolled through 28 time steps (with MNIST):
time_steps = 28
# number of hidden LSTM units:
num_units = 128
# number of rows of 28 pixels:
n_inputs = 28
# number of pixels per row:
input_size = 28
# mnist has 10 classes (0-9):
n_classes = 10
# size of input batch:
batch_size = 128
86. TF 2/Keras and LSTMs
import tensorflow as tf
. . .
model = tf.keras.models.Sequential()
# an LSTMCell must be wrapped in an RNN layer:
model.add(tf.keras.layers.RNN(
    tf.keras.layers.LSTMCell(6, kernel_initializer='ones'),
    batch_input_shape=(1,1,1), stateful=True))
model.add(tf.keras.layers.Dense(1))
. . .
=> LSTM versus LSTMCell:
https://stackoverflow.com/questions/48187283/whats-the-difference-between-lstm-and-lstmcell
=> How to define a custom LSTM cell:
https://stackoverflow.com/questions/54231440/define-custom-lstm-cell-in-keras
87. TF 2/Keras and LSTM/RNN/GRU
import tensorflow as tf
...
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(64, return_sequences=True,
          input_shape=(10, 64)))
model.add(tf.keras.layers.SimpleRNN(32, return_sequences=True))
model.add(tf.keras.layers.GRU(10, ...))
...
96. Reinforcement Learning
Dopamine on TensorFlow (Research Framework):
https://github.com/google/dopamine
TF-Agents library for RL in TensorFlow:
https://github.com/tensorflow/agents
Keras and Reinforcement Learning:
https://github.com/keras-rl/keras-rl
OpenAI Universe, DeepMind Lab, TensorForce
97. About Me: Recent Books
1) TensorFlow 2 Pocket Primer (2019)
2) TensorFlow 1.x Pocket Primer (2019)
3) Python for TensorFlow (2019)
4) C Programming Pocket Primer (2019)
5) RegEx Pocket Primer (2018)
6) Data Cleaning Pocket Primer (2018)
7) Angular Pocket Primer (2017)
8) Android Pocket Primer (2017)
9) CSS3 Pocket Primer (2016)
10) SVG Pocket Primer (2016)
11) Python Pocket Primer (2015)
12) D3 Pocket Primer (2015)
13) HTML5 Mobile Pocket Primer (2014)
98. What I do (Training)
Instructor at UCSC (Santa Clara):
Machine Learning (Python) (06/18/2019) 10 weeks
Deep Learning (Keras) (07/10/2019) 10 weeks
Deep Learning (Keras) (07/17/2019) 10 weeks
Machine Learning (Python) (09/09/2019) 10 weeks
DL, NLP, Deep RL (Adv Topics) (10/02/2019) 10 weeks
Deep Learning (Keras/TF 2) (10/04/2019) 10 weeks
UCSC link:
https://www.ucsc-extension.edu/certificate-program/offering/deep-learning-and-artificial-intelligence-tensorflow