Keras Tutorial: What Is Keras? How To Install in Python (Example)
By Daniel Johnson | Updated April 21, 2022
What is Keras?
Keras is an open-source neural network library written in Python that runs on top
of Theano or Tensorflow. It is designed to be modular, fast, and easy to use. It was
developed by François Chollet, a Google engineer. Keras doesn't handle low-level
computation itself. Instead, it delegates that work to another library, called the "Backend".
Keras is a high-level API wrapper for the low-level API, capable of running on top of
TensorFlow, CNTK, or Theano. The Keras high-level API handles the way we make
models, define layers, and set up multiple input-output models. At this level, Keras
also compiles our model with loss and optimizer functions and runs the training
process with the fit function. Keras in Python doesn't handle the low-level API, such
as building the computational graph or creating tensors and other variables,
because that is handled by the "backend" engine.
In this Keras tutorial for beginners, you will learn Keras basics like:
What is Keras?
What is a Backend?
Theano, Tensorflow, and CNTK Backend
Comparing the Backends
Keras vs Tensorflow
Advantages of Keras
Installing Keras
Direct install or Virtual Environment
Amazon Web Service (AWS)
How to Install Keras on Amazon SageMaker
How to Install Keras on Windows
Keras Fundamental for Deep Learning
Fine-Tune Pre-Trained Models in Keras and How to Use Them
Face Recognition Neural Network with Keras
What is a Backend?
Backend is a term in Keras for the library that performs all the low-level computation,
such as tensor products, convolutions, and many other operations, with the help of
libraries such as Tensorflow or Theano. The "backend engine" performs the
computation and builds the models. Tensorflow is the default "backend engine", but
we can change it in the configuration.
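The backend is selected in the Keras configuration file, ~/.keras/keras.json (shown in full in the installation section below). For example, to switch from Tensorflow to Theano you would edit the "backend" field (only the relevant field is shown here):
{
    "backend": "theano"
}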
Theano is an open source project that was developed by the MILA group at the
University of Montreal, Quebec, Canada. It was the first widely used framework. It
is a Python library for mathematical operations on multi-dimensional arrays, built on
top of Numpy and Scipy. Theano can use GPUs for faster computation, and it can
automatically build symbolic graphs for computing gradients. On its website, Theano
claims that it can recognize numerically unstable expressions and compute them with
more stable algorithms; this is very useful for unstable expressions.
On the other hand, Tensorflow is the rising star among deep learning frameworks.
Developed by Google's Brain team, it is the most popular deep learning tool. It offers
a lot of features, and researchers contribute to developing this framework for deep
learning purposes.
Another backend engine for Keras is the Microsoft Cognitive Toolkit, or CNTK. It is
an open-source deep learning framework that was developed by Microsoft. It can
run on multiple GPUs or multiple machines for training deep learning models on a
massive scale. In some cases, CNTK was reported to be faster than other frameworks
such as Tensorflow or Theano. Next in this Keras CNN tutorial, we will compare the
backends of Theano, TensorFlow, and CNTK.
So, comparing Theano, Tensorflow, and CNTK, it's clear that TensorFlow is better
than Theano: with TensorFlow, the computation time is much shorter, and it handles
CNNs better than the others.
Next in this Keras Python tutorial, we will learn about the difference between Keras
and TensorFlow (Keras vs Tensorflow).
Keras vs Tensorflow
Parameters | Keras | Tensorflow
Type | High-level API wrapper | Low-level API
Complexity | Easy to use if you know the Python language | You need to learn the syntax of using some Tensorflow functions
Purpose | Rapid deployment for making models with standard layers | Allows you to make an arbitrary computational graph or model layers
Tools | Uses other API debug tools such as TFDBG | You can use Tensorboard visualization tools
Community | Large active communities | Large active communities and widely shared resources
Advantages of Keras
Fast Deployment and Easy to understand
With Keras it is very quick to make a network model. If you want to make a simple
network model with a few lines of code, Keras can help you with that. Look at the
example below:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=50))  # input shape of 50
model.add(Dense(28, activation='relu'))
model.add(Dense(10, activation='softmax'))
Because of the friendly API, we can easily understand the process. The code is
written with simple functions, and there is no need to set multiple parameters.
Disadvantages of Keras
Cannot handle low-level API
Keras only handles the high-level API, which runs on top of another framework or
backend engine such as Tensorflow, Theano, or CNTK. So it is not very useful if you
want to make your own abstract layer for research purposes, because Keras already
has pre-configured layers.
Installing Keras
In this section, we will look into the various methods available to install Keras.
For example, suppose I have a project that needs Python 3.5 using OpenCV 3.3 with
an older Keras-Theano backend, but in another project I have to use Keras with the
latest version and Tensorflow as its backend with Python 3.6.6 support.
We don't want the two Keras setups to conflict with each other, right? So we use a
Virtual Environment to localize each project with a specific set of libraries, or we can
use another platform such as a cloud service to do our computation for us, like
Amazon Web Service.
Note on the AMI: You will have the following AMIs available.
The AWS Deep Learning AMI is a virtual environment in the AWS EC2 Service that
helps researchers and practitioners work with deep learning. DLAMI offers everything
from small CPU instances up to high-powered multi-GPU instances, with preconfigured
CUDA and cuDNN, and comes with a variety of deep learning frameworks.
If you want to use it instantly, you should choose the Deep Learning AMI, because it
comes preinstalled with popular deep learning frameworks.
But if you want to try a custom deep learning framework for research, you should
install the Deep Learning Base AMI, because it comes with fundamental libraries
such as CUDA, cuDNN, GPU drivers, and the other libraries needed to run your
deep learning environment.
As a beginner, this is by far the easiest method to use Keras. Below is the process
of installing Keras on Amazon SageMaker:
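First, make sure TensorFlow is available in your environment. If it is not, you can install it from the terminal (a hedged sketch; SageMaker and Deep Learning AMI environments usually ship with TensorFlow preinstalled):
pip install tensorflow
Then verify the installation from Python: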
user@user:~$ python
Python 3.6.4 (default, Mar 20 2018, 11:10:20)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>>
If there is no error message, the installation process was successful.
Install Keras
After we install Tensorflow, let's start installing Keras. Type this command in the
terminal:
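pip install keras
(Depending on your setup, you may need pip3 instead of pip.)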
Verifying
Before we start using Keras, we should check whether our Keras uses Tensorflow
as its backend by opening the configuration file:
gedit ~/.keras/keras.json
You should see something like this:
{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "tensorflow",
    "image_data_format": "channels_last"
}
As you can see, the "backend" is set to tensorflow. It means that Keras is using
Tensorflow as its backend, as we expected.
user@user:~$ python3
Python 3.6.4 (default, Mar 20 2018, 11:10:20)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import keras
Using TensorFlow backend.
>>>
How to Install Keras on Windows
On Windows, first create a virtual environment. This is used to isolate the working
environment from the main system.
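A virtual environment can be created with Python's built-in venv module (the directory name venv below is just a convention):
c:\>python -m venv venv
Then activate it: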
.\venv\Scripts\activate
After preparing the environment, the Tensorflow and Keras installation is the same
as on Linux. Next in this deep learning with Keras tutorial, we will learn about Keras
fundamentals for deep learning.
Keras Fundamentals for Deep Learning
Here's how to make a Sequential Model and a few commonly used layers in deep
learning:
1. Sequential Model
model = Sequential()
2. Convolutional Layer
This is a Keras Python example of a convolutional layer as the input layer, with an
input shape of 320x320x3, 48 filters of size 3×3, and ReLU as the activation
function:
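(This line assumes Conv2D has been imported from keras.layers.)
model.add(Conv2D(48, (3, 3), activation='relu', input_shape=(320, 320, 3)))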
3. MaxPooling Layer
To downsample the input representation, use MaxPooling2D and specify the kernel
size:
model.add(MaxPooling2D(pool_size=(2, 2)))
4. Dense Layer
Add a fully connected layer by just specifying the output size:
model.add(Dense(256, activation='relu'))
5. Dropout Layer
Add a dropout layer with 50% probability:
model.add(Dropout(0.5))
Compiling, Training, and Evaluating
After we define our model, let's start training it. It is required to compile the
network first with a loss function and an optimizer function. This allows the
network to change its weights and minimize the loss.
model.compile(loss='mean_squared_error', optimizer='adam')
Now, to start training, use fit to feed the training and validation data to the model.
This allows you to train the network in batches and set the number of epochs.
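For example (x_train, y_train, x_val, and y_val here are placeholder names for your own data):
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_val, y_val))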
Putting it together, here is a simple linear regression example in Keras (the
optimizer, loss, and epoch settings below are one reasonable choice, not the
only one):
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import matplotlib.pyplot as plt

# Generate toy data: y = 4x plus noise
x = data = np.linspace(1, 2, 200)
y = x * 4 + np.random.randn(*x.shape) * 0.3

# A single Dense unit with a linear activation is a linear regression
model = Sequential()
model.add(Dense(1, input_dim=1, activation='linear'))
model.compile(optimizer='sgd', loss='mse')  # one reasonable optimizer/loss choice

weights = model.layers[0].get_weights()
w_init = weights[0][0][0]
b_init = weights[1][0]
print('Linear regression model is initialized with weights w: %.2f, b: %.2f' % (w_init, b_init))

# Train the model
model.fit(x, y, batch_size=1, epochs=30, shuffle=False)

weights = model.layers[0].get_weights()
w_final = weights[0][0][0]
b_final = weights[1][0]
print('Linear regression model is trained to have weight w: %.2f, b: %.2f' % (w_final, b_final))

# Plot the fitted line against the data
predict = model.predict(data)
plt.plot(data, predict, 'b', data, y, 'k.')
plt.show()
Fine-Tune Pre-Trained Models in Keras and How to Use Them
Fine-tuning is the task of tweaking a pre-trained model so that its parameters
adapt to the new task. When we want to train a new model from scratch, we need
a large amount of data so that the network can find all the parameters. But in this
case, we will use a pre-trained model, so the parameters are already learned and
have weights.
For example, if we want to train our own Keras model to solve a classification
problem but only have a small amount of data, we can solve this with a Transfer
Learning + Fine-Tuning method.
Using a pre-trained network and its weights, we don't need to train the whole
network. We just need to train the last layer that is used to solve our task; this is
what we call the Fine-Tuning method. Keras ships with several pre-trained network
models, including:
VGG16
InceptionV3
ResNet
MobileNet
Xception
InceptionResNetV2
In this process, we will use the VGG16 network model with ImageNet weights. We
will fine-tune the network to classify 8 different classes using images from the
Kaggle Natural Images Dataset.
Amazon S3 Bucket
Step 1) After logging in to your S3 account, create a bucket by clicking Create
Bucket.
Step 2) Now choose a Bucket Name and your Region according to your account.
Make sure that the bucket name is available. After that, click Create.
Step 3) As you can see, your Bucket is ready to use. Note that the Access is Not
public, which is good if you want to keep it private for yourself. You can change
this bucket to Public Access in the Bucket Properties.
Step 4) Now start uploading your training data to your Bucket. Here I will upload
the tar.gz file, which consists of pictures for the training and testing process.
Step 5) Now click on your file and copy the Link so that we can download it.
Data Preparation
We need to generate our training data using the Keras ImageDataGenerator.
First, download your data using wget with the link to your file from the S3 Bucket:
!wget https://s3.us-east-2.amazonaws.com/naturalimages02/images.tar.gz
!tar -xzf images.tar.gz
After you have downloaded the data, let's start the training process.
from keras.preprocessing.image import ImageDataGenerator

train_path = 'images/train/'
test_path = 'images/test/'
batch_size = 16
image_size = 224
num_class = 8

train_datagen = ImageDataGenerator(validation_split=0.3,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

train_generator = train_datagen.flow_from_directory(
                        directory=train_path,
                        target_size=(image_size, image_size),
                        batch_size=batch_size,
                        class_mode='categorical',
                        color_mode='rgb',
                        shuffle=True)
The ImageDataGenerator will make X_training data from a directory. Each sub-
directory in that directory will be used as a class for the objects it contains. The
images will be loaded in RGB color mode, with the categorical class mode for the
Y_training data, and with a batch size of 16. Finally, the data is shuffled.
Let's see a sample of our images by plotting one batch from the generator:
# Pull one batch from the generator and plot a sample of it
x_batch, y_batch = train_generator.next()

fig = plt.figure()
columns = 4
rows = 4
for i in range(1, columns * rows):
    num = np.random.randint(batch_size)
    image = x_batch[num].astype(np.uint8)  # cast to uint8 so imshow renders correctly
    fig.add_subplot(rows, columns, i)
    plt.imshow(image)
plt.show()
After that, let's create our network model from VGG16 with ImageNet pre-trained
weights. We will freeze these layers so that they are not trainable, which helps us
reduce the computation time.
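Below is a minimal sketch of that step, assuming the standard keras.applications VGG16 and a small classification head on top (the head layers and their sizes here are illustrative choices, not prescribed by the original):
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense
from keras.optimizers import SGD

# Load VGG16 with ImageNet weights, without its classifier head
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(image_size, image_size, 3))

# Freeze the pre-trained layers so they are not trainable
for layer in base_model.layers:
    layer.trainable = False

# Add our own classifier head for the 8 classes
x = Flatten()(base_model.output)
x = Dense(256, activation='relu')(x)
output = Dense(num_class, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)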
print(base_model.summary())
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=1e-3),
              metrics=['accuracy'])

history = model.fit_generator(
        train_generator,
        steps_per_epoch=train_generator.n/batch_size,
        epochs=10)

model.save('fine_tune.h5')
plt.plot(history.history['loss'])
plt.title('loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['loss'], loc='upper left')
plt.show()
Results
Epoch 1/10
432/431 [==============================] - 53s 123ms/step - loss: 0.5524 - acc: 0.9474
Epoch 2/10
432/431 [==============================] - 52s 119ms/step - loss: 0.1571 - acc: 0.9831
Epoch 3/10
432/431 [==============================] - 51s 119ms/step - loss: 0.1087 - acc: 0.9871
Epoch 4/10
432/431 [==============================] - 51s 119ms/step - loss: 0.0624 - acc: 0.9926
Epoch 5/10
432/431 [==============================] - 51s 119ms/step - loss: 0.0591 - acc: 0.9938
Epoch 6/10
432/431 [==============================] - 51s 119ms/step - loss: 0.0498 - acc: 0.9936
Epoch 7/10
432/431 [==============================] - 51s 119ms/step - loss: 0.0403 - acc: 0.9958
Epoch 8/10
432/431 [==============================] - 51s 119ms/step - loss: 0.0248 - acc: 0.9959
Epoch 9/10
432/431 [==============================] - 51s 119ms/step - loss: 0.0466 - acc: 0.9942
Epoch 10/10
432/431 [==============================] - 52s 120ms/step - loss: 0.0338 - acc: 0.9947
As you can see, our loss dropped significantly and the accuracy is almost 100%.
To test our model, we randomly picked images from the internet and put them in
the test folder, with different classes, to test:
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
        directory=test_path,
        target_size=(image_size, image_size),
        color_mode='rgb',
        shuffle=False,
        class_mode='categorical',
        batch_size=1)
filenames = test_generator.filenames
nb_samples = len(filenames)
fig = plt.figure()
columns = 4
rows = 4
for i in range(1, columns * rows - 1):
    x_batch, y_batch = test_generator.next()

    name = model.predict(x_batch)
    name = np.argmax(name, axis=-1)
    true_name = y_batch
    true_name = np.argmax(true_name, axis=-1)

    label_map = (test_generator.class_indices)
    label_map = dict((v, k) for k, v in label_map.items())  # flip k,v
    predictions = [label_map[k] for k in name]
    true_value = [label_map[k] for k in true_name]

    image = x_batch[0].astype(np.uint8)  # cast to uint8 so imshow renders correctly
    fig.add_subplot(rows, columns, i)
    plt.title(str(predictions[0]) + ':' + str(true_value[0]))
    plt.imshow(image)
plt.show()
And our test results are given below: only 1 image out of 14 test images is
predicted wrong!
Face Recognition Neural Network with Keras
Why we need Recognition
We need recognition to make it easier for us to recognize or identify a person's
face, the type of an object, the estimated age of a person from their face, or even
the facial expressions of that person.
Maybe you have noticed that every time you try to mark your friend's face in a
photo, Facebook has already done it for you: it marks your friend's face without
you needing to mark it first. This is face recognition applied by Facebook to make
it easier for us to tag friends.
So how does it work? Every time we mark the face of a friend, Facebook's AI will
learn from it and will try to predict it until it gets the right result. We will use the
same approach to make our own face recognition system. Let's start making our
own face recognition with deep learning.
Network Model
We will use a VGG16 network model, but with VGGFace weights (this assumes the
keras_vggface package is installed):
from keras_vggface.vggface import VGGFace

face_model = VGGFace(model='vgg16',
                     weights='vggface',
                     input_shape=(224, 224, 3))
face_model.summary()
As you can see in the network summary below:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
conv1_1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
conv1_2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
pool1 (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
conv2_1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
conv2_2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
pool2 (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
conv3_1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
conv3_2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
conv3_3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
pool3 (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
conv4_1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
conv4_2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
conv4_3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
pool4 (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
conv5_1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
conv5_2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
conv5_3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
pool5 (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc6 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc6/relu (Activation) (None, 4096) 0
_________________________________________________________________
fc7 (Dense) (None, 4096) 16781312
_________________________________________________________________
fc7/relu (Activation) (None, 4096) 0
_________________________________________________________________
fc8 (Dense) (None, 2622) 10742334
_________________________________________________________________
fc8/softmax (Activation) (None, 2622) 0
=================================================================
Total params: 145,002,878
Trainable params: 145,002,878
Non-trainable params: 0
_________________________________________________________________
We will do Transfer Learning + Fine-Tuning to make the training quicker with our
small dataset. First, we freeze the base layers so that they are not trainable, then
add new trainable layers on top and build the model we will actually train:
# Freeze the pre-trained VGGFace layers
for layer in face_model.layers:
    layer.trainable = False

person_count = 5

# (Flatten, Dense, and Model were imported earlier)
last_layer = face_model.get_layer('pool5').output
x = Flatten(name='flatten')(last_layer)
x = Dense(1024, activation='relu', name='fc6')(x)
x = Dense(1024, activation='relu', name='fc7')(x)
out = Dense(person_count, activation='softmax', name='fc8')(x)
custom_face = Model(face_model.input, out)
Our data consists of pictures of 5 people:
Jack Ma
Jason Statham
Johnny Depp
Robert Downey Jr
Rowan Atkinson
Each folder contains 10 pictures for the training and evaluation processes. It is a
very small amount of data, but that is the challenge, right?
We will use the Keras ImageDataGenerator tool to help us prepare the data. This
function will iterate over the dataset folder and then prepare it so it can be used
in training:
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

valid_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

train_generator = train_datagen.flow_from_directory(
        train_path,
        target_size=(image_size, image_size),
        batch_size=batch_size,
        class_mode='sparse',
        color_mode='rgb')

valid_generator = valid_datagen.flow_from_directory(
        directory=eval_path,
        target_size=(224, 224),
        color_mode='rgb',
        batch_size=batch_size,
        class_mode='sparse',
        shuffle=True)
Training Our Model
Let's begin our training process by compiling our network with a loss function and
an optimizer. Here, we use sparse_categorical_crossentropy as our loss function,
with SGD as our learning optimizer:
custom_face.compile(loss='sparse_categorical_crossentropy',
                    optimizer=SGD(lr=1e-4, momentum=0.9),
                    metrics=['accuracy'])

history = custom_face.fit_generator(
        train_generator,
        validation_data=valid_generator,
        steps_per_epoch=49/batch_size,
        validation_steps=valid_generator.n,
        epochs=50)

custom_face.evaluate_generator(generator=valid_generator)
custom_face.save('vgg_face.h5')
Epoch 25/50
10/9 [==============================] - 60s 6s/step - loss: 1.4882 - acc: 0.8998 - val_loss: 1.5659 -
val_acc: 0.5851
Epoch 26/50
10/9 [==============================] - 59s 6s/step - loss: 1.4882 - acc: 0.8998 - val_loss: 1.5638 -
val_acc: 0.5809
Epoch 27/50
10/9 [==============================] - 60s 6s/step - loss: 1.4779 - acc: 0.8597 - val_loss: 1.5613 -
val_acc: 0.5477
Epoch 28/50
10/9 [==============================] - 60s 6s/step - loss: 1.4755 - acc: 0.9199 - val_loss: 1.5576 -
val_acc: 0.5809
Epoch 29/50
10/9 [==============================] - 60s 6s/step - loss: 1.4794 - acc: 0.9153 - val_loss: 1.5531 -
val_acc: 0.5892
Epoch 30/50
10/9 [==============================] - 60s 6s/step - loss: 1.4714 - acc: 0.8953 - val_loss: 1.5510 -
val_acc: 0.6017
Epoch 31/50
10/9 [==============================] - 60s 6s/step - loss: 1.4552 - acc: 0.9199 - val_loss: 1.5509 -
val_acc: 0.5809
Epoch 32/50
10/9 [==============================] - 60s 6s/step - loss: 1.4504 - acc: 0.9199 - val_loss: 1.5492 -
val_acc: 0.5975
Epoch 33/50
10/9 [==============================] - 60s 6s/step - loss: 1.4497 - acc: 0.8998 - val_loss: 1.5490 -
val_acc: 0.5851
Epoch 34/50
10/9 [==============================] - 60s 6s/step - loss: 1.4453 - acc: 0.9399 - val_loss: 1.5529 -
val_acc: 0.5643
Epoch 35/50
10/9 [==============================] - 60s 6s/step - loss: 1.4399 - acc: 0.9599 - val_loss: 1.5451 -
val_acc: 0.5768
Epoch 36/50
10/9 [==============================] - 60s 6s/step - loss: 1.4373 - acc: 0.8998 - val_loss: 1.5424 -
val_acc: 0.5768
Epoch 37/50
10/9 [==============================] - 60s 6s/step - loss: 1.4231 - acc: 0.9199 - val_loss: 1.5389 -
val_acc: 0.6183
Epoch 38/50
10/9 [==============================] - 59s 6s/step - loss: 1.4247 - acc: 0.9199 - val_loss: 1.5372 -
val_acc: 0.5934
Epoch 39/50
10/9 [==============================] - 60s 6s/step - loss: 1.4153 - acc: 0.9399 - val_loss: 1.5406 -
val_acc: 0.5560
Epoch 40/50
10/9 [==============================] - 60s 6s/step - loss: 1.4074 - acc: 0.9800 - val_loss: 1.5327 -
val_acc: 0.6224
Epoch 41/50
10/9 [==============================] - 60s 6s/step - loss: 1.4023 - acc: 0.9800 - val_loss: 1.5305 -
val_acc: 0.6100
Epoch 42/50
10/9 [==============================] - 59s 6s/step - loss: 1.3938 - acc: 0.9800 - val_loss: 1.5269 -
val_acc: 0.5975
Epoch 43/50
10/9 [==============================] - 60s 6s/step - loss: 1.3897 - acc: 0.9599 - val_loss: 1.5234 -
val_acc: 0.6432
Epoch 44/50
10/9 [==============================] - 60s 6s/step - loss: 1.3828 - acc: 0.9800 - val_loss: 1.5210 -
val_acc: 0.6556
Epoch 45/50
10/9 [==============================] - 59s 6s/step - loss: 1.3848 - acc: 0.9599 - val_loss: 1.5234 -
val_acc: 0.5975
Epoch 46/50
10/9 [==============================] - 60s 6s/step - loss: 1.3716 - acc: 0.9800 - val_loss: 1.5216 -
val_acc: 0.6432
Epoch 47/50
10/9 [==============================] - 60s 6s/step - loss: 1.3721 - acc: 0.9800 - val_loss: 1.5195 -
val_acc: 0.6266
Epoch 48/50
10/9 [==============================] - 60s 6s/step - loss: 1.3622 - acc: 0.9599 - val_loss: 1.5108 -
val_acc: 0.6141
Epoch 49/50
10/9 [==============================] - 60s 6s/step - loss: 1.3452 - acc: 0.9399 - val_loss: 1.5140 -
val_acc: 0.6432
Epoch 50/50
10/9 [==============================] - 60s 6s/step - loss: 1.3387 - acc: 0.9599 - val_loss: 1.5100 -
val_acc: 0.6266
As you can see, our validation accuracy reaches about 64%, which is a good result
for such a small amount of training data. We can improve this by adding more
layers or more training images, so that our model can learn more about the faces
and achieve better accuracy.
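Let's test the model on a single picture. The snippet below is a minimal sketch of how predicted_class can be obtained (the file name test.jpg and the preprocessing steps are illustrative assumptions):
import numpy as np
from keras.preprocessing import image

# Load a single test image and scale it the same way as the training data
test_img = image.load_img('test.jpg', target_size=(224, 224))
img_test = image.img_to_array(test_img) / 255.
img_test = np.expand_dims(img_test, axis=0)  # add the batch dimension

predicted_class = np.argmax(custom_face.predict(img_test), axis=1)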
labels = (train_generator.class_indices)
labels = dict((v,k) for k,v in labels.items())
predictions = [labels[k] for k in predicted_class]
print(predictions)
['RobertDJr']
Using a Robert Downey Jr. picture as our test picture, the model predicts the face
correctly!
Now let's try live face recognition with our own faces. The first step is to prepare
your and your friends' faces. The more data we have, the better the result!
Prepare and train your network as in the previous steps. After training is complete,
add these lines to get the input image from a camera and run recognition on each
frame (the per-face cropping and prediction steps below are a minimal sketch;
labels is the class-index dictionary built from the training generator above):
import cv2
import numpy as np

image_size = 224
device_id = 0  # camera device id

# Haar cascade for face detection (this file ships with OpenCV)
cascade_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
camera = cv2.VideoCapture(device_id)

while camera.isOpened():
    ok, cam_frame = camera.read()
    if not ok:
        break

    gray_img = cv2.cvtColor(cam_frame, cv2.COLOR_BGR2GRAY)
    faces = cascade_classifier.detectMultiScale(gray_img, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Crop the face, convert to RGB, and resize to the network input size
        roi = cv2.cvtColor(cam_frame[y:y+h, x:x+w], cv2.COLOR_BGR2RGB)
        roi = cv2.resize(roi, (image_size, image_size)).astype(np.float32) / 255.
        preds = custom_face.predict(np.expand_dims(roi, axis=0))
        name = labels[np.argmax(preds)]  # labels dict from the training generator
        cv2.rectangle(cam_frame, (x, y), (x+w, y+h), (255, 255, 0), 2)
        cv2.putText(cam_frame, str(name),
                    (x + 10, y + 10), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 255), 2)

    cv2.imshow('video image', cam_frame)
    key = cv2.waitKey(30)
    if key == 27:  # press 'ESC' to quit
        break

camera.release()
cv2.destroyAllWindows()
On the other hand, Tensorflow offers low-level operations, flexibility, and advanced
operations if you want to make an arbitrary computational graph or model.
Tensorflow can also visualize the process with the help of TensorBoard and a
specialized debugger tool.
So, if you want to start working with deep learning without too much complexity,
use Keras. It offers simplicity, is user-friendly, and is easier to implement than
Tensorflow. But if you want to write your own algorithms in a deep learning project
or for research, you should use Tensorflow instead.
Summary
So let’s summarize everything we have discussed and done in this tutorial.
Keras is a high-level API that is used to make deep learning networks easier,
with the help of a backend engine.
Keras is easy to use and understand, with Python support, so it feels more
natural than ever. It is good for beginners who want to learn about deep
learning and for researchers who want an easy-to-use API.
The installation process is easy, and you can use a virtual environment or
an external platform such as AWS.
Keras also comes with various kinds of network models, making it easier
to use an available model for pre-training and to fine-tune our own network
models.
Also, there are a lot of tutorials, articles, and code about using Keras shared
by communities worldwide for deep learning purposes.