
Python Machine Learning, 2nd Edition: Errata


Errata

- 19 submitted: last submission 08 Oct 2018

Page no. 386

In the diagram of the multilayer perceptron, the hidden layer description reads: "Hidden layer with 3 hidden
units plus bias unit (d = 3+1)"

It should be:

Hidden layer with 4 hidden units plus bias unit (d = 4+1)

Page no. 437

In the implementation of the function that generates batches of data, the following line is incorrect:

yield (X[i:i+batch_size, :], y[i:i+batch_size])

It should be:

yield (X_copy[i:i+batch_size, :], y_copy[i:i+batch_size])
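
For context, a minimal sketch of such a batch generator, assuming NumPy arrays and an optional shuffle step (the function name and signature follow the surrounding book code, but this is a simplified reconstruction, not the book's exact listing). Shuffling is done on copies, which is why the copies must also be the arrays that are sliced and yielded:

import numpy as np

def create_batch_generator(X, y, batch_size=128, shuffle=False):
    # work on copies so that shuffling does not modify the caller's arrays
    X_copy = np.array(X)
    y_copy = np.array(y)
    if shuffle:
        data = np.column_stack((X_copy, y_copy))
        np.random.shuffle(data)
        X_copy = data[:, :-1]
        y_copy = data[:, -1].astype(int)   # assuming integer class labels
    for i in range(0, X_copy.shape[0], batch_size):
        # yield the (possibly shuffled) copies, not the original X and y
        yield (X_copy[i:i+batch_size, :], y_copy[i:i+batch_size])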

Page no. 468

The following paragraph is a bit confusing:

"In the following example, we assume that data from source A is fed through placeholder, and
source B is the output of a generator network which we will build the generator network by calling the
build_generator function within the generator scope, then we will add a classifier by calling
build_classifier within the classifier scope:"

Consider this paragraph instead:

"In the following example, we assume that data from source A is fed through placeholder, and
source B is the output of a generator network. We will build the generator network by calling the
build_generator function within the generator scope, then we will add a classifier by calling
build_classifier within the classifier scope:"
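
As a rough illustration of what the paragraph describes, here is a self-contained TensorFlow 1.x sketch. The build_generator and build_classifier bodies below are simplified stand-ins (not the book's implementations); only the scoping structure mirrors the text:

import tensorflow as tf   # TensorFlow 1.x API, as used in the book

batch_size = 64

def build_generator(data, n_hidden=50):
    # simplified stand-in: one hidden layer mapping the input back to its own width
    n_inputs = int(data.shape[1])
    w1 = tf.get_variable('w1', shape=(n_inputs, n_hidden))
    h = tf.nn.relu(tf.matmul(data, w1))
    w2 = tf.get_variable('w2', shape=(n_hidden, n_inputs))
    return tf.matmul(h, w2)

def build_classifier(data, labels):
    # simplified stand-in: a single linear layer producing one logit per sample
    n_inputs = int(data.shape[1])
    w = tf.get_variable('w', shape=(n_inputs, 1))
    logits = tf.matmul(data, w)
    return logits, labels

g = tf.Graph()
with g.as_default():
    # source A: real data fed in through a placeholder
    tf_X = tf.placeholder(shape=(batch_size, 100), dtype=tf.float32, name='tf_X')

    # source B: the output of the generator network, built in the 'generator' scope
    with tf.variable_scope('generator'):
        gen_out = build_generator(data=tf_X, n_hidden=50)

    # classifier built in the 'classifier' scope and reused for both data sources
    with tf.variable_scope('classifier') as scope:
        cls_out1 = build_classifier(data=tf_X, labels=tf.ones(shape=(batch_size,)))
        scope.reuse_variables()
        cls_out2 = build_classifier(data=gen_out, labels=tf.zeros(shape=(batch_size,)))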

Page no. 482

"t1 = tf.ones(shape=(10, 1)," should be "t1 = tf.ones(shape=5, 1),"

and
"t2 = tf.zeros(shape=(10, 1)," should be "t2 = tf.zeros(shape=(5, 1),"

Page no. 499, Chapter 15

It is:

So, you can see that this different treatment of elements of x can artifcially put more emphasize on
the middle element, x[2], since it has appeared in most computations.

It should be:

So, you can see that this different treatment of elements of x can artificially put
more emphasis on the middle element, x[2], since it has appeared in most computations.
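
A small sketch, not from the book, that makes the point concrete: counting how often each element of a length-5 input falls into the windows of a size-3 convolution in 'valid' mode shows that the middle element is used in the most computations:

import numpy as np

x = np.arange(5)      # x[0] .. x[4]
m = 3                 # filter size

# count how many 'valid' convolution windows each element of x falls into
counts = np.zeros(len(x), dtype=int)
for start in range(len(x) - m + 1):    # window positions 0, 1, 2
    counts[start:start + m] += 1

print(counts)   # [1 2 3 2 1] -> x[2] appears in the most computations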

Page no. 517, Chapter 15

X_valid_centered = X_valid - mean_vals

This line should be:

X_valid_centered = (X_valid - mean_vals)/std_val
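
For context, a minimal sketch of the surrounding preprocessing (random stand-in arrays instead of the book's MNIST data, but the variable names mean the same thing): the mean and standard deviation are computed on the training split only and then applied to every split, which is what the corrected line does:

import numpy as np

rng = np.random.RandomState(123)
X_train = rng.rand(100, 784)   # stand-in for the MNIST training images
X_valid = rng.rand(20, 784)    # stand-in for the validation images

# statistics computed on the training split only
mean_vals = np.mean(X_train, axis=0)
std_val = np.std(X_train)

# the same centering and scaling is applied to both splits
X_train_centered = (X_train - mean_vals) / std_val
X_valid_centered = (X_valid - mean_vals) / std_val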

Page no. 518, Chapter 15

weights = tf.get_variable(name='_weights',
                          shape=weights_shape,

It should be:

weights = tf.get_variable(name='_weights',
                          shape=weights_shape)

Page no. 527, Chapter 15

The following code shows how to restore a saved mode.

It should be:

The following code shows how to restore a saved model.
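
As a rough, self-contained illustration of saving and restoring with TensorFlow 1.x's tf.train.Saver (the tiny graph and the checkpoint path './model.ckpt' are hypothetical; the book restores its own trained network):

import tensorflow as tf   # TensorFlow 1.x API, as used in the book

g = tf.Graph()
with g.as_default():
    w = tf.get_variable('w', initializer=1.0)   # minimal graph for the example
    saver = tf.train.Saver()

with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, './model.ckpt')             # save a checkpoint first

with tf.Session(graph=g) as sess:
    saver.restore(sess, './model.ckpt')          # restore the saved model
    print(sess.run(w))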

Page no. 542, Chapter 16

"the activation of the same hidden layer from the previous time step t=1"

It should be:
"the activation of the same hidden layer from the previous time step t-1"

Chapter 3, Page 55

It is:

( 6 / 45 ≈ 0.067 )

It should be:

( 3 / 45 ≈ 0.067 )

Chapter 5, Page 155

It is:

Both LDA and PCA are linear transformation techniques that can be used to reduce the number of
dimensions in a dataset; the former is an unsupervised algorithm, whereas the latter is supervised.

It should be:

Both PCA and LDA are linear transformation techniques that can be used to reduce the number of
dimensions in a dataset; the former is an unsupervised algorithm, whereas the latter is supervised.

Errata Type: Typo | Page 28

It is:
sum([j * j for i, j in zip(a, b)])

Should be:
sum([i * j for i, j in zip(a, b)])
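
A quick, self-contained check of the corrected expression against NumPy's dot product (the example values are arbitrary):

import numpy as np

a = [1, 2, 3]
b = [4, 5, 6]

# corrected pure-Python dot product
print(sum([i * j for i, j in zip(a, b)]))   # 32

# equivalent NumPy version
print(np.dot(np.array(a), np.array(b)))     # 32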

Errata type: Technical | Page no: 72

This: The first row corresponds to the class-membership probabilities of the first flower, the second
row corresponds to the class-membership probabilities of the third flower, and so forth.

Should be: The first row corresponds to the class-membership probabilities of the first flower, the
second row corresponds to the class-membership probabilities of the second flower, and so forth.
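
A small, self-contained illustration of how the rows of predict_proba line up with the input samples; it loads Iris directly from scikit-learn rather than using the book's preprocessed arrays:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
lr = LogisticRegression(multi_class='ovr', solver='liblinear').fit(X, y)

# one row of class-membership probabilities per input sample, in order:
# row 0 -> first flower, row 1 -> second flower, row 2 -> third flower
print(lr.predict_proba(X[:3, :]))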

Errata Type: Code | Page Number: 385

It is:

W_{k,j}^{(l)}

Should be:

W_{k,j}^{(l+1)}

Errata Type: Code | Page number: 66

It is:

J(w) = - sum_i y^(i) * log(Phi(z^(i))) + (1 - y^(i)) * log(1 - Phi(z^(i)))

Should be:

J(w) = - sum_i [y^(i) * log(Phi(z^(i))) + (1 - y^(i)) * log(1 - Phi(z^(i)))]
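
Typeset, the corrected cost function reads:

J(\mathbf{w}) = -\sum_{i}\left[\, y^{(i)} \log\left(\phi\left(z^{(i)}\right)\right) + \left(1 - y^{(i)}\right) \log\left(1 - \phi\left(z^{(i)}\right)\right) \right]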

Page: 91

It is:

Here, p(i|t) is the proportion of the samples that belong to class c for a
particular node t."

should be:

Here, p(i|t) is the proportion of the samples that belong to class i for a particular node t."
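
For context, p(i|t) appears in the impurity measures discussed on that page; in standard notation (the book's symbols may differ slightly), the entropy and Gini impurity of a node t are:

I_H(t) = -\sum_{i=1}^{c} p(i \mid t)\, \log_2 p(i \mid t)

I_G(t) = 1 - \sum_{i=1}^{c} p(i \mid t)^{2}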

Errata Type: Technical |Page Number: 385

It is:

w_{k,j}^{(l)}

Should be:

w_{k,j}^{(l+1)}


Errata Type: Code | Chapter 5 | Page 159


This:
n = X_train[y_train == i + 1, :].shape[0]

Should Be:
n = X_train_std[y_train == i + 1, :].shape[0]
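
For context, the corrected line sits inside the loop that builds the between-class scatter matrix in the book's LDA example. A rough, self-contained sketch with random stand-in data and assumed shapes:

import numpy as np

# stand-ins for the book's standardized Wine training data (3 classes assumed)
rng = np.random.RandomState(1)
X_train_std = rng.randn(90, 4)
y_train = np.repeat([1, 2, 3], 30)

d = X_train_std.shape[1]
mean_overall = np.mean(X_train_std, axis=0).reshape(d, 1)
mean_vecs = [np.mean(X_train_std[y_train == label], axis=0)
             for label in np.unique(y_train)]

S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
    # the corrected line: take the class counts from the standardized array
    n = X_train_std[y_train == i + 1, :].shape[0]
    mean_vec = mean_vec.reshape(d, 1)
    S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)

print(S_B.shape)   # (4, 4)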
