Python Machine Learning (2nd Edition) Errata
In the diagram of multilayer perceptron, the hidden layer description is: "Hidden layer with 3 hidden
units plus bias unit (d = 3+1)"
It should be:
It is:
"In the following example, we assume that data from source A is fed through placeholder, and
source B is the output of a generator network which we will build the generator network by calling the
build_generator function within the generator scope, then we will add a classifier by calling
build_classifier within the classifier scope:"
"In the following example, we assume that data from source A is fed through placeholder, and
source B is the output of a generator network. We will build the generator network by calling the
build_generator function within the generator scope, then we will add a classifier by calling
build_classifier within the classifier scope:"
and
"t2 = tf.zeros(shape=(10, 1)," should be "t2 = tf.zeros(shape=(5, 1),"
It is:
So, you can see that this different treatment of elements of x can artifcially put more emphasize on
the middle element, x[2], since it has appeared in most computations.
It should be:
So, you can see that this different treatment of elements of x can artificially put more emphasis on the middle element, x[2], since it has appeared in most computations.
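The unequal treatment is easy to verify by counting how often each element of x enters a convolution window; the input length and filter size below are assumptions for illustration:

import numpy as np

x_len, w_len = 5, 3                        # hypothetical input and filter sizes
counts = np.zeros(x_len, dtype=int)
for start in range(x_len - w_len + 1):     # 'valid'-mode windows
    counts[start:start + w_len] += 1
print(counts)   # [1 2 3 2 1]: the middle element x[2] appears most often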
It is:
weights = tf.get_variable(name='_weights',
shape=weights_shape,
It should be:
weights = tf.get_variable(name='_weights',
shape=weights_shape)
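As a sanity check, the corrected call runs on its own under the TF 1.x API (weights_shape below is a hypothetical value):

import tensorflow as tf  # TF 1.x API

g = tf.Graph()
with g.as_default():
    weights_shape = (100, 10)   # hypothetical shape
    weights = tf.get_variable(name='_weights',
                              shape=weights_shape)
    print(weights.shape)        # (100, 10), using the default initializer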
It is:
"the activation of the same hidden layer from the previous time step t=1"
It should be:
"the activation of the same hidden layer from the previous time step t-1"
Chapter 3, Page 55
It is:
( 6 / 45 ≈ 0.067 )
It should be:
( 3 / 45 ≈ 0.067 )
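Only the corrected fraction is consistent with the stated approximation:

print(6 / 45)   # 0.1333..., which does not match 0.067
print(3 / 45)   # 0.0666..., which rounds to 0.067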
It is:
Both LDA and PCA are linear transformation techniques that can be used to reduce the number of
dimensions in a dataset; the former is an unsupervised algorithm, whereas the latter is supervised.
It should be:
Both PCA and LDA are linear transformation techniques that can be used to reduce the number of
dimensions in a dataset; the former is an unsupervised algorithm, whereas the latter is supervised.
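The distinction is visible in the scikit-learn API: PCA is fit on X alone, whereas LDA also requires the class labels y. A minimal sketch on the Iris data:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

X, y = load_iris(return_X_y=True)
X_pca = PCA(n_components=2).fit_transform(X)      # unsupervised: no labels used
X_lda = LDA(n_components=2).fit_transform(X, y)   # supervised: labels required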
It is:
sum([j * j for i, j in zip(a, b)])
Should be:
sum([i * j for i, j in zip(a, b)])
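With the fix, the expression computes the dot product of a and b; the original variant squared the elements of b and ignored a entirely. For example, with hypothetical vectors:

a, b = [1, 2, 3], [4, 5, 6]
print(sum([i * j for i, j in zip(a, b)]))   # 32 = 1*4 + 2*5 + 3*6 (dot product)
print(sum([j * j for i, j in zip(a, b)]))   # 77 = 4*4 + 5*5 + 6*6 (ignores a)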
This: The first row corresponds to the class-membership probabilities of the first flower, the second row corresponds to the class-membership probabilities of the third flower, and so forth.
Should be: The first row corresponds to the class-membership probabilities of the first flower, the second row corresponds to the class-membership probabilities of the second flower, and so forth.
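In other words, predict_proba returns one row per input sample; a minimal sketch on the Iris data:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
lr = LogisticRegression().fit(X, y)
proba = lr.predict_proba(X[:3])   # probabilities for the first three flowers
print(proba.shape)                # (3, 3): row k belongs to flower k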
Page 91
It is:
"Here, p(i|t) is the proportion of the samples that belong to class c for a particular node t."
should be:
"Here, p(i|t) is the proportion of the samples that belong to class i for a particular node t."
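As a worked example of p(i|t), consider a hypothetical node t holding 40 samples of class 0 and 10 samples of class 1:

import numpy as np

labels = np.array([0] * 40 + [1] * 10)   # class labels of the samples at node t
p = np.bincount(labels) / labels.size    # p(i|t) for each class i
print(p)                                 # [0.8 0.2]
print(1.0 - np.sum(p ** 2))              # Gini impurity at t: 0.32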
It is:
$w_{k,j}^{(l)}$
Should be:
$w_{k,j}^{(l+1)}$
Should be:
n = X_train_std[y_train == i + 1, :].shape[0]
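The corrected line counts the samples belonging to class i + 1 via boolean-mask indexing; a self-contained sketch with hypothetical data:

import numpy as np

X_train_std = np.zeros((6, 2))            # hypothetical standardized features
y_train = np.array([1, 1, 2, 2, 2, 3])    # hypothetical class labels
for i in range(3):
    n = X_train_std[y_train == i + 1, :].shape[0]   # samples in class i + 1
    print(i + 1, n)   # prints: 1 2, then 2 3, then 3 1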