Unit-V Deep Generative Models Part-01
• where:
• σ(x) = 1 / (1 + e^{−x})
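As a quick sanity check of the formula above, here is a minimal NumPy sketch of the sigmoid; the function name and test value are our own, not from the slides:

import numpy as np

def sigmoid(x):
    # Logistic sigmoid: sigma(x) = 1 / (1 + e^(-x));
    # squashes any real input into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))  # 0.5, the midpoint of the curve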
Training RBMs
• Forward pass: each hidden neuron receives and combines multiple inputs, one from every visible neuron, before applying the sigmoid activation (see the sketch after this list).
• The "Input Layer" (visible layer) is the initial layer, with one neuron for each element of the input vector.
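To make the forward pass concrete, the following NumPy sketch samples the hidden layer of a small binary RBM; all sizes, weights, and names (n_visible, n_hidden, forward_pass) are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3                             # toy layer sizes
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))   # visible-to-hidden weights
b_h = np.zeros(n_hidden)                               # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass(v):
    # Each hidden neuron j pools multiple inputs: its turn-on
    # probability is sigmoid(b_j + sum_i v_i * W_ij).
    p_h = sigmoid(v @ W + b_h)
    # Sample binary hidden states from those probabilities.
    h = (rng.random(n_hidden) < p_h).astype(float)
    return p_h, h

v = np.array([1, 0, 1, 1, 0, 1], dtype=float)          # one binary input vector
p_h, h = forward_pass(v)
print(p_h, h)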
Architecture of DBM
• Energy function:
• E(v, h^1, h^2) = −∑_i v_i b_i − ∑_j h_j^1 b_j^1 − ∑_k h_k^2 b_k^2 − ∑_{i,j} v_i W_{ij} h_j^1 − ∑_{j,k} h_j^1 W'_{jk} h_k^2
• where v is the visible layer, h^1 and h^2 are the hidden layers, b, b^1, and b^2 are the biases, and W and W' are the weights connecting the layers.
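A direct translation of this energy function into NumPy may help; every array size and random value below is made up purely for illustration:

import numpy as np

def dbm_energy(v, h1, h2, b, b1, b2, W, W_prime):
    # E(v, h^1, h^2) = -sum_i v_i b_i - sum_j h^1_j b^1_j - sum_k h^2_k b^2_k
    #                  - sum_{ij} v_i W_ij h^1_j - sum_{jk} h^1_j W'_jk h^2_k
    return -(v @ b) - (h1 @ b1) - (h2 @ b2) - (v @ W @ h1) - (h1 @ W_prime @ h2)

rng = np.random.default_rng(1)
v  = rng.integers(0, 2, size=4).astype(float)   # visible layer (4 units)
h1 = rng.integers(0, 2, size=3).astype(float)   # first hidden layer (3 units)
h2 = rng.integers(0, 2, size=2).astype(float)   # second hidden layer (2 units)
b, b1, b2 = np.zeros(4), np.zeros(3), np.zeros(2)
W = rng.normal(size=(4, 3))                     # visible-to-h1 weights
W_prime = rng.normal(size=(3, 2))               # h1-to-h2 weights
print(dbm_energy(v, h1, h2, b, b1, b2, W, W_prime))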
• Training: DBMs are typically pre-trained greedily, layer by layer, as a stack of RBMs, then fine-tuned jointly on all layers.
• Advantages: learns hierarchical representations of the data and, as a generative model, can handle missing or ambiguous inputs.