This document discusses using a cascade correlation neural network (CCNN) to capture the drawing style of a caricaturist in order to automatically generate caricatures. It proposes extracting facial components from original images, mean faces, and caricatures to create training data. The CCNN is trained using this data to learn the exaggerations made by the caricaturist. Experiments show the CCNN can accurately predict nonlinear exaggerations to components. The approach aims to address limitations of existing caricature generation systems by learning an individual artist's unique style through training on their deformations of facial objects.
1. Use of Neural Networks in Automatic Caricature
Generation:
An Approach based on Drawing Style Capture
By
Rupesh Shet, K.H. Lai
Dr. Eran Edirisinghe
2. Agenda
1. Introduction to ACCR
2. Cascade Correlation Neural Network
3. Capturing the Drawing Style of a Caricaturist
4. Conclusion
5. Questions
3. Automated Caricature Creation
Observation:
• An explicit understanding of the mapping between input and output
is not necessary.
• The method must be able to capture non-linear relationships.
To achieve this, we propose the use of a neural network.
4. Formulization of Idea
• Exaggerating the Difference From the Mean (EDFM) is widely
accepted among caricaturists to be the driving factor behind
caricature generation.
[Diagram: the input image is compared against the mean face to identify the
distinctive features (∆S); the caricaturist's rules ("rules in the brain")
determine what changes (∆S') appear in the corresponding caricature
(cartoon).]
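The EDFM rule above can be sketched in a few lines. This is a minimal illustration, not the system described in these slides: the linear scaling factor k and the coordinate values are illustrative assumptions, whereas the CCNN is trained precisely because the artist's real exaggerations are non-linear.

```python
# Exaggerating the Difference From the Mean (EDFM): a minimal sketch.
# A classic linear caricature moves each feature point away from the
# mean face; the CCNN in this work learns non-linear versions of this rule.

def edfm_point(original, mean, k=1.5):
    """Move a feature coordinate away from the mean by factor k (k is a
    hypothetical exaggeration factor, not taken from the slides)."""
    return mean + k * (original - mean)

# Hypothetical x-coordinate of a feature point (values illustrative only):
original_x = 62.0   # subject's face
mean_x = 56.8       # mean face
caricature_x = edfm_point(original_x, mean_x)
print(round(caricature_x, 1))  # 64.6 -- the offset from the mean (+5.2) grows to +7.8
```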
6. Cascade Correlation Neural Network
Artificial neural networks are combinations of artificial neurons.
After testing and analysing various neural networks, we found that the
CCNN is the best suited to the application domain under consideration.
The CCNN is a generative, feed-forward, supervised learning
architecture.
An artificial neural network (ANN) is composed of neurons, connections
between neurons, and layers.
Connection weights determine an organizational topology for a
network and allow neurons to send activation to each other.
7. NN Terminology
There are three layers:
• Input layer: the problem being presented to the network.
• Output layer: the network's response to the input problem.
• Hidden layers: perform essential intermediate computations.
The input function is a linear component that computes the weighted
sum of a unit's input values.
The activation function is a non-linear component that transforms the
weighted sum into the final output value.
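A single artificial neuron as described above can be sketched as follows; the weight, bias and input values are illustrative assumptions, and tanh is chosen because the tan-sigmoid transfer function appears later in the parameter table.

```python
import math

# One artificial neuron: a linear input function (weighted sum of the
# unit's inputs) followed by a non-linear activation function.

def weighted_sum(inputs, weights, bias):
    """Linear input function: bias plus the weighted sum of the inputs."""
    return bias + sum(x * w for x, w in zip(inputs, weights))

def tansig(s):
    """Tan-sigmoid activation, the hidden-unit transfer function used later."""
    return math.tanh(s)

# Illustrative values (not from the slides):
out = tansig(weighted_sum([0.5, -1.0], [0.8, 0.3], bias=0.1))
print(out)
```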
8. CCNN Algorithm
Cascade-Correlation (CC) combines two ideas:
The first is the cascade architecture, in which hidden neurons are
added only one at a time and do not change after they have
been added.
The second is the learning algorithm, which creates and installs
the new hidden neurons. For each new hidden neuron, the
algorithm tries to maximize the magnitude of the correlation
between the new neuron's output and the residual error signal of
the network.
9. CCNN Algorithm
The algorithm is realized in the following way:
1. CC starts with a minimal network consisting only of an input
and an output layer, with no hidden neurons. The two layers are
fully connected with adjustable weights, and the bias input is
permanently set to +1.
2. Train all the connections ending at an output neuron with an
ordinary learning algorithm until the error of the network no
longer decreases.
11. CCNN Algorithm
3. Generate the so-called candidate neurons. Every candidate neuron is
connected to all input neurons and to all existing hidden neurons.
There are no weights between the pool of candidate neurons and the
output neurons.
[Diagram: network with input units I-V, a permanent +1 bias input, one
newly added hidden unit (Hidden Unit 1), and the output unit.]
12. CCNN Algorithm
4. Try to maximize the correlation between the activation of each
candidate neuron and the residual error of the net by training
all the links leading to the candidate neuron. Learning takes
place with an ordinary learning algorithm. Training is
stopped when the correlation score no longer improves.
5. Choose the candidate neuron with the maximum correlation,
freeze its incoming weights, and add it to the network. To
change the candidate neuron into a hidden neuron, generate
links between the selected neuron and all the output neurons.
Since the weights leading to the new hidden neuron are
frozen, a new permanent feature detector is obtained.
6. Loop back to step 2.
7. The algorithm is repeated until the overall error of the net falls
below a given value.
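The steps above can be sketched as a toy cascade-correlation loop. This is a simplified illustration, not the Fahlman-Lebiere implementation: least squares stands in for the "ordinary learning algorithm" of step 2, and the candidate pool is drawn at random and screened by correlation rather than trained by gradient ascent as step 4 prescribes. The toy target function is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative): learn y = x1 * x2, a non-linear map.
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] * X[:, 1]

def with_bias(A):
    """Append the permanent +1 bias input to every pattern (step 1)."""
    return np.hstack([A, np.ones((A.shape[0], 1))])

def train_output(H, y):
    """Step 2: train the output-layer weights (least squares stands in
    for an ordinary learning algorithm)."""
    w, *_ = np.linalg.lstsq(with_bias(H), y, rcond=None)
    return w

def residual(H, w, y):
    return y - with_bias(H) @ w

H = X.copy()                      # features visible to the output layer
w = train_output(H, y)
for _ in range(8):                # steps 6-7: loop until the error is small
    err = residual(H, w, y)
    if np.mean(err ** 2) < 1e-4:
        break
    # Steps 3-5: from a pool of candidates, keep the one whose tanh
    # activation correlates most with the residual error, then freeze
    # its incoming weights by installing its activation as a new column.
    best_corr, best_act = -1.0, None
    for _ in range(20):
        v = rng.normal(size=H.shape[1] + 1)       # random candidate weights
        act = np.tanh(with_bias(H) @ v)
        corr = abs(np.corrcoef(act, err)[0, 1])
        if corr > best_corr:
            best_corr, best_act = corr, act
    H = np.hstack([H, best_act[:, None]])         # new frozen hidden unit
    w = train_output(H, y)                        # retrain output weights

print("final MSE:", np.mean(residual(H, w, y) ** 2))
```

Note how each hidden unit, once added, never changes: only its output column is carried forward, which is what makes it a "permanent feature detector".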
14. Advantages of CCNN
In addition, the CCNN has several other advantages:
• It learns very quickly and is at least ten times faster than
traditional back-propagation algorithms.
• The network determines its own size and topology and retains the
structure it has built.
• It is useful for incremental learning, in which new information is
added to an already trained network.
16. The Proposed Methodology
[Diagram: the original facial image and the caricature image are passed
through a Mean Face Generator and a Facial Component Extractor/Separator,
yielding the original, mean and caricature components. The original and
mean components form input 1 and input 2 of the Cascaded Correlation
Neural Network (CCNN), and the caricature component forms the training
output. A new facial component fed to the trained CCNN yields the
automatically generated caricature component.]
17. Step 1: Generating Mean Face
Generating Mean Face:
• For the purpose of our present research, which is focused only on a
proof of concept, the mean face (and thus the facial components) was
hand drawn for experimental use and analysis. However, in a real
system one could use one of the many excellent mean face generator
programs available on the World Wide Web.
18. Step 2: Extraction
Facial Component Extraction/Separation:
• The goal is to extract/separate significant facial components, such as
ears, eyes, nose and mouth, from the original, mean and caricature
facial images.
• Many such algorithms and commercial software packages exist that can
identify facial components from images/sketches.
[Figure: a facial sketch decomposed into face outline, eyes, nose and lip
components.]
19. Step 3: Creating Training Data Sets
Creating Data Sets for Training the Neural Network:
• The original, mean and caricature images of the component under
consideration are overlapped.
• Data points are then sampled along cross-sectional lines centred at a
common point and placed at equal angular separations.
[Figure: the overlapped original, mean and caricature images of a
component, with node directions 1-8 defined by the cross-sectional lines;
coordinates are measured in mm.]
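The equal angular separation of the sampling directions can be sketched as follows. This only computes the node directions; locating the actual data points would additionally require intersecting each direction with the component outline, which is not shown here.

```python
import math

# Four cross-sectional lines through a common centre give eight sampling
# directions (nodes 1-8 in the figure) at equal 45-degree separations.
NUM_LINES = 4

def node_directions(num_lines):
    """Unit vectors for the 2 * num_lines node directions."""
    step = math.pi / num_lines          # angular separation between lines
    return [(math.cos(i * step), math.sin(i * step))
            for i in range(2 * num_lines)]

dirs = node_directions(NUM_LINES)
print(len(dirs))  # 8
```

Increasing NUM_LINES gives more sampling directions and hence a more accurately captured component shape, as the next slide notes.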
20. Step 4: Tabulating Data Sets
Tabulating Data Sets:
• The higher the number of cross-sectional lines used, the more
accurately the captured shape is represented.
• However, for ease of experimentation we have used only four
cross-sectional lines, which results in eight data sets.
      Original  Mean   Caricature          Original  Mean   Caricature
X1    25        37     13             X5    86        75     100
Y1    99        99     99             Y5    99        99     99
X2    47        53     37             X6    62        60     68
Y2    108       102    118            Y6    93        95     87
X3    56.8      56.8   56.8           X7    56.8      56.8   56.8
Y3    109       102    125            Y7    92        95     86
X4    66        59     76             X8    50        52     45
Y4    109       102    119            Y8    93        95     87
(coordinates in mm)
21. Step 5,6: NN Training & Setting NN
Neural Network Training:
• We consider the data points obtained from the caricature image
above to be the output training dataset of the neural network.
• The data sets obtained from the original and mean images formulate
the input training dataset of the neural network.
[This follows the widely accepted strategy used by the human
brain to analyse a given facial image in comparison to a known
mean facial image. ]
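The pairing of inputs and outputs described above can be sketched directly from the tabulated data. The restriction to X values only is a simplification for brevity; the flat-list layout of the pattern is an assumption, not the authors' data format.

```python
# Assembling one training pattern from the tabulated data sets (Step 4).
# Inputs come from the original and mean images; the target comes from
# the caricature image, so the network learns the artist's exaggeration.
# Only the X coordinates (mm) from the table are used here for brevity.

original   = [25, 47, 56.8, 66, 86, 62, 56.8, 50]
mean       = [37, 53, 56.8, 59, 75, 60, 56.8, 52]
caricature = [13, 37, 56.8, 76, 100, 68, 56.8, 45]   # desired output

x_train = original + mean   # one input pattern: original and mean coordinates
y_train = caricature        # one output pattern: eight values, matching the
                            # eight output neurons in the parameter table
print(len(x_train), len(y_train))  # 16 8
```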
Setting up the Neural Network Parameters:
• We propose the use of the following training parameters for a
simple, fast and efficient training process.
22. Step 7: Testing
Testing
• Once training has been successfully concluded as described
above, the relevant facial component of a new original image
is sampled and fed as input to the trained neural network
along with the matching data from the corresponding mean
component.
Parameter                          Choice
Neural Network Name                Cascade Correlation
Training Function Name             Levenberg-Marquardt
Performance Validation Function    Mean squared error
Number of Layers                   2
Hidden Layer Transfer Function     Tan-sigmoid, with one neuron at the start
Output Layer Transfer Function     Pure-linear, with eight neurons
24. Experiment : 1
This experiment is designed to investigate whether the CCNN is capable
of predicting orientation and direction.
25. Experiment: 2
This experiment is designed to prove that the CCNN can accurately predict
exaggeration, in addition to the qualities tested under Experiment 1.
26. Experiment: 3 (Training)
[Figure: training shapes 1, 2, 3, 5, 6 and 7.]
In this experiment we test the CCNN on a more complicated shape depicting
a mouth (enclosing the lower and upper lips).
27. Experiment: 3 (Results)
[Figure: results 1, 6 and 7.]
The results demonstrate that the successful training of the CCNN has
resulted in its ability to accurately predict exaggeration of a non-linear
nature in all directions.
Note: An increase in the amount of training data would result in an
increase in prediction accuracy for a new set of test data.
28. Conclusion
• In this research we have identified an important shortcoming of
existing automatic caricature generation systems: their inability
to identify and act upon the unique drawing style of a given artist.
• We have proposed a CCNN-based approach to identifying the drawing
style of an artist by training the neural network on the unique
non-linear deformations made by the artist when producing caricatures
of individual facial objects.
• The trained neural network has subsequently been used
successfully to generate caricatures of facial components
automatically.
29. Conclusion
• The above research is part of a more advanced research
project that is looking at fully automatic, realistic caricature
generation of complete facial figures. One major challenge faced
by this project is the non-linearity and unpredictability of the
deformations introduced when the same artist exaggerates
different objects within the same facial figure.
We are currently extending the work of this paper in combination
with artificial intelligence technology to find an effective solution
to this problem.
30. References
[1] S. E. Brennan. "Caricature Generator: The Dynamic Exaggeration of
Faces by Computer", Leonardo, Vol. 18, No. 3, pp. 170-178, 1985.
[2] Z. Mo, J.P. Lewis, U. Neumann. "Improved Automatic Caricature by
Feature Normalization and Exaggeration", SIGGRAPH 2003 Sketches and
Applications, San Diego, CA: ACM SIGGRAPH, July 2003.
[3] P.J. Benson, D.I. Perrett. "Synthesising Continuous-tone Caricatures",
Image & Vision Computing, Vol. 9, pp. 123-129, 1991.
[4] H. Koshimizu, M. Tominaga, T. Fujiwara, K. Murakami. "On Kansei Facial
Caricaturing System PICASSO", Proceedings of the IEEE International
Conference on Systems, Man, and Cybernetics, pp. 294-299, 1999.
[5] G. Rhodes, T. Tremewan. "Averageness, Exaggeration and Facial
Attractiveness", Psychological Science, Vol. 7, pp. 105-110, 1996.
[6] J.H. Langlois, L.A. Roggman, L. Mussleman. "What Is Average and What
Is Not Average About Attractive Faces", Psychological Science, Vol. 5,
pp. 214-220, 1994.
31. References
[7] L. Redman. "How to Draw Caricatures", McGraw-Hill Publishers, 1984.
[8] "Neural Network", http://library.thinkquest.org/C007395/tqweb
/index.html (accessed 13th Oct 2004).
[9] The MathWorks Inc. Neural Network Toolbox User's Guide, Version 4,
MATLAB.
[10] http://www.cs.cornell.edu/boom/2004sp/ProjectArch/AppoNeuralNet
/BNL/NeuralNetworkCascadeCorrelation.htm (accessed 13th Oct 2004).
[11] S. E. Fahlman, C. Lebiere. "The Cascade-Correlation Learning
Architecture", Technical Report CMU-CS-90-100, School of
Computer Science, Carnegie Mellon University, 1990.
[12] A. Carling. "Introducing Neural Networks", Wilmslow, UK:
Sigma Press, 1992.
[13] L. Fausett. "Fundamentals of Neural Networks", New York:
Prentice Hall, 1994.