
Using Artificial Intelligence Techniques to Emulate the Creativity of a Portrait Painter

Steve DiPaola, Simon Fraser University, Canada, sdipaola@sfu.ca
Graeme McCaig, Simon Fraser University, Canada, graeme_mccaig@sfu.ca

We present three new machine-learning-based artificial intelligence (AI) techniques, which we have added to our parameterised computational painterly rendering framework, and show their benefits for computational creativity and non-photorealistic rendering (NPR) of portraits. Traditional portrait artists use specific but open human creativity, vision, technical and perception methodologies to create a painterly portrait of a live or photographed sitter. By incorporating more open-ended creative, semantic and concept-blending techniques, these new neural-network-based AI techniques allow us to better model the creative cognitive thinking process that human painters employ. We analyse these AI-based methods for their operating principles and outputs, which, together with our parameterised NPR modules, are relevant to the fields of computational creativity research and computational painterly rendering.

Artificial intelligence. Computational creativity. Painterly rendering. Computer graphics. Cognitive science.

1. INTRODUCTION

Traditional portrait artists use specific but open human creativity, vision, technical and perception methodologies to create a painterly portrait of a live or photographed sitter. To create a portrait, a human portrait painter sets up the room and lighting, positions the sitter, and interviews the sitter to understand and capture inner (personality, …) and outer resemblance, while at the same time merging these goals with how the artist wants to convey their own painting style within the trajectory of their painting career, as well as striving for some personal and universal (cultural, political, philosophical) truth about the current world they live in. In balancing these goals (the sitter's inner/outer resemblance, the artist's goals, universal statements), an artist has a palette of choices of themes, brush style, colour plan, edge and line plan, abstraction style, and emotional narrative at their disposal to create the final painting.

Our research uses several artificial intelligence techniques, including genetic algorithms, neural networks and deep learning neural networks, in an attempt to begin to understand and emulate this creative process. For instance, our deep learning networks can learn how to balance or blend different aesthetic, conceptual and abstraction concerns at a semantic level. We review three new artificial intelligence techniques that we have added to our Painterly rendering software framework, which bring more high-level tools and insights to modelling the cognitive creative process of a portrait artist.

Figure 1: Outputs of guided Deep Dream (c.) blending inputs (a., b.), then put through our NPR system.

2. COMPUTATIONAL PAINTERLY RENDERING

Non-Photorealistic Rendering (NPR) is a computer graphics technique which creates imagery with a wide variety of expressive styles inspired by painting, drawing, technical illustration, cartoons
and mapping (Gooch and Gooch 2001). This is in contrast to typical computer graphics, which focus on photorealism. NPR already has applications in video games, animation, movies, architectural and technical illustration, and rising fields such as computational photography, art therapy and virtual reality.

Many current computer painterly rendering systems rely on computer imaging approaches (e.g. edge detection, image segmentation) that model at the physical level, such as blobs, strokes and lines. Our novel painterly rendering research approach relies more on parameterising a cognitive knowledge space of how a painter creatively thinks and paints.

Our cognitive painting system, Painterly (DiPaola 2007, 2009; DiPaola et al. 2013), which models the cognitive processes of artists, uses algorithmic, particle-system and noise modules to generate artistic colour palettes, stroking and style techniques. This paper explores three new AI "thinking about what to paint first" pre-painting systems added to our NPR framework.

Artists and scientists have different approaches to knowledge acquisition, usage and dissemination. This research work is one attempt to bridge these different fields. Our domain of inquiry is the creation and viewing of fine art portrait painting – we are interested in elucidating cognitive and perceptual mechanisms, or 'cognitive correlates', which correspond and relate to artists' techniques and conceptions regarding fine art painting, and then in modelling the human process in software.

Here we are interested in the process of fine art painting, how it is created and perceived, as well as the cognitive phenomena which underlie it. The process of fine art painting is therefore the behaviour we attempt to parse from a 'cognitive correlate' perspective. We are motivated in part by the growing recognition that 'artists are neuroscientists': that is, they have discovered valuable ways of understanding and working with human cognitive and perceptual mechanisms, achieving authorship techniques that convey desired experiences and narrative in their created art works (Cavanagh 2005, Zeki 2001). Humans like art paintings because our brains are stimulated by them more than by, say, a photograph or real life, and artists have intuited how to exploit these neural mechanisms through specific painterly knowledge. Whether artists are, or act like, a type of neuroscientist, as many cognitive scientists like to state, or more simply have a passed-down methodology and talent space in which they use their eyes, perception and mind to produce a painting in a way that, when analysed through a certain cognitive lens, can benefit both the arts and the cognitive sciences, is less the point. Hence, we explore art topics through a cognitive science perspective with the aim of enriching our understanding of both art practice (i.e. the act of fine art painting) and the underlying perceptual mechanisms.

The work we present here attempts to use strong analysis of the artistic painterly process, through a lens of new scientific understanding of human cognition, to create a cognitive-knowledge-based painterly NPR software toolkit that can have both a wider range and improved results compared to many current NPR techniques. By limiting the investigation to fine art portraits (as opposed to all painterly art forms) and to how they vary from their photographic analogues, the system can use strong knowledge (e.g. salience) of portraits and faces to inform its decisions.

3. PORTRAIT PAINTING AND SALIENCE

Fine art painting involves making subjective decisions about which aspects of the source are particularly noticeable, important, prominent or salient. Painterly salience involves emphasising details using painterly techniques. Later we will discuss how neural-network-based AI techniques such as Deep Learning (DL), which we have begun to employ, give us a more semantic toolset to work with, covering conceptual blending, salience and creative goals (e.g. style, abstraction, resemblance). For example, given the goal of creating a regal but approachable portrait of the Bishop sitting in front of them, an artist considers what elements or regions of the portrait should stand out and what methods to use, and modifies these regions using specific artistic manipulations or conceptual blends that exert a psychological impact on viewers. Cognitive researchers believe that the techniques used by artists include the following:

• "simplify, compose and leave out what's irrelevant, emphasising what's important" (DiPaola 2007);
• "direct the viewer's attention to the relevant content and to influence their perception of it" (Santella and Decarlo, 2002);
• create "abstractions of photorealistic scenes in which the salient elements are emphasized" (Vanderhaeghe 2013).

There has been much debate amongst psychologists, vision scientists, image processing experts and even NPR researchers about the true meaning of salience and how to measure it. Santella and Decarlo (2002) tracked a casual viewer's gaze through a scene in an NPR painting (with an eye tracker) to determine what part of the scene needed to be emphasised. Our view is that painterly salience (emphasising a style or details via
painterly techniques) is authored not by the viewer but by the artist, to emphasise what she believes she wants to convey. It is a goal-oriented endeavour. In a landscape scene, for example, with mountains and a stream in the background and two horses and a tree in the grassy foreground, what should be emphasised? The tree, or perhaps one or both of the horses? Zeng and Zhao have attempted to deal with this problem in NPR by creating hierarchical salient parse trees of a scene. In the above example, the mountain, stream, background, grass, tree and horse objects would be mapped by attention priority in a salient priority tree structure. Emphasis and filtering painterly techniques may then be used to complete the painting, say with one horse at the top of the priority tree (Zeng et al. 2009). Since we take a knowledge-based approach, we again limit the problem space to one specific genre with considerable knowledge data: fine art portraits. A portrait contains the beginning of a hierarchical tree structure of salience. The approach to salience taken here is to start with portraiture and then expand to other genres as techniques are developed and refined. Portraiture allows more extensive use of cognitive science research than other art forms such as landscapes (research that extends considerably beyond face-specific knowledge rules).

In this work, we attempt to collect and use both general artistic methodology and (when appropriate or needed) specific portrait knowledge (what we call face semantics, i.e. the fact that eyes are more salient than hair). Hence, our painterly NPR system includes processes to deal with face semantics, style considerations, and semantic and conceptual blending of ideas, all in scripting systems that allow known portrait rules to be exploited.
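To illustrate how such a salient priority structure and face-semantics knowledge might be organised in code, here is a minimal Python sketch (our own illustration, not the actual Painterly implementation; the SalienceNode class and the priority values are hypothetical) that builds a small salience tree for a portrait in which eyes outrank hair:

```python
from dataclasses import dataclass, field

@dataclass
class SalienceNode:
    """A semantic region of the portrait with an attention priority (higher = more salient)."""
    name: str
    priority: float
    children: list = field(default_factory=list)

    def add(self, name, priority):
        child = SalienceNode(name, priority)
        self.children.append(child)
        return child

    def ordered(self):
        """Flatten the tree into regions sorted by descending salience."""
        nodes = [self]
        for child in self.children:
            nodes.extend(child.ordered())
        return sorted(nodes, key=lambda n: n.priority, reverse=True)

# A small salient priority tree for a portrait: face semantics rank eyes above hair.
portrait = SalienceNode("portrait", 0.1)
face = portrait.add("face", 0.9)
face.add("eyes", 1.0)
face.add("mouth", 0.8)
face.add("hair", 0.4)
portrait.add("clothing", 0.3)
portrait.add("background", 0.1)

for region in portrait.ordered():
    print(f"{region.name}: emphasis {region.priority}")
```

An NPR pass could then allot finer strokes, stronger edge preservation and richer colour to the higher-priority regions such a structure returns.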
4. PAINTERLY CREATIVE PROCESS

From a process standpoint, artists investigate something of interest about a scene or sitter they have chosen to paint, and come up with a 'visual narrative' of this scene that includes how it will be expressed using the language and techniques of painting, and how the painting will deviate from the real scene. Factors that may be taken into account during this process include (1) the content of the scene and (2) their career trajectory (e.g. techniques and innovations they have experimented with, advice they have received from mentors, and so forth), mixed with (3) personal feelings and intuitions as they arise in the moment about such factors as what will invite exploration or provide closure. Much of an artist's time is spent re-assessing what is now there in front of them and re-evaluating what to do next based on that data, a process that Gabora (2010) refers to as honing through re-iterated context-driven actualisation of potential.

Some artists stick tightly to their basic formula or plans, while others deviate widely based on the source and the narrative. As they begin, they exploit a limited tonal and colour range within which they can re-scale and re-centre the relative range of all aspects of the source image and source masses (tonality, tonal balance, concepts, themes, colour, colour balance, detail, edging, shapes, …) in ways they perceive fit their style and narrative goal. They work through the painting, taking anywhere from hours to days to weeks. They frequently make mental comparisons of how different elements or regions of the painting appear relative to one another. They usually start with large, tonally limited masses (i.e. two or three values) that they tone-sample mentally. Some artists squint to help both blur out details and reduce the importance of colour. Then they shift to another region of the painting. As they continue, they increasingly focus on smaller details, i.e. smaller brush strokes and more colour choices. These details (e.g. tonal range) are not only re-scaled and re-centred from the source but are typically picked using different aesthetic rules from those used initially, such as a warm/cool remapping. These decisions fall into place depending on what commitments they have already made, or on how the initial potentiality of the yet-to-be-painted has been actualised thus far.

At some point in this progressively refined process, they cycle into a 'see, think, paint' loop: deciding what area to work on next, sampling a cognitive region (like-minded, tone-sampled) from the source, thinking about it, remapping it conceptually, and, through the craft of stroking on canvas, committing this new conception to the painting. It is this "think" area that our new AI techniques are most interested in: modelling in the head as a pre-act to the implementation stage of painting. With each cycle, some of the potentiality of the previous cycle gets actualised, and simultaneously potentialities for future iterations arise. They cycle through this process repeatedly until the semantic area they are working on (e.g. the lower face) is complete. They may then make a more detailed pass involving progressive refinement of the area. Light is the primary consideration in deciding how the painting unfolds (in the form of tone-sampled shapes), although artists also consider volume to a lesser extent (i.e. stroking up the volume of the nose) and content areas (when working in the background, a cast shadow or an eye, the artist does something different and specific).

We have attempted to model this process in Painterly, our NPR toolkit, which uses a knowledge-based approach to painterly rendering to create a wide variety of computational painting styles based on a source portrait photograph and semantic knowledge maps (DiPaola et al. 2010, DiPaola 2009, DiPaola 2007). The knowledge rules were sourced and encoded by categorising the traditional 'artistic painter process'
and linking the findings to theories from human vision, colour and perception, as well as to semantic models of the face.

Painterly has contributed to research understanding of the cognitive nature of art and vision science, mainly through empirical techniques (i.e. eye tracking studies) that allow images to be varied in systematic ways while still being judged as plausible works of art (DiPaola et al. 2010, DiPaola 2009, DiPaola 2005). However, it is still difficult for researchers and users to script the scores of parameters needed to make a strong painterly recipe. Painterly has two main sections. The first, 'Thinker', section mimics the cognitive, high-level painterly process, deciding on progressively detailed passes and the cognitive blobs (shapes) that painters work in. Next, the 'Painter' section implements the Thinker's plans in low-level variables of brush size, length and transparency per pass and per cognitive blob. Lastly, the Painter section uses a colour system which translates tonal value into final colour based on the semantic regions (eyes, clothes, etc.) of the destination painting. While our Painterly system has been used for both stages (Thinker-Painter), we have begun to update the Thinker stage, which is concerned with modelling the perceptual and creative process of human artists and painters, with new AI-based "inner" modules (the Thinker) that then output to the NPR outer or implementation modules (the Painter). The following sections of this paper mainly deal with three new AI-based "thinker" techniques that we combine with our NPR "painter" output. These new AI modules better emulate the cognitive semantics that, as we have described, a human painter develops in the artistic painterly process. Currently the modules are separate, interchangeable systems: several different AI thinker systems create intermediate output that is then further refined by our main NPR implementation systems for stroke-based painterly rendering. These new AI "thinker" modules give us a greater level of high-level semantic and concept-blending processing, to better emulate a cognitive approach to the thinker stage.
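As a rough illustration of this division of labour, here is a minimal Python sketch of a Thinker-Painter interface under the assumptions described above; the class names, the identity-transform placeholder and the per-region tone-to-colour palettes are our own illustrative inventions, not the actual Painterly code:

```python
from typing import Callable, Dict, Tuple

Colour = Tuple[int, int, int]

class ThinkerModule:
    """A pre-painting 'thinker': turns the source photo into an intermediate
    abstraction (planar abstraction, Deep Dream or Deep Style output)."""
    def __init__(self, transform: Callable):
        self.transform = transform

    def imagine(self, source):
        return self.transform(source)

class PainterModule:
    """The NPR 'painter' stage: realises the thinker's plan as brush strokes.
    Only its colour system is sketched here, mapping a tonal value to a final
    colour according to the semantic region being painted."""
    def __init__(self, region_palettes: Dict[str, Callable[[float], Colour]]):
        self.region_palettes = region_palettes

    def colour_for(self, region: str, tone: float) -> Colour:
        return self.region_palettes[region](tone)

# Illustrative palettes only: a warm mapping for skin, a cool one for background.
painter = PainterModule({
    "face": lambda t: (int(230 * t), int(190 * t), int(160 * t)),
    "background": lambda t: (int(90 * t), int(110 * t), int(140 * t)),
})

thinker = ThinkerModule(transform=lambda img: img)   # identity placeholder
intermediate = thinker.imagine("source photo stand-in")
print(painter.colour_for("face", 0.8))               # -> (184, 152, 128)
```

The design point is simply that any "thinker" can hand its intermediate image to the same stroke-and-colour "painter" stage.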

5. AI NEURAL NETWORK PAINTERLY SYSTEMS

We now look at three new systems we have added to our NPR framework. All use the AI technique called neural networks. Classified within the AI domains of machine learning and cognitive science, neural networks (NNs) are a family of techniques inspired by biological neural networks (the central nervous system in the brain). Their interconnected "neurons" are arranged in layers, with approximating functions and numeric weights that can be tuned based on experience, making neural nets adaptive to inputs and capable of learning. Two of our systems use a large, or deep, number of layers and are therefore called Deep Learning (DL) neural networks.

5.1 Planar or Cubism-based Abstraction

Figure 2: Cubist-like polygonal abstraction from severe to mild levels using a neural network AI technique.

Figure 3: From the author's photo (top left) we combine varying AI abstraction levels with many NPR techniques.

Our first AI thinker system takes on planar (e.g. cubist-style) abstraction. Many AI creativity and NPR systems strive for abstraction by simply varying parameters like stroke length to achieve a level of abstraction. This is a worthy way to create
rendered abstraction, but it leaves out cognitive abstraction, where the artist uses abstraction in deeper ways, for instance to concept-blend different thoughts or meanings. Our first attempt at this deeper "thinker" abstraction is with deep or severe planar abstraction. This neural network system uses a regressor-based image stylisation which was influenced by blends of genetic algorithms and hill climbing, but which rephrases the problem as a much speedier machine learning technique. With this neural network AI technique, the pixels of a source image are treated as a learning problem: the system takes the (x, y) position of each pixel on a grid and learns to predict the colour at that point by regression to (r, g, b). The image information is encoded in the weights of the network. In Figure 2 we see a series of planar abstractions of a photo source produced by our system.
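As a rough sketch of this coordinate-to-colour regression (our own scikit-learn illustration, not the authors' implementation; the filename, the hidden-layer sizes and the idea of controlling abstraction via network capacity are assumptions on our part):

```python
import numpy as np
from PIL import Image
from sklearn.neural_network import MLPRegressor

# Load a source photo ("portrait.jpg" is a placeholder) and build the training
# set: input is each pixel's (x, y) position, target is its (r, g, b) colour.
img = np.asarray(Image.open("portrait.jpg").convert("RGB"), dtype=np.float32) / 255.0
h, w, _ = img.shape
ys, xs = np.mgrid[0:h, 0:w]
coords = np.column_stack([xs.ravel() / w, ys.ravel() / h])   # inputs in [0, 1]
colours = img.reshape(-1, 3)                                  # targets in [0, 1]

# A tiny network can only carve the image into a few simple colour regions
# (severe abstraction); a larger one follows the photo more closely (mild).
net = MLPRegressor(hidden_layer_sizes=(8, 8), activation="relu",
                   max_iter=200, random_state=0)
net.fit(coords, colours)

# The image now lives in the network weights; re-render it by querying every
# pixel coordinate and clipping the predicted colours back to a valid range.
pred = np.clip(net.predict(coords), 0.0, 1.0).reshape(h, w, 3)
Image.fromarray((pred * 255).astype(np.uint8)).save("planar_abstraction.png")
```

In practice one would likely downsample the photo before fitting; the point is only that the "image" ends up stored in the regressor's weights and can be re-rendered at any level of simplification.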
The Python-based system allows us to control the level from very severe abstraction (i.e. 3 planes) all the way through mild abstraction with hundreds of planes, as in Figure 2. It should be noted that this "thinking" AI phase first outputs less painterly, low-resolution simple planes with one colour per plane, simulating not a finished painting at this point but a thinking model of abstracted planes in the artist's mind. To make the finished thinker-painter art pieces in Figures 2 and 3, we then take this inner abstraction output and send it through our Painterly NPR system, where artistic distortion, human-level variation and brush stroking complete the "painting on canvas" look of the final work. This is most evident in Figure 3, where several different levels of first-stage AI planar abstraction are completed with several different NPR recipes of painterly rendering. Our research lab is now performing user studies in which we test users' reactions to a range of abstractions from this AI output. Our early test results show that users both prefer the abstractions to the original photographs they were created from, when considered as works of art, and are able to identify correct aesthetic descriptors (e.g. cheerful, trust, awe, scary, …) of the original source photograph even in severely abstracted versions. This preliminary study data suggests that this AI-based abstraction technique has some qualities of human abstract art: the output holds the qualities inherent to the original sitter, but within a simpler aesthetic form.

5.2 Deep Dream and Deep Style Techniques

Deep Dream (Mordvintsev et al. 2015) and Neural Artistic Style (Gatys et al. 2015) are two techniques for modifying images through a process of analysis and search involving Deep Convolutional Neural Networks (DCNNs) (Krizhevsky 2012, LeCun 1998). DCNNs fall within the rapidly growing field of "Deep Learning" research (Bengio 2013). We adopt the shortened name "Deep Style" for ease of discussion of the technique in Gatys et al. (2015), abbreviating Deep Dream and Deep Style as DD and DS respectively.

DCNNs are typically trained on large datasets of images in order to build up a multi-level, feature-based re-encoding system, in which low-level features represent local features such as lines and curves, while high-level features represent more abstract, composite visual "concepts" such as "spoked wheel pattern" or "animal hind-leg shape".

This method of representing images in a multi-layer network with increasing abstraction is thought to bear resemblance to the way the human brain processes visual perception (DiCarlo 2012). This structure facilitates performance in discrimination/classification tasks, such as recognising objects in an image as belonging to a certain learned category; however, as found with DD and DS, it is also possible to use DCNNs generatively, creating images which emphasise certain features or feature-layers of an image, or which combine the features of one image with features from a second image to create an output image sharing qualities of both.

These generative abilities resonate with the idea from neuroaesthetics (Zeki 2001) that a possible role and motivation of art is for audiences and artists to reveal or stimulate the neural mechanisms of perception: we can view the different low- and high-level feature encodings within a DCNN as different perspectives on the essence of an image as analysed within a brain. In a different paper (McCaig et al. 2016), we have examined how the combination of image features amounts to a computational model of visual concept blending and relates to Computational Creativity as a field.

We have implemented both DD (github.com/google/deepdream) and DS (github.com/fzliu/style-transfer), using the Caffe deep learning framework (Jia 2014), as modules within our AI-based painting software toolset, currently using them as a pre-processing stage which simulates an artist's imagination and perception, transforming an image before it is sent to the second, artistic stroke-placement phase.

We now compare and demonstrate the operation of DD and DS within our system. Deep Dream (Mordvintsev et al. 2015) has two basic modes of operation. The mode we might call "free hallucination" begins with a source image and uses back-propagation and gradient ascent to gradually transform the image pixels in order to emphasise the most strongly activated features at a certain user-selected network layer. This results in the emphasis of pre-existing shapes and patterns, as well as the appearance of hallucinated patterns in which the network gravitates towards "seeing"
patterns it has learned to recognise. Figure 4 presents results from our DD implementation running in free hallucination mode.

Figure 4: Deep Dream and our NPR system creating different techniques depending on the layer.
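For readers who want to experiment, the core free-hallucination update can be sketched in a few lines. This is our own PyTorch illustration (the paper's implementation uses the Caffe-based github.com/google/deepdream code); the layer index, step size, iteration count and filename are arbitrary assumptions:

```python
import torch
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Newer torchvision versions use models.vgg16(weights=...) instead of pretrained=True.
cnn = models.vgg16(pretrained=True).features.to(device).eval()

# Load the source photo ("portrait.jpg" is a placeholder) as a differentiable tensor.
img = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])(
    Image.open("portrait.jpg")).unsqueeze(0).to(device)
img.requires_grad_(True)

LAYER, STEP, ITERS = 20, 0.02, 40   # which layer to "dream" at, step size, iterations

for _ in range(ITERS):
    x = img
    for i, layer in enumerate(cnn):
        x = layer(x)
        if i == LAYER:
            break
    # Gradient ascent: change the pixels to amplify this layer's activations.
    (x ** 2).mean().backward()
    with torch.no_grad():
        img += STEP * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)

transforms.ToPILImage()(img.detach().squeeze(0).cpu()).save("deepdream_free.png")
```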

Deep Dream also has a guide-image mode, which again uses back-propagation and gradient ascent, this time to analyse the strong features of one "guide" image and emphasise the best-matching features of a second, source image by transforming the pixels of that second image. In Figure 5 we show how the algorithm transfers visual attributes from one image to another depending on the network layer used for feature comparison. In Figure 6 we show further examples of how different guide images can affect different visual attributes.

Figure 5: Deep Dream where patterning from the source butterfly (a) creates output based on a lower (b) or higher (c) network level. Source photo by Ano Lobb.
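The guide-image objective can likewise be sketched (again our own PyTorch illustration, not the authors' Caffe code): each spatial feature vector of the source is pulled towards the guide feature it best matches under a dot product.

```python
import torch

def guided_objective(source_feats: torch.Tensor, guide_feats: torch.Tensor) -> torch.Tensor:
    """source_feats, guide_feats: (C, H, W) activations taken from the same DCNN
    layer for the source and guide images. Returns a scalar to maximise by
    gradient ascent on the source image's pixels."""
    c = source_feats.shape[0]
    src = source_feats.reshape(c, -1)           # (C, Hs*Ws) source feature vectors
    gde = guide_feats.reshape(c, -1).detach()   # (C, Hg*Wg) guide features, held fixed
    sims = src.t() @ gde                        # dot products between every pair
    best = sims.argmax(dim=1)                   # best-matching guide feature per location
    # Maximising this pushes each source location towards its matched guide feature.
    return (src.t() * gde.t()[best]).sum()
```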

Deep Style (Gatys et al. 2015) is somewhat similar in concept to Deep Dream's guide-image mode. Notably, however, it initialises its output image from random noise and then optimises the image against multiple network layers simultaneously. The output image is optimised to closely match the features of a "content" input image at a certain higher layer or layers (capturing the semantic object identity and placement from that image). It is simultaneously optimised to match a correlation-based metric on features from multiple layers of a secondary "style" image (capturing the colours and textures of that image). Due to this separation of content and style, and to the multi-layer technique, which tends to closely capture the look of specific style-image fragments, DS has been found to be quite successful at painting style transfer, i.e. applying the colourist and brush-stroking style of a painting to a new source image to create, for example, an "artificial Rembrandt forgery". Figure 7 shows two examples of DS used to transplant painting style.
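The two competing objectives can be written compactly. The sketch below is our own PyTorch-style rendering of the content loss and the correlation-based (Gram-matrix) style loss described by Gatys et al. (2015); the layer names and weights are placeholders:

```python
import torch
import torch.nn.functional as F

def gram(feats: torch.Tensor) -> torch.Tensor:
    """Correlation-based style statistic: the channel-by-channel Gram matrix."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)

def deep_style_loss(out_feats, content_feats, style_feats,
                    content_layer="conv4_2",
                    style_layers=("conv1_1", "conv2_1", "conv3_1"),
                    style_weight=1e3):
    """Each *_feats argument is a dict mapping layer names to (C, H, W) activations
    for the image being optimised, the content image and the style image."""
    # Content term: match higher-layer features of the content image.
    loss = F.mse_loss(out_feats[content_layer], content_feats[content_layer])
    # Style term: match Gram matrices across several lower layers of the style image.
    for layer in style_layers:
        loss = loss + style_weight * F.mse_loss(gram(out_feats[layer]),
                                                gram(style_feats[layer]))
    return loss

# The output image starts as random noise and is updated by gradient descent on
# this loss until it carries one image's content in the other image's style.
```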

We plan to further explore the possibilities for dynamic interaction between the perception/imagination phase and the stroke-painting phase. In the current system we apply the DD/DS module to the source photo first, followed by our NPR painting phase. Therefore all figures except 5 and 6 are completed with treatment by the Painterly cognitively-inspired painting module.

Our Painterly module, an extension of our cognitive painting system Painterly (DiPaola 2009), which models the cognitive processes of artists based on years of research in this area, uses algorithmic, particle-system and noise modules to generate artistic colour palettes, stroking and style techniques. It is in the NPR subclass of stroke-based rendering and is used as the final part of the process, realising the internal DCNN models with stroke-based output informed by historic art making. Specifically, in this example the aesthetic advantages include reducing some noisy artefacting of the generated DCNN output via cohesive stroke-based clustering, as well as a better-distributed colour space.

6. CONCLUSION

We have demonstrated how machine learning techniques such as neural networks and deep learning can play an important part in bringing a higher level of creative semantics to painterly rendering and computational creativity. We have begun to use them to blend high-level visual, semantic and creative concepts based on their wider context and associations, which contribute to new forms of cognitively relevant models of artist
creativity and imagination. We have examined three new AI subsystems, analysing their operating principles and outputs, which, together with our parameterised NPR modules, are relevant to the field of Computational Creativity research and computational painterly rendering.

Figure 6: Deep Dream techniques using a) geometric and b) bee guided images and the results. Green photo by Goran Konjevod.

Figure 7: Deep Style using Rembrandt (a., b.) and Freud paintings (c., d.) as style guides.

7. ACKNOWLEDGEMENTS

We would like to thank Liane Gabora, Sara Salevati, Daniel McVeigh and Jon Waldie for their thoughts and contributions, as well as the NSERC and SSHRC funding agencies.

8. REFERENCES

Bengio, Y. (2009) Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), pp. 1–127.

Boden, M. A. (2004) The Creative Mind: Myths and Mechanisms. Psychology Press.

Cavanagh, P. (2005) The artist as neuroscientist. Nature, 434(7031), pp. 301–307, 17 March.

DiCarlo, J., Zoccolan, D., and Rust, N. (2012) How does the brain solve visual object recognition? Neuron, 73(3), pp. 415–434.

DiPaola, S. (2007) A Knowledge Based Approach to Modelling Portrait Painting Methodology. Electronic Visualisation and the Arts (EVA), London, UK.

DiPaola, S. (2009) Exploring a Parameterized Portrait Painting Space. International Journal of Art and Technology, 2(1–2), pp. 82–93.

DiPaola, S. and Salevati, S. (2014) Using a Creative Evolutionary System for Experiencing the Art. Electronic Imaging & the Visual Arts, Italy, pp. 88–93.

DiPaola, S., McCaig, R., Carson, K., and Salevati, S. (2013) Adaptation of an Autonomous Creative Evolutionary System for Real-World Design Application Based on Creative Cognition. Computational Creativity (ICCC), pp. 40–47.

Gabora, L. (2010) Revenge of the "neurds": Characterizing creative thought in terms of the structure and dynamics of memory. Creativity Research Journal, 22(1), pp. 1–13.

Gatys, L. A., Ecker, A. S., and Bethge, M. (2015) A neural algorithm of artistic style. arXiv:1508.06576.

Gooch, B. and Gooch, A. (2001) Non-Photorealistic Rendering. AK Peters, Ltd.

Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., and Darrell, T. (2014) Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pp. 675–678. ACM.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012) ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, Vol. 1, p. 4.
Le, Q. V., Ranzato, M., Monga, R., Devin, M., Chen, K., Corrado, G., and Ng, A. Y. (2013) Building high-level features using large scale unsupervised learning. In Acoustics, Speech and Signal Processing (ICASSP), pp. 8595–8598.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), pp. 2278–2324.

McCaig, G., DiPaola, S., and Gabora, L. (2016) Deep Convolutional Networks as Models of Generalization and Blending within Visual Creativity. In submission.

Mordvintsev, A., et al. (2015) Inceptionism: Going deeper into neural networks. Google Research blog. http://googleresearch.blogspot.ca/2015/06/inceptionism-going-deeper-into-neural.html

Salakhutdinov, R. and Hinton, G. E. (2009) Deep Boltzmann machines. In Proc. Artificial Intelligence and Statistics, Vol. 5, pp. 448–455.

Santella, A. and DeCarlo, D. (2002) Abstracted painterly renderings using eye-tracking data. In NPAR '02: Proceedings of Non-Photorealistic Animation and Rendering, p. 75. ACM.

Simonyan, K. and Zisserman, A. (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., and Rabinovich, A. (2015) Going deeper with convolutions. Proceedings of Computer Vision and Pattern Recognition, pp. 1–9.

Vanderhaeghe, D. and Collomosse, J. (2013) Stroke Based Painterly Rendering. In Image and Video-Based Artistic Stylisation, pp. 3–21. Springer.

Zeki, S. (2001) Essays on Science and Society: Artistic Creativity and the Brain. Science, 293(5527), pp. 51–52.

Zeng, K., Zhao, M., Xiong, C., and Zhu, S. C. (2009) From image parsing to painterly rendering. ACM Trans. Graph., 29(1), 2:1–2:11.
