4.3 Diffractive Analysis
In the following diffractive analysis, we identify and unpack the shared and distinct experiences across participants.
4.3.1 Creative Agency.
All participants in the study expressed a preference for their personalised model over the generic Stable Diffusion v1.4 model. In this particular study, personalisation of the model to a participant’s cultivated style is intended to place the creative collaboration in the context of their existing artistic practice. Participants (Alex, Georgia, Andre) credited the model with the capability of providing an alternative interpretation of their visual aesthetic, and so naturally assessed the model output within this frame of reference.
Alex: “It’s like this very, like sensory sensory experience where you really feel like it’s something you recognise, but then it’s mixed with something you don’t recognise and the experience is really beautiful”
A number of participants described their enjoyment of the unexpected quality of the generated images. As discussed in Section
2.1, the ascribed ‘creativity’ of AI models emerges precisely from this handing over of control to the machine; the models themselves appear to have a creative agency in the sense that they offer imagery beyond what is directly asked of them.
The discourse around machine agency has traditionally placed AI on a spectrum from mere tool to independent agent [16]. On one hand, having the AI model generate entirely irrelevant output is not conducive to a collaborative environment. Conversely, if the AI model merely replicates the training data, yielding imagery that appears overly familiar or repetitive, it, too, renders the collaboration fruitless. In the space of human-AI collaboration, there is a need to strike a balance: to have the AI take clear direction, and at the same time offer something novel.
Identifying the optimal balance of agencies was a challenge for many participants. This was discussed by way of the CFG parameter⁵, which controls how much influence a given prompt has over the output image.
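For readers less familiar with this control, the sketch below illustrates how such a guidance scale is typically exposed. It assumes the Hugging Face diffusers implementation of Stable Diffusion v1.4 rather than the interface deployed in this study, and the prompt and output paths are purely illustrative.

    # Minimal sketch (not the study's interface): sweeping the CFG scale in diffusers.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a metal flower, ink drawing"  # illustrative prompt only

    # Lower guidance_scale lets the model 'take more of the lead';
    # higher values follow the prompt more literally.
    for cfg in (3.0, 7.5, 12.0):
        image = pipe(prompt, guidance_scale=cfg).images[0]
        image.save(f"flower_cfg_{cfg}.png")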
Georgia: “I remember reading the scale, and it was always the ones that I liked the best were towards following my prompt religiously, but allowing it a little bit of leeway, like it was never fully up the scale. Otherwise I think it became a bit too basic, I suppose. And it didn’t have those interesting qualities that the AI would like bring to the table”
Georgia touched upon the value of AI models in their ability to offer a unique artistic perspective; to treat the model entirely as a tool is to undermine its creative agency, and to compromise its ability to offer creative inspiration:
Georgia: “I do want the AI to take a little bit of the lead because it is a bit more interesting. Otherwise, it’s like, what’s the point? I’ll just do the work myself.”
On the other hand, Kyle expressed a preference for the AI to generate exactly what they had envisioned in their head:
Kyle: “it was also just because I maybe had a preconceived idea on what I was looking for the machines to produce almost, you know what I mean? So it’s like if I’m putting in, you know, a metal flower and it’s not metal, it’s a bit like, this isn’t quite the idea generation I’m after [...] I just wasn’t getting anything that was sort of resembling what I was looking for.”
Kyle generated 20 ‘metal flower’ images that they were not satisfied with. After increasing the image resolution and honing the prompt, Kyle was finally able to reach output closer to their personal style (Figure 4). Nevertheless, Kyle describes this process as tedious and unsatisfying (“I think it’s definitely not quite as engaging as it would be if I’m just trying to sketch ideas out on an iPad or something like that, you know”), with the resulting images falling short of their standard of quality (“They’re still not quite there for me”).
Both participants compared working with the system to their existing workflow, developed as the basis of their creative practice. Interestingly, they come to the same question from opposite sides: if the AI system is simply mimicking one’s work, then what value does engaging with the model add to one’s practice? In contrast, if the AI system doesn’t generate precisely what the participant is imagining, then, again, it is perhaps easier not to use the model at all.
We highlight the discordance between these two participants’ experiences in order to stress the following point: there is no ideal balance of agencies between human and AI; rather, this is entirely context dependent. The perception of machine agency (as too dominating, or as too muted) depends upon the particular needs of the human participant. Georgia described interacting with the AI model as a way to gather novel visual ideas: “I think I would use it early on in the process of making a work, or if I was sketching around [...] definitely a tool for early artistic process and experimenting”. Kyle’s primary interest in the model, by contrast, was to see if it could produce convincing variants of his work: “the thought of it maybe being able to sort of recreate my own work, I guess is definitely something that I find pretty interesting”. The source of this difference in use case can be attributed to each participant’s professional context. Kyle is a working artist who expressed an interest in whether automation may increase his productive, and hence economic, output. Georgia, on the other hand, presented no direct economic incentive, but rather an interest in how AI could influence or inspire their practice.
Georgia approached the system as an idea generator; a sort of mirror reflecting back their personal aesthetic in a novel way, with the goal of experimenting and pushing their style. For Kyle, on the other hand, one expectation of the model was to reproduce their work convincingly. As such, its performance was measured against an entirely different set of standards. The distinction between AI as an ideation tool and AI as a production tool re-emerges several times throughout the following analysis. With respect to the above discussion, however, we see that AI for ideation necessitates a contribution of machine agency, whereas AI for content production requires a greater degree of human control.
4.3.2 Ideation.
A majority of participants enjoyed engaging with the model for the purposes of ideation. This stage generally took place at the beginning of their normal workflow, as a catalyst to creative thinking. As discussed above, Georgia primarily used the model to generate ideas before redrawing elements (Figure
5) of the model output:
“If I just save these on my computer, I’ll probably never look at them again. But if I recreate them by hand, they’ll be able to stick in my brain a little more”.
For Alex, engagement with the model took place primarily in the stages of design conceptualisation. In this case, the value of the model had less to do with whether the output was perceived to be ’good’ or ’bad’ than with its role as a tool to encourage divergence in one’s visual style:
Alex: “I feel like sometimes you need to kill your baby to like, you know, get to other places. And like, putting into AI is kind of like a really big version of just killing a baby. It’s just like, take my baby and just spit out heaps of stuff. And maybe I’ll like it, or maybe I’ll hate it, and there’s some things that it makes that I hate. And like, that’s cool. I make things I hate as well.”
Divergent thinking has been explored extensively in the literature on human creativity and creative practice [60, 61]. Neuroscientific studies corroborate the role of divergent thinking as a mental exercise that can stimulate creative outcomes, thereby validating the value attributed to the model as a tool for encouraging this kind of exploratory process. In this light, we see that the model itself doesn’t necessarily require a greater degree of ’controllability’ or accuracy in order to facilitate ideation. The capacity for generative AI systems to foster creative inspiration lies precisely in their perceived unpredictability. The inherent qualities of such systems lend themselves naturally towards divergent perspectives, simply because they exert a new kind of creative agency, and in doing so generate new creative possibilities.
Viewing diffusion models as an artistic medium furthermore warrants highlighting their unique characteristics, rather than attempting to conceal them or to imitate traditional mediums (e.g. to generate a convincingly human painting). In light of the above discussion on machine agency, “it is precisely the resistance of the medium to being moulded that leads to its true creative potential” [47]. In the case of Stable Diffusion, this unique quality includes errors in the system’s ability to produce legible text or correct anatomical structure (see Figure).
In contrast, other participants articulated limitations regarding the model as an ideation tool. The primary critique revolved around the machine’s inability to independently generate conceptual ideas, suggesting that the creative onus remains with the human participant:
Andre: “I suppose I had this idea that maybe it would make life a whole lot easier, and it would kind of generate ideas for me, but, you know, you’re still the one that has to input the ideas. So there is still the human element, which is, arguably, the most difficult to create, you’re still kind of like left on your own to do it”
Here, Andre provides a critique of the prompt interface, rather than of the aesthetic qualities of the model itself. The design of such an interface presumes a collaboration in which the human participant holds a preexisting mental image or vision, and the model’s role is to realise that vision. Yet, idea generation occurs largely before this stage of the creative process. In reference to the two modes identified in Section 4.3.1, ideation and production, it is clear that the current interface lends itself primarily towards the latter. TTI systems do not provide conceptual suggestions, only responsive imagery. Andre’s comment reflects a broader feature of mainstream TTI systems and LLMs alike: they are designed precisely to be general. The human participant, then, is left to determine the application of the technology. While this generality is an intentional and pertinent feature of generative AI systems, it does not bode well for the purpose of ideation.
A number of participants expressed frustration with the prompt-based interface, and instead began to enter oblique or poetic prompts with no singular expectation of the output, opting to engage in a creative dialogue with the machine. Darien, for instance, described enjoying the interaction more when using non-literal text prompts:
Darien: “When the prompt itself was quite abstract, then it came up with things that I actually found visually interesting... Because when I draw things, a lot of the time, I’m not so much necessarily giving myself the instruction of like, okay, I’m gonna draw, you know, a flash sheet with a skull. You know, it’s more like, I’m in a certain mood, or like, I feel like celebrating something. Yeah. Like a feeling or like a song or, it’s just like, some kind of abstract feeling related to something else. And then I’ll draw something that I think represents that thing.”
An image produced with a non-literal prompt stood out to Darien as one of the few images they liked: “I was kinda into this one”, in reference to Figure 6 (left).
Synthesising insight from these differing experiences, we gather that generative systems certainly do encourage divergent thinking due to the quality of the medium. Nevertheless, the specific interface design of TTI systems means that human input is necessary. Moreover, it is not just human input, but rather human guidance from beginning to end. For the purposes of ideation, perhaps the system could benefit from a greater degree of agency; a creative system that is able to offer divergence unprompted.
4.3.3 Plagiarism.
When examining models such as Stable Diffusion, the ethical implications of data sourcing cannot be overlooked. Most participants expressed some awareness of the ongoing ethical debate surrounding the dataset scraping and subsequent training of Stable Diffusion. While these models represent groundbreaking advancements in technology, they source their training data through web scraping methods that bypass artists’ consent. This practice has been the subject of intense debate, questioning not just the legality but the ethical validity of using creative works without permission. The discussion extends beyond copyright infringement, touching upon moral rights, the decontextualisation of art, and the potential exploitation of unrecognised creators [4, 50].
Artists in particular have become increasingly vocal regarding the perceived threat to their livelihood and craft posed by such technologies [38]. While echoing a similar sentiment, a number of participants identified parallels between the plagiarism underpinning TTI models and the culture of ‘referencing’ in particular creative industries. Georgia, for instance, expressed a feeling that their participation was ethically dubious:
Georgia: “when I was younger, I used to print out, like drawings and then trace over them. You know what I mean? And I guess that’s, like, it’s sort of replicated the same feeling like, Oh, my God, I’m just like, copying something that’s not really mine. And it feels a bit gross, I suppose, a bit plagiaristic.”
Drawing inspiration from and referencing other artists’ work is a standard practice across all creative industries. Referencing, citing, and pulling inspiration from other sources are all integral to developing and carving out one’s unique personal style. Kyle draws parallels between this aspect of creative work and an AI model trained on human art, building something of an argument in defence of generative models:
Kyle: “when we create art, sort of, organically or whatever, we’re still using a whole tonne of reference that we’ve already got, whether it’s inside our head, or we’re like looking at another picture. I guess in that sense, it’s still quite a similar process.”
Lingering on Kyle’s phrasing – “organically or whatever” – we see discourse unfold around the fluid concept of what is and is not deemed ‘natural’ in the face of technological progress. In Kyle’s practice, image aggregators such as Google Images and Pinterest are used on a near-daily basis to source inspiration and reference imagery; a repertoire of tools that has fundamentally transformed Kyle’s trade of tattoo artistry. There is no line to be drawn here between organic and non-organic, for the line is always moving. What is considered ‘organic’ in today’s lexicon would have been alien half a century ago. What may be classified as ’artificial’ now is likely to be subsumed into future definitions of the human.
Kyle’s conflation of an AI system’s training dataset with a lifetime of human experience, of course, falls short in a number of areas. As Darien argues, one key difference lies in the inability to properly acknowledge and cite human work when pulling inspiration from AI-generated imagery.
Darien: “I’ve always respected the idea of sourcing imagery, like going into the person directly, an AI just seems like so impersonal, and so removed from that idea, that doesn’t sit right.”
As Christina similarly notes, “there’s a bit of history behind every image”. Without the ability to acknowledge the lineage of visual and aesthetic development, Christina argues, “you can’t make something new”. Nevertheless, to flag TTI models as plagiaristic implicitly assumes that traditional practices employed in the creation of artistic works are ethically sound. In some cases, these practices abide by a legal framework such as copyright laws. In other cases, there has emerged a social norm regarding what does and doesn’t count as plagiarism, which is commonly enforced among social groups through often unwritten and fluid rules.
Darien: “it’s something that is worked out personally. Like, so many people are using the same symbols and things, but you can just sort of tell when someone’s put certain things in combination, and it’s trying to emanate the same thing in the same way as someone else. And can just look at it and be like, are you copying?”
Meanwhile, in the fashion industry, Alex highlights the pervasive practice of large brands pilfering designs from smaller creators.
Alex: “some brands, just copy people on the internet, and then make it, and that’s their workflow. You know, I don’t agree with that. But some corporations just take an interns work, and then just make it their way, And then they’re like, yeah, it’s our workflow”
In a similar vein, Zak notes that contemporary art practices also see an outsourcing of the material production to often uncredited and poorly compensated workers:
Zak: “there’s so many contemporary artists out there that just give prompts to the artist assistants, and they make things and that’s like, you see so many shows and it’s like that”
In a recent and notable legal dispute, the sculptor who fabricated the artworks attributed to Maurizio Cattelan filed a lawsuit alleging misappropriation of credit for the creations⁶. Despite the legal challenge, the case was ultimately dismissed, highlighting the complexities of authorship and creative ownership within the art world. Alex suggests that, with plagiarism already rife in a number of creative industries, the introduction of TTI models is in fact democratising creative production:
Alex: “if you’re going against people who have more money than you and more time than you, or like more people than you, you could use that as a tool to really even the playing field a little bit”
The difference in perspectives across participants perhaps shines light on the preexisting societal context that these technological advancements are affecting. Established names may profit from the work of lesser-known artists and interns, often without credit. As alluded to by Alex, TTI models allow for the converse. Emerging artists are now granted a new kind of outsourcing power, which has the potential to shrink the gap caused by an artist’s financial means. In the current debate around plagiarism of the dataset⁷, we see a disproportionate representation of voices whose work is public enough to have been scraped, and who consequently have a financial stake in the matter. The participants in this study comprise young emerging and professional artists who perhaps have little to lose from the development of such technology, and proportionally more to gain.
Alex: “The art scene is very notorious, like burning yourself out, dedicating yourself, and I have done that, that’s happened to me. I’ve just dedicated all my time and energy into it. And it’s, it’s because I’m passionate about it, like I’m okay with that. But they just really, really, really grind it out of you. So I think AI has the, I don’t know, has that sort of potential to ease that strain a little bit”
4.3.4 Efficiency.
When asked to envision how they would ideally utilise an AI image model in their practice, a number of participants (Christina, Andre, Zak, Kyle) described simple tasks that they would happily outsource, purely for the sake of saving time. Andre notes that creative professionals don’t exist outside the bounds of a profit-driven society.
Andre: “We’re no better than the people in in like, high corporations that use [generative AI] to kind of optimise so they can fire people because the computer can do the work that they can do. You know, we’re no different to that. We want it so that we can optimise our time. So we can go and sit back in the staff room and chat”
Kyle outlines a number of technical, interface-level improvements that would make a tool such as Stable Diffusion worthwhile: if it were integrated into their existing suite of creative tools, if the model could generate high-quality images instantly, and if it could respond collaboratively to work-in-progress sketches (a capability sketched in the example below). Christina furthermore lists highly domain-specific features that would save time in their particular practice, such as changing the line width of a drawing, or creating a stencil of a drawing with one click.
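One hedged illustration of the sketch-responsive interaction Kyle describes is the image-to-image mode of Stable Diffusion, shown below as it is exposed in the Hugging Face diffusers library. This is an assumption about how such a feature could be wired up, not part of the system deployed in the study; the prompt, file names, and parameter values are illustrative only.

    # Illustrative only: conditioning generation on a work-in-progress sketch
    # via the img2img mode in diffusers (not the study's deployed system).
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    # Hypothetical work-in-progress sketch supplied by the artist.
    sketch = Image.open("wip_sketch.png").convert("RGB").resize((512, 512))

    # 'strength' sets how far the model may depart from the input sketch;
    # 'guidance_scale' is the same CFG control discussed in Section 4.3.1.
    result = pipe(
        prompt="finished tattoo flash design, bold linework",
        image=sketch,
        strength=0.6,
        guidance_scale=7.5,
    ).images[0]
    result.save("wip_reworked.png")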
These suggestions are often fueled by a desire to maximise time spent on more fulfilling work. As Andre expresses, “maybe I can get faster at the the most inane part of the job”. Or alternatively, as Zak notes, to simply make time to produce more.
Zak: “I can spend that time to maximise profit, make more designs. I mean, the ideal answer would be leisure. But we don’t exist in a world that allows that, to that extent, yet”
Notably, the participants who were more inclined to adopt time-saving technologies were those who have managed to transform their creative practice into a full-time trade, all of whom are tattoo artists. This is contrasted with the remaining participants (Georgia, Alex, Darien), who have yet to achieve financial sustainability through their practice. This difference speaks to the difficulty of making a living from one’s art, with artists often supplementing or centering their practice around a trade. With the tattooing industry currently booming⁸, it presents as a desirable and marketable skill for artists. This serves to remind us of the ways in which creativity must be co-opted towards the production of value in order to sustain itself.
Kyle: “For a lot of my stuff, it goes hand in hand, most creativity ends up being somewhat monetized.”
According to [41], creativity is in fact a major driving force of a capitalist economy. Within this framework, creativity is ceaselessly appropriated to generate economic value; it becomes “difficult to know precisely where the individual use value of creativity stops and the exchange value of the original and creative talents begins” [41]. Likewise, the advancement of generative AI models, particularly those that automate creative tasks, is propelled by capitalism’s fixation on economic growth. The intention for these systems appears to be not so much to enrich our lives in a creative or meaningful manner, but rather to perpetuate a cycle of accelerated production and consumption driven by capital. In attempting to ’automate creativity’, the recent advancement of generative AI systems echoes the economic revolutions of the 19th and 20th centuries, in which manual labour was outsourced to the machine [2].
Returning to our categorisation of AI for ideation and for production, respectively, it is not so clear where one draws the line between creativity as subjective experience and creativity as a productive force. The two modes are inextricably linked: a tool for generating creative innovation is ultimately used towards the production of value. Moreover, maintaining a financially viable practice enables artists to further refine and expand their creative work.
4.3.5 Identity and Authorship.
As previously mentioned, all participants expressed a fondness for their personalised version of Stable Diffusion, as opposed to the generic model. They described the unique interactions it allowed and the novel perspectives it offered into their own aesthetic and creative identity. The customisation, as such, offered a mirror to their artistic identities, prompting introspective thoughts and fresh viewpoints on their own style.
Alex: “There is this kind of essence of like, when I see an AI image, I know it’s not mine. I know, I didn’t create it like, technically, even though it has my style on it, there’s this little distance between it”
The personalised model was perceived by some participants to capture a small piece of their artistic identity, extending beyond their ’natural’ capacities and into a kind of immutable, digital artefact. As expected, this extension of the self into AI provoked both delight and concern across participants, more often than not simultaneously. Christina was initially impressed with the capability of the model to produce convincing work in their style. Yet, after a few days of use, the participant described an unsettling experience in which they felt as if the system was improving the more they engaged with it.
Christina: “It felt like I was feeding it”
This led Christina to stop using the system for the remainder of the two-week period. These fears were, in part, driven by a lack of understanding of the internal workings of such TTI systems. In reality, the system appeared to be improving simply because Christina was becoming more adept at the language and art of prompting – no further training on the participant’s creative work was carried out beyond the initial sample.
Christina’s withdrawal was symptomatic of a broader anxiety emerging concerning the sanctity of a creator’s role in the age of generative AI [33]. This concern is accentuated in a landscape where digital dissemination of artwork is not only common, but increasingly necessary for career success. The accessibility of tools like Stable Diffusion presents the risk of anyone appropriating and commodifying an artist’s individual style, undermining its originality and, by extension, its monetary value.
All participants articulated reservations in claiming authorship of the images generated by the model. A number of participants ascribe this to the lack of effort and physical labour invested into the production of the final image.
Kyle: “it hasn’t come from my hand [...] I’m still not the one who actually created it”
Zak: “I wouldn’t probably ever use a design fully from there, even if it could render my style faithfully, sort of like wrap it, put a bow on it, and it’s done. I still would have to redraw it, put some of my own hand in it to feel like it was mine.”
Georgia: “It felt like, it wasn’t mine, because I didn’t conceptually come up with it. I didn’t experiment with the techniques. I didn’t experiment with the mediums. I didn’t sketch it out. It just kind of popped up on my screen.”
For these participants, claiming authorship would require them to incorporate the output into their creative practice⁹. For example, redrawing the image by hand, using only elements of the generated output, or generating variations of an original drawing were just a few methods envisioned by participants for collaboration. For all participants, the generated image did not qualify as an artwork in its own right. Rather, engaging with the system over time presented potential for creative collaboration. For artists with an established practice and workflow, generative AI and TTI systems could be envisioned as an extra step in their process, rather than a replacement for a component of it.
Christina: “The way that I’m using it, as opposed to like, it being perfect. I’m almost like, I’m cool with it not being perfect. Because I wouldn’t want to rely on it. I could definitely figure out a place that it fits into the process”
This sentiment resonates with the extensive literature promoting co-creative AI as the favoured paradigm for designing creative systems [15, 58]. As discussed in Section 2.1.1, co-creative frameworks encourage the enhancement and extension of human creativity, rather than the outsourcing of creative production to the machine. In contrast, the stream of research into Computational Creativity [13] has explored precisely this possibility: whether a machine is, or can ever be, considered an author, a creator, an artist in its own right.
By virtue of our study design, participants were not prompted to conceptualise the machine as one or the other. Yet, each participant engaged with the system as a tool, a creative assistant, but also as an extension of their identity; a means to increase and enhance their creative output. Notably, the system was conceptualised as an extension of their creative capacities rather than an independent and intentioned collaborator, with imagery generated by the model to be ultimately subsumed into the participant’s greater artistic intention.
This raises an important point for co-creative systems: human-AI creative collaborative frameworks must not be built solely from our understanding of human-human creative collaboration, for the machine possesses an entirely different ontological and epistemological sensibility. Understanding the model’s intention behind the generated output was often irrelevant to the participants, or rather, impossible for them to comprehend without falling back on anthropomorphic illusions. In the development and analysis of co-creative systems, reverting to anthropocentric conceptualisations leads us to overlook the way in which these new generative technologies are shaping human creativity. These systems do not simply serve as replications of (human) creative agents; they extend upon human capacities for creativity [31]. Human and AI are not independent and analogous participants, but mutually constitutive aspects of a whole. The human is in the machine (built for human purposes, trained on human data), and the machine is in the human (a technological extension of human capability, perception, and identity).
4.3.6 Materiality.
A recurring theme emerged around the tangible and material nature of the artists’ practices. All of the artists who participated in this study engage in a creative practice founded in physical materials. Oftentimes this material component is the most crucial, being associated with one’s mastery of a ’skill’ or ’craft’; this includes drawing, tattooing, painting, sewing, and designing. In each case, participants describe an inability of the TTI model to comprehend the embodied knowledge that underpins creative production, alluding to an ontological difference between the embodied, materially-grounded knowledge of their creative practices and the abstract computational underpinnings of the AI model. In some cases, participants highlighted the impossibility of certain machine-generated images being made in the real world.
Georgia: “you can actually kind of tell it’s AI because some of the images are almost impossible to create.”
Christina similarly questions whether they, personally, would be able to recreate a generated image, even though the model was originally trained on their hand-drawn work.
Christina: “This is just straight visualisation. Like, you can see where it would be. It feels like it almost cuts out like the steps in between. But then it’s like, can I still reproduce that? I don’t know.”
In one sense, the model is detached from the ‘material’ world in that it appears unable to comprehend it. This comes as a detriment to its perceived creative potential, while at the same time minimising participants’ fears around the colonisation of their creative pursuits. When discussing perceived fears of automation and threats to their livelihood, Kyle expressed comfort in the knowledge that an AI model could not encroach on this space, as their practice is rooted in working directly with physical materials; a territory that generative AI has not yet invaded.
Kyle: “It is a slightly weird feeling, I think. I’m possibly fortunate in the fact that the work I do is still quite hands on, manual, at least for now, until we’ve got, you know, robots tattooing people [...] I think it’s still possibly a little while off. I don’t think I’m necessarily against it”
Again, we witness ambivalence within individual participants. In Section 4.3.5, Christina experienced feelings of anxiety and paranoia (“it wigged me out”) after finding the system to be incredibly sophisticated and useful. On the other hand, Kyle’s fears of automation were eased upon discovering that generative AI is not as advanced as they had expected. These apparently contradictory reports are, in fact, pointing towards the same thing: that AI can be neither all ‘good’ nor all ’bad’. Striving to develop systems that are ‘better’ in one domain inevitably brings about potential negative implications in another. Instead of centering evaluation only on whether an AI system is ‘good’, we must continue to probe precisely how, why, and most importantly, good for whom?
Finally, for several participants (Andre, Zak, Christina), the value of art was deeply rooted in knowing that it was made by a human. In particular, the visual qualities that in a sense give away that the work was done by hand – a shaky line, uneven application, or irregularities in form and design – become a window into the artist’s state of mind.
Zak: “It’s kind of like sign painting, you know, like, you see, you see a hand painted sign, I personally respond to that with a human, kind of, like a visceral human response, my eyes averted to it”
Andre: “You kind of are looking for like the human errors, you know, you’re kind of doing a scan. And that’s when you actually realise how much skill has been involved in producing this piece”
Zak: “our imagination is kind of centred within the digital realm, but we can still apply things badly. And that makes us human”
Relatedly, the value of creating art for our participants is founded in the process itself.
Andre: “it’s like this, the satisfaction that you get from like, putting in however many hours it takes to create something, the journey is the best part, the result are just the icing on top, you know, that you get to see something that represents the hours toiled, and the path that it took to actually get you to create that thing”
Georgia: “It didn’t give me the feeling of creating work that I usually get, like, it didn’t give me the drive or the, I don’t know what to call it, like the artistic expression, the spiritual expression, because it wasn’t driven from me, it was driven by something else.”
All participants expressed a similar sentiment around the lack of creative satisfaction derived from engaging with the prompt-based interface. This appears to have little to do with the quality of the output, but rather with the nature of the interaction: one-shot queries, mediated by language, and confined to a screen (Darien: “I’d just rather go and paint than sit at a computer”). According to our participants, creative satisfaction emerges from the invested time and effort necessary to produce a novel or complex work, to master a skill (Christina: “mastership of craft”), and to express and articulate one’s unique perspective of the world. While integrating generative AI into an artist’s workflow could very well automate tedious tasks, thereby freeing up time to focus on creative work, this raises pivotal questions. Does automating creativity merely serve economic ends? And if so, what impact will this shift have on the subjective experience of creativity?
The study led the participants to become more curious about AI technologies and their recent advancement, and in some cases to consider them more of a threat. From the outset, all participants explained that they initially volunteered for this study out of curiosity about AI technologies, their capabilities, and their potential as an assistive tool, perhaps even fueled by a morbid curiosity: to see if it was possible to automate some part of their artistic practice. To this end, the model in its current state fell below the mark for all participants. But of course, with the rapid technological advancements seen in recent years, the feasibility of such automation is virtually guaranteed, and might even be realised by the time this research is published.