Clint Enns is a writer and visual artist living in Montreal whose recent AI experiments impose a sense of otherworldly disorientation on the viewer. Half hallucination, half glitched memories, his images are meant to unsettle your perception of reality.
Davide Andreatta: Hi Clint, I don’t know if you agree with the following taxonomy – which I admit is a bit too simplistic and dichotomous – but I think that right now in the AI scene there are two uses of diffusion models: one aligned with the predominant visual regime of production, and another that can be called minoritarian – a deviant one trying to build a sort of iconographical temporary autonomous zone. And while this may well be true for every medium, I believe that with AI-produced content it is more visible than elsewhere. Needless to say, your production would slot into the second category.
Clint Enns: I’m definitely an artistic minoritarian. The AI-generated images I produce embrace the flaws of the medium. When I am using Stable Diffusion, I am not trying to produce a particular image or aesthetic; I set up parameters that I believe will produce visually contradictory representations. My prompts are antithetical, circular, or nonsensical in nature. They follow the old no-means-no axiom: “Nonsense is better than no sense at all.”
To me, they feel like found images. The images I post are the ones I consider to be the “best.” Although by “best,” I actually mean the “worst,” the most peculiar, the most bizarre, the most disorienting, the most challenging to the mind. They are images that depict the impossible and the unlikely, images that have never existed before and may never exist in reality. This is the promptum, the that-have-never-been.
DA: One thing that stood out when going through your Instagram feed was the absence of color. Black and white constitutes a clear aesthetic choice in general, more so when it comes to AI, where almost every piece of content is characterized by an epilepsy-inducing explosion of colors. What’s the reason behind this decision?
CE: The black-and-white images are a gesture towards a minor minoritarian practice, a way to distinguish my AI-generated images from those of the other, more mainstream minoritarians. In all honesty, I mainly work in black and white since it makes these images look more “realistic,” less cartoon-y. I feel the more realistic the images look, the more uncanny and unsettling they are. I want my images to exist between the uncanny and the grotesque or surreal.
DA: In your work the body is presented not as an anatomically accurate map, but rather as an ever-evolving cartography of intensities, a convoluted assemblage of drives reshaping the human form. What is the relationship between the visual representation of the body and AI? What possibilities does AI offer in reimagining the flesh?
CE: The bodies in my images aren’t bodies, they are visual representations of code which are attempts by the machine to interpret my prompts. I imagine that the machine will get better at this over time and we will eventually recognize the code’s interpretation of a “body” as a body. As the code gets better, I imagine artists will figure out new ways to disorient the machine. Long live the new flesh.
DA: Can you briefly describe your practice and how you see AI impacting your artistic development?
CE: Much of my artistic practice involves artistic appropriation and working with found materials. For the last few years, I have been making work with found, digitally-born photographs. I see AI-generated images as another form of found photography. AI is just a tool and given that it is in its infancy, the outputs are “flawed” which is perfect for artists interested in exploring imperfection.
DA: You’re also a writer with many published essays and reviews. Does your writing interact somehow with your artistic practice?
CE: I make artist books that incorporate both images and writing. The books loosely connect ideas that I am working through with events that are occurring in my life at the time. The most recent one is called Camping at the Geriatric Ward and explores some of the criticisms directed at AI artists, as well as some of the more problematic aspects of working with AI: corporate censorship from both the developers of the tools and the social media platforms that the images are posted on; authorship; authenticity; and copyright. In the book, I suggest that all of my AI-generated images are produced by my grandmother, who is losing her ability to speak due to dementia. AI has provided my grandmother with a new way to communicate with the world.
The book experiments with two AI-bots, namely, Inferkit and ChatGPT, and began as an excuse for me to experiment with these tools and as a way to think through and pose questions about AI technologies. As with my artistic practice, I write because I enjoy it. If I didn’t, I would find something more enjoyable to do. I see AI as another tool for artists to play and experiment with, a fun way to creatively pass time.
DA: To wrap this up, I’ll ask a question that could just as well have opened the interview: how and when did you first encounter AI? How do you see your art evolving?
CE: My first encounter with AI was probably through James Cameron’s mega-blockbuster Terminator 2: Judgment Day (1991), a sci-fi film where Arnold Schwarzenegger plays a cyborg who travels back in time to prevent a malevolent artificial intelligence named Skynet from destroying humanity. One can only speculate that Skynet’s AI wanted to create a world of paperclips, but people kept getting in the way.
In terms of AI art, I began with DALL·E-Mini. DALL·E-Mini made the technology accessible, and it felt like a significant technological breakthrough. When I started experimenting with it, I was trying to see how the machine would misinterpret my prompts and where the technology failed. I would provide it with impossible tasks or with prompts that were self-referential, attempting to turn the technology in on itself. For instance, if you used “an AI-generated face” as a prompt, you could see what the program thought an AI-generated face should look like, which was seemingly a series of distorted, blurry faces arranged in rows, similar to those produced by DALL·E-Mini itself.
Since that time, I have experimented with DALL·E 2, MidJourney, Stable Diffusion, and Disco Diffusion. For now, I have settled on Stable Diffusion since it is cheap (read: free), easy, and regularly produces the type of errors I am seeking.