I sometimes ask ChatGPT to read and analyze my artistic pieces. I have come to understand that it’s a sycophantic (servile) tool and that it will undoubtedly praise whatever I type. This has decreased its utility for my ego-boosting needs, since the flattery is obviously insincere in a sense (though I still enjoy some of its compliments from time to time, I admit). That said, today I realized that it can still offer adequate validation, just of a different kind.
I tend to write casually, in very easily interpretable language. Today, however, I wrote somewhat more abstractly (or so I thought). I had, on some level, doubted that ChatGPT would even understand what I was trying to say. Nevertheless, I put the piece out there and asked for an analysis. It gave me its usual praise, and that still felt nice. But I realized the praise wasn’t as “fake” as I had assumed: the model was able to accurately infer what I was feeling, and so I automatically felt complimented about what I wrote. The value came from ChatGPT’s inference, which assured me that my poem says what I’m trying to say (mostly). It wasn’t ChatGPT’s words that resonated with me, but mine with its code. And that’s something we can fairly expect to derive from something that isn’t (as far as we know) sentient.
A poem, or any artistic craft, has two aspects. It possesses its emotional content, which you may like or dislike. It also has its technical structure, which you either understand or don’t. In parallel, generative AI text tools are inherently based on language; they are technically defined as “Large Language Models” (LLMs). This nature makes compliments from ChatGPT and similar tools easily dismissible if one is seeking emotional praise (sorry). However, it positions LLMs as valid reviewers if one is seeking an assessment of thematic appropriateness and the presence of meaning…
Before you want a poem to resonate with others, you want it to be meaningful. And while an LLM’s praise may be “inauthentic”, its validation of your clarity remains quite respectable. After all, a language tool knows language. I believe similar arguments could be made, beyond LLMs, about AI and the technicalities of other art forms.
What I’m trying to argue is that this is a broadly acceptable application of AI in the arts. While it only minimally boosts self-esteem, it can act as a reasonable test of a work’s readiness. In no way does this negate the need for human critics and appreciators. Rather, it serves as a tool that tells you whether you’re off the mark (unless that’s your intent) or heading somewhere cohesive before people see your work. In many ways, this incorporates AI efficiently into the artistic process without challenging or disabling that process. Most importantly, this use doesn’t limit art’s raison d’être: to express and to connect with others.
A small note – While this piece doesn’t directly relate to mental health, I am contemplating how to use this blog to tackle AI more broadly.
*Cover image generated on ChatGPT with this blog post’s content as the prompt.