AI Will Not Kill Your Darlings

The hype from AI enthusiasts is that Deep Thought will soon replace authors and artists. But do the creative and fine arts have anything to fear, or is AI just another plagiarist in wolf’s clothing?

William Faulkner may have given writers the great advice to “kill your darlings”, and George “railroad” Martin may have taken that advice one step too far by killing everyone, but there are a lot of folks out there fearing that the new “killer app”, AI, is about to kill everything. Well, everything in the writing and artistic world.

Enthusiasts of the latest tech hype are desperately trying to convince anyone who will listen that the end of the creative and fine artists has arrived. Why have writers when ChatGPT can write a novel? Why have painters when Midjourney can create such amazing art?

“Learn to code!” you’ll hear them cry.

And many, especially those on the periphery or in the executive suites of publishing and the creative world (or Mahogany Row, as we call it), might be willing, eager even, to agree. ChatGPT can produce a 90,000-word novel in, what, less than a minute? Grok, Midjourney and a few other tools can produce “artwork” in, say, a minute or two? Even Photoshop now builds AI elements into its products. The end is surely nigh.

Not so fast, pilgrims.

And we don’t even have to get into the weeds of “what is art?” to start pulling apart this somewhat woefully crocheted rug pulled over our eyes.

Fact is, AI is pretty close to killing off some major elements in society, but it’s a long way from murdering the creatives – and that’s because, in the end, AI doesn’t “create” anything. It’s a Franken-creature, part bower-bird, part parrot, part mime. Its job is to copy something that already exists, reheat it, rehash it, and represent it as new.

It’s like ordering smashed avocado in an inner-city café. It’s 50c worth of avocado that someone has mashed with a fork over a 15c slice of buttered toast, then charged you $16 to put on a plate in front of you.

AI In A Shell of Nuts

AI, Artificial Intelligence, is artificial but it is not intelligence. Like everything, it’s just a marketing gimmick. The concept of AI is essentially “machine learning”, the idea that the machine (a computer) can “learn” from its experience. By learning, we mean improve, get better, adapt, overcome. Like a little cyber-Marine.

Let us look at what AI really is so we can understand it. Essentially, AI is simply a large database of information. Traditionally, that information is held as discrete or relational objects of binary data used to make up things like digits and words. The computer doesn’t know what these things mean, it relies upon programming to parse this data in a logical way so that it makes sense to an operator.

Sure, it’s “smart” in the same way CGI is clever, and it’s automated – but that is it. It’s not art. There’s no originality to it. AI is not creating anything – it is simply repurposing what other creators have created.

Now, the marketers of AI would have you believe that somehow these new, very large, very fast computers (built largely by NVIDIA, which used to make graphics cards but is now the world’s largest maker of AI chips) can now “process” this information, including the vast amounts of traditionally difficult information known as BLOBs (binary large objects, like images and free-form text), more readily, almost like humans. In other words, intelligently, as if they were reading the text or looking at the image.

The thing is, they can’t.

Machine learning is simply a new variation of what used to be known as metadata. Metadata is data about the data. It is very often much larger, and can be infinitely larger, than the actual data itself. The metadata says something about the data that the data cannot say for itself. For example, take an image of an American flag. A computer, even an AI computer, has no idea that this is a flag, let alone an American flag. But an operator attaches metadata to the image, such as “flag”, “America”, “American”, “USA”, “Stars and Stripes”, and so forth. Now when a user searches for a keyword, say, American, the picture of that flag might appear.
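That tagging-and-lookup machinery is simple enough to sketch. Here is a toy Python example (the image names and tags are invented for illustration) showing that the machine is matching strings an operator attached, nothing more:

```python
# Toy sketch of keyword search over operator-attached metadata.
# The machine never "sees" a flag; it only compares strings.
images = {
    "img_001.jpg": {"flag", "America", "American", "USA", "Stars and Stripes"},
    "img_002.jpg": {"flag", "France", "Tricolore"},
    "img_003.jpg": {"sunset", "beach"},
}

def search(keyword: str) -> list[str]:
    """Return every image whose metadata contains the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [name for name, tags in images.items()
            if any(kw == tag.lower() for tag in tags)]

print(search("American"))  # ['img_001.jpg']
print(search("flag"))      # ['img_001.jpg', 'img_002.jpg']
```

An untagged image of a flag would never be found, no matter how clearly it shows one – the “intelligence” lives entirely in the human-supplied tags.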

Importantly, the machine might be able to use image-matching technology to identify similar images and then apply the same metadata. This is the learning side of machine learning. As you can imagine, it might get it wrong, because the machine is still a machine and only ever as good as its programming. It could improve with self-corrective analytics, but still, same problem: marginal improvement, some errors, and the closer Object A is to Object B, the greater the chance of error.
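That “learning” step can be sketched the same way. In this toy Python example the similarity scores are made-up numbers standing in for a real image-matching model, and the 0.85 threshold is arbitrary:

```python
# Toy sketch of tag propagation by similarity: if a new image looks close
# enough to a tagged one, its tags are copied across, right or wrong.
known = {"img_001.jpg": {"flag", "USA"}}

def similarity(a: str, b: str) -> float:
    # Placeholder scores; a real system would compare pixels or embeddings.
    scores = {("img_001.jpg", "new_photo.jpg"): 0.92,
              ("img_001.jpg", "cat.jpg"): 0.10}
    return scores.get((a, b), 0.0)

def auto_tag(new_image: str, threshold: float = 0.85) -> set[str]:
    tags: set[str] = set()
    for name, known_tags in known.items():
        if similarity(name, new_image) >= threshold:
            tags |= known_tags  # inherit the neighbour's tags wholesale
    return tags

print(auto_tag("new_photo.jpg"))  # inherits {'flag', 'USA'}
print(auto_tag("cat.jpg"))        # inherits nothing
```

Notice that one bad similarity score silently copies the wrong tags across – exactly the error described above, and the closer two objects look, the likelier it is.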

The next layer of machine learning over the metadata is the concept of policies. These are instructions on how the data is to be used. At its simplest level, a policy might be an instruction that the data is only valid under certain circumstances, or between certain dates or times (for example, your test results in a hospital). At an extreme level, those policies can be so finely tuned as to say when, where, how, why and to whom data is available.

What you can also see is how open to owner-bias such a system is, if you were so inclined. Take the same example of the American flag. You could also apply subjective metadata and policies to identify and parse data (“democracy”, “fascist”, “laws”, “war crimes”, “freedom”, “racist”), and limit exposure to individuals, or even groups, based upon some user-specific identifier or even geography.

Writers Write, AI Copies

I decided to undertake a little experiment with ChatGPT. I asked it to create a storyline for a novel. I chose a genre well outside my own (I write crime fiction), one in which I have almost no experience – in this case, science fiction – and tested OpenAI’s chatbot to see what it would do in the following scenario:

ChatGPT, create an original storyline, characters and setting for a novel set in post-Apocalyptic USA.

The result was a superficially interesting, albeit stereotypical, take. Compound names and place settings were unique when checked on Google, for the most part, but when taken apart and checked individually, the thread started to fall apart. Terms like “Mechanicus”, “forge-world” (suddenly there seemed to be advanced science fiction in my post-Apocalypse setting) and “Mutationem” were taken from PlayStation titles or role-playing games like Warhammer.

This leaves not only the reader but the author unsatisfied. Why? Because it’s not even remotely entertaining. The drama is superficial, a script following the guidelines of some hidden public service announcement.

(Interestingly, when I created another scenario, an action-adventure plot-line set in a real-world setting, the exact same names, places and even plot-lines were given. So there’s that metadata and those policies being deployed.)

ChatGPT had simply stitched together an amalgam of a lot of similar components to weave a common theme. That theme, by the way, was based upon a trope structure found on a website ChatGPT itself handily offered (www.cornettfiction.com), plus a few more it probably did not disclose.

The point being, it was a magic trick. I asked for an original storyline, and it gave me a facsimile of what others had done, just with a few revisions and alterations.

Some might argue that there’s nothing original under the sun. There is, of course, more than a little truth to that. And, indeed, one could very convincingly argue that AI could very well have a place in pushing writers through block, or pushing writers to do better, to think differently, or take a different tack. There is, I think, some merit to that argument.

But AI is not originality. Not even close. It is an artfully crafted illusion, but ultimately not a satisfying one. While my post-Apocalyptic world was at first blush interesting, I quickly got bored with it – it was too derivative, too typical. It needed me to make it more interesting, to remove the absurd and ridiculous (even for sci-fi) and stabilise the ship.

Secondly, while there were elements of humanity, these were trite. Rote pieces obviously set to some script. The characters were female-male-female. The ultimate hero, female; the bad guys all white military jocks; the good guys… definitely not. The diversity cues were there, the politically correct checkboxes marked. We had a blind woman, a man with one arm and a girl who could speak with animals – and it was their disability that was their greatest asset. It was the animals in this setting, by the way, that would teach us the way to post-Apocalypse salvation. Now we can all sing kumbaya.

Again, this leaves not only the reader but the author unsatisfied. Why? Because it’s not even remotely entertaining. The drama is superficial, a script following the guidelines of some hidden public service announcement.

The same rules apply for AI in art. The machine isn’t creating art, it’s searching a giant index of artwork already created by human hand and stitching it together based upon the metadata matched to the queries you are entering.

And that’s the point. AI is making you follow a bouncing ball, but the ball is being pulled by a string you won’t see unless you look for it.

AI Borrows Your Watch To Tell You The Time

In July 2023, National Book Award-winning novelist Jonathan Franzen, My Sister’s Keeper author Jodi Picoult and nonfiction author and journalist Michael Pollan were among nearly 8,000 signatories to an open letter from the US Authors Guild addressed to the CEOs of OpenAI, Alphabet, Meta, Stability AI, and IBM. ChatGPT had become the target of a lawsuit in which it was claimed the system had trawled authors’ works, without permission, to power its AI “engine”.

Jonathan Franzen said of the letter sent in 2023: “The Authors Guild is taking an important step to advance the rights of all Americans whose data and words and images are being exploited, for immense profit, without their consent. In other words, pretty much all Americans over the age of six.”

Will AI mean the end of writers and creative writing?

No. Not really.

According to the Authors Guild’s most recent income survey, the median writing-related income in 2022 for full-time writers in the US was just $23,330. To give you some comparison, an average fast food cook’s yearly income is $29,760. A cashier makes $30,750. Hell, the shampooer in a hairdressing salon makes $29,260.

Most writers and artists will never make it rich doing what we do. It is, by definition, a labour of love with a whole lot of rejection and an awful lot of head-meet-wall moments. These are trades, crafts, requiring skill, human ingenuity and the deft stroke of the human hand. AI cannot replicate the brush strokes of Van Gogh’s sunflowers.

AI won’t write Harry Potter, or Ulysses. Or paint the Mona Lisa or the Weeping Woman.

But here’s the kicker. Unlike medicine, engineering, tax accounting, computer coding, most bureaucratic systems, even the law, creative environments like writing and art do not follow the bouncing ball.

You will always need a human being to fix your plumbing, unstop the sink, repair the roof, clean the gutters, even build the damn house or cook dinner. These jobs are not replaceable with AI. And until we get a robot that can actually move like a human and not a demented chicken, they won’t be.

But I can assure you that within 10 years, AI will replace most General Practitioners. Why? Because the nature of their work is protocol-based questions and answers, and GPs have so dehumanised their work in pursuit of the almighty dollar, referring patients with anything more than a sniffle to specialists. As medicine becomes more hyper-specialised, it is relatively easy, even today, to replicate most clinical work with a simple AI question-response tool. Add an overlay that queries data from pathology, machine learning from other Big Data sets (based upon demographics, patient similarities, morbidities, outcomes) and advanced statistical analysis, and I’d bet you would get better outcomes than with your average doctor, today.

But artists and writers don’t work to protocols, or schedules or even follow set principles. We don’t even follow the standard rules of grammar, or style. That’s because we are creating something with the intent of evoking an emotion, a sensation, an atmosphere.

Art and writing, despite the endless desire to “academise” them, always have been and always will be crafts. They are trades, just like plumbing and electrical work, drywall and concreting. And all trades require apprenticeships, time, practice and focus. AI won’t do your drywall, or salt your drive in winter.

Writing and art are creative crafts that will only ever be successfully performed by human beings.
