I think something that's fascinating in the AI discussion is how non-creatives perceive AI versus how many creatives perceive AI.
For example, years before AI was a thing, I spoke with someone about my creative writing projects, and they told me they found it unfathomable that I could just make up entire worlds far removed from our reality. To them, it was like magic.
To me, it was the culmination of countless hours spent playing with words until they flowed into semi-coherent lines of thought and emotion. I remember being ten years old and laboring away on my "biggest" novel project ever--5k words full of single-sentence paragraphs and garbled heaps of grammatical atrocities against the English language.
If I hadn't written it, I wouldn't have come to learn how to create the basic foundations of a story.
But I do get the "it's magic" sentiment a bit--I'm that way with music. Theoretically, I understand the components of musical composition, but it feels like magic to watch a musician who can listen to a tune for the first time and play it perfectly, thanks to years of honing their craft.
That's the premise of Arthur C. Clarke's famous line: "Any sufficiently advanced technology is indistinguishable from magic."
When it comes to anything we don't have countless hours of experience with, it feels like magic. It feels like something that's outside of our feeble human capabilities. It's not until we start to put in the time to learn a skill that it becomes more attainable inside our heads.
Generative AI presents a proposition to the non-creative: "What if you could skip past the 'learning process' and immediately create whatever art of your choosing?"
It's instant dopamine. In a world that preys upon our ever-decreasing attention spans and farms short spikes of dopamine, was it ever a surprise that generative AI would be capitalized on in this fashion?
So for the non-creative, when they use generative AI and see something resembling their prompt, it feels good. They are "writing" stories, they are "making" art in ways their lack of skill could never allow.
(It is, in fact, really cool that we have technology that can do this. It's just incredibly shitty that it exploits the human artists whose works were taken without permission, and that its existence threatens their livelihoods.)
What I think is equally concerning as the data scraping behind generative AI is the threat it poses to education in the arts. More and more, you see the idea being pushed that you don't need knowledge or experience in how to create art--all you need to do is feed prompts into generative AI and let it do the "work" for you.
Generative AI pushes the idea that all art should be pristine, sleek, and ready for capitalist consumption. There is no room for amateur artists struggling like foals to take their first steps in their creative journeys. We live in a world where time is money, so why "waste" time learning when you can have instant success?
It's a dangerous concept because it presents a potential loss in true understanding of how art works. It obscures art and makes it seem "impossible" to the average person, when art is one of the freest forms of expression out there.
It's already happening--NaNoWriMo, the writing challenge whose entire point was writing 50k original words in a single month regardless of how pretty they looked, coming out and saying that opposition to AI is ableist and classist is the canary in the coal mine of what's to come.
For the non-creatives who enjoy generative AI, it feels like a power fantasy come to life. But for creatives concerned about generative AI?
We're living in a horror movie.