So You Thought AI Would Never Replace Artists?

in #ai · last year (edited)

(Image: tantrum.jpeg — created with Stable Diffusion)

Perhaps you’ve seen numerous articles all over the web about artists protesting image-generation AIs such as Stable Diffusion, DALL-E and Midjourney. This hostile outpouring is a very sudden about-face for a crowd who, until very recently, insisted they would be the last to be replaced by AI (or that AI could never replace them).

This pattern repeats itself across many creative disciplines. AIs which produce functional code were thought impossible until they were demonstrated, though they have a long way to go before their output is preferable to a human’s. AIs which autogenerate articles, or fiction writing, are now a reality with the former generally doing a more believable job than the latter, but the writing is on the wall.

Why anybody imagined an AI would be able to generate text but not images is a mystery. Wishful thinking? Sticking their heads in the sand? I speculate a large contributor to this effect is the enduring, self-flattering belief in human specialness. That we have some sort of special sauce which can never be artificially reproduced.

This is why AIs capable of generating beautiful art on par with very talented humans came as such a shock to many. Art is widely imagined to be inextricably linked to that special human essence, which doesn’t actually exist and never did. It is true that these AIs trained on images created by humans, but nobody ever claimed that AIs could bootstrap themselves from nothing; they needed to be created by human programmers in the first place.

Of course they need to absorb examples created by humans in order to generate more along those lines. Likewise, there is no human artist who emerged from the womb already talented; they studied the works of other artists in the course of their training. Probably the earliest conscious AI will also be trained on human sources, which is to say based on an emulated human connectome.

It’s only natural and appropriate that AI should supersede humans. Children always surpass their parents, though to compare AI to human children is a touch grandiose. We’re not yet remotely close to AGI, or “Strong AI”. Essentially what’s causing this furor is the piecemeal, step-by-step recreation, in silicon, of isolated portions of the human brain.

Graphics cards can be thought of as a silicon analogue to the visual cortex, particularly when paired with 3D modeling software. Google Deep Dream producing visuals so closely analogous to psychedelic hallucinations was the clearest sign, for me, that we’re on the right track. Likewise facial recognition, motor control, and all the other capabilities we take for granted. Only because nobody has put all of these capabilities together yet do we still live on a planet governed by humans.

Another phenomenon I have noticed recurring in discussions of what AIs can or cannot do is a sort of “presentism” which treats the current state of affairs as if it will persist unchanged into the future. Think of complaints about grid unreadiness for EV uptake (extrapolating future EV sales growth, but not also extrapolating grid improvements), or of getting hung up on what AI currently can’t do, as if it were a fundamental limitation of AI rather than a problem yet to be solved.

Related to this is a strange mental inflexibility which allows some degree of technological improvement, but not too much. Not so much that life would become strange. As if we’re close to the apex of technology beyond which no progress is possible, or it’s quixotic fantasy to hope it will continue any further.

I imagine this is the same mental inflexibility by which creationists sometimes allow that small amounts of evolution can occur, such as viruses evolving vaccine resistance, while denying that many, many small changes can add up to big changes. As if you could never get to a billion by persistently adding 1+1+1+1+1 and so on, for long enough. Neither group can explain what barrier would stop this steady accumulation from proceeding “too far”.
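The arithmetic here can be made concrete with a toy calculation. This is only a sketch of the compounding argument, not a model of evolution or of AI progress, and the 0.1% per-step figure is an arbitrary assumption chosen for illustration:

```python
# Toy illustration of accumulation: tiny changes, applied
# persistently, eventually cross any finite threshold.
# The 0.1% step size is an arbitrary illustrative assumption.
value = 1.0
steps = 0
while value < 1_000_000_000:
    value *= 1.001  # a 0.1% change per step
    steps += 1

# A 0.1% change per step crosses the billion mark in roughly
# twenty thousand steps; no barrier appears along the way.
print(steps, value)
```

Whatever the step size, so long as it is positive the loop terminates. Nothing in the arithmetic distinguishes “a little” accumulation from “too much”.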

So it is that prior to AI Dungeon, the prevailing wisdom was that AI can live in our phones and answer questions but not write creatively. Then prior to DALL-E, the wisdom was that AI may be able to write creatively but it cannot paint. Then it was “Well, DALL-E images are kind of weird and goofy, it’s still not real art”. Then Midjourney happened. Then Stable Diffusion.


AI Generated Movies and Games are Coming


Will such people ever notice the pattern? Will they ever tire of being proven wrong, over and over? Would they still maintain, if challenged on it, that there won’t soon be whole AI-generated movies, given that AI Dungeon has demonstrated the ability to generate a script, and Midjourney and Stable Diffusion have demonstrated the ability to generate individual frames?

Do they still maintain there won’t soon be an AI plugin for Unity, or equivalent game creation suite, able to auto-generate entire games (complete with bespoke models, textures, sounds, animations and other assets) based on sufficiently detailed prompts given by human users? Why would that be a bridge too far, when AI is generating 2D art, writing, and code already? And AI was generating music, and procedurally animating 3D characters, long before that? (EDIT: Since this was written, Nvidia announced an AI that generates 3D models for games) (EDIT 2: A month later, and Unity AI has been announced)

Should we not desire this? You might protest that AI doesn’t understand what makes games fun, but humans can specify the gameplay mechanics via prompts. If you’re into gaming, you may have noticed that as game creation tools become more powerful and accessible, the variety of games the market will support has exploded because the time and cost of creating them has come down.

This has permitted many games to be developed which otherwise never would have been, having been deemed too risky by investors. Yet today indie devs make some of the most innovative content out there, catering to niche interests. We finally got a decent game based on the original Alien. We finally got a good Terminator game based on the original two films. We are finally getting a (probably) good RoboCop game from the same developer. We finally got a good Starship Troopers game!

Outside of boomer IPs with waning interest nevertheless finally being done justice, there’s an endless cornucopia of games with original characters, settings and plots, more than anyone could play in an entire lifetime. In large part that’s because the time and money investment needed to create games has steadily decreased as software improves.

Will AI film and game generators not dial this trend up to 11? Imagine a movie you’d like to see, but which would never get greenlit by Hollywood. Now you stand a chance of seeing it, because it costs nothing to make except time and electricity. Is there a game you’ve always wanted to play, but you’re not a programmer, and the appeal is too narrow for it to realistically be greenlit in a world where human labor is still necessary to make it? Good news, you may yet play that game.

Gone, too, is the problem of multi-million-dollar movie or game projects turning out badly, wasting everybody’s investment dollars, disappointing fans, and discouraging Hollywood, or the game industry, from revisiting that IP for a second attempt. Now, if the film or game you generated turns out badly, you can simply tweak the prompt and try again. All part and parcel of the ongoing march towards digital post-scarcity.

This is good news, isn’t it? And surely not far-fetched nowadays, if you pay attention to what AIs get up to lately. But you cannot budge people who won’t see past the end of their noses. They might ask what crystal ball I have which they lack, that I should be so confident in such audacious predictions. Though in truth, these predictions become less audacious with every passing day. I’ll answer though: My crystal ball is simply lack of faith in human specialness.

Human brains are made out of atoms, as are computers. Atoms interact in predictable ways. There is no ghost in the machine, so far as anybody can tell; we are just very, very complicated self-replicating chemistry. That is what we boil down to. Technology, boiled down to fundamentals, is nothing but the intentional reorganization of matter into forms which leverage some principle of physics to produce an outcome we desire.

Knowing this, the limits of what technology can do are one and the same with the laws of physics. If there is any conceivable technology that enough people desire, strongly enough, and it does not violate any laws of physics, it can reasonably be expected to exist one day. Human brains don’t violate any laws of physics. There is great demand for machinery which reproduces the capabilities of the human brain. Therefore, we will reach that goal some day.

That’s not to say I can produce a detailed timeline for when we’ll get there, or even that we can be sure we’ve fully reverse engineered ourselves given the monumental scope of the task. But we’re not magic. We’re physics, chemistry more precisely. The problem is understandable. It’s quantifiable, and therefore solvable. What we’re seeing in the onward march of creative AI are the baby steps in the process of solving that problem.


Economic Implications Motivate Denialism


The larger implication of AIs which can do anything humans can do concerns the economy. Just as artists stick their heads in the sand about creative AI, many armchair economists have their heads in the sand about automation. They have prior commitments to certain economic models which are unraveled and invalidated by automation, should it eventually match or eclipse human capabilities.

These types tell us that just as the industrial revolution created more jobs than it destroyed, so will the robotic revolution. Ignoring of course that the industrial revolution merely replaced muscle power with machine power, while this new revolution is about replacing human brainpower in the production chain.

They ignore this because it’s necessary in order to preserve the status quo. “Nothing ever truly changes” is a more comforting take than “future billionaires will have no need of 99% of humanity, not even our money as there will be nothing they could buy with said money that they couldn’t instead have their robots make for them”. There exist two economies in such a future: Trade between billionaire families living in armored, automated luxury compounds, and trade between subsistence farmers and waste stream scavengers being constantly hunted by Amazon drones.

If they allow that AI and robotics may one day be able to do everything humans are capable of, then the list of “jobs AI will never replace” shrinks to zero, and their worldview implodes. But, even if we did allow that there are a few jobs AI will somehow never be able to replace, it’s still a big problem for the status quo unless you can explain how an economy the size of ours can survive with everybody performing only jobs from that short list.