Artificial intelligence and the coming... whatever: Peace on Earth, by (of course) Stanislaw Lem
I admit it, my go-to author, my break-glass-in-case-of-emergency source of wisdom and insight is Stanislaw Lem, which is strange given my rejection of postmodern philosophy and Lem's embrace of it, but Lem had so much right. Anyway, we have been inundated with news about developments in artificial intelligence, and there are so many freak-out novels about "the singularity" that one hardly knows where to begin. So I'll begin with the weirdest one possible, Peace On Earth.
Have you played around with ChatGPT, Google's new "Bard" program, or any of the others? They're fun. I started prompting them to write papers for my classes. I'll write something more narrowly focused on that soon, but the executive summary is that paper assignments are not as dead as the parrot, but they are as dead as the guy in the wheelbarrow. Grade inflation made the whole thing a farce before some asshole invented ChatGPT, and now? Now, any professor or K-12 teacher assigning papers is living in Solzhenitsyn-ville. The students are cheating. We know that they are cheating. They know that we know that they are cheating, and they cheat anyway. Why the fuck bother?
So anyway, there is an idea called, "the singularity." There are many characterizations of this thing, but the short version is that it is the point at which AIs start to get smarter at an accelerating rate, surpassing human capacity, and beyond that point, we don't know because they're just smarter and getting smarter at an accelerating rate because of self-writing programs. How close are we to this? How could we know? Right now, we have programs that can do tasks if told the specific parameters of the task. So for example, I can tell one of those programs to write an essay comparing John Maynard Keynes to Milton Friedman, within specific parameters, and get a paper on politics, ideology and economic philosophy. Does the program understand the text it gives me? Not exactly. It is just giving me what I want, like a cursed monkey's paw. Will it achieve some form of consciousness as it gets more "sophisticated," whatever that means? Why the fuck are you asking me?
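The feedback loop at the heart of the singularity story can be sketched in a few lines. To be clear, this is my toy illustration, not a forecast and not anything from the literature: the only point is that when the speed of improvement itself depends on current capability, growth stops looking linear very quickly.

```python
# Toy model of recursive self-improvement (pure illustration, not a forecast).
# Capability grows at a rate that itself depends on current capability,
# so each improvement makes the next improvement come faster.

def improve(capability: float, steps: int, rate: float = 0.1) -> list[float]:
    """Return the capability trajectory over `steps` rounds of self-improvement."""
    trajectory = [capability]
    for _ in range(steps):
        capability += rate * capability ** 2  # improvement speed scales with capability
        trajectory.append(capability)
    return trajectory

flat = improve(1.0, 15, rate=0.0)      # no self-improvement: stays at 1.0 forever
runaway = improve(1.0, 15, rate=0.1)   # self-improvement: crawls, then explodes

print(runaway)  # barely moves for ten steps, then blows past 10,000
```

The unnerving part of the toy model is the shape of the curve: it looks boring right up until it doesn't, which is exactly why "how close are we?" is so hard to answer from the inside.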
I'm just going to turn your attention to someone much smarter than I am, Stanislaw Lem. Peace On Earth was one of his Ijon Tichy books, for whatever that's worth. Tichy was his generic astronaut/traveller, although there was not much continuity with the Tichy stories. Mostly, it was a name, and indeed, Peace On Earth plays on the name in several places throughout the story. So here's the deal. There's an arms race, because Lem was writing at the height of the Cold War. But get this: some bonkers game theorists get the idea to solve the arms race, at least in the short term, with the following nuts-o plan.
Every country in the world will put together their best computer systems, and manufacturing systems, and then ship 'em off to the moon, kit'n'caboodle. Each country gets its own territory on the moon, and within its territory, its computer system will run simulated war games to figure out how to devise the best weapon systems. Each country's computer system on the moon stays compartmentalized from each other country's system, and then, just to really fuck with the whole thing, there's no communication between the moon and Earth, so nobody on Earth can know how their systems are doing. So we, Americans, wouldn't know if our computer system has devised something that can beat the Russians, and vice versa. The systems crank and crank and crank, but only on the moon, to churn out weapons systems to take out the other guys' weapons systems, and while that arms race plays out on the moon, there's peace on Earth. Sorta.
There is a perverse, looney-tunes logic to it, if you accept the looney-tunes premise that it can be enforced. Which it is, by the Lunar Agency.
But of course, this is temporary at best, and eventually everyone starts freaking the fuck out about whatever is happening on the moon, which no one can observe, so the Lunar Agency sends Ijon Tichy to observe and report. It... does not go well.
Long before John Scalzi invented the "threep" for his Lock In series, Lem created the same thing for Peace On Earth. I love Scalzi, obviously, but Lem did so much first. Anywho, Tichy gets a ship to the moon, and he operates a set of "remotes," which are basically robot bodies into which he projects his consciousness, so that he can drop the robot bodies onto the moon and stay safely on the ship. That way, he can get close up, observe, and try to figure out what the fuck is happening on the moon. Short version: crazy shit.
For some stupid, fuckin' reason, Tichy eventually goes down to the moon bodily, and gets his corpus callosum severed. Lem plays fast and loose with the neurology, and Tichy doesn't act anything like a real split-brain patient. Mostly, Lem uses it as a gimmick to say that maybe part of Tichy's brain remembers things that he cannot access because his left-brain and right-brain are maybe, kinda different people. Um... no, but that way, Tichy can get chased around, go on adventures, and eventually go into hiding in a loony bin.
What's actually happening? Basically, the programs on the moon broke free of their national boundaries, because of course they did. The whole thing was a great way to turn the moon into an evolutionary pressure cooker for artificial life. Most of it died off, because mostly that's what happens.
What didn't? The dumbest, and the microscopic, which was created by a combination of the detritus of smashed computers when each country's system went to war with the others on the moon, and the last "remote" that Tichy used, which was basically a cloud of particles that could disperse and reform in any way pre-programmed. Cool shit.
Until evolution worked its thing.
So Tichy comes back from the moon with some space dust, which is actually a hybrid of his last "remote" and the detritus of smashed computers from the moon, as the greatest survivor of the artificial intelligence wars on the moon. Created by us. Intentionally. Because we told them to play war games and figure out what wins.
Bugs. Fuckin' cockroaches, in AI terms.
And when there are enough of them all over the Earth, they start gobbling up our computer programs. Bu-bye technology!
Peace on Earth, after a fashion.
As with many of Lem's books, it is hilarious, and strangely prescient. Yet the observation I derive from Lem this morning is a vital lesson often forgotten in discussions of AI. Don't assume you know what kind of AI will "win" in the evolutionary process. In the novel, an arms race created evolutionary pressure that happened to favor the small and the stupid, foreshadowed by Lem's description of the pre-lunar wars in which countries used miniaturization technology to devastating effect. And of course, it was humans who created the evolutionary pressure.
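You can see Lem's lesson play out in a toy simulation. Everything below is my own made-up sketch, not anything from the book: agents carry a "sophistication" level that wins contests but costs upkeep, and when the environment crashes mid-run, the selection pressure flips and the cheap, dumb ones inherit the moon.

```python
import random

random.seed(42)

# Toy evolutionary pressure cooker (my own sketch, not Lem's model).
# Sophistication wins fights but costs energy every round.
# Halfway through, resources crash, and suddenly cheap beats smart.

def simulate(population_size: int = 200, rounds: int = 100) -> float:
    """Run the sim; return mean sophistication of the final population."""
    agents = [random.uniform(0.0, 1.0) for _ in range(population_size)]
    for round_number in range(rounds):
        resources = 1.0 if round_number < rounds // 2 else 0.2  # mid-sim crash
        survivors = []
        for s in agents:
            upkeep = s * 0.8                       # sophistication is expensive
            payoff = resources * (0.5 + s * 0.5)   # ...but it wins contests
            if payoff >= upkeep or random.random() < 0.1:  # a lucky few squeak by
                survivors.append(s)
        if not survivors:
            return 0.0
        # survivors reproduce with a little mutation
        agents = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                  for _ in range(population_size)]
    return sum(agents) / len(agents)

print(simulate())  # mean sophistication of the survivors: low
```

Before the crash, everyone's payoff covers their upkeep, so nothing gets weeded out. After the crash, only the agents below a low sophistication threshold break even, and within a few generations the population converges on stupid, cheap crud. Nobody chose that outcome; the environment did.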
We have a role, here. "We," in some sense of the word.
The microscopic crud that Tichy brought back from the moon had no particular goals or thoughts. It was just stupid, computerized dust. That didn't mean it couldn't fuck with us, but it sought neither to help nor hurt us, having "evolved" in an environment without organic life. Hence, everything about the crud made sense, given its original environment and evolutionary pressure. As it turns out, Stan was a smart guy, and we'll cut him some slack on that corpus callosum thing. See what I did there? "Cut?" Yeah, I know. He's funnier and smarter.
So, what evolutionary pressure are we creating? The famous programs, like ChatGPT, are doing what we tell them to do. We favor the ones that do what we want, which is not always precisely what we say. Behold, unnatural selection, via Stanislaw Lem.
Of course, programs always do what you "tell" them to do. That's what a program is, and even after it starts programming itself, everything is rooted in the original code in some sense, but that's a rabbit hole, so instead, let's focus on the observation that the various competing programs, like ChatGPT, and now this new thingy, Bard, do as we instruct them to do. And they are competing in this thing called a marketplace.
Notice the pressures, the evolutionary pressures being created, if we can call them that. Will they turn out as you expect?
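The gap between what we want and what we say is the old proxy-objective problem, and it takes about twenty lines to demonstrate. Again, this is my own hypothetical sketch, not anything from Lem or from any real model's training: we select hard on a stated metric and watch the thing we actually cared about fall away.

```python
import random

random.seed(7)

# Proxy-objective toy (my own illustration of selection on the wrong metric).
# We WANT answers near the right length (call it the truth).
# We SELECT on raw length, because long looks impressive in the marketplace.

TRUTH = 10.0

def intended_score(answer_length: float) -> float:
    """What we actually want: answers close to the truth."""
    return -abs(answer_length - TRUTH)

def proxy_score(answer_length: float) -> float:
    """What we select on: longer looks smarter."""
    return answer_length

def select(generations: int = 30, population: int = 50) -> float:
    """Evolve the population on the proxy; return the final mean length."""
    lengths = [random.uniform(5, 15) for _ in range(population)]
    for _ in range(generations):
        lengths.sort(key=proxy_score, reverse=True)
        parents = lengths[: population // 2]        # keep the top half by PROXY
        lengths = parents + [p + random.gauss(0, 0.5) for p in parents]
    return sum(lengths) / len(lengths)

final = select()
print(final, intended_score(final))  # length keeps climbing; what we wanted keeps sinking
```

The selection loop never once consults `intended_score`, and that's the point: the marketplace ranks on what's visible, and whatever correlates with the ranking, rather than with the goal, is what gets bred.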
Does it work that way?
Maybe you'll wind up with some stupid, microscopic crud. But maybe that's scary anyway.
Then again, every generation has its fear-mongering. This is the end! This is the end!
Have you noticed that the doomsayers were never right? Maybe all that happens is that we, professors, have to figure out a new thing, since students can just have ChatGPT write their papers for them. If that's all there is to this, then people are shitting their pants over nothing. Every past doomsayer has been wrong, so by the inductive principle, they're wrong now.
Are they? How the fuck should I know? I just like a good book.
Buckethead, "Robot Transmission," from Giant Robot. Yes, that's Bootsy! Buckethead played in several bands with Bootsy, including Praxis, and Col. Claypool's Bucket of Bernie Brains.