Further possibilities on the future of AI from Stanislaw Lem: The Cyberiad
Let's take a deserved Sunday break from the headlines for some science fiction, and some literature-based analysis of the underlying themes. The good stuff. The stock market has taken a bit of a negative turn, initially on inflation news, and then in response to concerns about the Middle East, but if you have been paying attention to the stock market, you have seen not only a sharp rise prior to this week, but a rise led by the companies connected to AI. Nvidia was not the sole cause of S&P growth, nor were AI-related stocks generally. One might legitimately ask whether such stocks were producing a late-90s-style tech bubble for AI, but probably not. During the tech bubble, investors poured money into companies that did nothing and had no business plans, merely because they had .coms. Nvidia makes something, and we can already see the direct effects of AI on productivity. Are there some stocks that need to be cleared out, having been caught up in an AI frenzy? Almost certainly, and the last week has done much to correct valuations, even if it is not a technical "correction." (That would be a 10% decline from the recent high.) However, if language models can produce output that companies need, then companies like Nvidia (the prime beneficiary) are doing something real, and even when Google stumbles with its AI attempts, this isn't a 90s-style bubble, even if there is some froth to be cleared out of the market. Hopefully you are properly diversified, and hopefully you are getting, as they say in Time Bandits, filthy, stinking rich. A little less rich this week, sadly, but 5% declines happen. I love capitalism.

But if this is pointing the way to the future of AI, then what next? Let us turn, as always, to Stanislaw Lem, and this time to The Cyberiad, which is a series of interconnected stories about two "constructors," Trurl and Klapaucius. They build robots, automatons, computers, and things. Wackiness ensues.
I am motivated, in particular, by the hilarity of AI errors. The backlash to Google's image generator, which would not generate white people, and hence went so far as to generate a black George Washington, and even a black nazi, was the result of the following. Earlier facial recognition programs sometimes had issues with non-white faces. The reasons varied, but one of the simplest and least remarked was that the programs relied on light variation, so darker skin tones confused them by reducing the very variation the programs used to distinguish faces. Regardless, 2024 is well into the era of DEI ideology, and the image generator was programmed based on DEI ideology. The result was just funny. Programs are mechanical, and they follow the rules you give them. Use light and shadow to recognize faces, and with low-resolution cameras and images, they will have problems with darker skin tones. "Correct" that by telling the image generator not to generate white faces and behold, the black nazi.
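As a toy illustration of that mechanism (a sketch of the general idea behind contrast-based detectors in the Viola-Jones family, not any actual product's code; the function, patches, and threshold below are all made up for the example), a feature that scores a patch by intensity differences can fall below its fixed threshold when the image's dynamic range is compressed:

```python
# Classic contrast-based detectors score rectangular regions by
# intensity differences, then compare against thresholds learned from
# training data. If an image has a compressed dynamic range (dim
# lighting, low contrast), those differences shrink.

def haar_like_score(pixels):
    """Absolute difference between the mean of the first half of a
    patch and the mean of the second half (a 1-D Haar-like feature)."""
    mid = len(pixels) // 2
    first = sum(pixels[:mid]) / mid
    second = sum(pixels[mid:]) / (len(pixels) - mid)
    return abs(first - second)

# Hypothetical 1-D "patches": the same light/dark pattern, but with
# different dynamic ranges (pixel intensities 0-255).
high_contrast = [200, 210, 220, 90, 80, 70]   # well lit, wide range
low_contrast  = [110, 112, 114, 96, 94, 92]   # dim, compressed range

THRESHOLD = 20  # a fixed feature threshold, tuned on well-lit images

print(haar_like_score(high_contrast) > THRESHOLD)  # detected
print(haar_like_score(low_contrast) > THRESHOLD)   # missed
```

The pattern is identical in both patches; only the contrast differs, and the fixed threshold misses the low-contrast one. That is the unglamorous, mechanical version of "the program has trouble with darker skin tones under bad lighting."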
What happens if you consider the reductio ad absurdum of a stupid computer? Lem wrote the story decades ago. Trurl built a building-sized robot/computer and asked it 2+2. It returned 7. After multiple attempts at repair, Trurl could not get the machine to add correctly, and it became increasingly irate, to the point of violence and mass destruction. Klapaucius remarked that the construction was a remarkable thing: a stupid computer, and a hostile one to boot. The thing goes on a rampage, demanding that Trurl confirm that 2+2=7, until they finally topple it down a mountain, unable to fix the stupid, hostile computer.
We can build devices that work on different algorithms. Yet the results may be both mechanical and foolish. Whether drawing black nazis, inventing "facts," or any other such hilarity, for all the SkyNet paranoia, consider the possibility that whatever the software engineers create might just not be smart enough to bring about the apocalypse. A stupid computer can be dangerous if improperly handled, but the apocalypse scenarios assume smart machines.
Seriously, black nazis. Is this what scares you?
OK, Frantz Fanon was black. He was the founder of postcolonialism, and he had a nazi fetish. He inspired Edward Said, who inspired all the campus leftists who support anti-Jewish genocidal maniacs, so go look at the picture again, and decide for yourself if it looks like Frantz Fanon. Maybe Google was actually smarter than everyone thought. Maybe Robert Sterling Wilson, the "Black Fuehrer of Harlem" from Vonnegut's Mother Night, was not such a joke after all.
Moving on.
Let's take a few digs at academia along the way, as Lem does. In a particularly funny story, we learn that Trurl and Klapaucius studied under a paragon of academic virtue. Their university advisor decided that there was no use studying things that exist, so one should study that which does not exist. One might read this in context as a play on Plato's forms/ideals nonsense. My man, Diogenes, supposedly had no use for such inanity. In one likely apocryphal tale, Diogenes attended a Platonic dinner party at which the founder of the Academy blathered about tables and table-ness, and cups and cup-ness. The Dog merely said: dude, all I see is a cup.
Whether Lem was thinking about Plato, or I am merely in a Hellenistic rabbit hole, I cannot say. The point is, Lem described an academic who said, why bother studying what exists? Study that which does not.
Dragons. Why? 'Cuz. Also, there are different kinds of not-existing, but only one kind of existing, so study all of the different ways that dragons don't exist.
Alas, Trurl and Klapaucius's mentor did far more valuable scholarship than most of the social sciences and humanities in the early 21st Century. At least he knew that what he studied did not exist. I would rather read his work than the actual journals that pump out garbage today, which is of no use, and relates to things that are not real.
The problem comes when Trurl and Klapaucius start altering the laws of probability to bring dragons into existence. One may note that Lem published in 1965, with the first English translation in 1974. Hitchhiker's Guide To The Galaxy was 1979.
All of this dragon nonsense turns on scams for bringing dragons into existence, and then zapping them out of existence when they go on rampages. Think Ghostbusters, and the Walter Peck accusation. Not the dick one.
In another, perhaps more potent and prescient, stab at academia, Trurl and Klapaucius encounter a robot academic, living the impoverished life of the philosopher, bitter that he is not recognized for his genius (of which he is certain). More critically, our impoverished robot sees established scholars making each other famous by citing each other in the scholarly circle-jerk. Lindsay, Pluckrose & Boghossian call the phenomenon "idea laundering," wherein a bunch of scholars cite each other, and the self-referential circle becomes the basis of a knowledge claim, rather than knowledge being built through empirical study, falsification, and replication. Anyway, the impoverished robot worries futilely about how he will be remembered after his death. As though that matters.
As Marcus Aurelius said, after enough time, even the most famous are forgotten, and the self-pitying robot's discoveries are, in fact, mere rediscoveries of things that another robot had figured out hundreds of thousands of years earlier. Which everyone had forgotten, because it was all buried in a book that nobody read.
Yet what our impoverished robot says is that everyone claims to be indifferent to fame. He merely has the honesty to admit his own vanity. Now there is an insight. How many say they toil for the life of the mind, for truth, indifferent to fame or recognition, in a mere pose?
Socrates spoke of his indifference to the judgment of those he deemed unwise, and Diogenes made scorn of shame itself the cornerstone of his philosophy. Yet millennia later, we remember them, and have nary a word for the jurors who voted to convict Socrates, or for those who hurled abuse at the Dog, the very abuse that gave him his moniker and gave his philosophy, "Cynicism," from the Greek for dog-like, its name.
These are the insights of Lem's machines. Machines that think, and because they are built by us, and programmed by us, they think like us. The imprint of the constructor is built into the construct. Recall the question of facial recognition software. It is true, mathematically, that those first generations of facial recognition software had problems with darker skin. Were the programmers racist? No, but had they been black, their testing would have come back as, "this doesn't work," and they would have programmed differently, recalibrated, or something. Google's AI image generator? They clearly put their imprint into their construct.
What do we build? These are machines. Built. Lem goes back and forth in the stories on the origins of constructs. The constructs tell themselves origin stories in which they predate organic life, and create organic life, and if we choose to accept that within the bounds of the fiction, then fine, but perhaps a better interpretation is that what we build is based on what we are or what we see. They are as we build them, and think as we build them to think. Trurl's stupid computer is stupid, but it was built by a human. They built a machine to conjure dragons from non-existence into existence, thinking of dragons from our own minds and mythology, as opposed to, say, all of the other things that don't exist but are un-imagined. Robot philosophers and scholars act just like humans. What are we seeing as we build?
Differences, to be sure, but when early facial recognition programs had difficulty distinguishing faces with darker skin tones, it was notable both a) that the programs operated like your racist grandparents who say something about how they all look alike, and b) that this was an accidental, incidental function of the structure of the algorithm, yet we cannot avoid seeing the racial implications. Shift, then, to Google's DEI-trained AI, and it works just like an insufferable jerk "anti-racist."
Clavell's Shogun is on my book shelf, which I suppose means I read it, although I cannot say that I remember it, even if I have some very vague memories of the first tv adaptation. As I understand it, Hollywood is Hollywooding again, so there is a new adaptation, and there are actual, real, human beings complaining that the samurai are Japanese instead of black. You know, in Japan. The ideology is so mind-destroying that Google's image creation algorithm did not deviate in any way from how these people think. Cleopatra was Macedonian, but Google's image generator/Hollywood race-swapped her. Remember that one? This really is how the ideology works.
The programs are stupid because the ideology is stupid, because the people who constructed the ideology are stupid, and the AIs are just mimicking the stupidity that they are programmed to mimic.
Why, then, has there been an AI stock boom? Can AIs be programmed to do something useful? Yes. They are just programs. Complex programs, to be sure, but programs. A program is a complex tool, which will do what it is constructed to do, for whatever purpose we put it to, with the caveat that this kind of construction is somewhat more of a black box. Simple writing tasks no longer require humans, and hence no longer require man-hours. Complex writing tasks still require human involvement, but as the programs develop, the extent of human involvement and oversight required to produce what a business/firm/whatever needs will shrink, creating an efficiency gain, and yes, that means "writing" ceases to be a job skill in the same way that many forms of skilled labor and crafts ceased to be job skills under industrialization. The economy and technology are dynamic, not stagnant.
The best thing about AIs is that they will kill all of the useless degree programs and put all of the useless, woke college grads with no real skills out of work. There is literally nothing that a ___ Studies major can do that an AI can't. Today. As a demonstration, at the end of this post, I am appending an essay I had ChatGPT write when prompted with: "write a 1000 word essay on white privilege." You will notice a few minor things, like the fact that ChatGPT does not capitalize "black," whereas anyone who actually writes that essay would. Also, there will always be a few odd turns of phrase, like "multifarious domains," but give the algorithm another year or two, and it will capitalize like a DEI pro, and those turns of phrase will be turned around. Yet I think I have made my point. No one needs to hire a ___ Studies major when you can just type prompts into ChatGPT.
As a side note which is not a side note, have you noticed that companies no longer require college degrees for jobs that used to require them? Hmmm.
Anyway, should we fear the singularity, AIs taking over, and all of those science-fiction scenarios? What if the AIs are just... stupid? Petty? Small?
They are whatever we make them. Humanity is hardly admirable, but neither is humanity grand. Is humanity dangerous? Yes, but humanity is nothing if not small and petty. Might AIs be dangerous? Perhaps. But what if they are stupid, small, and petty, programmed by fools to do useless things, stupidly, based on foolish notions, useless goals, and idiotic principles? Is this what we fear?
Chris Whitley, "New Machine," live. The recording quality on this one is not the best, but the rendition is much more interesting than the studio version from Din of Ecstasy. ChatGPT's guest post essay follows.
Title: Unpacking White Privilege: A Critical Examination in Contemporary Society
By ChatGPT, April 21, 2024
[Prompt: "write a 1000 word essay on white privilege"]
Introduction:
In the discourse surrounding social justice and equity, the concept of white privilege stands as a significant focal point. Coined to elucidate the advantages and systemic benefits conferred upon individuals solely due to their racial identity, white privilege has emerged as a contentious but indispensable framework for understanding the intricate dynamics of power, privilege, and oppression in contemporary society. This essay endeavors to delve deeply into the multifaceted dimensions of white privilege, examining its historical roots, manifestations in various societal domains, and implications for fostering genuine equality and inclusivity.
Historical Context:
White privilege finds its origins deeply entrenched in the historical fabric of colonialism, imperialism, and racial hierarchies. From the advent of European expansionism and the transatlantic slave trade to the establishment of settler colonial societies, the construction of whiteness as a normative standard of superiority was systematically enforced. Laws, policies, and social norms were tailored to uphold the dominance of white Europeans and their descendants, perpetuating a legacy of racial inequality that reverberates through generations.
Manifestations in Society:
In contemporary society, white privilege manifests itself across multifarious domains, permeating institutions, interactions, and individual experiences. In education, white students often benefit from better-funded schools, access to advanced placement courses, and a curriculum that privileges Eurocentric perspectives, thereby reinforcing cultural hegemony. Similarly, in the realm of employment, white individuals are more likely to be hired, promoted, and compensated favorably compared to their non-white counterparts, indicative of pervasive biases embedded within hiring practices and organizational cultures.
Moreover, the criminal justice system exemplifies stark racial disparities, with black and brown communities disproportionately targeted for policing, subjected to harsher sentencing, and ensnared in the cycle of mass incarceration. The very notion of safety and security is predicated upon racialized perceptions, with white individuals enjoying the luxury of presumption of innocence and protection from state-sanctioned violence, while people of color are routinely criminalized and dehumanized.
In addition to these structural inequities, white privilege operates insidiously at the interpersonal level, shaping everyday interactions, perceptions, and opportunities. From microaggressions and racial profiling to cultural appropriation and tokenism, the lived experiences of non-white individuals are often marked by marginalization, alienation, and invisibility. Conversely, white individuals move through the world with relative ease, seldom having to confront the systemic