The Robots are Smarter Than Me. So’s plastic.

Artificial Intelligence vs Intelligent Artifice.

Rick is staging another symposium.  Last time it was on VR; this time it's AI.  Maybe his third one will use full words, or numbers.

We've been going back and forth on the topic.  He's concerned with the dystopian future where super-intelligent machines run amok.  I am concerned with how simpler but related technological sensibilities have already produced a dystopia populated by small pieces of plastic that have run amok.

Rick's kinda on board with the robo-apocalypse crowd that says we are at a unique threshold where humans become irrelevant.  I get it.  Surely it is correct to say more disaster looms.  Yet I also say it's the same old, same old: the tipping into subordination, if not oblivion, has already happened, because plastic has won.  The problem is not just autonomous super-intelligence; it's relentless stupidity.

Here is a clip that makes the impending-robo-apocalypse case, followed by the bones of an argument that this is just another moral panic distracting us from our real, ongoing termination by plastic, by capital, and by our own enigmatic being, which we are intent on outsourcing.

The disaster has already taken place. 

Biological evolution took sixty-five million years to produce the human brain. We outsourced that asset at the earliest opportunity. In a few short generations, we’ve reached such an atrophied mental state that nuclear geopolitics works exactly like the customer service department at Comcast or Wells Fargo.

This guy reminds us of a critical reality: forget the big issues and the big promises; anything placed in the human world will be examined, monopolized and perverted by shysters rapaciously looking to make a buck.

What is artificial intelligence?  It is embodied knowledge.  I am using the term in a sense complementary to how it is normally understood.  I mean we take knowledge and give it corporeal form.  Knowledge becomes an entity: technology.

Paradoxically, all knowledge is partial, yet impossible to thoroughly direct and contain.  Whatever form or body we give to knowledge will both satisfy and operate outside any goals we set for it.  Those goals are contextual, not absolute; time alone will eclipse any goal.  What this means is that embodied knowledge will keep on operating well past the time and circumstances of our original intended goal.

Take plastic.  We figured out chemical processes and embodied that knowledge in a substance.  Our goals: waterproof, resilient, light, cheap and portable.  Works great; follows our goals.  Yeah, too great.  All those “ideal” properties work so well that we now have a garbage patch three times the size of France.  Moreover, the knowledge embodied in plastic extends beyond any goal we may have had.  The fact that plastic doesn’t decompose, but breaks down into smaller and smaller particles, is not a derivative of embodied knowledge; it is that knowledge.  We just didn’t care to consider it, to study the consequences of the embodied knowledge that far past our original goals, on to the other “goals” connected to it.  [That is, further states of the knowledge embodied: the so-called “side” or “collateral” effects.  These adjectives are ways of hiding our embarrassment and shame.]

The horror of micro-plastic has us view plastic as perverse, even malevolent: “it” can do such terrible, unintended things.  But “it” is not responsible; we are.  Already, we worry about what “AI” can and will be able to do.  We continually make the same mistake.  The “it” which is AI is us.  This is what Heidegger in The Question Concerning Technology and the Frankfurt gang were on about.  Mary Shelley, quoted at the front end of the doc, was aware of it too: “You are my creator, but I am your master.”  The notion, the worry, is far from new.

Now, it is said that the difference today is that we might produce things that intentionally and autonomously create their own new consequences, despite our intentions and actions.  Haven’t plastic, Frankenstein’s monster and many other things done just that?  It is a fine distinction that says, “no, really, this thing can create its own directed development.”  Yes, but is that the result of its own strategies, or did we, just as with plastic, disregard inherent properties and consequences right from the get-go?  That is, just as plastic always contained the embodied knowledge that it would break down into particles fitting none of our immediate goals, might we not say that if we build systems that surprise us with the consequences of the knowledge they construct and embody, we simply overlooked what was there in the first place?  Isn’t it that we just don’t get what the process of embodying knowledge [i.e. technology] ultimately means?  Another case in point:

 Dr. Zuckerstein’s Monster

Some people might argue that there are many existing forms of AI: things that learn and create their own consequences outside human intention.  Bureaucracies, libraries, media such as Facebook, and governments are all instances of it.  Take any two books in the library and put them side by side: a simple, random act.  But someone, somewhere will do something unexpected with this happenstance.  Is this something they did, autonomously, or was it already there, latent, in the books/technologies?  People long ago figured it was the latter, and this is why there were measures restricting access to priesthoods and other experts, as if they really had any serious control.  [Cf. The Name of the Rose.]  The same thing is happening with AI.  It’s clear that the Elon Musks of the world are suitably worried about the current circumstances.  The mistake is to think that this is a new problem, or that delinquent human intervention can be eliminated and effective control is a slam dunk.

 Facebook’s 2 years of hell.

My problem with the AI-might-kill-us discourse is that it is yet another moral panic, one that serves two purposes:

1 – it distracts us from the real problems we currently face, especially the human confusion that created them;

2 – if we haven’t resolved the first issue, and we carry on promoting the idea of the “right” way to do things [cf. Barthes’ notion of the inoculation effect], we will likely intensify those problems.  We know that with every technological leap we have done just that, while blithely retaining, for a critical time, the naive techno-optimism Zuckerberg lost.

On the other hand, if there is anything positive in the intense critical focus on the things we build having their own autonomous consequences, it may be that these objects can now serve as a model for considering what we have ignored about ourselves: that is, the consequences of directed behaviour generally.  The consequences of AI might, for once in our history, tell us something crucial and basic about ourselves.  From this perspective, following McLuhan and Innis, AI is but another medium, an extension of ourselves; not just of sight, sound or even our central nervous system, but of our entire existential being.  It can, if we so approach it, serve as a mirror to the compulsions and related behaviours that are bringing us to the brink of this existence.  Will we pay any attention?
