The AI thing

With Jess an “AI fellow” in an aspirational Dawson College venture to get into the 21st Century flow, the OG crowd sniffed an opportunity. Clearly, the OGs are not Kool-aid drinkers on just about any subject, and this techie wet-dream of a cocktail has been the focus of some stern regard.  Chan got the ball rolling with a video featuring James Bridle.

Primarily for Jesse for his AI project, but could/should(?) be of interest to all of us:
Bridle’s book is on sale at Verso Books for $13 & change
Yours truly responded with this:
Artificial Intelligence:  new or old phenomenon?  AlphaGo makes a move no one can figure out.  We apparently don’t think on those lines.  It does.  Or we project that it is doing something we call thinking, and doing so in ways we don’t.
We do think computers, but it appears that we may not think like them [their full nature], and the upshot is that they act [make a move] in a manner we haven’t yet figured out.  We could study this process, and then likely, but not necessarily, understand it. It may run away on us in the meantime.
Transpose this little thought exercise.  
We do think plastic, but it appears that we may not think like it [its full nature], and the upshot is that it acts in a manner [ubiquitous, eternal micro-plastic fibres] we never figured out as we were thinking plastic. We could now study this process, and then understand it, but it may run away on us in the meantime.
Difference?  From what I can tell, it has to do with the thing we call thinking.  Plastic doesn’t think, but its many unpredictable outcomes have to do with what we didn’t think, some of which we may eventually think, some of which we may never think.  If a computer thinks an inexplicable game or some other move, it is what we have yet to think, some of which we may think, some of which we may never think.  
The “thinking” aspect in the computer situation is but a property of this artefact, as much as the breaking down into micro strands is a property of artefact plastic.  The elusive, runaway “thinking” of the computer is just another example of an artefact’s properties that we have not bothered to figure out because we’ve deemed it inessential to our purposes.  This latter tendency is the very old technological history of our species.
Solid young man.   He’s got a good future.
Jesse added:
Interesting talk.  Germane to the discussion we had in our first DawsonAI meeting today.
Chan followed up with 3 clips:
Earlier more primitive version of AlphaGo:
Mostly agree Phil, most of what is “new” is as old as the beginning of history of technology. 
re AlphaGo, more pertinent clips from Kubrick:
And I added to the above:
Surely, AI runs its own historically unique issues.  Yet, I cannot help but feel that the central one is that we conflate “thinking” with “having a grip”.  Whatever “thinking” we deem to be incorporated/operationalized in what we create, we deem these to be the essential elements.  It rarely occurs to us that there is always more.  If we do accept there could be more, we make the same mistake at a meta-level.  Whatever more there is, we will get a grip on its essence as it emerges.  [e.g. Geo-engineering climate]
Consequently, the problem is not what we presently call AI per se, it is our understanding of what thinking is.  The fear some have of runaway machine intelligence arises because AI models back to us the very problematic nature of our thinking being. What’s a bit different this time is that we see that we have created an entity that makes — read: thinks and operationalizes — the mistakes we do.  
In freaking out at what the machine can and will do, and seeking out methods to control it, we permit ourselves to avoid attending to our fundamental mistake. More of the same behaviour.  Whether my life is threatened by software run amok [e.g. Hal, non-cancelable bomb codes, game strategies we can’t figure out, etc.] or by plastic is just a matter of the type of threat.  The cause [and inevitably, the consequence] is the same.  The threat is us.
That people sit around tables discussing how to “best” move on AI is an interesting conundrum.  Surely, the inevitable, enthralled thrust will be, at worst, “More, please”, and “No worries, we have this” or “We’re gonna get a handle and make this work for everybody” at best.  Plastic was always supposed to make life better. I’m sure even those infernal yeomen who invented clamshell packaging thought so.
Jesse gave a brief account of his first meeting with the AI group and added another clip from Bridle:
Yeah, had my first meeting with the DawsonAI “Community of Practice” as a “fellow.”  Smart people and quite diverse, but a few have drunk the Kool-aid, so to speak, and are not exactly lateral thinkers.  I like James Bridle’s take, especially the video which followed yours, Chan.  Phil, you’ll love the bit on vandal-hacker kids from Macedonia destroying democracy:

True that “autonomous systems” have been with us for as long as people have been social.  I believe Durkheim called them “social facts.”
Chan then went slightly tangential with this comment from Harper’s that I found to be rather useful in elaborating on the clip Jesse posted.
Looked at the Bridle piece then read the Harper’s thing.

It wasn’t the Macedonian entrepreneurial fake news punks that got my attention.  It was the YouTube selection segue from sweet kids’ video to Masturbating Mickey clip.
I got an A in an undergraduate course with Peter Ohlin at McGill, in part, because I said the interesting thing about TV wasn’t any content, but more that if you changed the channels frequently, you’d probably create edits that would shift the meaning/understanding of most content.  But, of course, you had to change the channel yourself.  Now, machine learning at Google does it for you, but it usually leads to porn which goes to show that machines think they know us better than we do.  Admittedly, just like more men write letters to the editor, most things do lead to porn for most people.
Bridle’s talk ends with him valourizing an ancient Hellenic stone computer; “stone,” when you get down to it, is a metaphorically suggestive adjective for any computer.  He deems the “obvious” workings and results of this device, by which he means choosing folks for political office by random chance, an exercise in transparent technological mediation of direct democracy.
Notwithstanding that some operator in the crowd, disguising his desire to game the system, likely spent a fair amount of time working on a results algorithm for the “sake of science”, it is a rather interesting model of social participation.  Clearly, any computational strategies miss the point of the exercise, more metaphor than formula.  
I am sure a number of otherwise influential hopefuls probably said stuff like “Hey, why did Dimitri get dog catcher?  This system is so stupid. He knows nothing about pets. Spiro on the other hand…..”  Besides the statistically significant sidelining of dedicated activists, it also means that the art of the political stump speech wouldn’t amount to much, nor the endorsement / takedown by pundits, PACs, attack ads and fake news.  Saying Bernie is out of touch with political realities and this is bad, or Trump is out of touch with political realities and this is good,…..well, there’s Dimitri, no matter what you think of him.
In musing about this, I couldn’t help but think that there was an interesting link between the random election computer and the 50-50 gendered letters to the editor policy.  Not unlike the ancient, self-appointed civic protectors who decried mindless fate’s choice of Dimitri for animal control, a number of contemporary opinionators would also consider mediocre letters written by women, selected over clever letters by men, to be the same kind of irrationality, one the result of dumb cosmic chance, the other of misguided media policy.  In both cases, vested interests are frustrated, and they miss the point.
Relatedly, I am not sure what the opinion writer’s complaint is.  If quota is really applied, particularly in the absence of letters written by women on the topic in question, they might have to resort to a few letters from, say, the food section to reach quota.  There may be even some on marmalade, a topic she appears to appreciate.  In this way, the letters to the editor would function like changing the TV channels, altering the meaning and understanding of the subject. And with Google not involved, it likely wouldn’t always lead to porn.  Seems to fit with her approving description of the Daily Telegraph‘s letters columns.
