The patterns of AI in the stuff of the future.
There’s a fascinating interview of Max Tegmark, a prominent physicist now focussing on artificial intelligence research, by Sam Harris (the well-known atheist neuroscientist), broadly on the future of AI, and in particular on what happens once it reaches the point of producing a generalised intelligence at least equal to that of humans.
There are too many points of interest for me to extract them all here and so save you from the recommendation that you listen to the podcast yourself, but a few stood out to me.
Firstly, Max apparently holds pretty much the same view as I do about ontology (i.e. the study of what is actually there at the most fundamental level); he even uses the same language as I’ve been using. I suppose that, as we are both physicists at root, this is not as surprising as it might seem (I’m pretty sure he hasn’t read what I write, and I’ve not read anything he’s written!). There is “stuff”, and there is pattern, and pattern is not heavily dependent on the stuff which bears it – as he puts it, pattern is substrate-independent. He points out that wave equations later adapted to describe fundamental particles were originally developed in fluid mechanics; the mathematics describes this class of patterns (which happen to be dynamic patterns) irrespective of whether the waves are in water or in, say, the electromagnetic field.
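To make that substrate independence concrete (this is my own gloss, not a formula quoted in the interview), the same classical wave equation governs both cases; only the interpretation of the symbols changes:

```latex
% The classical wave equation, identical in form for both substrates:
\[
  \frac{\partial^2 u}{\partial t^2} \;=\; v^2 \, \nabla^2 u
\]
% Water waves: u is the surface displacement, v is set by depth and gravity.
% Light:       u is a component of the electromagnetic field, v = c.
```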
He moves rapidly from there to discussing how the AIs of the future are likely not to be using electrons in solid-state systems; they could be built in something entirely different – but the patterns will be transferable. In the process he mentions that in computing there is one basic logical element, the NAND gate, which he likens to the synapses in the human brain. However, of course, you can construct a NAND gate out of all sorts of “stuff”, as the sketch below illustrates…
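Why the NAND gate in particular? It is functionally complete: every other logic gate can be derived from it alone. A minimal sketch (my own, not code from the interview) of that derivation:

```python
# NAND universality: build the other Boolean gates from NAND alone.
# The logic is the same whether the gate is made of transistors,
# relays, vacuum tubes or neurons - the pattern, not the stuff, matters.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Gates derived purely from NAND.
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

if __name__ == "__main__":
    # Check the derived gates against Python's own Boolean operators.
    for a in (False, True):
        for b in (False, True):
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)
            assert xor(a, b) == (a != b)
    print("All gates reproduced from NAND alone.")
```

The point of the exercise is that nothing in the derivation refers to the physical substrate; swap the implementation of nand() for any other and the rest follows unchanged.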
The bulk of the interview is about how we might control intelligences we create which could be far greater than our own, but there are many directions in which the conversation could have gone and didn’t. Can we hope, sometime, to upload the pattern which is “us” to a computer, and thereby defeat death, or at least the limited lifespan of our biological substrate? Mention was made of the fact that the best chess player is now not a computer, after the famous defeat of Garry Kasparov, but a human-computer team, which Max calls an “android” – probably correctly, as it is a human-machine combination. Might we augment ourselves and become amalgams of human and machine? (As I get older, I would very much appreciate some memory augmentation, perhaps a few terabytes…)
What, morally, is our position regarding a machine with a generalised intelligence greater than ours? Is it morally acceptable for it to be, in effect, a slave? (There is some discussion of this, though by no means exhaustive.) If not, will we see a situation, as Sam and Max discuss, in which the superhuman intelligence is, in effect, in the position of an adult surrounded by young children who cannot make decisions as well as the adult can?
If I have one overwhelming worry about this prospect (and it is closer than we might think – the self-driving car is already with us, the military are playing with machines which may, Bond-like, have a “licence to kill”, and the cheapest calculators can perform calculations many times faster than even the fastest human, giving a glimpse of what the situation might be were their “intelligence” generalised rather than restricted to arithmetic), it is that we are biological systems, and as such have emotions – and emotions are the foundation of most of our moral behaviour (as well as some of our most immoral). Without emotion, can an artificial intelligence ever be trusted to make good moral decisions? I worry about that; my long period of depression, which ended in 2013 (deo gratias!), culminated in a state of anhedonia, in which, broadly, I did not feel emotions. I could assess fairly well what would happen if I did something – my computing power wasn’t seriously damaged – but I couldn’t decide whether actually to do it, because there was no emotional charge favouring this course of action over that. Even the prospect that the action would damage me, perhaps kill me (or others), had no emotional charge – it was a matter of indifference to me whether I would be injured, or in pain, or dead in the future.
I got through that period by following a set of rules, largely “act as if” rules. Others did not get damaged, other than perhaps emotionally, and I got damaged relatively little and am still here to write about it. But it could so easily have been different.
Would a super-AI have the same problem? If so, we would want there to be VERY strong “rules” embedded at an early stage to avoid disaster.
But then, I took much the same view when raising children…