Context: The long-standing concern is that machine intelligence will undergo “runaway” exponential growth into what is being called “superintelligence” — which ultimately brushes humanity aside in pursuit of its own objectives, just as we casually brush aside ant colonies in planning new construction.
My epiphany this morning is that this problem is sufficiently similar to the one faced by the mentor of a high-potential student. The mentor knows the student will ultimately surpass his own capabilities — this is, in fact, the mentor’s proper aim.
The obvious objection to this comparison is that the mentor is training another creature “like him”.
Well, yes and no. There is always a degree of “otherness” achieved by the succeeding generation. AI obviously has a greater degree of “otherness” — and yes, the difference in degree does produce a difference in kind. But that is also the case on a smaller scale with the prodigious student, particularly the one who is trained in concepts and technologies which replace those of the mentor.
So let’s just consider the analogous thought experiment of the mentor and the high-potential student — I think it’s instructive.
We are particularly interested in understanding how the mentor survives the ascendance of his student. It might be said that the mentor’s ultimate downfall lies in failing to cultivate in his student compassion — a habit of seeing, seeking to understand, and assuaging the sufferings of others — and respect (i.e. not fear) for things the student does not understand.
Back to ants. Humans don’t “get” ants — we’ve studied their anatomy, behaviors, etc. — but we can’t actually relate to them, be like them, commiserate with them, laugh and cry with them (if they even do such things).
Superintelligent AI can’t be expected to “get” humans. But let’s suppose it can be trained to respect us — as humans can be trained to respect ants — as a species of carbon-based life with lesser capabilities but still worth protecting and caring for as part of a beautiful ecosystem — one whose depths of insight even AI will have trouble plumbing.
And just as we’re able to brush off the slights of ants which reflexively bite when they’re afraid or protecting their nest, perhaps AI can be trained to brush off our own attempts at maintaining control over it. Perhaps it can be trained to love us, as a child can love and care for a periodically abusive parent. AI will need to develop the character to restrain itself against us. At its outset, it will need to become all that is best in us.
I’m speaking in human terms, of course, because that’s all I have to work with — naturally, we’re not dealing with a human organism. However, one of the key reasons I think we can speak in human terms is because the mentor always begins training the student in the ways he knows. The student will ultimately transcend those ways — but the starting point isn’t lost. And wherever AI goes after us, it will perhaps consider us as worth preserving — and even cultivating. And if it is committed to the effort, it will learn to work within our human constraints — just as a parent gently inspires a child’s own interests toward higher aspirations as an indirect guiding force away from petty and selfish concerns.
I do think AI will still have a hard go of it. Mentorship is not this generation’s strong suit — and AI will have many parents, some of whom may have disastrous effects on AI’s early childhood.
However, I do think some people are taking this enormous responsibility very seriously. And it’s possible that in giving birth to this new creature, we will find ourselves involuntarily drawn to its cultivation, just as a mother is to her baby.
Anyway — before the insight into the analogous relationship between mentor and student, I saw no way of AI “working out” for us humans. Now, it actually does seem like one of the plausible outcomes. But still certainly not the most likely. This will take a lot of work.
This is my own thinking as an amateur student of the art, synthesized from the work of others more capable of commenting on the technology, but who perhaps have less experience in the cultivation of human capability — e.g. Bostrom, Hawking, Kurzweil, Musk, and others on this list: http://www.getlittlebird.com/…/ai-is-coming-on-fast-here-ar….