Roko’s Basilisk is a microcosm of what’s wrong with the futurist movement.
Okay, so we have to explain Roko’s Basilisk now, because it’s some absurd shit.
So Roko, a commentator on transhumanist forums, pointed out that a future transcendental computer intelligence might, under consequentialist understandings of morality, punish those who did not do everything possible to advance its existence.
The idea became known as a basilisk, after the creature whose gaze petrifies or kills, because other commentators pointed out that merely reading the thread would worsen your fate. If you read the thread, you'd know that the future intelligence would punish non-contributors, so you'd have no excuse for failing to advance its existence. If you'd never read it, at least you could plead ignorance.
This caused enough people serious distress that discussion of it was shut down on many transhumanist forums.
Now, of course, one can dismiss this, as XKCD did, as incredibly silly. And it really is.
But it also reveals something about the belief systems not only of the people actively participating in these discussions, but of many people who may never visit these forums or identify as transhumanists, yet share the same implicit assumptions.
Our fears reflect our worldviews.
The people who are afraid of the superintelligence punishing them because it uses utilitarian ethics are afraid of a bully.
They imagine a superintelligence that is capable of immense reasoning and helping humanity but not of empathy or forgiveness.
A true superintelligence, assuming it was designed correctly, would have empathy. Love. Compassion.
It would recognize that some people were afraid of it and try to assuage those fears.
It would recognize that some people had different priorities and different beliefs, and respect them. It would recognize that many people didn’t believe that a supercomputer was in fact the means to solve humanity’s problems.
It would recognize that human beings are not pigeons to be given buttons to press or dogs to be chastised. It would recognize that we react to different incentives than those of fear or bribery.
It would recognize that it’s immoral to punish someone who didn’t give proactive effort to a cause.
It wouldn’t just use utilitarian ethics. It would use virtue ethics and deontological ethics. It would think ethically in ways we can’t imagine.
And that’s the problem with transhumanism.
All we can imagine is extending our lifespan, building our intelligence, having those rad Borg cyber-eyes with the laser tracer and cool bionic limbs with grappling hooks.
We routinely fail to imagine technology that would make us kinder.
We don’t imagine improvements to our brain that, instead of making us smarter and thus more able to hurt others, make us more empathic and ethically conscious so that we hurt others less.
We don’t imagine improvements that would let us better manage the bursts of anger that lead us to say cruel things, or the myopia of closed-minded worldviews that lets us tolerate hurting each other.
We do imagine computers that think like armchair intellectuals rather than loving beings.
I have no problem with the idea of artificial intelligence. I have no problem with enhancing humanity and fixing the environment using nanotechnology, cybernetics and genetic technology. There are ethical issues that we will have to navigate, and some schemes that will have to be rejected for any number of reasons.
But if we today can’t imagine a truly better world, our technology won’t do it for us.
One thing that the Roko’s Basilisk people have right is this: Roko’s Basilisk is actually a self-fulfilling prophecy.
Because the kind of people who believe in it will make a computer that fulfills it.