So often, we debate the flaws of human nature. We discuss how humans may be fundamentally cowardly, or cruel, or greedy, or whatever. People might say that the foster care system’s problems are just a sign of human apathy and greed, or that we are destined for nuclear hellfire or for obliterating ourselves with climate change.
Let’s step back for a second, though. Is it just that we’re stupid, flawed apes? Or is something else going on?
Imagine a society of robots.
These robots can be perfectly rational. They have no emotional distortions, no biases. Their instrumentation is engineered perfectly. Their optics don’t have the problems of the human eye, their semantic networks exceed ours. You can imagine them like C-3PO or Asimovian robots or however you please. They can be First Law compliant or not; it’s moot.
Would these robots make mistakes? Would they make miscalculations, errors of judgment, or decisions that ended up being flawed? Would they make inaccurate observations?
Any instrumentation will break down. Anyone who’s worked with any computer, whether the biological variety or the electronic one, knows that there are errors. The more complex the system, the more prone to unexpected errors it is. Complicated systems interact in complex ways.
In fact, a lot of modern psychology is finding that many aspects of what we think of as human flaws are in fact really wonderful computational engineering. Our instincts, for example, may sometimes be distorted or biased, but research is discovering that many human instincts are incredibly accurate. Similarly, perhaps part of the reason we’re struggling to create computers with human-level capabilities in crucial ways is our inability to give computers emotion. Emotions are powerful shorthand processing tools.
Any sentient being, near as we can tell (it’s hard to say for sure, since we haven’t encountered any others with our degree of sentience), is probably going to have flaws and limitations. It’s a consequence of having limited processing power, limited instrumentation, and a limited perspective. Chaos theory teaches us that some systems are virtually impossible to model even with the greatest supercomputers imaginable. Weather systems, for example, are so massively chaotic that, supposedly, “the extreme non-linearity (chaos) of the equations that govern the motion of air means that something that small [as a butterfly’s flapping] can lead to huge differences weeks and months down the road…even if you had a perfect forecasting model, and perfect observations of the atmosphere from weather stations placed 1 meter apart for the entire depth of the atmosphere, you still could not predict whether or not it would rain a month from now. That’s how chaotic the atmosphere is.” And, of course, I doubt that even very advanced robots would be able to gather perfect data and maintain perfect forecasting models all of the time for every possible chaotic problem.
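That sensitivity to tiny differences is easy to see for yourself. Here is a minimal sketch (my own toy example, not from any forecasting model) using the logistic map, a textbook chaotic system: two starting values that differ by one part in two million track each other at first, then diverge completely.

```python
# Toy demonstration of sensitive dependence on initial conditions,
# using the chaotic logistic map x -> r * x * (1 - x) with r = 4.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0, returning the whole trajectory."""
    x = x0
    traj = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)  # initial values differ by only 1e-7

# Early on, the two trajectories are nearly identical...
early_gap = abs(a[5] - b[5])
# ...but after a few dozen iterations the tiny difference has been
# amplified until the trajectories bear no resemblance to each other.
late_gap = max(abs(x - y) for x, y in zip(a[25:], b[25:]))
```

No measurement of the starting point is precise enough to save you: the error roughly doubles with each step, so even a perfect model loses the thread eventually.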
Information theory is an emerging science, and we have yet to meet actual aliens, so everything I’m saying here has to be taken with a grain of salt. Still, the fact that systems as complex as our brains fail at some major tasks may not be a sign that they’re flawed, but that those tasks are actually a lot harder than they look. Some information theorists despair of our ever being able to create something markedly more complex than the human brain with anything like our present understanding.
Does that mean we can’t improve? Not at all. We can teach better awareness of our limitations, we can build better institutions, we can gather better data. And maybe our ability to make amazing pieces of technology will allow us to create a bunch of plastic pals who are fun to be with and who will complement us. Maybe we’ll find, as we meet extraterrestrial life (assuming any is out there), that they have very different capabilities and can complement us. (Of course, maybe we’ll find out that they are very similar to us because of the limitations of biological systems.)
So when you forget your keys or make a mistake in life, cut yourself some slack.
Maybe a robot wouldn’t do any better.