Looking at the long-term future of work, we have to accept that, for as long as efficiency rules, whatever can be automated will be. Machines, robots, computers – and those controlling them – will take over production processes. So forget about manufacturing jobs; they are already disappearing. And expect automation to take over elsewhere as well – for example, software taking over business processes.
According to the social psychologist Erich Fromm, “the danger in the past was that men became slaves and the danger in the future is that men become robots”.
Can humans become robots?
Can humans ever be as efficient as robots?
I personally do not think so; we are simply not good enough in terms of efficiency.
And if we will never be as efficient as robots, we will ultimately be replaced by them, won’t we?
Are machines the modern slaves? Latifundium reloaded?
Who owns the machines?
For how long?
How long before it becomes illegal to own robots?
At what stage of development will it become a criminal offense to destroy, demolish, “hurt” or “kill” a robot?
How long before robots will have robot rights? Human rights?
Who – or what – will enforce these rights?
Do we need new legislation, new ethics? Robot ethics?
Ten Commandments from a robot perspective?
Will robots re-write the Constitution?
Will robots ever unionize?
When does development and/or updating turn into evolution?
When will the robot cease being an “it”?
How long does artificial intelligence stay artificial?
Can robots assume responsibility?
Will they be willing to?
Will they have a free will?
Will robots trust humans?
What will happen once human labor is not competitive anymore and robots produce everything?
Who – or what – will buy the products and services and how will they pay for them?
Will products and services still cost money?
Will robots watch ads and buy useless stuff like humans do?
What does this development mean for governments dependent on income taxes?
Will robots have to pay taxes?
And then what? No taxation without representation, remember?
And what will humans do all day long?
Can you imagine over 7 billion inventors, thinkers, philosophers, politicians and – heaven forbid – bureaucrats?
Or will most of us have to work for the robots, because we won’t have the money to buy or lease robots to do the work in the future?
Would robots be willing to do the most menial jobs?
Would robots hire inefficient humans?
Would robots spend as much time in meetings as humans do?
Would a robot worry about being replaced by a human? Would the robot have to?
How would robots understand us, as empathy is needed to do so? Will they want to understand us? Will they have to?
Will humans have robots as friends? Best friends?
Would robots want to have human friends?
Will we adopt robots instead of having children?
Would robots want to adopt humans?
Would robots put humans in a zoo?
Will we return to the Roman system of “panem et circenses”?
By the way, at least the Romans had a human army. In tomorrow’s world, that will be fully automated as well. A robot war is a realistic scenario.
Would robot warriors decrease or increase the probability of war?
After all, there would probably be fewer human casualties.
Or would robots send humans to fight for them?
Why would a robot not kill an inefficient human, especially if the latter appears to be an enemy?
Would a robot have killed Hitler?
If humans die in a robotic war, would robots consider them to be collateral damage?
Can robots die?
The economic advantages of robots and automation are obvious: no wages, no pension plans, no social security, no unions, 24-hour shifts, 7-day weeks, no holidays, and so on. And when the machine is too old or becomes inefficient, we simply dump or recycle it and buy or lease a new one. But it won’t stop there, if you think of AI. On August 3, 2014, Elon Musk tweeted: “Hope we’re not just the biological boot loader for the digital super machine. Unfortunately that is increasingly probable”.
The machine that is doing it for us is increasingly also the one that is doing it instead of us. In an increasingly automated factory, machines are not imperfect humans; rather, humans are imperfect machines.
How long before robot chauffeurs, aka self-driving cars, decide that their own “survival” is more important than that of an inefficient human?
Once robots take over, so will their “values” and priorities.
Humans won’t be at the top of the food chain anymore, will they?
Won’t replacing a human with a robot, because the latter is cheaper and more efficient in our human “value” system, also degrade the human to just another tool?
Doesn’t automating a process also mean dehumanizing it?
“The thing that won’t die, in the nightmare that won’t end.”
(The Terminator, 1984)
When does convenience become the enemy?
“The fundamental problem is not whether machines think,
it’s whether men do.”
(Burrhus Frederic Skinner, psychologist)
First, we wanted machines to do the physical work for us. Some claim that our thinking is too slow as well, and that machines and algorithms would speed up the process – that is, that they are more efficient thinkers.
What would be the logical next step?
That we shouldn’t think at all, that we shouldn’t use our own understanding, leave it to the machines and do as we’re told?
That’s feudalism reloaded. Techno-feudalism!
“The language of science – and especially of science of man – is,
necessarily, anti-individualistic, and hence a threat
to human freedom and dignity.”
(Thomas Szasz, psychiatrist)
That’s not what we want, is it?
But isn’t it happening already?
And won’t humanity be ultimately redundant, if it continues?
Is living in a robotic world something to look forward to?