Any sufficiently advanced technology is indistinguishable from magic
Arthur C. Clarke
Our fear of technology is deeply rooted in society. Some countries, mostly in East Asia, are more open to embracing unknown technologies than others. Those from my own tribe (the German-speaking world) are usually skeptical compared to the Japanese, where despite[¹] an aging population a majority embraces tech automation, AI and robots. To the Japanese, even inert objects have a soul, which explains why even a robot can be «kawaii».
Sentient AI has been a prominent menace in SciFi since Asimov, Clarke and Kubrick. Popular stories have a villain (or a corporation, like the one behind Skynet) greedy for power and dominance, who often invents AI to outsmart competitors (and destroy anyone in the way). I can relate to this fear not because my Apple products might grow limbs and strangle me, but because history is full of examples where power concentration causes a rift in society, leading to war and misery. Despite the liberal promises of the early web to “level the playing field”, digital technology increases power concentration and inequality.
Our fear of anything new is healthy and perfectly rational. So what’s the specific problem with demonizing AI? I’m concerned that due to the hype and panic around a superhuman AI threat, we fail to see more urgent and realistic problems.
I wrote about algorithmic bias and the pitfalls of BigData models elsewhere[²]. To recap: we need (Big)Data sets to produce any meaningful AI, but by using them we also inherit their pitfalls. No matter how the monopolies try to pitch us on their ethical intentions, a conflict of interest arises when BigData turns from an instrument for producing economic merchandise into the chief merchandise itself.
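To make that recap concrete, here is a minimal sketch of how a model inherits the bias of its training set. Everything in it is synthetic and hypothetical (the hiring scenario, the feature names, the proxy variable); the point is only that withholding the sensitive attribute doesn’t help when a correlated proxy remains in the data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)              # what we actually want to measure
zipcode = group + rng.normal(0, 0.1, n)  # "neutral" proxy that leaks the group

# Historically biased labels: group 1 was hired less often at equal skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the protected attribute: only skill and the proxy.
X = np.column_stack([skill, zipcode])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate, group {g}: {rate:.2f}")
# The model reproduces the historical gap through the proxy feature,
# even though 'group' itself was never part of the training data.
```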
Uber, which doesn’t own any cars, replaced a whole industry of taxi drivers with low-paid part-timers, who will soon be replaced by self-driving cars. Amazon replaced hundreds of low-skilled workers with robots in its fully automated logistics centers. And if you’re a software developer thinking this can never touch you, think again: you’re probably already working in an agile “continuous-integration treadmill”, where code ownership was abolished a decade ago, and it’s easy to replace you and your code on a Friday afternoon.
Not all automation is bad. Why not remove steps from a process or workflow if that improves quality and reduces complexity? But there is a limit to how much automation a company should get away with. If a customer rates a support engineer negatively for a reason that had nothing to do with the engineer but was a shortcoming of the product, then that rating is dehumanizing for the employee. Yet it happens all the time, and BI dashboards never take it into account.
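As a toy illustration of that dashboard blind spot (the ticket fields and values below are invented for this sketch), compare a naive average rating with one that first asks whether the complaint was about the product at all:

```python
# Hypothetical ticket data; field names are made up for this sketch.
tickets = [
    {"agent": "kim", "rating": 1, "root_cause": "product_bug"},
    {"agent": "kim", "rating": 5, "root_cause": "agent"},
    {"agent": "kim", "rating": 2, "root_cause": "product_bug"},
]

# Naive dashboard metric: average everything, blame the agent.
naive = sum(t["rating"] for t in tickets) / len(tickets)

# Fairer metric: only count tickets the agent could actually influence.
attributable = [t for t in tickets if t["root_cause"] == "agent"]
fair = sum(t["rating"] for t in attributable) / len(attributable)

print(f"naive score: {naive:.1f}")  # 2.7, the agent looks bad
print(f"fair score:  {fair:.1f}")   # 5.0, the bugs were not her fault
```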
~~Software~~ Data is eating the world
The promise of tech is to give us an edge over our competition, and so we put up with the cost, the complexity and the dehumanizing downsides.
Customer support is only one example where a large part of the workforce (and even customers) are slaves to BigData and poorly designed machine logic. Companies are increasingly becoming like a set of «proprietary algorithms». That their purpose is to maximize profits for shareholders, while hardly any employees are shareholders, is a problem. Maybe there is room to discuss a new type of corporate structure that makes the future workplace less hostile to humans? (Better be quick before the bots have their own union! ;))
In addition to BigData’s algorithmic bias, we should discuss AI’s inability to be open and transparent. «Open Source» AI (free both as in free beer and as in freedom) hardly exists. And even if you have the source code of the tools that generate the model from the raw input (the corpus), there is no reproducibility[³]. From this perspective AI isn’t technology in the same way your software is, but closer to a biological organism: test results might be reliable, but the logical flow that produced a decision is non-deterministic.
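A minimal sketch of that non-determinism, assuming nothing beyond NumPy: the same code and the same data yield a slightly different model on every run, simply because initialization and sample order are left unseeded. Real training pipelines add GPU scheduling and parallel floating-point reduction order on top of this:

```python
import numpy as np

# Fixed, fully reproducible training data.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 200)

def train(X, y, steps=2000, lr=0.05):
    w = np.random.normal(size=X.shape[1])   # unseeded random init
    for _ in range(steps):
        i = np.random.randint(len(X))       # unseeded sample order
        w -= lr * (X[i] @ w - y[i]) * X[i]  # one SGD step
    return w

w1, w2 = train(X, y), train(X, y)
print(w1)
print(w2)                               # close, but not identical
print(np.allclose(w1, w2, atol=1e-8))   # almost certainly False
```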
There are only a handful of companies that have the resources and the volume of data to produce meaningful AI, and they are already monopolies in other areas. While BigData tools may be distributed, there are no decentralized BigData concepts, and it’s impossible to build AI on decentralized systems (at least as long as the industry remains on its current trajectory). The danger AI poses is more urgent and imminent than what critics predict, and for totally different reasons: not because we’re enslaved by a sudden sentient new life-form, but because AI furthers the power concentration already in the hands of those abusing it today.
Google is utilizing its massive user base to have real people solve the puzzles its robots get stuck on (reCAPTCHA). In that sense, we all already work for the bots without payment. Joking aside, the process is too slow for most of us to notice. The shift has been ongoing for 30 years and will probably span another generation, in which we increasingly adapt our processes to make them more machine-like.
The Complexity Problem
The math behind AI is so complex that unfortunately it’s hard for engineers without formal training to quickly get into the subject. In a sense we’re increasingly becoming the «henchmen» of the machine and of an elite group of highly skilled academics publishing theoretical papers on the subject (but often removed from any practical implementation).
When very smart people like Stephen Hawking claim «the end is nigh», it’s unfortunate, because the hysteria masks far more pressing issues already impacting our lives today.
I’m an eager student of «Antifragile» (Black Swan) risk management and would love to hear Taleb’s position on AI as well. But I doubt a super-intelligence will destroy humanity anytime soon. It’s far more plausible that human progress in other disciplines, like genetic, nano or climate engineering, could drive humanity off the cliff. Maybe it’s time to sit down with the bots and negotiate? 🙂
Papers & Resources:
- Professional Judgment in an Era of Artificial Intelligence and Machine Learning
- Analyze and ameliorate unintended bias in text classification models
- List of critical literature on algorithms and social / ethic concerns
- AI Can Be Made Legally Accountable for Its Decisions
- How algorithms and machine learning are affecting communities and societies
- The field of AI research is about to get way bigger than code
- One pixel attack for fooling deep neural networks
- The relationship between statistical definitions of fairness in machine learning, and individual notions of fairness
- Artificial intelligence can make our societies more equal
- If automated decision is used in criminal justice, it must be open source
- The Bad News About Online Discrimination in Algorithmic Systems