[Y]ou can imagine a fully decentralized AI marketplace where people provide their data, developers compete to provide the best machine learning models, and the whole system works as a self-reinforcing network that attracts more and more participants and creates better and better AI.
📈 PwC, an accountancy firm, reckons AI will create 7.2m jobs in the UK, more than the 7m displaced by automation. This joins the collection of long-range forecasts which sadly don’t expose their models for wide scrutiny. Taken at face value, it suggests a massive dislocation, both geographically and in terms of skills, equality, identity, and psychology. The sectors which gain the most roles, healthcare and technical, are vastly different from those which get slammed (transport and manufacturing). London, naturally, does well; the Midlands and the North, poorly. Seven million people losing their jobs in 20 years is not a pretty picture, and one which will require serious policy interventions, especially at a geographical level, to avoid a fractious social response. And, of course, all this assumes these 20-year forecasts have much validity. My hunch is the timing is way off. Things may move faster and hit harder, especially in a UK economy enfeebled by Brexit and alongside the growth of work co-ordination platforms that put pressure on the quality of employment most workers can enjoy. On the latter point, The Economist has a visionary essay on a world of work where companies have no employees. It paints a challenging picture.
🙄 Automation in emerging markets could create huge issues: ‘For two centuries, countries have used low-wage labor to climb out of poverty. What will happen when robots take those jobs?’ asks Christina Larson. Great essay. I’ve spoken with the owner of a garment factory in Bangladesh who described the pressure his Western fashion customers had put him under to switch from manual labour to (mostly Chinese) automated machines. He hadn’t yet had to make any redundancies, instead allowing natural attrition to take its toll. He explained that job losses would likely be met by “sabotage and riots”. (Interestingly, recent research on gig workers in Southeast Asia and Sub-Saharan Africa, who have no access to unions, shows them forming new collective labour organisations, organised using messaging platforms, albeit fragmented by nationality.)
🐲 How the Chinese Communist Party entangled big tech. Over the past ten years, China’s corporate landscape has changed. The largest firms in the country are not merely information-age companies, but increasingly privately held, as this graph shows. They contribute significantly to the economy: Taobao alone is estimated to have created 37m jobs. How do these “state overseen enterprises” balance their ambitions with those of Beijing? (FT registration required.)
The biggest US tech companies now have powers that compete with, and in some cases capabilities unavailable to, the state, challenging the primacy of governments in many domains. We touched on these issues, and the notion of “corporate foreign policy”, in EV#162.
Now in the University of Pennsylvania Law Review, Kristen Eichensehr looks at the issue of Digital Switzerlands in greater depth, 66 pages of it to be precise, so we summarised parts of it here. Key distinctions between large corporations and nation-states are that corporations lack territory and control of state violence, and have very different governance mechanisms. But that is true of many supranational bodies as well.
There are also similarities between these large platforms and nation-states: in the size of their user bases, which rival those of nation-states; their ability to access and influence those user bases; their financial capacity; and their ability to actually impact citizens at scale.
Eichensehr sounds a note of optimism, that:
The aspirational quasi-sovereign status of the U.S. technology companies [created] two powerful regulators, rather than only one [the nation state], [and] can benefit individuals’ freedom, liberty, and security because sometimes it takes a powerful regulator to challenge and check another powerful regulator.
We’ve already seen a certain amount of this. Technology firms have been helpful in their approach towards cybersecurity. And, arguably, Facebook’s “mark me safe” feature could be helpful in a crisis.
However, the story is not uniformly as straightforward as all that. Look at the complexities facing Facebook, the largest candidate for a Digital Switzerland. Admittedly, its leadership has proven as muddle-headed as it has been stubborn in confronting its responsibilities, but the Swisher interview with Zuck above shows a passable recognition of Facebook’s wider role.
But also, imagine navigating the complexity of local laws, as well as a firm’s own internal governance systems. We’ve already seen Google employees convince the firm’s bigwigs to drop certain military contracts. In other words, the big tech firms don’t yet have the internal governance systems or external legal frameworks to consistently engage with us as citizens or with nation-states.
Where does this leave us? By dint of their execution and by consumer choice, these platforms have incredible power. They will need special status and that status will confer more obligations on them. But the path to that status over the next few years will involve, I suspect, more bilateral negotiations & customised relationships than a single global contract.
[W]e need to remember that we are building commercial products. We could build a robot that suffers like humans, but why would we do that? We don’t even take care of each other.
Humans show racial bias towards robots. “This bias is both a clear indication of racism towards black people, as well as the automaticity of its extension to robots racialized as black.” This is an ugly new twist to robot ethics. Would this encourage robot manufacturers to make helpful robots (e.g. care-bots) lighter shades, and enforcement robots darker (like our dear friends RoboCop and a skinless T-800)?
A couple of readers have asked why I’m discussing so many issues about the political economy, rather than pure-play technology, in Exponential View. It’s always a tough balance to strike - and one I consider regularly. How much time should I spend on the technology (or, indeed, the research) vs. the impact of that research on the world in which we live?
After all, what appears in Exponential View reflects a fraction of what I read or the conversations I have each week.
EV is a flow, not a stock. Over time, I hope that it’ll bring some challenging ideas on technology and its impact on society. There is too much to cover in a single week, but what I cover each week does reflect some of the things I’m thinking about. Right now we broadly understand today’s narrow artificial intelligence (and, in particular, that deep learning is enabling rapid gains in multiple markets); what is less clear is how this will play out in our society. And if you are a founder or developer, understanding the wider implications of your endeavours (the “why”) is at least as important as the technology (the “how”) of getting there.