DEPT OF THE NEAR FUTURE
🌆 EV reader David Galbraith on a new approach to designing cities. Argues David:
If recipes for cities leverage recent scientific understanding of human behavior, they can serve people’s needs from a psychological rather than a merely functional perspective, meeting spiritual needs that are an accepted fact without being tied to a particular ideology or religion. We can create recipes for new cities that make people happy by design, not just house them.
(If you are going to read one thing to expand your thinking this week, read this.)
🙃 Is the world better than ever? And is our pessimism holding us back? Long-read on the work of Max Roser and Hans Rosling.
📶 Why are firms in developing countries so small? And how cell phones might solve this problem.
🥁 China: autonomous mobility-as-a-service could be a $2.5trn opportunity there by 2030. Also, how China’s fintechs are collaborating and innovating to extend credit scores to the unscored and widen financial access. And how China's manufacturing cost advantage over the US has virtually disappeared, courtesy of automation and rising labour costs. (See also: the FT on China’s experiments with crime-predicting technology.)
🇺🇸 On the market concentration of America’s internet giants. How far does it hamper innovation and hurt consumer welfare? And should they be broken up? (Also, Steve Bannon is apparently agitating for Facebook and Google to be regulated as utilities. Read also EV reader, Albert Wenger, on the digital monopolies and what appropriate remedies might be.)
🎐 A primer on critical mass and how virality works. A good foundation for understanding the future.
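The primer’s core idea, that spread compounds once each user recruits more than one new user on average, can be sketched as a toy branching process. The numbers and function below are illustrative assumptions of mine, not drawn from the primer:

```python
import random

def simulate_spread(invites_per_user, conversion_rate,
                    seed_users=10, generations=12, rng=None):
    """Toy branching-process model of virality.

    Each active user sends `invites_per_user` invites; each invite
    converts with probability `conversion_rate`. The viral coefficient
    is R = invites_per_user * conversion_rate: above 1, growth
    compounds each generation; below 1, the cascade fizzles out.
    """
    rng = rng or random.Random(42)  # fixed seed for a repeatable sketch
    active = seed_users             # users who joined last generation
    total = seed_users              # everyone who has ever joined
    for _ in range(generations):
        new_users = sum(
            1
            for _ in range(active * invites_per_user)
            if rng.random() < conversion_rate
        )
        total += new_users
        active = new_users
    return total

# R = 5 * 0.25 = 1.25 > 1: compounding growth
viral = simulate_spread(5, 0.25)
# R = 5 * 0.15 = 0.75 < 1: the cascade dies out
fizzle = simulate_spread(5, 0.15)
```

The point of the sketch is the threshold, not the exact totals: nudge either parameter across R = 1 and the outcome flips from fizzle to explosion, which is what “critical mass” means in this context.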
DEPT OF ARTIFICIAL INTELLIGENCE
Elon Musk and Mark Zuckerberg had a ding-dong about the impact of AI. Elon warned of existential risks; Zuck talked up the benefits.
💢 Ian Bogost’s take on their positions is particularly astute: “really they’re protecting their personal brands”.
My take is this: a number of scenarios are possible, depending on the breakthroughs in artificial intelligence and the choices that individuals, business leaders and public leaders make about these technologies. In nearly every scenario there are risks and unintended consequences, and we need to take steps to manage those while enjoying the upside. The scenarios:
Artificial intelligence and the gains in automation do nothing. This is all AI hype. This is the least likely scenario (p=0, in my view); it flies in the face of a million years of human history and pre-history, and of the relationship between our inquisitiveness and our ability to use tools to fashion our environment to our end goals du jour.
AI is going to be remarkably beneficial, freeing up time and resources, automating drudgery, reshaping industries. Just as we eliminated back-breaking human labour from agriculture and replaced those jobs with accountants, web designers and pilates teacher-trainers, we’ll replace today’s modern drudge jobs with something better (which will need a new political consensus between business and society). This is the most likely outcome. But a side effect is that millions, if not hundreds of millions, of people will face changes to their jobs, livelihoods, communities, values, quality of life and sense of purpose. The practical power of AI might also be misused for nefarious ends (market dominance, criminality, violence, etc.).
The above, but AI becomes so good there are no jobs left for humans to do; rather, an economic singularity occurs. It may even go existential on us: either we’ll become Kurzweilian super-beings or be turned into paperclips by a Bostromian superintelligence.
In the last two of the three scenarios, which are all the likely ones, huge benefits (to health, wellness, sustainability, income) are possible, but massive social and employment change is also a likely outcome. Regardless of whether jobs ever return, or whether superintelligence kills us or elevates us to super-beings, the real questions are about how we ensure the inevitable gains from this technology are distributed appropriately, how the social externalities are accounted for, and how power is kept in check.
This means ensuring that a broad spectrum of voices is heard and that the seats at the table are filled with a wide diversity of bottoms. And, ultimately, it means participation, so that we can reach the right consensus about what comes next.
And yes, there might be existential risks lurking out there, so it is reasonable that a very small number of very bright philosophers and mathematicians think those through formally. I do appreciate their work, but their media air time is perhaps overweighted right now.
1. A stunning essay by Siddhartha Mukherjee on what happens when medical diagnosis gets automated with algorithms.
Andrew Ng, who founded Google Brain, weighs in on the debate.
🔑 Fei-Fei Li, ImageNet, and the data that transformed AI research.
Gary Marcus on why we need more top-down approaches to AI development rather than the current vogue for bottom-up. (Accessible.)
The first evidence that Facebook “dark ads” can be targeted by political opinion and sway elections.
AI-powered government and 5 biases that will affect AI in the policy arena.
Models as programs. Francois Chollet on the future of deep learning.
DEPT OF BEST OF THE BEST
It is summer holiday season for many of us, and I know we may want a break from our summer novels from time to time. So here are ten of the best pieces from the past six months of Exponential View. Preload them and find a shady spot.
2. The demise of big oil, and the dynamics that will lead to its collapse by 2025.
3. Nicholas Bloom’s must-read piece on corporations in the age of inequality, as the most skilled employees cluster inside the top 1% most successful companies.
4. Rodney Brooks goes deep into the history of Moore’s Law and how its end brings “the golden new era of computer architecture”.
5. The struggle of London’s black-cab drivers to survive in the age of Uber reflects a broader clash between cultures, and the strains of automation and platform dominance.
6. A take on the author and researcher Elizabeth Currid-Halkett’s theory of inconspicuous consumption, the invisible divide between the classes.
7. Computers in the next 30 years: what is the vanishing point for our machines?
8. ACM’s excellent review of how Moore’s Law came to be. (Really excellent, read with a pencil and paper so you can do the maths towards the end.)
9. A fascinating interview with Ross Anderson, professor of security engineering at Cambridge University, that puts today’s cybersecurity issues in a multi-decade context.
10. Spiders appear to offload some cognitive load to their webs.
SHORT MORSELS TO APPEAR SMART AT DINNER PARTIES
😵 Demise of Jawbone, a result of overfunding.
🇸🇪 Sweden’s data breach is the biggest ever. (See also this take.)
Watch a hacked car wash attack a car.
Cryptocurrency miners are chartering 747s to fly GPU cards to their mining outposts.
✍️ Handwriting engages the brain more than the keyboard does.
Roomba can extract more from its users’ data than you might think.
🎥 Disney tests emotion recognition to see how audiences respond to movies.
🐌 Slug slime inspires scientists to invent surgical glue.
Machines are having a hard time learning Chinese, too.
👽 How do you work out if messages from space are coming from aliens?
Sci-Hub’s cache of scientific papers is so large it may spell doom for the aggressive scientific publishing industry.
The median monthly income of sharing economy workers who do it as a ‘side gig’ may be about $109. 👛
The evolution of trust: Absolutely wonderful game-theory simulator exploring how trust develops in society.