DEPT OF THE NEAR FUTURE
🌪️ How quickly and extensively will AI transform the economy? Useful summary from The New York Times. One headline: a McKinsey Global Institute report suggesting as many as 800m workers will see their jobs displaced, and as many as 375m will need retraining, by 2030. (As an example, a British bank is closing 25% of its branches as consumers move to digital touchpoints.)
☎️ If AI is making the economy grow, distractions such as smartphones and social media might be hurting our productivity, suggests Dan Nixon, from the Bank of England. Productivity growth in advanced economies has collapsed as smartphone penetration has risen and, of course, correlation does not mean causation. He suggests two possible pathways: that distractions simply reduce effective working time, and that we might have “persistently lower productivity caused by habitually distracted minds”. (Separately, Amazon may have put a brake on US inflation.)
🌀 Intangible assets have soared past tangible ones in economies like the UK, US and Sweden. (Think of Apple whose value is created through R&D, design and orchestration rather than manufacturing). But the properties of intangible economies may explain rising inequality and slowing productivity. In turn, it demands a very different policy environment. (Excellent essay; FT reg needed.)
💥 The impossibility of the artificial intelligence explosion. Francois Chollet argues that our civilization is an exocortex to our brains, the medium in which our intelligence develops. Fundamental forces, he says, will put the brake on an intelligence explosion:
The expansion of intelligence can only come from a co-evolution of brains (biological or digital), sensorimotor affordances, environment, and culture…[and] No “intelligence explosion” will occur, as this process advances at a roughly linear pace.
(See a longer discussion below.)
🔒 70% of Brits say they will stop doing business with a firm after a data breach. This is reflected amongst US and European consumers, too. With GDPR around the corner, the number of data breaches notified is going to rise. Since many firms aren’t remotely ready for GDPR, either a lot of companies are going to lose close to three-quarters of their customers, or consumers are going to keep doing business with breached corporations. (Accenture reckons that security breaches are growing at more than 27% per annum.)
💦 Kai-Fu Lee: The online and offline worlds are merging.
☂️ Volatility be damned. Bitcoin appears to be going mainstream. Coinbase, a wallet, has more customers than Charles Schwab, a brokerage. The firm is adding more than a million users a month. Nasdaq will offer futures contracts on bitcoin from next year, which will allow investors to take even riskier bets on bitcoin. EV stalwart Albert Wenger reckons the cryptocurrency market may be on a path to trillions of dollars.
DEPT OF ARTIFICIAL INTELLIGENCE PROGRESS
How fast are the underlying technical improvements in AI really coming? And might it lead to an intelligence explosion?
The Stanford AI Index 2017 has some super data on longitudinal improvements in specific areas. Machine vision, natural language understanding and question answering have all seen step-change improvements in the past few years. They have crossed, or are close to crossing, the uncanny valley where these technologies can be used without annoying us. Object detection, where the maths can spot something in an image, is now at truly super-human levels. Only ten years ago, our best computer vision tools performed more like a short-sighted person wearing sunglasses in a dark room.
For a more detailed dive into computer vision’s progress this year alone, I enjoyed Duffy and Flynn’s “A Year in Computer Vision”. They describe how the “voracious pursuit of optimisation” by the machine learning community has led to significant “intra-year changes in accuracy”. That pursuit has left anyone tracking the space on the verge of exhaustion. As Duffy & Flynn recognise, it is “becoming increasingly difficult to remain abreast of research as the number of publications grow exponentially.” I can testify to this. My slides feel perpetually out of date and my list of unread papers seems to double on a weekly basis.
But a second point is that the community’s strong approach to sharing its progress, through frameworks, pre-preprints on ArXiv, and open-source affordances is unparalleled and is “continuously attracting new researchers and having its techniques reappropriated by fields like economics, physics and countless others.” And, of course, the competition amongst the major cloud platforms is throwing out juicy components for researchers and engineers alike to get their hands on. (See for example, this deep learning-enabled programmable camera from Amazon.)
It is worth considering Francois Chollet’s argument, from above, when we look at the rate of progress in AI.
Francois argues that despite the seemingly rapid progress in AI, an intelligence explosion, where intelligent machines recursively design greater intelligences, is very unlikely. So the fast progress we see today is a chimaera, more linear than we think and more likely to slow down, because:
Doing science in a given field gets exponentially harder over time — the founders of the field reap most of the low-hanging fruit, and achieving comparable impact later requires exponentially more effort.
And that even the open-source networked approach to research that has driven so much recent progress has limits because:
Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger.
Essentially, these manifest themselves as contingent bottlenecks and diminishing returns.
I’m very sympathetic to Francois’s arguments, but I do have a couple of points of departure. First: we have already gone exponential. We have already seen rises in complexity which, when mapped against linear time, show the exponential hockey-stick graph. Whether that is hominid cranial capacity (inflection point about 2m years ago), or world GDP (which looks very linear on a 1950-2016 scale, but clearly inflects on the scale of AD1 to today, with the kink in the chart coming around the industrial revolution), or even human population (which is still in its hockey-stick growth phase).
Many of these things are logistic (or S-curves): they reach a point of saturation, or diminishing returns, and flatten out. Human cranial capacity is unlikely to treble in the next two million years, although that is what it did in the previous two million. It had better not, otherwise our cranial capacity would reach 4,500cm³ and our skulls would be roughly 44% bigger in every direction.
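That skull arithmetic follows from volume scaling with the cube of linear size. A quick sanity check (the 1,500cm³ starting figure is my own assumption for today’s average cranial capacity):

```python
# Volume scales with the cube of linear size, so if cranial volume trebles,
# each linear dimension grows by the cube root of 3.
current_capacity_cm3 = 1_500            # assumed average cranial capacity today
trebled_cm3 = 3 * current_capacity_cm3  # 4,500 cm3, the figure above
linear_growth = 3 ** (1 / 3)            # ≈ 1.442

print(f"skulls {linear_growth - 1:.0%} bigger in every direction")
# prints: skulls 44% bigger in every direction
```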
But Ray Kurzweil’s description of accelerating returns is that the familiar exponential curve is often the layering of S-curves from multiple successive technical paradigms. In fact, you only need to look at this graph from Nvidia’s investor day. As the improvements we can push out of CPU architectures peter out, we exploit a new architecture which is accelerating at the steep point of its own S-curve: in this case, the GPU & CUDA pairing. Over the next 10 years, CPUs might only improve 2-3-fold, but GPUs will improve 60-fold, reaching 1,000 times the performance of their CPU cousins. That feels better than linear...
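Kurzweil’s layering can be sketched numerically: sum a stack of staggered logistic curves, each paradigm with (say) ten times the ceiling of the last, and the envelope grows roughly exponentially even though every individual curve saturates. The spacing and the 10x multiplier here are illustrative assumptions, not data:

```python
import math

def logistic(t, midpoint, ceiling):
    """A single S-curve: slow start, steep middle, saturation at `ceiling`."""
    return ceiling / (1 + math.exp(-(t - midpoint)))

def layered_scurves(t, n_paradigms=6, spacing=5.0, multiplier=10.0):
    """Sum of staggered S-curves; each paradigm's ceiling is 10x the last."""
    return sum(
        logistic(t, midpoint=i * spacing, ceiling=multiplier ** i)
        for i in range(n_paradigms)
    )

# Every individual curve flattens out, but the sum keeps multiplying by
# roughly 10x per paradigm as each new steep phase takes over.
for t in range(0, 30, 5):
    print(t, round(layered_scurves(t)))
```

Each step of `spacing` multiplies the total by roughly the same factor, which is exactly what an exponential does.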
Second, it seems the open-source community around AI may be structured in a way that fosters long-term innovation rather than atrophy. In his wonderful book, Scale, complexity scientist Geoff West investigates complex social systems, including, relevantly here, cities and companies. Both analyses offer a useful lens on one of Francois’s other observations: the declining returns from collaboration as things get complex. Geoff points out that empires and companies do often collapse under their own weight as they get larger. The half-life of firms in the S&P 500 is about 10 years, a result of stagnating goals and the increasingly bureaucratic control needed to oversee execution. (See the UK edition, pages 408-410.)
But equally, Geoff makes the case that cities have very different characteristics as they get bigger and more complex. They are prototypically multidimensional and enjoy superlinear scaling, resulting in “open-ended growth, … expanding social networks..., resilience, sustainability, and seemingly immortal[ity].”
My sense is the multi-disciplinary, and increasingly inter-disciplinary, praxis of AI research in these open source communities has more in common with cities (which survive the rise and fall of the empires in which they are contained) than companies (which, per Geoff, seem to collapse under the weight of their own directionless bureaucracy.)
If that is the case, then perhaps that dynamic won't really act as a brake on the path to an intelligence explosion.
EV reader, Florent Crivello, makes a number of other excellent points in his response to Francois.
- Eliezer Yudkowsky makes a fascinating observation about how bitcoin mining is creating enough of an economic incentive for ASIC and GPU designers to invest to shorten “R&D timelines for computing hardware, which affects AGI timelines.”
- On the subject of collaboration, the exo-cortex and intelligence as collaboration, EV reader, Geoff Mulgan, has a new book on the subject, Big Mind. There is a good review in Nature. (The book is on my desk to read real soon now.)
- Collaboration 2: PISA rankings on collaboration around the world puts the pupils from Singapore, Japan, Hong Kong and South Korea at the top of the list & finds that “worldwide, girls were much better at collaboration than boys.”
- Mozilla open sources its as-good-as-human speech recognition;
- How Twenty Billion Neurons tackles dataset collection for video analysis using a novel crowdsourcing technique they call CrowdActing;
- Reuters uses AI to amplify its global news collection;
- Europe has the largest share of the top 100 AI research institutions worldwide, new Atomico report on tech in Europe;
- China’s future military power and AI. “[Since] China may possess the potential to equal or surpass the United States in this critical technology, the U.S. military must recognize the PLA’s emergence as a true peer competitor and reevaluate the nature of U.S.-China military and technological competition.”
- Deep learning meets palliative care: an 18-layer DNN predicts the probability of death over 3-12 months;
- Google’s Jeff Dean calls for “fair and responsible” AI.
SHORT MORSELS TO APPEAR SMART AT DINNER PARTIES
Melvin Kranzberg’s “Six Laws of Technology”. Worth reading, I hadn’t come across these before. (WSJ reg required.)
100 cryptocurrencies in 4 words or less. Quite amusing. (Also, close your pop-under browser windows. They might be secretly mining crypto in the background.)
🌡️ It is 54℉ above normal in Greenland.
🚘 GM will launch autonomous vehicles in 2019. “This technology is coming along faster than anyone thinks.”
Inequality rose significantly as we domesticated animals and plants.
🚧 Chinese man paints own road markings to speed up his commute.
🤯 Mind. Blown. Powerpoint is Turing complete. (Meaning it can, in principle, be programmed to compute anything a computer can.)
Does collaboration kill creativity? Science speaks.
Parents are spending more time with their kids.
Zebra finches practice singing during sleep.
👮 Could teaching humanities to police officers make them better at their job?