Artificial Intelligence, Brains, Networks, Bugs, and Complexity

As a Computer Science graduate student in the late 70s/early 80s, I often wondered what would happen if the problems that would later come to be known as the “AI-complete” problems, which included vision, knowledge representation, natural language, and machine learning [0], were all actually solved. Would the resultant code be self-aware (whatever that means)? Would it engage in recursive self-improvement and rapidly surpass its initial (human) programming and intelligence? And what would be the implications for our technology, science, culture, and economies? Would such an event somehow threaten all of us carbon-based life forms here on Earth?

Fast forward to today. One of the “macro trends” I’ve been talking about over the last 16 or so months is the rise of machine intelligence [1]. In fact, while AI (in many forms) has been studied for the last 50+ years, recent advancements (e.g., Deep Learning [2,3]), in conjunction with large increases in processing power and training data, have produced a qualitative change in the scope of applicability for AI-based systems. For example, while just a few years ago AI systems were constrained by computational resources, today we run very large and very deep networks, with billions of connections and 10+ layers, on fast GPUs, and train them on datasets with hundreds of millions of examples; this has vastly expanded the class of problems that can be attacked.

What we are observing today is that the IT arms race, coupled with rapidly maturing theoretical foundations for AI, has conspired to create a new “Machine Intelligence Landscape” in which modern AI techniques are being applied to a staggeringly diverse set of use cases and technologies. These include data center resource allocation and optimization, image and language recognition, and a vast array of cyber-physical systems (CPS), including sensor networks, robots, and even spacecraft. The key takeaway here is that our technology is about to get a lot smarter. Of course, technology getting smarter doesn’t seem that hard to accept (based on Moore’s Law alone). The question, however, is how smart, and what are the implications of this new intelligence?

I’ll just note here that many difficult problems remain to be solved. For example, we know that human learning is non-convex; if it were convex, it wouldn’t matter in what order we learn things, but clearly it does. However, non-convex problems are provably harder to solve and may not have efficient solutions at all (a well-known result dating from the early 1960s that derives from work on constrained optimization, also known as monotropic programming [8]). Convex problems also have nice mathematical properties, including, for example, that local minima are also global minima; many of these properties are absent in non-convex problems, which, among other things, makes them harder to solve and their results more difficult to interpret. Another representative problem is the curse of dimensionality: at very high dimension almost all data become sparse, so even though we have “big data”, it may be very sparse when viewed from a training perspective. So while it is not surprising that there are many open problems in machine learning, progress has nonetheless been astonishing. See [7] for a nice overview of the progress and open questions in machine learning.
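To make both of these points concrete, here is a minimal Python sketch (the function and the dimensions are chosen purely for illustration, not drawn from the references): plain gradient descent on a simple non-convex function lands in different minima depending on where it starts, and uniform samples in a high-dimensional cube almost never fall inside the inscribed ball.

```python
import numpy as np

# --- Non-convexity: where you start determines which minimum you find. ---
# f(x) = x^4 - 3x^2 + x has two basins; gradient descent from the left
# finds the global minimum, from the right only a local one.
f = lambda x: x**4 - 3 * x**2 + x
df = lambda x: 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_left, x_right = descend(-2.0), descend(2.0)
print(f"start=-2.0 -> x={x_left:+.3f}, f={f(x_left):+.3f}")   # global minimum
print(f"start=+2.0 -> x={x_right:+.3f}, f={f(x_right):+.3f}")  # local minimum

# --- Curse of dimensionality: data become sparse as dimension grows. ---
# The fraction of uniform samples in the unit cube that fall inside the
# inscribed ball collapses toward zero as the dimension d increases.
rng = np.random.default_rng(0)
for d in (2, 10, 50):
    pts = rng.uniform(-0.5, 0.5, size=(100_000, d))
    inside = np.mean(np.linalg.norm(pts, axis=1) <= 0.5)
    print(f"d={d:3d}: fraction inside inscribed ball = {inside:.4f}")
```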

It is against this backdrop that noted physicist Stephen Hawking, along with Stuart Russell, Max Tegmark, and Frank Wilczek, has asked the same question I asked back in the 80s [4]: What would happen if we solved the “AI-complete” problems? Or, put another way:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In particular, Hawking and his colleagues argue that

Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.

Perhaps not surprisingly, recent research suggests that the concern voiced by Hawking and his colleagues, namely that there is a non-zero probability that some kind of malevolent “super-intelligence” could emerge, might reflect a property of intelligence itself: intelligent behavior in general may spontaneously emerge from an agent’s effort to ensure its freedom of action in the future. More specifically, according to the Maximum Causal Entropy Production Principle [5], intelligent systems move toward those configurations that maximize their ability to respond and adapt to future changes (the paper, entitled “Causal Entropic Forces”, can be found at [6]). The intuition here is that the kind of super-intelligence envisioned both by the AI-complete problem set and by Hawking might be a direct (and inevitable) consequence of physics, in this case thermodynamics (see [9] for a deeper dive).
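To give a flavor of this “keep your options open” intuition, here is a toy Python sketch. It is emphatically not the algorithm of [6], just a crude proxy: an agent on a hypothetical grid greedily moves to whichever neighboring state leaves the most distinct states reachable within a fixed horizon, a stand-in for maximizing entropy over future paths.

```python
from collections import deque

# Toy proxy for causal entropic forcing (NOT the algorithm of [6]):
# score each candidate move by how many distinct states remain reachable
# within a fixed horizon, and greedily keep future options open.

WALLS = {(1, 1), (1, 2), (1, 3), (2, 3), (3, 3)}  # hypothetical obstacles
SIZE = 5                                          # 5x5 grid
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def neighbors(s):
    for dx, dy in MOVES:
        n = (s[0] + dx, s[1] + dy)
        if 0 <= n[0] < SIZE and 0 <= n[1] < SIZE and n not in WALLS:
            yield n

def reachable(start, horizon):
    """Count distinct states reachable from `start` within `horizon` steps (BFS)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        s, d = frontier.popleft()
        if d == horizon:
            continue
        for n in neighbors(s):
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    return len(seen)

def entropic_step(state, horizon=4):
    """Move to the neighbor that preserves the most future freedom of action."""
    return max(neighbors(state), key=lambda n: reachable(n, horizon))

state = (0, 0)
for _ in range(6):
    state = entropic_step(state)
    print(state, "states reachable within horizon:", reachable(state, 4))
```

Run it and the agent drifts away from the walled-in corner toward open space, with no goal ever specified: “seek configurations with more future options” alone produces purposeful-looking behavior.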

Today, some thirty-five years after I first wondered about the deeper implications of AI, Hawking and his colleagues are raising exactly the same issues. Indeed, I agree with Hawking’s call-to-arms and share his concern that AI could pose a significant threat. However, AI is here to stay, and we’re going to start seeing it turn up in unexpected places. For example, it doesn’t take a lot of imagination to see how sophisticated unsupervised machine learning will revolutionize many of the more complex network tasks that are solved today in (much) less dynamic ways; problems such as data center orchestration could benefit greatly from this technology. Clearly, the functionality we’re talking about in networking these days, including Network Function Virtualization, Service Function Chaining, Mobility, and the like, is a great candidate for treatment by Deep Learning. More generally, orchestration and optimization of Compute, Storage, Networking, Security and Energy (CSNSE) are prime candidates for Deep Learning technology. And consider what DevOps-style automation might look like when combined with Deep Learning. Suffice it to say that there are hundreds if not thousands of use cases for AI technologies in today’s data center, as well as more widely in CSNSE. BTW, one place where it is unlikely that we’ll see these techniques deployed is the problem of finding shortest paths in a graph; Dijkstra’s algorithm does a pretty good job of that, and we have quite a bit of experience implementing and operating networks that find paths using it (IS-IS, OSPF, …).
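For contrast, here is that textbook Dijkstra computation in Python (the topology and link costs below are made up for illustration): exactly the kind of well-understood, efficiently solvable problem that doesn’t need learning.

```python
import heapq

def dijkstra(graph, source):
    """Textbook Dijkstra: shortest-path distances from `source` over a
    dict-of-dicts adjacency list with non-negative edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical link-state topology with IGP-style link costs.
topology = {
    "A": {"B": 10, "C": 20},
    "B": {"A": 10, "C": 5, "D": 100},
    "C": {"A": 20, "B": 5, "D": 10},
    "D": {"B": 100, "C": 10},
}
print(dijkstra(topology, "A"))  # {'A': 0, 'B': 10, 'C': 15, 'D': 25}
```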

Finally, one interesting note: our study of the fundamental tradeoffs faced by network architectures reveals that these tradeoffs are closely related to the concerns articulated by Hawking and his colleagues. But I’ll leave that for next time.

The key takeaway here: Our technology is about to get a lot smarter. 

References

[0] http://en.wikipedia.org/wiki/AI-complete

[1] http://www.1-4-5.net/~dmm/talks/hidden.pptx

[2] http://en.wikipedia.org/wiki/Deep_learning

[3] http://techcrunch.com/2014/01/26/google-deepmind

[4] http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html

[5] http://io9.com/how-skynet-might-emerge-from-simple-physics-482402911

[6] http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf

[7] https://www.youtube.com/watch?v=vShMxxqtDDs

[8] http://web.mit.edu/dimitrib/www/Extended_Mono.pdf

[9] http://www.pnas.org/content/early/2010/05/04/0914630107.full.pdf+html


David Meyer
David Meyer is currently CTO and Chief Scientist at Brocade Communications, where he works on future directions for Internet technologies. Prior to joining Brocade, he was a Distinguished Engineer at Cisco Systems, where he also worked as a developer, architect, and visionary on future directions for Internet technologies. He is currently the chair of the Technical Steering Committee of the OpenDaylight Project. He has been a member of the Internet Architecture Board (IAB) of the IETF (www.ietf.org) and the chair/co-chair of many working groups. He is also active in the operator community, where he has been a long-standing member of the NANOG (www.nanog.org) program committee (and program committee chair from 2008-2011). He is also active in other standards organizations such as ETSI, ATIS, ANSI T1X1, the Open Networking Foundation, and the ITU-T. Mr. Meyer is also currently Director of the Advanced Network Technology Center at the University of Oregon, where he is also a Senior Research Scientist in the Department of Computer Science. One of his major projects at the University of Oregon is routeviews (see www.routeviews.org). Prior to joining Cisco, he served as Senior Scientist, Chief Technologist, and Director of IP Technology Development at Sprint.