
Wednesday, September 21, 2016

The next big exponential step in AI

Assuming, for the moment, that the current machine learning bootstrap pans out, the next big multiplier is already on the horizon.

As more computing is expressed in forms that require super-fast, super-scalable linear algebra algorithms (a *lot* of machine learning techniques boil down to exactly that), it becomes very appealing to find ways to execute them on quantum computers. The reason: for certain linear algebra operations, quantum algorithms promise exponential speedups over their classical counterparts.
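To make that concrete: here is a tiny NumPy sketch (a toy of my own, not lifted from any particular library) showing how one dense neural network layer reduces to a single big matrix multiply plus a bias.

    import numpy as np

    # A dense neural-network layer is just linear algebra:
    # a matrix multiply plus a bias, then a nonlinearity.
    def dense_layer(x, weights, bias):
        return np.maximum(0, x @ weights + bias)  # ReLU activation

    # A batch of 64 inputs, 1024 features each, mapped to 512 outputs.
    x = np.random.randn(64, 1024)
    w = np.random.randn(1024, 512)
    b = np.zeros(512)

    y = dense_layer(x, w, b)  # one big matmul does the heavy lifting
    print(y.shape)            # (64, 512)

Stack a few of those and you have most of a deep net; it is matrix multiplies nearly all the way down.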

There is a fine tradition in computing of scientists getting ahead of what today's technology can actually do. Charles Babbage, Ada Lovelace, Alan Turing, Doug Engelbart, Vannevar Bush: all worked out computing stuff that was way ahead of the reality curve, and then reality caught up with their work.

If/when quantum computing gets out of the labs, the algorithms will already be sitting in the machine learning libraries, ready to take advantage of it, because forward-looking researchers are working them out now.

In other words, it won't be a case of "Ah, cool! We have access to a quantum computer! Let's spend a few years working out how best to use it." Instead it will be "Ah, cool! We have access to a quantum computer! Let's deploy all the stuff we have already worked out and implemented in anticipation of this day."

It reminds me of the old adage (attributable to Pólya, I think) about "solving for N". If I write an algorithm that can leverage N compute nodes, then it does not matter that I might only be able to deploy it with N = 1 because of current limitations. As soon as more compute nodes become available, I can immediately set N = 2 or 2,000 or 20,000,000,000 and run the same code.
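Here is roughly what that looks like in plain Python (my own illustration of the adage, using the standard multiprocessing module): the algorithm is written once for N workers, and N stays a free parameter.

    from multiprocessing import Pool

    def work(chunk):
        # Stand-in for any per-chunk computation.
        return sum(x * x for x in chunk)

    def solve_for_n(data, n):
        # Written once for N workers; N is just a parameter.
        chunks = [data[i::n] for i in range(n)]
        with Pool(n) as pool:
            return sum(pool.map(work, chunks))

    if __name__ == "__main__":
        data = list(range(1_000_000))
        print(solve_for_n(data, 1))  # today: only one node available
        print(solve_for_n(data, 8))  # tomorrow: turn N up, no rewrite

Both calls produce the same answer; only the degree of parallelism changes.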

With the abstractions being crafted around today's ML libraries, that "N" is being prepped for some very large potential values.
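As a deliberately tiny sketch of the kind of abstraction I mean (a caricature of what libraries like TensorFlow or Theano actually do, with names of my own invention): the model talks only to a backend object that supplies linear algebra primitives, so the thing underneath, CPU, GPU, or someday a quantum coprocessor, can change without the model noticing.

    import numpy as np

    class NumpyBackend:
        # The only contract: supply the linear algebra primitives.
        matmul = staticmethod(np.matmul)
        add = staticmethod(np.add)

    def predict(backend, x, w, b):
        # Model code never touches the backend's internals.
        return backend.add(backend.matmul(x, w), b)

    y = predict(NumpyBackend, np.ones((2, 3)), np.ones((3, 4)), np.zeros(4))
    print(y)  # a (2, 4) array of 3.0s

Swap in a different backend class and every model written against it comes along for the ride.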
