
Monday, March 14, 2016

Algorithms where human understanding is optional - or maybe even impossible

I think I have been guilty of holding on to an AI misconception for a long time. Namely, the idea that AI is fundamentally limited by our ability as humans to code the rules for the computer to execute: if we humans cannot write down the rules for X, we cannot get the computer to do X.

Modern AI seems to have lurched decisively over to the "no rules" side of the field, where phrases like CBR (case-based reasoning) and neural-net training sets abound...

But with an interesting twist that I have only recently become aware of. Namely, bootstrapping: using generation X of an AI system to produce generation X+1.

The technical write-ups about the recent stunning AlphaGo victory make reference to the bootstrapping of AlphaGo. As well as learning from the database of prior human games, it has learned by playing against itself...
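To make the generational idea concrete, here is a toy, runnable sketch in Python. The one-heap Nim game, the score table, and the choose/play_game/train helpers are all my own stand-ins, nothing like AlphaGo's actual deep networks and tree search; the point is only the loop shape, where generation X's self-play games become the training set for generation X+1:

```python
import random

# Toy setting: one-heap Nim. A move removes 1-3 stones and whoever
# takes the last stone wins. The "policy" is just a table scoring
# each (stones_left, move) pair -- my stand-in for a neural net.

def choose(policy, stones, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)      # occasional random move
    return max(moves, key=lambda m: policy.get((stones, m), 0.0))

def play_game(policy):
    """Generation X plays itself; return both players' (state, move)
    histories and which player took the last stone."""
    stones, histories, player = 15, ([], []), 0
    while stones > 0:
        move = choose(policy, stones)
        histories[player].append((stones, move))
        stones -= move
        winner = player                  # last mover so far
        player = 1 - player
    return histories, winner

def train(policy, games):
    """Produce generation X+1: nudge the table toward moves the
    winner made and away from moves the loser made."""
    new = dict(policy)
    for histories, winner in games:
        for p in (0, 1):
            reward = 0.01 if p == winner else -0.01
            for state_move in histories[p]:
                new[state_move] = new.get(state_move, 0.0) + reward
    return new

policy = {}                              # generation 0 knows nothing
for generation in range(50):
    games = [play_game(policy) for _ in range(500)]
    policy = train(policy, games)        # X's games train X+1
```

Nobody codes a Nim strategy in anywhere here. Whatever strategy ends up in the table emerges from the games themselves.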

Doug Engelbart springs to mind and his bootstrapping strategy.

Douglas Hofstadter springs to mind and his strange loops model of consciousness.

Stephen Wolfram springs to mind and his feedback loops of simple algorithms for rapidly generating complexity.
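Wolfram's canonical example is Rule 30: an elementary cellular automaton whose entire "algorithm" fits in an eight-entry lookup table, yet whose iterations look chaotic. A minimal sketch:

```python
# Rule 30: each cell's next state is a fixed function of itself and
# its two neighbours -- the whole rule is this lookup table.
RULE30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

cells = [0] * 40 + [1] + [0] * 40        # a single live cell
for _ in range(40):                      # iterate the simple rule
    print("".join("#" if c else " " for c in cells))
    cells = [RULE30[(cells[i - 1], cells[i], cells[(i + 1) % len(cells)])]
             for i in range(len(cells))]
```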

AIs learning by using the behavior of the previous-generation AI as "input", in the form of a training set, sounds very like iterating a simple Wolfram algorithm or a fractal-generating function, except that the output of each "run" is the algorithm for the next run.
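The structural difference, as a rough sketch (the improve step below is a hypothetical stand-in for a training step, not a real learning rule):

```python
# Ordinary iteration: the output of each run is the INPUT to the next.
x = 0.3
for _ in range(100):
    x = 3.9 * x * (1 - x)                # logistic map; the rule never changes

# Bootstrapped iteration: the output of each run is the RULE for the next.
def improve(f):
    # Hypothetical stand-in for training: build a new function
    # out of the old function's behavior.
    return lambda x: (f(x) + x) / 2

rule = lambda x: 3.9 * x * (1 - x)
for generation in range(100):
    rule = improve(rule)                 # generation X produces generation X+1
```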

The weird, weird, weird thing about all of this is that we humans don't have to understand the AIs we are creating. We are just creating the environment in which they can create themselves.

In fact, it may even be the case that we cannot understand them because, by design, there are no rules in there to be dug out and understood. Just an unfathomably large state space of behaviors.

I need to go to a Chinese room, and think this through...