Thursday, May 05, 2016

Statistics and AI

We live at a time when there is more interest in AI than ever, and that interest is growing every day.

One of the first things that happens when a genre of computing starts to build up steam is that pre-existing concepts get subsumed into the new genre. Sometimes, the adopted concepts are presented in a way that would suggest they are new concepts, created as part of the new genre. Sometimes they are. But sometimes they are not.

For example, I recently read some material that presented linear regression as a machine learning technique.

Now of course, regression has all sorts of important contributions to make to machine learning, but it was invented/discovered long, long before the machines came along: least squares goes back to Legendre and Gauss in the early 1800s.
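
To be crystal clear about what is being rebranded, here is a minimal sketch of plain old linear regression in Python. NumPy is used purely for the matrix algebra, and the data points are made up for illustration:

    import numpy as np

    # Ordinary least squares, much as Legendre and Gauss knew it:
    # find beta minimising ||y - X.beta||^2.
    X = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0],
                  [1.0, 4.0]])          # first column of ones = intercept term
    y = np.array([2.1, 3.9, 6.2, 7.8])  # noisy observations of roughly y = 2x

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)  # ~[0.15, 1.94]: intercept and slope, no "training" required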


Thursday, April 14, 2016

Cutting the inconvenient protrusions from the jigsaw pieces

There is a school of thought that goes like this....

(1) To manage data means to put it in a database.
(2) A 'database' means a relational database. No other database approach is really any good.
(3) If the data does not fit into the relational data model, well, just compromise the data so that it does. Why? See items (1) and (2).

I have no difficulty whatsoever with recommending relational databases where there is a good fit between the data, the problem to be solved, and the relational database paradigm.

Where the fit isn't good, I recommend something else. Maybe indexed flat files, or versioned spreadsheets, documents, a temporal data store....whatever feels least like I am cutting important protrusions off the data and off the problem to be solved.
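
For concreteness, here is a minimal sketch of the indexed-flat-file idea in Python. The file name and record shapes are made up for illustration:

    import json

    # Append records as lines of JSON in a plain file, and keep a small
    # index from record id -> byte offset so lookups stay cheap.
    def write_records(path, records):
        index = {}
        with open(path, "w") as f:
            for rec in records:
                index[rec["id"]] = f.tell()   # remember where the record starts
                f.write(json.dumps(rec) + "\n")
        return index

    def read_record(path, index, key):
        with open(path) as f:
            f.seek(index[key])                # jump straight to the record
            return json.loads(f.readline())

    # The records keep whatever shape the data actually has - nesting and
    # all - instead of being sliced up to fit a set of tables.
    idx = write_records("things.jsonl", [
        {"id": "a1", "tags": ["red", "round"], "parts": [{"name": "lid"}]},
        {"id": "b2", "tags": ["blue"], "parts": []},
    ])
    print(read_record("things.jsonl", idx, "a1"))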

However, whenever I do that, I am sure to have to answer the "Why not just store it in [Insert RDB name]?" question.

It is an incredibly strong meme in modern computing.

Monday, March 14, 2016

Algorithms where human understanding is optional - or maybe even impossible

I think I am guilty of holding on to an AI non-sequitur for a long time: namely, the idea that AI is fundamentally limited by our ability as humans to code the rules for the computer to execute. If we humans cannot write down the rules for X, we cannot get the computer to do X.

Modern AI seems to have lurched significantly over to the "no rules" side of the field, where phrases like case-based reasoning (CBR) and neural net training sets abound...

But with an interesting twist that I have only recently become aware of: namely, bootstrapping, in which generation X of an AI system is used to produce generation X+1.

The technical write-ups about the recent stunning AlphaGo victory make reference to the bootstrapping of AlphaGo. As well as learning from a database of prior human games, it has learned by playing against itself....

Doug Engelbart springs to mind and his bootstrapping strategy.

Douglas Hofstadter springs to mind and his strange loops model of consciousness.

Stephen Wolfram springs to mind and his feedback loops of simple algorithms for rapidly generating complexity.

An AI learning from the behavior of the previous-generation AI, used as a training set, sounds very like iterating one of Wolfram's simple algorithms or a fractal-generating function, except that the output of each "run" is the algorithm for the next run.
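
To make the shape of that loop concrete, here is a deliberately toy sketch in Python. Every function in it is a hypothetical stand-in for whatever AlphaGo actually does - the point is the generational pattern, not the contents of the functions:

    import random

    # Toy stand-ins: the shape of the loop is what matters here.
    def self_play(model):
        # The current generation plays a "game" against itself.
        return [model(move) for move in range(10)]

    def train(model, games):
        # The previous generation's output becomes the next generation's input.
        moves = [m for game in games for m in game]
        bias = sum(moves) / len(moves)        # toy "learning" step
        return lambda move: model(move) + random.gauss(bias, 0.1)

    def bootstrap(model, generations, games_per_generation):
        for _ in range(generations):
            games = [self_play(model) for _ in range(games_per_generation)]
            model = train(model, games)       # generation X -> generation X+1
        return model

    final = bootstrap(lambda move: 0.0, generations=5, games_per_generation=20)
    print(final(0))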

The weird, weird, weird thing about all of this is that we humans don't have to understand the AIs we are creating. We are just creating the environment in which they can create themselves.

In fact, it may even be the case that we cannot understand them because, by design, there are no rules in there to be dug out and understood. Just an unfathomably large state space of behaviors.

I need to go to a Chinese room, and think this through...

Thursday, March 10, 2016

LoRa

LoRa feels like a big deal to me. In general, hardware-led innovations tend to jump-start software design into interesting places, more so than software-led innovations drag hardware design into interesting places.

With software driving hardware innovation, the results tend to be of the bigger, faster, cheaper variety. All good things, but not this-changes-everything moments.

With hardware driving software innovation, however, software game-changers do seem to come along sometimes.

Telephone exchanges -> Erlang -> Elixir
Packet switching -> TCP/IP -> Sockets

BGP Routers -> Multihoming
VR Headsets -> Immersive 3D worlds

etc.

I have noticed that things tend to come full circle though. Sooner or later, any hardware bits that can themselves be replaced by software bits, are replaced :-)

This loopback trend is kicking into a higher gear at the moment because of 3D printing. That is, a hardware device is conceived of and, in order to build it, the device is first simulated in software to drive the 3D printer. Any such devices that *could* remain purely software do so eventually.

A good example is audio recording. A modern DAW like Pro Tools or Reaper now provides pure digital emulations of pretty much any piece of audio hardware kit you can think of: EQs, pre-amps, compressors, reverbs, etc.

Friday, March 04, 2016

XML and St Patrick

I am finding it a bit hard to believe that I wrote this *fourteen* years ago.

Patrick to be Named Patron Saint of Software Developers
In a dramatic development, scholars working in Newgrange, Ireland, have deciphered an Ogham stone thought to have been carved by St. Patrick himself. The text on the stone predicts, with incredible accuracy, the trials and tribulations of IT professionals in the early 21st century. Calls are mounting for St. Patrick to be named the patron saint of Markup Technologists.

The full transcription of the Ogham stone is presented here for the first time:

DeXiderata

    Go placidly amid the noise and haste and remember what peace there may be in silence.

    As far as possible, without surrender, accommodate the bizarre tag names and strange attribute naming conventions of others.

    Speak your truth quietly and clearly, making liberal use of UML diagrams. Listen to others, even the dull and ignorant; they too have their story and won't shut up until you have heard it.

    Avoid loud style sheets and aggressive time scales, they are vexations to the spirit. If you compare your schemas with others, you will become vain and bitter for there will always be schemas greater and lesser than yours -- even if yours are auto-generated.

    Enjoy the systems you ship as well as your plans for new ones. Keep interested in your own career, however humble. It's a real possession in the changing fortunes of time and Cobol may yet make a comeback.

    Exercise caution in your use of namespaces for the world is full of namespace semantic trickery. Let this not blind you to what virtue there is in namespace-free markup. Many applications live quite happily without them.

    Be yourself. Especially do not feign a working knowledge of RDF where no such knowledge exists. Neither be cynical about Relax NG; for in the face of all aridity and disenchantment in the world of markup, James Clark is as perennial as the grass.

    Take kindly the counsel of the years, gracefully surrendering the things of youth such as control over the authoring subsystems and any notion that you can dictate a directory structure for use by others.

    Nurture strength of spirit to nourish you in sudden misfortune but do not distress yourself with dark imaginings of wholesale code re-writes.

    Many fears are born of fatigue and loneliness. If you cannot make that XML document parse, go get a pizza and come back to it.

    Beyond a wholesome discipline, be gentle with yourself. Loosen your content models to help your code on its way, your boss will probably never notice.

    You are a child of the universe no less than the trees and all other acyclic graphs; you have a right to be here. And whether or not it is clear to you, no doubt the universe is unfolding as it should.

    Therefore be at peace with your code, however knotted it may be. And whatever your labors and aspirations, in the noisy confusion of life, keep peace with your shelf of manuals. With all its sham, drudgery, and broken dreams, software development is a pretty cool thing to do with your head. Be cheerful. Strive to be happy.

Friday, February 26, 2016

Software complexity accelerators

It seems to me that complexity in software development, although terribly hard to measure, has steadily risen from the days of Algol 68 and continues to rise.

In response to this rise, we have developed mechanisms for managing - not removing - the complexity.

These management - or perhaps I should say 'containment' - mechanisms have an interesting negative externality. If a complexity level of X was hard to contain before, but thanks to paradigm Y is now contained, the immediate side-effect is an increase in the value of X :-)

It reminds me of an analysis I found somewhere about driving speed and seat belts. Apparently, seat belts can have the effect of increasing driving speed - a phenomenon sometimes called risk compensation. The reasoning goes that we all have a risk level we sub-consciously apply when driving, and putting on a seat belt can make us feel that a higher speed is now possible without increasing that risk level.

So what sort of "seat belts" have we added to software development recently? I think Google Search is a huge one. Rather than reduce the complexity of an application - as evidenced by the amount of debugging/head-scratching you need to do - we have accelerated the process of finding fixes online.

Another one is open source. We can now leverage a world-wide hive-mind that collectively "wraps its head around" a code-base, so that the code-base can become more complex than it could be if a finite team worked on it alone.

Another one is cloud. Client/server-style computing models push most of the complexity of management into the server side. Applications that would be incredibly complex to manage as thick clients in today's diverse OS world are easier to manage server-side, thus creating headroom for new complexity which, sure enough, gets added to the mix.

Is this phenomenon of complexity acceleration thanks to better and better complexity containment a bad thing?

I honestly don't know.

Friday, February 19, 2016

It's obvious really

Nothing is more deserving of questioning than an obvious conclusion.