Tuesday, January 26, 2016

The biggest IT changes in the last 5 years: The death-throes of backup-and-delete based designs

One of the major drivers in application design is infrastructure economics, i.e. the costs - in both capex and opex terms - of things like RAM, non-volatile storage, compute power, bandwidth, fault tolerance etc.

These economic factors have changed utterly in the 35 years I have been involved in IT, but we still have a strong legacy of designs/architectures/patterns that are ill suited to the new economics of IT.

Many sacred cows of application design, such as the run-time efficiency of compiled code versus interpreted code or the consistency guarantees of ACID transactions, can be traced back to the days when CPU cycles were costly, when RAM was measured in dollars per kilobyte and storage in dollars per megabyte.

My favorite example of a deeply held paradigm which I believe has little or no economic basis today is the concept of designs that only keep a certain amount of data in online form, dispatching the rest, at periodic intervals, to offline forms - e.g. tape or disc - that require "restore" operations to get them back into usable form.

I have no problem with the concept of backups:-) My problem is with the concept of designs that only keep, say, one year's worth of data online. This made a lot of sense when storage was expensive, because the opex costs of manual retrieval were smaller than the opex costs of keeping everything online.

I think of these designs as backup-and-delete designs. My earliest exposure to such a design was on an IBM PC with twin 5.25-inch floppy disk drives. An accounting application ran from Drive A. The accounting data file was in Drive B. At each period-end, the accounting system rolled forward ledger balances and then - after a backup floppy was created - deleted the individual transactions on Drive B. That was about 1984 or so.
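
To make the pattern concrete, here is a rough Python sketch of a period-end close in that backup-and-delete style. The file names and ledger structure are invented for illustration and are not taken from any real accounting product.

```python
# Hypothetical sketch of a period-end "backup-and-delete" close,
# loosely modelled on the 1980s accounting example above.
import json
import shutil
from pathlib import Path

LIVE_FILE = Path("ledger.json")    # stands in for the data floppy in Drive B
ARCHIVE_DIR = Path("archives")     # stands in for the backup floppy

def close_period(period: str) -> None:
    ledger = json.loads(LIVE_FILE.read_text())

    # 1. Roll the individual transactions forward into the balances.
    for txn in ledger["transactions"]:
        ledger["balances"][txn["account"]] = (
            ledger["balances"].get(txn["account"], 0) + txn["amount"]
        )

    # 2. Back up the full ledger, transactions included.
    ARCHIVE_DIR.mkdir(exist_ok=True)
    shutil.copy(LIVE_FILE, ARCHIVE_DIR / f"ledger-{period}.json")

    # 3. Delete the transaction detail from the live file - the step
    #    that cheap online storage makes unnecessary today.
    ledger["transactions"] = []
    LIVE_FILE.write_text(json.dumps(ledger, indent=2))
```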


As organizations identified value in their "old" data - for regulatory reporting, training, or predictive analytics - designs appeared to extract that value. This led to a flurry of activity around data warehousing, ETL (extract, transform, load), business intelligence dashboards etc.
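
For readers unfamiliar with the pattern, here is a minimal sketch of a classic ETL job: pull rows out of the operational store, aggregate them, and load them into a separate reporting store. The table and column names are invented for illustration.

```python
# Hypothetical sketch of the ETL pattern described above.
import sqlite3

def run_etl(operational_db: str, warehouse_db: str) -> None:
    src = sqlite3.connect(operational_db)
    dst = sqlite3.connect(warehouse_db)

    # Extract: read raw rows from the operational system.
    rows = src.execute(
        "SELECT account, amount, posted_at FROM transactions"
    ).fetchall()

    # Transform: aggregate into the shape the dashboard wants.
    totals = {}
    for account, amount, posted_at in rows:
        month = posted_at[:7]    # e.g. '2016-01'
        totals[(account, month)] = totals.get((account, month), 0) + amount

    # Load: write the aggregates into the warehouse.
    dst.execute(
        "CREATE TABLE IF NOT EXISTS monthly_totals "
        "(account TEXT, month TEXT, total REAL)"
    )
    dst.executemany(
        "INSERT INTO monthly_totals VALUES (?, ?, ?)",
        [(a, m, t) for (a, m), t in totals.items()],
    )
    dst.commit()
```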

My view is that these ETL-based designs are a transitional phase. Designers in their twenties working today, steeped as they are in the new economics of IT, are much more likely to create designs that eschew the concept of ever deleting anything. Why would you, when online storage (local disk or remote) is so cheap and there is always the possibility of latent residual value in the "old" data?

Rather than have one design for day-to-day business and another for business intelligence, regulatory compliance, and predictive analytics, why not have one design that addresses all of these? Apart from the feasibility and desirability of this brought about by the new economics of IT, there is another good business reason to do it this way. Simply put, it removes the need for delays in reporting cycles and predictive analytics. I.e. rather than pull all the operational data into a separate repository and crunch it once a quarter or once a month, you can look at reports and indicators in near-realtime.
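
A minimal sketch of that single-design idea, under my own assumptions about what such a store might look like: one append-only event log that serves both the day-to-day operational queries and the near-realtime reporting, with no separate extract step. The class and method names are illustrative only.

```python
# Sketch of a never-delete design: one append-only event log serving
# both operational lookups and near-realtime reporting.
import time
from collections import defaultdict

class EventStore:
    def __init__(self):
        self._events = []    # append-only; nothing is ever deleted

    def append(self, account: str, amount: float) -> None:
        self._events.append(
            {"account": account, "amount": amount, "ts": time.time()}
        )

    def balance(self, account: str) -> float:
        # Operational query: current balance derived from the full history.
        return sum(e["amount"] for e in self._events if e["account"] == account)

    def report(self, since: float) -> dict:
        # Near-realtime report: totals per account over any window,
        # computed from the same data with no quarterly ETL cycle.
        totals = defaultdict(float)
        for e in self._events:
            if e["ts"] >= since:
                totals[e["account"]] += e["amount"]
        return dict(totals)
```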

I believe that the time is coming when the economic feasibility of near-realtime monitoring and reporting makes it a "must have" in regulated businesses, because regulators will take the view that well-run businesses should have it. In the same way that a well-run business today is expected to have low communications latencies between its global offices (thanks to the cheap availability of digital communications), it will be expected to have low-latency reporting for its management and for the regulators.

We are starting to see terminology form around this space. I am hopelessly biased because we have been creating designs based on never-deleting-anything for many years now. I like the terms "time-based repository" and "automatic audit trail". Others prefer "temporal database", "provenance system", "journal-based repository"...and the new kid on the block (no pun intended!) - the block chain.

The block chain, when all is said and done, is a design based on never throwing anything away, *combined* with a trust-free mechanism that allows observers of the audit trail to have confidence in what they see.
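
The chaining part of that idea fits in a few lines. Here is a rough sketch, assuming nothing beyond the standard library: each entry commits to the previous one by hash, so an observer can verify that the history has not been rewritten. It deliberately omits distribution and consensus; it only illustrates the never-throw-anything-away, tamper-evident audit trail.

```python
# Sketch of a hash-chained audit trail, the core idea behind a block chain.
import hashlib
import json

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AuditTrail:
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        # Each new entry carries the hash of the previous entry.
        prev = entry_hash(self.entries[-1]) if self.entries else "0" * 64
        self.entries.append({"record": record, "prev": prev})

    def verify(self) -> bool:
        # An observer recomputes every link; any tampering breaks the chain.
        for i in range(1, len(self.entries)):
            if self.entries[i]["prev"] != entry_hash(self.entries[i - 1]):
                return False
        return True
```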

There is a lot of hype at present around the block chain, and with hype comes the inevitable "silver bullet" phase, where all sorts of problems not really suited to the block-chain paradigm are shoe-horned into it because it is the new thing.

When the smoke clears around the block chain - which it will - I believe we will see many interesting application designs emerge that break away completely from the backup-and-delete models of a previous economic era.
