
Thursday, April 19, 2018

Thinking about Software Architecture & Design: Part 2

Technological volatility is, in my experience, the most commonly overlooked factor in software architecture and design. We have decades' worth of methodologies and best-practice guides that help us deal with fundamental aspects of architecture and design such as reference data management, mapping data flows, modelling processes, capturing input/output invariants, selecting between synchronous and asynchronous inter-process communication methods... the list goes on.

And yet, time and again, I have seen software architectures only a few years old that need to be fundamentally revisited. Not because of any significant breakthrough in software architecture & design techniques, but because technological volatility has moved the goal posts, so to speak, on the architecture.

Practical architectures (outside those in pure math, such as Turing Machines) cannot exist in a technological vacuum. They necessarily take into account what is going on in the IT world in general. In a world without full-text search indexes, document management architectures are necessarily different. In a world without client-side processing capability, UI architectures are necessarily different. In a world without always-on connectivity... and so on.

When I look back at IT volatility over my career – back to the early Eighties – there is a clear pattern: volatility increases the closer you get to the end-users' points of interaction with IT systems. Dumb “green screens”, bit-mapped graphics, personal desktop GUIs, tablets, smart phones, voice activation, haptic user interfaces...

Many of the generational leaps represented by these innovations have had profound implications for the software architectures that leverage them. It is not possible – in my experience – to abstract away user interface volatility and treat it as a pluggable layer on top of the main architecture. End-user technologies have a way of imposing themselves deeply inside architectures: for example, necessitating an event-oriented/multi-threaded approach to data processing in order to make responsive GUIs possible, or responding synchronously to data queries as opposed to batch processing.
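To make that concrete, here is a minimal sketch of the event-oriented pattern (in Python; the names and the simulated query are purely illustrative): user queries are handed off to a worker thread, and the UI thread polls a queue for finished results instead of blocking on the data processing.

```python
import queue
import threading
import time

# Results flow from the worker thread back to the UI thread via a queue,
# so the UI thread never blocks on data processing.
results: "queue.Queue[str]" = queue.Queue()

def process_query(query: str) -> None:
    """Simulate a slow data query running off the UI thread."""
    time.sleep(2)  # stand-in for real data processing
    results.put(f"result for {query!r}")

def on_user_query(query: str) -> None:
    """Called from the UI thread; returns immediately."""
    threading.Thread(target=process_query, args=(query,), daemon=True).start()

def ui_poll() -> None:
    """Called periodically by the UI event loop to drain finished results."""
    try:
        while True:
            print(results.get_nowait())  # in a real GUI: update a widget
    except queue.Empty:
        pass  # nothing ready yet; the UI stays responsive

if __name__ == "__main__":
    on_user_query("SELECT * FROM orders")
    for _ in range(5):  # stand-in for the GUI event loop ticking
        ui_poll()
        time.sleep(0.5)
```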

The main takeaway is this: creating a good software architecture pays dividends, but the dividends are much more likely to be significant in the parts of the architecture furthest from the end-user interactions, i.e. inside the data modelling, inside discrete data processing components, etc. They are least likely to pay dividends in areas such as GUI frameworks, client-side processing models or end-user application programming environments.
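As an illustration of where that effort goes (a sketch only, not a prescription; the order-store contract and both renderers are hypothetical names), the stable contract lives in the back-end, and the front-ends stay thin and replaceable on top of it:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Order:
    order_id: str
    total: float

class OrderStore(Protocol):
    """Stable back-end contract: this is where architecture effort pays off."""
    def find(self, order_id: str) -> Order: ...

class InMemoryOrderStore:
    def __init__(self) -> None:
        self._orders = {"42": Order("42", 99.50)}

    def find(self, order_id: str) -> Order:
        return self._orders[order_id]

# Two disposable front-ends over the same stable back-end. When the next
# client-side upheaval arrives, these get rewritten; OrderStore does not.
def render_text(store: OrderStore, order_id: str) -> str:
    order = store.find(order_id)
    return f"Order {order.order_id}: {order.total:.2f}"

def render_json(store: OrderStore, order_id: str) -> dict:
    order = store.find(order_id)
    return {"orderId": order.order_id, "total": order.total}

if __name__ == "__main__":
    store = InMemoryOrderStore()
    print(render_text(store, "42"))
    print(render_json(store, "42"))
```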

In fact, volatility is sometimes so intense that it makes more sense not to spend time abstracting the end-user aspects of the architecture at all, i.e. sometimes it makes more sense to make a conscious decision to re-do the architecture if/when the next big upheaval comes on the client side, and trust that large components of the back-end will remain fully applicable post-upheaval.

That way, your applications are less likely to be considered “dated” or “old school” in the eyes of users, even though you are keeping much of the original back-end architecture from generation to generation.

In general, software architecture thinking time is more profitably spent on the back-end than on the front-end. There is rarely a clean line separating the two, so a certain amount of volatility on the back-end is inevitable, but it is manageable compared to the volatility that will be visited upon your front-end architectures.

Volatility exists everywhere, of course. For example, at the moment serverless computing models are having profound implications for "server side" architectures. Not because of end-user concerns - end-users do not know or care about these things - but because of volatility in the economics of cloud computing.
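For instance, an AWS Lambda-style function boils the "server side" down to a single handler (a minimal sketch; the event shape and order-processing body are invented for illustration). There is no server process to own or scale, and you pay per invocation, which is the economic shift at work:

```python
import json

def handler(event, context):
    """AWS Lambda-style entry point.

    The platform decides when and where this runs; billing is per
    invocation rather than per provisioned server.
    """
    order_id = event.get("orderId", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order_id, "status": "processed"}),
    }

if __name__ == "__main__":
    # Local smoke test with a hand-rolled event; no context needed here.
    print(handler({"orderId": "42"}, None))
```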

If history is anything to go by, it could be another decade or more before something like serverless computing comes along that profoundly impacts back-end architectures. Yet in the next decade we are likely to see dozens of major changes in client-side computing: talking to our cars, waving at our heating systems, installing apps subcutaneously, etc.