Featured Post

LinkedIn

These days, I mostly post my tech musings on LinkedIn: https://www.linkedin.com/in/seanmcgrath/

Friday, June 01, 2018

Thinking about Software Architecture & Design : Part 11


It is said that there are really only seven basic storylines and that all stories can either fit inside them or be decomposed into some combination of the basic seven. There is the rags-to-riches story, the voyage-and-return story, the overcoming-the-monster story...and so on.

I suspect that something similar applies to Software Architecture & Design. When I was a much younger practitioner, this was a very active field, with new methodologies/paradigms coming along on a regular basis. Thinkers such as Yourdon, DeMarco, Jackson, Booch, Hoare, Dijkstra and Hohpe distilled the essence of most of the core architecture patterns we know of today.

In more recent years, attention appears to have moved away from the discovery/creation of new architecture patterns and methodologies towards concerns closer to the construction aspects of software. There is an increasing emphasis on two-way flows in the creation of architectures – or perhaps circular flows would be a better description. i.e. iterating backwards from, for example, user stories to the abstractions required to support them, then perhaps a forward iteration refactoring the abstractions to cover the required user stories with fewer “moving parts”, as discussed before.
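
To make that circular flow concrete, here is a minimal Python sketch assuming two hypothetical user stories (“export a report as CSV” and “export a report as JSON”). The backward iteration yields one verb per story; the forward iteration refactors them into a single abstraction with pluggable formatters. All names here are illustrative, not from any real system.

```python
import csv
import io
import json

# Backward iteration: one verb per user story.
def export_report_csv(rows):
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

def export_report_json(rows):
    return json.dumps(rows)

# Forward iteration: one abstraction with pluggable formatters covers
# both user stories -- fewer independent moving parts.
FORMATTERS = {"csv": export_report_csv, "json": export_report_json}

def export_report(rows, fmt):
    return FORMATTERS[fmt](rows)

rows = [["id", "total"], [1, 42]]
print(export_report(rows, "csv"))
print(export_report(rows, "json"))
```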

There has also been a marked trend towards embracing the volatility of the IT landscape, in the form of proceeding to software build phases with “good enough” architectures and consciously factoring in the possibility of needing complete architecture re-writes in ever shorter time spans.

I suspect this is an area where real-world physical architecture and software architecture fundamentally differ and the analogy breaks down. In the physical world, once the location of the highway is laid down and construction begins, a cascade of difficult-to-reverse events starts to occur in parallel with the construction of the highway. Housing estates and commercial areas pop up close to the highway. Urban infrastructure plans – perhaps looking decades into the future – are created predicated on the route of the highway, and so on.

In software, there is often a similar cascade of knock-on effects from architecture changes, but when the affected items are themselves primarily software, rearranging everything around a new architecture is more manageable. Still likely a significant challenge, but more doable, because software is, well, “softer” than real-world concrete, bricks and mortar.

My overall sense of where software architecture is today is that it revolves around the question: “How can we make it easier to fundamentally change the architecture in the future?” The fierce competitive landscape for software has combined with cloud computing to fuel this burning question.

Creating software solutions with very short (i.e. weeks) time horizons before they change again is now possible and increasingly commonplace. The concept of a version number is becoming obsolete. Today's software solution may or may not be the same as the one you interacted with yesterday, and it may, in fact, be based on an utterly different architecture under the hood than it was yesterday. Modern communications infrastructure, OS/device app stores, auto-updating applications, thin clients...all combine to create a very fluid environment for modern-day software architectures to work in.

Are there new software architecture patterns still emerging since the days of data flow diagrams, ER diagrams and OOAD? Or are we just re-combining the seven basic architectures into a new meta-architecture that is concerned with architecture change rather than architecture itself? Sometimes I think so.

I also find myself wondering where we go next if that is the case. I can see one possible end point for this – an end point which I find tantalizing and surprising in equal measure. My training in software architecture – the formal parts and the decades of informal training since then – has been based on the idea that the fundamental job of the software architect is to create a digital model – a white box – of some part of the real world, such that the model meets a set of expectations in terms of its interaction with its users (which may, in turn, be other digital models).

In modern-day computing, this idea of the white box has an emerging alternative which I think of as the black box. If a machine could somehow be instructed to create the model that goes inside the box – based purely on an expression of its required interactions with the rest of the world – then you basically have the only architecture you will ever need for creating what goes into these boxes: the architecture that makes all the other architectures unnecessary, if you like.

How could such a thing be constructed? A machine learning approach, based on lots and lots of input/output data? A quantum computing approach which tries an infinity of possible Turing machine configurations, all in parallel? Even if this is not possible today, could it be possible in the near future? Would the fact that boxes constructed this way would necessarily be black – beyond human comprehension at the control flow level – be a problem? Would the fact that we can never formally prove the behavior of the box be a problem? Perhaps not as much as might be initially thought, given the known limitations of formal proof methods for traditionally constructed systems. After all, we cannot even tell if a process will halt, regardless of how much access we have to its internal logic. Also, society seems to be in the process of inuring itself to the unexplainability of machine learning – that genie is already out of the bottle. I have written elsewhere (in the "what is law?" series - http://seanmcgrath.blogspot.com/2017/07/what-is-law-part-15.html) that we have the same “black box” problem with human decision making anyway.
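
As a purely illustrative toy – not a claim about how such a box would really be built – here is the input/output-data idea in miniature. The hidden behaviour (y = 3x + 1, chosen arbitrarily) is only ever observed as example pairs; a least-squares fit produces whatever stands in for it inside the box.

```python
import numpy as np

# Hypothetical hidden behaviour of the box: y = 3x + 1. In the black-box
# scenario nobody hand-codes this; we only observe input/output pairs.
inputs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
outputs = 3.0 * inputs + 1.0

# "Construct" the box by fitting a model to the observed interactions
# (ordinary least squares over [x, 1]).
A = np.vstack([inputs, np.ones_like(inputs)]).T
coeffs, *_ = np.linalg.lstsq(A, outputs, rcond=None)

def black_box(x):
    # Opaque at the control-flow level: just two fitted coefficients.
    return coeffs[0] * x + coeffs[1]

print(black_box(10.0))  # ~31.0, though "3x + 1" was never written as logic
```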

To get to such a world, we would need much better mechanisms for formal specification. Perhaps the next generation of software architects will be focused on patterns for expressing the desired behavior of the box, not models for how the behavior itself can be achieved. A very knotty problem indeed but, if it can be achieved, radical re-arrangements of systems in the future could start, and effectively stop, with updating the black box specification – with no traditional analysis/design/construct/test/deploy cycle at all.
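
One possible shape such a behavioral specification might take – a sketch only, with hypothetical names – is a set of executable properties that any candidate box must satisfy, leaving the internals entirely unspecified:

```python
def sorting_spec(box):
    """Executable specification: does `box` behave like a sorting function?"""
    cases = [[], [1], [3, 1, 2], [5, 5, 1]]
    for case in cases:
        result = box(case)
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(result, result[1:]))
        # Property 2: the output is a permutation of the input.
        assert sorted(result) == sorted(case)
    return True

# Any box satisfying the spec is acceptable, however it was constructed --
# hand-written, machine-learned, or otherwise.
print(sorting_spec(sorted))  # True
```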

Monday, May 28, 2018

Thinking about Software Architecture & Design : Part 10


Once the nouns and verbs I need in my architecture start to solidify, I look at organizing them across multiple dimensions. I tend to think of the noun/verb organization exercise in the physical terms of surface area and moving parts. By "surface area" I mean minimizing the sheer size of the model. I freely admit that page count is a crude-sounding measure for a software architecture, but I have found over the years that the total size of the document required to adequately explain the architecture is an excellent proxy for its total cost of ownership.

It is vital, for a good representation of a software architecture, that both the data side and the computation side are covered. I have seen many architectures where the data side is covered well but the computation side has many gaps. This is the infamous “and then magic happens” part of the software architecture world. It is most commonly seen when there is too much use of convenient real-world analogies, i.e. thematic modules that just snap together like jigsaw/Lego pieces, data layers that sit perfectly on top of each other like layers of a cake, objects that nest perfectly inside other objects like Russian dolls, etc.

When I have a document that I feel adequately reflects both the noun and the verb side of the architecture, I employ a variety of techniques to minimize its overall size. On the noun side, I can create type hierarchies to explore how nouns can be considered special cases of other nouns. I can create relational de-compositions to explore how partial nouns can be shared by other nouns. I will typically “jump levels” when I am doing this, i.e. I will switch between thinking of the nouns in purely abstract terms (“what is a widget, really?”) and thinking about them in physical terms (“how best to create/read/update/delete widgets?”). I think of it as working downwards towards implementation and upwards towards abstraction at the same time. It is head-hurting at times, but in my experience produces better practical results than the simpler step-wise refinement approach of moving incrementally downwards from abstraction to concrete implementation.
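
As a small illustration of jumping levels – with hypothetical nouns, not from any real system – here is a Python sketch where the abstract view (a widget, and a noun that is a special case of it) sits alongside the physical create/read/update/delete view:

```python
from dataclasses import dataclass

@dataclass
class Widget:                    # abstract level: what a widget *is*
    widget_id: int
    name: str

@dataclass
class PremiumWidget(Widget):     # type hierarchy: a special case of Widget
    support_tier: str = "gold"

class WidgetRepository:          # physical level: how widgets are CRUD-ed
    def __init__(self):
        self._store = {}

    def create(self, widget):
        self._store[widget.widget_id] = widget

    def read(self, widget_id):
        return self._store.get(widget_id)

    def update(self, widget):
        self._store[widget.widget_id] = widget

    def delete(self, widget_id):
        self._store.pop(widget_id, None)

repo = WidgetRepository()
repo.create(PremiumWidget(1, "frobnicator"))
print(repo.read(1))
```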

On the verb side, I tend to focus on the classic engineering concept of "moving parts". Just as in the physical world, it has been my experience that the smaller the number of independent moving parts in an architecture, the better. Giving a lot of thought to opportunities to reduce the total number of verbs required pays handsome dividends. I think of it in terms of combinatorics: what are the fundamental operators from which all the other operators can be created by combination? Getting to this set of fundamental operators is almost like finding the architecture inside the architecture.
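
A hedged sketch of what finding those fundamental operators can look like, using a hypothetical record-editing domain: two fundamental verbs, with richer verbs defined purely as combinations of them.

```python
def set_field(record, key, value):    # fundamental verb 1
    return {**record, key: value}

def del_field(record, key):           # fundamental verb 2
    return {k: v for k, v in record.items() if k != key}

# Derived verbs: combinations only, no new moving parts.
def rename_field(record, old, new):
    return del_field(set_field(record, new, record[old]), old)

def clear_field(record, key):
    return set_field(record, key, None)

r = {"title": "draft", "owner": "sean"}
print(rename_field(r, "title", "name"))  # {'owner': 'sean', 'name': 'draft'}
```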

I also think of verbs in terms of complexity generators. Here I am using the word “complexity” in the mathematical sense. Complexity is not a fundamentally bad thing! I would argue that all system behavior has a certain amount of complexity. The trick with complexity is to find ways to create the amount required, but in a way that allows you to remain in control of it. The compounding of verbs is the workhorse for complexity generation. I think of data as a resource that undergoes transformation over time. Most computation – even the simplest assignment of the value Y to be the value Y + 1 – has an implicit time dimension. Assuming Y is a value that lives over a long period of time – i.e. is persisted in some storage system – then Y today is just the compounded result of the verbs applied to it since its date of creation.
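
In code terms – a sketch under the assumption that every change to Y is captured as a verb, in the style of event sourcing – the current value of Y is just a fold over its verb history:

```python
from functools import reduce

def increment(y):
    return y + 1

def double(y):
    return y * 2

# A hypothetical verb history, accumulated over Y's lifetime.
history = [increment, increment, double, increment]

def current_value(initial, verbs):
    # Y today: the compounded result of every verb since creation.
    return reduce(lambda y, verb: verb(y), verbs, initial)

print(current_value(0, history))  # ((0+1+1)*2)+1 = 5
```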

There are two main things I watch for as I am looking into my verbs and how to compound them and apply them to my nouns. The first is to always include the ability to create an ad-hoc verb “by hand”, by which I mean always having the ability to edit the data in the nouns using purely interactive means. This is especially important in systems where down-time for the creation of new algorithmic verbs is not an option.
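
A minimal sketch of that escape hatch, with hypothetical data: a one-off verb written by hand at runtime goes through the same apply mechanism as the algorithmic verbs, with no deployment cycle.

```python
def apply_verb(record, verb):
    # The same mechanism used by all the algorithmic verbs.
    return verb(record)

record = {"balance": 100}

# Typed in interactively (e.g. at an operator console) to fix a one-off
# data problem -- no down-time, no new release.
ad_hoc_fix = lambda r: {**r, "balance": r["balance"] - 3}

record = apply_verb(record, ad_hoc_fix)
print(record)  # {'balance': 97}
```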

The second is watching out for feedback/recursion in verbs. Nothing generates complexity faster than feedback/recursion, and when it is used, it must be used with great care. I have a poster on my wall of a fractal with its simple mathematical formula written underneath it. It is incredible that such bottomless complexity can be derived from such a harmless-looking feedback loop. Using it wisely can produce architectures capable of highly complex behaviors but with small surface areas and few moving parts. Used unwisely.....
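
The poster's point fits in a few lines of Python: the Mandelbrot iteration z = z² + c is a single-line feedback loop, yet it generates bottomless complexity.

```python
def escape_time(c, max_iter=50):
    """Count feedback iterations of z = z*z + c before |z| exceeds 2."""
    z = 0
    for n in range(max_iter):
        z = z * z + c        # the entire "formula under the poster"
        if abs(z) > 2:
            return n
    return max_iter          # never escaped within the budget

# Small changes in c flip the behaviour completely: complexity from feedback.
print(escape_time(0j))   # stays bounded forever (inside the Mandelbrot set)
print(escape_time(0.3))  # crawls for a dozen steps, then escapes (outside)
```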