Thursday, June 14, 2018
Thinking about Software Architecture & Design : Part 12
The word “flexible”
gets used a lot in software architecture & design, and it tends to
get used in a positive sense: flexibility is mostly seen as
a good thing to have in your architecture.
And yet, flexibility is
very much a two-edged sword. Not enough of it, and your architecture
can have difficulty dealing with the complexities that typify real-world
situations. Too much of it, and your architecture can be too
difficult to understand and maintain. The holy grail of flexibility,
in my opinion, is captured in the adage that “simple things should
be simple, and hard things should be possible.”
Simple to say, hard to
do. Take SQL for example, or XSLT or RPG: they all excel at making
simple things simple in their domains, and yet they can also be
straitjackets when more complicated things come along. By
“complicated” here I mean things that do not neatly fit into
their conceptual models of algorithmics and data.
A classic approach to handling this is to
allow such systems to be embedded in a Turing-complete programming
language: SQL inside C#, XSLT inside Java, and so on. The Turing
completeness of the host language ensures that the “hard
things are possible”, while the core, now the “embedded” system,
ensures that the simple things are simple.
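To make that concrete, here is a minimal sketch of the embedding pattern in Java, using the standard javax.xml.transform API to run an XSLT stylesheet from inside a Java host (the file names are illustrative only):

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import java.io.File;

    public class EmbeddedXslt {
        public static void main(String[] args) throws Exception {
            // The embedded system: a declarative XSLT stylesheet that keeps
            // the simple transformation simple.
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File("simple-case.xslt")));

            transformer.transform(
                    new StreamSource(new File("input.xml")),
                    new StreamResult(new File("output.xml")));

            // The Turing-complete host: anything the stylesheet cannot express
            // can be done out here in Java, before or after the transform.
        }
    }

The division of labour is the point: the stylesheet keeps the simple things simple, and the surrounding Java keeps the hard things possible.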
Unfortunately, what
tends to happen is that the complexity of the real world chips away
at the split between simple and complex and, oftentimes, such hybrid
systems evolve into pure Turing-complete hosts. That is, over time, the
embedded system for handling the simple cases is gradually eroded,
and then one day you wake up to find that it is all written in C# or Java or
whatever, and the originally embedded system is withering on the vine.
A similar phenomenon
happens on the data side, where an architecture might initially be 98%
“structured” fields but, over time, the “unstructured” parts
of its data model grow and grow to the point where the structured
fields atrophy and all the mission-critical data migrates over to the
unstructured side. This is why so many database-centric systems
organically grow memo fields, blob fields or even complete, distinct
document storage sub-systems over time, to handle all the data that
does not fit neatly into the “boxes” of the structured fields.
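A hedged illustration of that drift, sketched in Java with purely invented names: the named fields are the original structured design, and the catch-all map is the memo/blob-style escape hatch that the mission-critical data gradually migrates into.

    import java.util.HashMap;
    import java.util.Map;

    public class CustomerRecord {
        // The original, carefully modelled structured fields.
        private String name;
        private String accountNumber;

        // The unstructured escape hatch that tends to appear over time,
        // and into which the data that fits no box migrates.
        private final Map<String, String> extras = new HashMap<>();

        public void putExtra(String key, String value) {
            extras.put(key, value);
        }

        public String getExtra(String key) {
            return extras.get(key);
        }
    }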
Attempting to add
flexibility to the structured data architecture tends to result in
layers of abstraction that people have difficulty following. Layers of
pointer indirection. Layers of subject/verb/object decomposition.
Layers of relationship reification and so on...
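For example, a subject/verb/object decomposition might look something like the following sketch (an invented, minimal illustration rather than any particular system): every direct field becomes a generic triple, and even the statements about statements get reified, which is wonderfully flexible and very hard to follow.

    import java.util.ArrayList;
    import java.util.List;

    public class TripleSketch {
        // One generic shape for everything: subject/verb/object.
        record Triple(String subject, String verb, String object) {}

        public static void main(String[] args) {
            List<Triple> facts = new ArrayList<>();
            // A direct field such as customer.setCountry("IE") becomes:
            facts.add(new Triple("customer:42", "livesIn", "country:IE"));
            facts.add(new Triple("customer:42", "hasName", "Ada"));
            // ...and statements about statements (reification) follow:
            facts.add(new Triple("statement:1", "assertedBy", "system:CRM"));
            facts.forEach(System.out::println);
        }
    }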
This entropy growth does not happen
overnight. The complexity of modelling the real world chips away at
designs until, at some point, there is an inflection. Typically this
point of inflection manifests in a desire to “simplify” or “clean
up” a working system. This often results in a new architecture that
incorporates the learnings from the existing system, and then the whole
process repeats. I have seen this iteration work at the level
of decades, but in more recent years the trend appears to be towards
shorter and shorter cycle times.
This cyclic revisiting
of architectures raises the obvious teleological question about the end
point of the cycle. Does it have an end? I suspect not because, in a
Platonic sense, the ideal architecture can be contemplated but cannot
be achieved in the real world.
Besides, even if it could be achieved,
the ever-changing and monotonically increasing complexity of the real
world ensures that a perfect model for time T can only be achieved at
some future time-point T+N, by which time it is outdated and has been
overtaken by the ever-shifting sands of reality.
So what is an architect
to do if this is the case? I have come to the conclusion that it is
very, very important to be careful about labelling anything as an immutable truth
in an architecture. All the nouns, verbs, adjectives etc. that sound to
you like “facts” of the real world will, at some point,
bend under the weight of constant change and necessarily incomplete empirical knowledge.
The smaller the set of things you consider immutable
facts, the more flexible your architecture will be. By all means, layer abstractions on top of this core layer. By all means, add Turing
completeness into the behavioral side of the model. But treat all of
these higher layers as fluid. It is not that they might need to
change; it is that they will need to change. It is just a question of time.
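One way to picture this, as a rough sketch with invented names rather than a recommendation: keep the immutable core as small as possible (here, just the bare fact that something happened at a point in time) and treat every interpretation of it as a fluid layer you expect to replace.

    import java.time.Instant;

    public class SmallCoreSketch {
        // The core: as close to an immutable fact as we can get.
        record Event(Instant occurredAt, String payload) {}

        // A fluid layer: one of many possible interpretations, expected to change.
        interface Interpretation {
            String interpret(Event event);
        }

        public static void main(String[] args) {
            Event event = new Event(Instant.now(), "order-placed");
            Interpretation current = e -> "At " + e.occurredAt() + ": " + e.payload();
            System.out.println(current.interpret(event));
        }
    }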
Finally, there are
occasions where the set of core facts in your model is the empty set!
Better to work with this reality than fight against it, because entropy is the
one immutable fact you can absolutely rely on. It is possibly the only thing you can have at the core of your architecture without worrying about it being invalidated by the arrival of new knowledge or the passage of time.