It is said that there
are really only seven basic storylines and that all stories can
either fit inside them or be decomposed into some combination of the
basic seven. There is the rags-to-riches story. The voyage and return
story. The overcoming the monster story...and so on.
I suspect that
something similar applies to Software Architecture & Design.
When I was a much younger practitioner, this was a very active field, with new methodologies/paradigms coming along on a regular basis. Thinkers such as Yourdon, DeMarco, Jackson, Booch, Hoare, Dijkstra and Hohpe distilled the essence of most of the core architecture patterns we know of today.
In more recent years, attention appears to have moved away from the discovery/creation of new architecture patterns and architecture methodologies towards concerns closer to the construction aspects of software. There is an increasing emphasis on two-way flows in the creation of architectures, or perhaps circular flows would be a better description: iterating backwards from, for example, user stories to the abstractions required to support them, then perhaps a forward iteration, refactoring the abstractions to cover the required user stories with fewer “moving parts”, as discussed before.
There has also been a marked trend towards embracing the volatility of the IT landscape, in the form of proceeding to software build phases with “good enough” architectures and consciously factoring in the possibility of needing complete architecture re-writes in ever shorter time spans.
I suspect this is an area where real-world physical architecture and software architecture fundamentally differ and the analogy breaks down. In the physical
world, once the location of the highway is laid down and construction
begins, a cascade of difficult-to-reverse events starts to occur in
parallel with the construction of the highway. Housing estates and
commercial areas pop up close to the highway. Urban infrastructure
plans – perhaps looking decades into the future – are created
predicated on the route of the highway and so on.
In software, an architecture change often has a similar number of knock-on effects, but when the affected items are themselves primarily software, rearranging everything around a new architecture is more manageable. Still likely a significant challenge, but more doable, because software is, well, “softer” than real-world concrete, bricks and mortar.
My overall sense of where software architecture is today is that it revolves around the question: “How can we make it easier to fundamentally change the architecture in the future?” The fierce competitive landscape for software has combined with cloud computing to fuel this burning question.
Creating software solutions with very short (i.e. weeks) time horizons before they change again is now possible and increasingly commonplace. The concept of a version number is becoming obsolete. Today's software solution may or may not be the same as the one you interacted with yesterday and it may, in fact, be based on an utterly different architecture under the hood. Modern
communications infrastructure, OS/device app stores, auto-updating
applications, thin clients...all combine to create a very fluid
environment for modern day software architectures to work in.
Are there new software patterns still emerging since the days of data-flow diagrams, ER diagrams and OOAD? Are we just re-combining the seven basic architectures in a
new meta-architecture which is concerned with architecture change
rather than architecture itself? Sometimes I think so.
I also find myself wondering where we go next if that is the case. I can see one possible end-point for this, an end-point which I find tantalizing and surprising in equal measure. My training in software architecture (the formal parts, and the decades of informal training since then) has been based on the idea that the fundamental job of the software architect is to create a digital model, a white box, of some part of the real world, such that the model meets a set of expectations in terms of its interaction with its users (which may themselves be other digital models).
In modern day
computing, this idea of the white box has an emerging alternative
which I think of as the black box. If a machine could somehow be
instructed to create the model that goes inside the box – based
purely on an expression of its required interactions with the rest of
the world – then you basically have the only architecture you will
ever need for creating what goes into these boxes. The architecture that makes all the other architectures unnecessary, if you like.
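To make the idea a little more concrete, here is a deliberately toy sketch in Python. Everything in it is an illustrative assumption rather than a real synthesis tool: the “synthesiser” is just an ordinary least-squares fit in one variable, and the specification is a handful of Celsius-to-Fahrenheit input/output pairs. The point is only the division of labour: a human supplies required behaviour, and a procedure constructs the callable that goes inside the box.

# Toy sketch: a machine "constructs the model inside the box" from
# nothing but required input/output behaviour. Least squares here is
# a stand-in for whatever learning/search a real black-box builder
# might use.

def synthesize_black_box(examples):
    """Fit y = a*x + b to (input, output) pairs and return it as an
    opaque callable. We supply behaviour; no human designs internals."""
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sy = sum(y for _, y in examples)
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

# The "architecture" is just the required interactions with the world:
spec = [(0, 32.0), (100, 212.0), (37, 98.6)]   # Celsius -> Fahrenheit

model = synthesize_black_box(spec)
print(model(50))   # 122.0, behaviour the spec never stated explicitly

Note that the resulting model answers questions the specification never explicitly covered, which is exactly what we would be asking the box-builder to do.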
How could such a thing
be constructed? A machine learning approach, based on lots and lots
of input/output data? A quantum computing approach which tries an
infinity of possible Turing machine configurations, all in parallel?
Even if this is not possible today, could it be possible in the near
future? Would the fact that boxes constructed this way would be
necessarily black – beyond human comprehension at the control flow
level – be a problem? Would the fact that we can never formally
prove the behavior of the box be a problem? Perhaps not as much as
might be initially thought, given the known limitations of formal
proof methods for traditionally constructed systems. After all, we cannot, in general, tell whether a process will halt, regardless of how much access we have to its internal logic. Also, society seems to be in
the process of inuring itself to the unexplainability of machine
learning; that genie is already out of the bottle. I have written elsewhere (in the “What is law?” series: http://seanmcgrath.blogspot.com/2017/07/what-is-law-part-15.html) that we have the same “black box” problem with human decision making anyway.
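The halting limitation, for the record, is Turing's diagonal argument, and it holds even with complete access to the source. A minimal sketch, with the halts() oracle explicitly hypothetical:

# Sketch of Turing's diagonal argument. halts() is a hypothetical,
# assumed-perfect oracle; the point is that no such oracle can exist,
# no matter how much internal logic it is allowed to inspect.

def halts(f, arg):
    """Hypothetical: return True iff f(arg) eventually halts."""
    raise NotImplementedError("no total, correct halts() can exist")

def contrary(f):
    # Do the opposite of whatever the oracle predicts about f run on itself.
    if halts(f, f):
        while True:    # oracle says "halts", so loop forever
            pass
    return "halted"    # oracle says "loops forever", so halt at once

# Asking halts(contrary, contrary) forces the oracle to contradict
# itself: whichever answer it gives, contrary(contrary) does the opposite.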
To get to such a world, we would need much better mechanisms for formal specification. Perhaps the next generation of software architects will be focused on patterns for expressing the desired behavior of the box, not models for how the behavior itself can be achieved. A very knotty problem indeed but, if it can be achieved, radical re-arrangements of systems in the future could start, and effectively stop, with updating the black box specification, with no traditional analysis/design/construct/test/deploy cycle at all.
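A hint of what such behavior-expressing patterns might look like already exists in property-based testing. Here is a minimal sketch using Python's hypothesis library; black_box_sort is a hypothetical stand-in for whatever a synthesiser might hand us, and the specification says only what the box must do, nothing about how:

# Sketch: a behavioural specification of "the box", silent on internals.
# Uses the real `hypothesis` property-testing library
# (pip install hypothesis); black_box_sort is a hypothetical artefact
# delivered by some synthesis process.
from hypothesis import given, strategies as st

def black_box_sort(xs):
    return sorted(xs)   # stand-in body; the spec below never looks inside

@given(st.lists(st.integers()))
def specification(xs):
    out = black_box_sort(xs)
    assert out == sorted(out)           # outputs are ordered
    assert sorted(out) == sorted(xs)    # and are a permutation of the inputs

specification()   # hypothesis drives many generated inputs through the spec

If the box's internals changed utterly overnight, this specification would not change at all; it is the one artefact that survives a radical re-architecture.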