This series of posts will contain some thoughts on software architecture and design. Things I have learned over the decades spent doing it to date. Things I think about but have not got good answers for. Some will be very specific - "If X happens, best to do Y straight away.", some will be philosophical "what exactly is X anyway?", some will be humorous, some tragic, some cautionary...hopefully some will be useful. Anyway, here goes...
The problem of problems
Some "problems" are not really problems at all. By this I mean that sometimes, it is simply the way a “problem” is phrased that leads you to think that the problem is real and needs to be solved. Other times, re-phrasing the problem leads to a functionally equivalent but much more easily solved problem.
Another way to think about this is to recognize that human language itself is always biased towards a particular world view (that is why translating one human language into another is so tricky. It is not a simple mapping of one world view to another).
Simply changing the language used to describe a “problem” can sometimes result in changing (but never removing!) the bias. And sometimes, this new biased position leads more readily to a solution.
I think I first came across this idea in the book "How to Solve It" by the mathematician George Pólya. Later on, I found echoes of it in the work of the philosopher Ludwig Wittgenstein. He was fond of saying (at least in his early work) that there are no real philosophical problems – only puzzles – caused by human language.
Clearing away the fog of human language - says Wittgenstein - can show a problem to be not a problem at all. I also found this idea in the books of Edward de Bono, whose concepts of "lateral thinking" often leverage the idea of changing the language in which a problem is couched as a way of changing viewpoint and finding innovative solutions.
One example de Bono gives concerns a factory polluting the water in a river. If you focus on the factory as a producer of dirty water, the problem is oriented around its output: it is the dirty water leaving the factory that needs to be addressed. However, if you notice that the factory also consumes fresh water, the problem can be re-cast in terms of the factory's input: require the factory to place its water intake downstream of its own discharge point. The factory then consumes whatever it emits, which incentivizes it not to pollute the river. Looked at another way, the factory itself becomes a regulator, obviating or at least significantly reducing the need for extra entities in the regulation process.
In more recent years I have seen the same idea lurking in Buddhist philosophy, in the form of our own attitudes towards a situation being a key determinant in our conceptualization of that situation as good, bad, or neutral. I sometimes like to think of software systems as "observers" of the world in this Buddhist philosophy sense. Admittedly these artificial observers are looking at the world through more restricted sense organs than humans, but they are observers nonetheless.
Designing a software architecture is essentially baking in a bias as to how "the world" is observed by a nascent software system. As architects/designers we transfer our necessarily biased conceptualization of the to-be system into code with a view to giving life to a new observer in the world - a largely autonomous software system.
Thinking long and hard about the conceptualization of the problem can pay big dividends early on in software architecture. As soon as the key abstractions take linguistic form in your head i.e. concepts start to take the form of nouns, verbs, adjectives etc., the problem statement is baked in, so to speak.
For example, imagine a scenario where two entities, A and B, need to exchange information. Information needs to flow from A to B reliably. Does it matter if I think of A sending information to B, or think of B as querying information from A? After all, the net result is the same, right? The information gets to B from A, right?
Turns out it matters a lot. The bias in the word "send" is that it carries with it the notion of physical movement. If I send you a postcard in the mail, the postcard moves. There is one postcard. It moves from 1) I have it, to 2) in transit, to 3) you have it (maybe).
If we try to implement this "send" in software, it can get very tricky indeed to fully emulate what happens in a real-world "send" - especially if we stipulate guaranteed once-and-only-once delivery. Digital "sends" are never actually sends. They are always replications - or, more usually, a replicate followed by a delete.
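The replicate-then-delete point can be made concrete with a minimal sketch. The names here (`send`, `sender_store`, `receiver_store`) are purely illustrative, assuming two in-memory stores standing in for the two endpoints:

```python
def send(item_id, sender_store, receiver_store):
    """Emulate a physical 'send' digitally: replicate, then delete."""
    receiver_store[item_id] = sender_store[item_id]  # step 1: replicate
    del sender_store[item_id]                        # step 2: delete original
    # If the process dies between these two lines, the item briefly exists
    # in both places; reverse the order and a crash leaves it in neither.
    # That gap is exactly why once-and-only-once delivery is so hard.

a = {"msg1": "hello"}
b = {}
send("msg1", a, b)   # afterwards: a is empty, b holds the message
```

There is never a moment at which the bits "move" the way a postcard does; there is only a copy and a deletion, and the window between them is where the trouble lives.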
If, instead of this send-centric approach, we focus on B as the active party in the information flow - querying for information from A and simply re-requesting it if, for some reason, it does not arrive - then we have a radically different software architecture. An architecture that is much easier to implement in many scenarios. (Compare the retry-centric architecture of many HTTP systems with, say, reliable message exchange protocols.)
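The query-oriented version can be sketched just as briefly. This is not a real protocol, only an illustration of the pull-with-retry shape; `FlakyLink` and `query_with_retry` are made-up names, and the lossy channel is simulated by deterministically dropping the first few requests:

```python
class FlakyLink:
    """Stand-in for a lossy channel: drops the first few requests."""
    def __init__(self, drops):
        self.drops = drops

    def fetch(self, source):
        if self.drops > 0:
            self.drops -= 1
            raise ConnectionError("request lost in transit")
        return source["payload"]

def query_with_retry(link, source, max_attempts=10):
    """B drives the exchange: simply re-request until the data arrives."""
    for _ in range(max_attempts):
        try:
            return link.fetch(source)
        except ConnectionError:
            continue  # a real system would back off between retries
    raise TimeoutError(f"gave up after {max_attempts} attempts")

a = {"payload": "the information"}
link = FlakyLink(drops=2)           # the first two requests are lost
result = query_with_retry(link, a)  # succeeds on the third attempt
```

Notice where the complexity went: A no longer has to track what was delivered, because B's retry loop carries the reliability burden, and repeating a query is harmless in a way that repeating a send is not.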
So what happened here? We simply substituted one way of expressing the business need - a send-oriented conceptualization, with a query-oriented conceptualization, and the "problem" changed utterly before our very eyes.
Takeaway: the language in which a problem is expressed is often already a software architecture. It may or may not be a good version 1 of the architecture to work from. It contains many assumptions and many biases, regardless of whether the expression is linguistic or visual.
It often pays to tease out those assumptions in order to see if a functionally equivalent re-expression of the problem is a better starting point for your software architecture.