Monday, March 30, 2020

Fully Virtual Legislative and Parliamentary sessions coming in 2020. Forty years ahead of schedule

It goes without saying that we live in interesting times at the moment. Never could I have imagined that the world would change utterly, so rapidly...

In times like this thoughts turn immediately to those most seriously affected. Fingers crossed the measures being taken at the moment will allow us to defeat this virus soon.

When we return to "normal" it is clearly going to be a new "normal". All over the world, organizations of all sorts are finding ways to function in fully virtualized environments. The word "Zoom" - already in the dictionary as a verb coming into this crisis - has acquired a new meaning in record time. There has never been a more intense focus on finding ways to get things done digitally. It is no longer hyperbole to say that our futures and our lives depend on it.

Legislatures and Parliaments are not immune to this new digital impetus. All around the world, Legislatures and Parliaments are exploring ways to work fully digitally - decades ahead of when I had envisaged that they would.

These institutions face some unique challenges in making this transition. It is not as simple as using conference calls for voice votes and switching on a few video conferencing cameras. Decades and indeed centuries of precedents, statutory provisions and constitutional/charter provisions need to be complied with so that continuity of the rule of law is maintained and to ensure legal authority is preserved.

Rules of Legislative Procedure such as Mason's provide a wealth of good guidance that functions regardless of whether a Session is being conducted "in person" or virtually. 77 of the 99 legislative chambers in the US use Mason's as the bedrock of their chamber rules. Mason's allows for the suspension of rules and, on the face of it, this appears to provide a lot of scope for US legislative chambers/houses to adapt quickly to working in a fully virtualized environment.

However, Mason's is also clear that its rules, or the rules in a House or Senate resolution or in the joint rules of a legislature, are all ultimately limited by whatever the constitution may require.

Simply put, the Constitution rules. And this is where the difficulty may lie in some cases. Most constitutions are old documents - many of them hundreds of years old. Some specifically state, for example, that legislative sessions are to be conducted "in person".

This has higher precedence than anything in the joint rules, the chamber-specific rules or in Mason's/Jefferson's/Robert's etc. So, perhaps a change to the constitution is required in states where there is a constitutional requirement for "in person" meetings?

This is of course possible in all states, but I know of no state where the constitution can be changed quickly. In some states it takes a minimum of a full session to pass an amendment, plus a referendum... all of which takes time.

I am not a constitutional scholar, or indeed a lawyer for that matter, but it may be that the nature of legal language comes to the rescue here. I have written before at length about how I think about law and how it is not a simple, large set of "rules". For example, What is Law? and also a series of blog posts starting here.

Simply put, my view is that the open-ended nature of legal language that may appear to be a "bug" at first glance - especially if you are a software engineer seeking to create  "rules" from it - is actually law's most brilliant feature.

Perhaps the time has come - in these extraordinary times - to revisit the interpretation of the phrase "in person" meetings to include "virtual" meetings in certain circumstances. I believe a lot of groundwork already exists for this with the gradual adoption over the last few decades of digital technology from faxes to digital signatures in legally binding environments.

In one fell swoop, such an interpretation of "in person" would pave the way for fully virtual legislative sessions in states that have "in person" language in their constitutions. It is a matter for the courts obviously, as that is where the "interpretation" function lies, at least in common law jurisdictions.

I do not mean to suggest that there is some sort of magic wand to be waved here, but I do believe that with all the dedicated, talented legal people looking at enabling virtual legislative sessions at the moment, a legal solution will be found in all states that need it.

In parallel, of course, the technology has come on in leaps and bounds over the last two decades and can facilitate this, once the legal framework is in place to enable it. The technology in question is my day job and has been for about thirty years. It is very exciting to be in a position to help. We have all the modules we need right now, having spent decades building out digital systems in Legislatures/Parliaments already.

Indeed, we are already working with a number of legislatures (and also with local government with PrimeGov), to extend our existing solutions to be fully virtual for legislatures and parliaments, covering chamber floors and committees. Deployment time can be as low as a matter of weeks.

For many legislatures, it is already full steam ahead towards support for virtual sessions. For those that need to make some preparatory legal framework modifications, my advice would be to do that work in parallel with the technical preparations. There is no need to wait. The time to start is now. Today.

This is all happening in my lifetime. I never would have guessed it, but I am delighted to be part of it, having spent 30 years thinking about it. The new normal is upon us.


Thursday, January 03, 2019

An alternative model of computer programming : Part 2


This is part two of a series of posts about an alternative model of computer programming I have been mulling for, oh decades now. The first part is here: http://seanmcgrath.blogspot.com/2018/08/an-alternative-model-of-computer.html

The dominant conceptual model of computer programming is that it is computation, which in turn is of course a branch of mathematics. This is incredibly persuasive on many levels. George Boole's book An Investigation of the Laws of Thought sets out a powerful way of thinking about truth/falsity and conditional reasoning in ways that are purely numerical and thus mathematical and, well, really beautiful. Hence the phrase “boolean logic”. Further back in time still, we find al-Khwārizmī in the ninth century working out sequences of mathematical steps to perform complex calculations. Hence the word “algorithm”. Further back again, in the time of the ancient Greeks, we find Euclid and Eratosthenes with their elegant algorithms for finding greatest common divisors and prime numbers respectively.

Pretty much every programming language on the planet has a suite of examples/demos that include these classic algorithms turned into math-like language. They all feature the “three musketeers” of most computer programming. Namely, assignment (e.g. y = f(x)), conditional logic (e.g. “if y greater than 0 do THIS otherwise THAT”) and branching (e.g. “goto END”).
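To make those three ingredients concrete, here is a minimal Python sketch of Euclid's greatest-common-divisor algorithm (my own illustrative rendering, not taken from any particular language's demo suite), with the assignment, conditional logic and branching called out in comments.

```python
def gcd(a, b):
    """Euclid's algorithm, written to make the "three musketeers" visible."""
    while b != 0:          # conditional logic: "if b is not 0, keep going"
        a, b = b, a % b    # assignment: "let a be b, let b be a mod b"
    return a               # the loop's jump back to the test is the branching

print(gcd(48, 18))  # prints 6
```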

These three concepts get dressed up in all sorts of fine clothes in different programming languages but, as Alan Turing showed in the nineteen thirties, you only need to be able to assign values and to “branch on 0” in order to be able to compute anything that is computable via a classical computer – a so-called Turing Machine. (This is significantly less than everything you might want to compute, but that is a topic for another day. For now, we will stick to classical computers as exemplified in the so-called Von Neumann architecture and leave quantum computing aside.)

So what's the problem? Mathematics clearly maps very elegantly to expressing the logic and the calculations needed to get algorithms formalized for classical computers. And this mathematics maps very nicely onto today's mainstream programming languages.

Well, buried deep inside this beautiful mapping are some ugly truths that manifest themselves as soon as you go from written software to shipped software. To see these truths we will take a really small example of an algorithm. Here it is:
“Let y be the value of f(x)
If y is 0
then set x to 0
otherwise set x to 1”

The meaning of the logic here doesn't matter and it doesn't matter what f(x) actually calculates. All we need is something that has some assignments and some conditional logic such as the above snippet.

Now ask any programmer to code this in Python or C++ or Java and they will be done expressing the algorithm in their coding environment in a matter of minutes. It is mostly a question of adding the right “boilerplate” code around the edges, and finding whatever the correct syntax is in the chosen programming language for “if” and for “then”, for demarcating statements, for expressing assignments and so on.
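For illustration, here is one possible Python transcription of the snippet above. The function f is just a placeholder, since the post deliberately leaves it unspecified.

```python
def f(x):
    # Placeholder for whatever f actually calculates; the post leaves it open.
    return x % 2

def run(x):
    y = f(x)        # "Let y be the value of f(x)"
    if y == 0:      # "If y is 0"
        x = 0       # "then set x to 0"
    else:
        x = 1       # "otherwise set x to 1"
    return x

print(run(4), run(7))  # prints 0 1
```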

But in order to ship the code to production, items such as error handling, reliability, scalability, predictability... – sometimes referred to as the “ilities” of programming – end up taking a lot of time and a lot of coding. So much so that the code for the “ilities” that needs to surround shipped code is often many times larger than the lines of code required for the original, purely mathematical mapping into the programming language.

All of this ancillary code – itself liberally infused with its own assignments and conditional logic – becomes part of the total code that needs to be created to ship, and most of it needs to be managed for the lifetime of the core code itself. So now we have code for numeric overflows, function call timeouts, exception handlers etc. We have code for builds, running test scripts, shipping to production, monitoring, tracing... the list goes on and on.
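As a rough illustration of how this second layer dwarfs the first, here is a hedged sketch of the same few lines once a handful of the “ilities” are bolted on. It reuses the placeholder f from the sketch above; the retry count, timeout and logger name are arbitrary choices for illustration only.

```python
import logging
import time

logger = logging.getLogger("run")  # illustrative logger name

def run_for_production(x, retries=3, timeout_seconds=2.0):
    """The same algorithm, now wrapped in error handling, retries, latency checks and logging."""
    for attempt in range(1, retries + 1):
        start = time.monotonic()
        try:
            y = f(x)                                    # the original "pure" calculation
        except Exception:                               # in the real world, f can simply fall over
            logger.exception("f(%r) failed on attempt %d", x, attempt)
            continue
        elapsed = time.monotonic() - start
        if elapsed > timeout_seconds:                   # crude guard against f getting slower and slower
            logger.warning("f(%r) took %.2fs", x, elapsed)
        return 0 if y == 0 else 1                       # the original conditional logic
    raise RuntimeError(f"f({x!r}) failed after {retries} attempts")
```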

The pure world of pure math rarely needs to have any of these as concerns because in math we say “let y = f(x)” without worrying whether f(x) will actually fall over and not give us an answer at all, or, perhaps worse, work fine for a year and then start getting slower and slower for some unknown reason.

This second layer of code – the code that surrounds the “pure” code – is very hard to quantify. It is very hard to explain to non-programmers how important it might prove to be and how much time it might take, and, to make matters worse, it is very unusual to be able to say it is “done” in any formal sense. There are always loose ends: error conditions that the code doesn't handle, either because they are believed to be highly unlikely or because there is an open-ended set of potential error scenarios and it is simply not possible to code for every conceivable eventuality.

Pure math is a land of zero computational latency. A land where calculations are independent of each other and cannot interfere with each other. A land where all communication pathways are 100% reliable. A land where numbers have infinite precision. A land of infinite storage capacity. A land where the power never dies... and so on.

All this is to make the point that, in my opinion, for all the appealing mapping from pure math to pure algorithms, actual computer programming involves adding many other layers to cater for the fact that the real world of shipped code is not a pure math “machine”.

Next up. My favorite subject. Change with respect to time....


Friday, August 31, 2018

An alternative model of computer programming : Part 1

Today, I googled "How many programming languages are there?" and the first hit I got said, "256".

I giggled - as any programmer would when a power of two pops up in the wild like that. Of course, it is not possible to say exactly how many because new ones are invented almost every day and it really depends on how you define "language"...It is definitely in the hundreds at least.

It is probably in the thousands, if you rope in all the DSLs and all the macro-pre-processors-and-front-ends-that-spit-out-Java-or-C.

In this series of blog posts I am going to ask myself and then attempt to answer an odd question. Namely, "what if language is not the best starting point for thinking about computer programming?"

Before I get into the meat of that question, I will start with how I believe we got to the current state of affairs - the current programming linguistic Tower of Babel - with its high learning curve to enter its hallowed walls, with all its power, and with the complexities that seem to be inevitable in accessing that power.

I believe we got here the day we decided that computing was best modelled with mathematics.

Friday, July 27, 2018

The day I found Python....

It was 21 years ago. 1997. I was at an SGML conference in Boston (http://xml.coverpages.org/xml97Highlights.html). It was the conference where the XML spec. was launched.

Back in those days I mostly coded in Perl and C++ but was dabbling in the dangerous territory known as "write your own programming language"...

On the way from my hotel to a restaurant one evening I took a shortcut and stumbled upon a bookshop. I don't walk past bookshops unless they are closed. This one was open.

I found the IT section and was scanning a shelf of Perl books. Perl, Perl, Perl, Perl, Python, Perl....

Wait! What?

A misfiled book... Name seems familiar. Why? Ah, Henry Thompson. SGML Europe. Munich 1996. I attended Henry's talk where he showed some of his computational linguistics work. At first glance his screen looked like the OS had crashed, but after a little while I began to see that it was Emacs with command shell windows and command line invocation of scripts, doing clever things with markup, in Python. A very productive setup, fusing editor and command line...

I bought the mis-filed Python book in Boston that day and read it on the way home. By the time I landed in Dublin it was clear to me that Python was my programming future.  It gradually replaced all my Perl and C++ and today, well, Python is everywhere.




Monday, July 23, 2018

Thinking about Software Architecture & Design : Part 14

Of all the acronyms associated with software architecture and design, I suspect that CRUD (Create, Read/Report, Update, Delete) is the most problematic. It is commonly used as a very useful sanity check to ensure that every entity/object created in an architecture is understood in terms of the four fundamental operations: creating, reading, updating and deleting. However, it subtly suggests that the effort/TCO of these four operations is on a par across all four.

In my experience the "U" operation – update – is the one where the most “gotchas” lurk. A create operation is – by definition – one per object/entity. Reads are typically harmless (ignoring some scaling issues for simplicity here). Deletes are one per object/entity, again by definition: generally more complex than reads, but not too bad. Updates, however, often account for the vast majority of operations performed on objects/entities. The vast majority of the life cycle is spent in updates. Not only that, but each update – by definition again – changes the object/entity, and in many architectures updates cascade, i.e. updates cause other updates. This can be exponential as updates trigger further updates. It can also be truly complex in the sense that updates end up, through event cascades, causing further updates to the originally updated objects...
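A toy Python sketch of that cascade effect. The entity names and the propagation rule are invented purely for illustration; the point is simply that a single “U” can fan out into many.

```python
class Entity:
    """A toy entity that re-updates its dependants whenever it is updated."""

    def __init__(self, name):
        self.name = name
        self.value = None
        self.dependants = []   # entities whose state is derived from this one

    def update(self, value, depth=0):
        print("  " * depth + f"update {self.name} -> {value}")
        self.value = value
        for dep in self.dependants:      # the cascade: one update triggers more updates
            dep.update(value + 1, depth + 1)

order, invoice, ledger = Entity("order"), Entity("invoice"), Entity("ledger")
order.dependants.append(invoice)
invoice.dependants.append(ledger)

order.update(1)   # one logical update becomes three physical updates
```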

I am a big fan of the CRUD checklist to cover off gaps in architectures early on but I have learned through experience that dwelling on the Update use-cases and thinking through the update cascades can significantly reduce the total cost of ownership of many information architectures.

Monday, June 25, 2018

Thinking about Software Architecture & Design : Part 13


Harold Abelson, co-author of the seminal tome Structure and Interpretation Of Computer Programs (SICP) said that “programs must be written for people to read, and only incidentally for machines to execute.”

The importance of human-to-human communication over human-to-machine communication is even more pronounced in Software Architecture, where there is typically another layer or two of resolution before machines can interpret the required architecture.

Human-to-human communication is always fraught with potential for miscommunication and the reasons for this run very deep indeed. Dig into this subject and it is easy to be amazed that anything can be communicated perfectly at all. It is a heady mix of linguistics, semiotics, epistemology and psychology. I have written before (for example, in the “What is Law” series - http://seanmcgrath.blogspot.com/2017/06/what-is-law-part-14.html) about the first three of these, but here I want to talk about the fourth – psychology.

I had the good fortune many years ago to stumble upon the book Inevitable Illusions by Massimo Piattelli-Palmarini and it opened my mind to the idea that there are mental concepts we are all prone to develop, that are objectively incorrect – yet unavoidable. Think of your favorite optical illusion. At first you were amazed and incredulous. Then you read/discovered how it works. You proved to your own satisfaction that your eyes were deceiving you. And yet, every time you look at the optical illusion, your brain has another go at selling you on the illusion. You cannot switch it off. No amount of knowing how you are being deceived by your eyes will get your eyes to change their minds, so to speak.

I have learned over the years that some illusions about computing are so strong that it is often best to incorporate them into architectures rather than try to remove them. For example, there is the “send illusion”. Most of the time when there is an arrow between A and B in a software architecture, there is a send illusion lurking. The reason being, it is not possible to send digital bits. They don't move through space. Instead they are replicated. Thus every implied “send” in an architecture can never be a truly simple send operation; it involves, at the very least, a copy followed by a delete.
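A small sketch of that point, using a local file “move” for simplicity: there is no literal transfer of bits, only replication at the destination followed by deletion at the source. The function name is mine, purely for illustration.

```python
import os
import shutil

def send_file(src_path, dst_path):
    """A 'send' that is really a copy followed by a delete."""
    shutil.copyfile(src_path, dst_path)   # replicate the bits at the destination
    os.remove(src_path)                   # only then remove the original, completing the "send" illusion
```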

Another example is the idea of a finite limit to the complexity of business rules. This is the very (very!) appealing idea that with enough refinement, it is possible to arrive at a full expression of the business rules that express some desirable computation. This is sometimes true (especially in textbooks), which adds to the power of the inevitable illusion. However, in many cases it is only true if you can freeze requirements – a tough proposition – and often it is impossible even then. For example, in systems where there is a feedback loop between the business rules and the data, a sort of “fractal boundary” emerges that the corpus of business rules can never fully cover.

I do not let these concerns stop me from using concepts like “send” and “business rule repository” in my architectures because I know how powerfully these concepts are locked into all our minds. However, I do try to conceptualize them as analogies and remain conscious of the tricks my mind plays with them. I then seek to ensure that the implementation addresses the unavoidable delta between the inevitable illusion in my head and the reality in the machine.


Thursday, June 14, 2018

Thinking about Software Architecture & Design : Part 12


The word “flexible” gets used a lot in software architecture & design. It tends to get used in a positive sense. That is, "flexibility" is mostly seen as a good thing to have in your architecture.

And yet, flexibility is very much a two-edged sword. Not enough of it, and your architecture can have difficulty dealing with the complexities that typify real world situations. Too much of it, and your architecture can be too difficult to understand and maintain. The holy grail of flexibility, in my opinion, is captured in the adage that “simple things should be simple, and hard things should be possible”.

Simple to say, hard to do. Take SQL for example, or XSLT or RPG...they all excel at making simple things simple in their domains and yet, can also be straitjackets when more complicated things come along. By “complicated” here I mean things that do not neatly fit into their conceptual models of algorithmics and data.

A classic approach to handling this is to allow such systems to be embedded in a Turing-complete programming language, e.g. SQL inside C#, XSLT inside Java, and so on. The Turing-completeness of the host programming language ensures that the “hard things are possible” while the core – now the “embedded system” – ensures that the simple things are simple.
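A familiar instance of the pattern, sketched with Python and its built-in sqlite3 module. The table and data are invented for illustration: the embedded SQL keeps the simple query simple, while the Turing-complete host remains available for whatever does not fit the relational model.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bills (id INTEGER PRIMARY KEY, title TEXT, status TEXT)")
conn.execute("INSERT INTO bills (title, status) VALUES (?, ?)", ("An Example Bill", "enrolled"))

# Simple things simple: a declarative query in the embedded language.
rows = conn.execute("SELECT title FROM bills WHERE status = ?", ("enrolled",)).fetchall()

# Hard things possible: arbitrary logic in the Turing-complete host.
for (title,) in rows:
    print(title.upper())
```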

Unfortunately, what tends to happen is that the complexity of the real world chips away at the split between simple and complex and, oftentimes, such hybrid systems evolve towards the Turing-complete host, i.e. over time the embedded system for handling the simple cases is gradually eroded and then, one day, you wake up to find that it is all written in C# or Java or whatever and the originally embedded system is withering on the vine.

A similar phenomenon happens on the data side, where an architecture might initially be 98% “structured” fields but, over time, the “unstructured” parts of its data model grow and grow to the point where the structured fields atrophy and all the mission critical data migrates over to the unstructured side. This is why so many database-centric systems organically grow memo fields, blob fields or even complete, distinct document storage sub-systems over time, to handle all the data that does not fit neatly into the “boxes” of the structured fields.
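A sketch of that drift, again using sqlite3 with invented table and column names: a schema that starts out fully structured and then grows a catch-all column for everything that never got its own “box”.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# The original, fully structured design.
conn.execute("CREATE TABLE members (id INTEGER PRIMARY KEY, name TEXT, district TEXT)")

# The organically grown escape hatch for data that does not fit the boxes.
conn.execute("ALTER TABLE members ADD COLUMN extra_json TEXT")

conn.execute(
    "INSERT INTO members (name, district, extra_json) VALUES (?, ?, ?)",
    ("A. Member", "District 7", json.dumps({"committee_notes": "...", "preferred_contact": "email"})),
)
```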

Attempting to add flexibility to the structured data architecture tends to result in layers of abstraction that people have difficulty following. Layers of pointer indirection. Layers of subject/verb/object decomposition. Layers of relationship reification and so on...

This entropy growth does not happen overnight. The complexity of modelling the real world chips away at designs until at some point there is an inflection. Typically this point of inflection manifests in a desire to “simplify” or “clean up” a working system. This often results in a new architecture that incorporates the learnings from the existing system, and then the whole process repeats again. I have seen this iteration work at the level of decades, but in more recent years the trend appears to be towards shorter and shorter cycle times.

This cyclic revisiting of architectures begs the obvious teleological question about the end point of this cycle. Does it have an end? I suspect not because, in a Platonic sense, the ideal architecture can be contemplated but cannot be achieved in the real world.

Besides, even if it could be achieved, the ever-changing and monotonically increasing complexity of the real world ensures that a perfect model for time T can only be achieved at some future time-point T+N, by which time it is outdated and has been overtaken by the ever-shifting sands of reality.


So what is an architect to do if this is the case? I have come to the conclusion that it is very, very important to be careful about labelling anything as an immutable truth in an architecture. All the nouns, verbs, adjectives etc. that sound to you like “facts” of the real world will, at some point, bend under the weight of constant change and necessarily incomplete empirical knowledge.

The smaller the set of things you consider immutable facts, the more flexible your architecture will be. By all means, layer abstractions on top of this core layer. By all means, add Turing-completeness into the behavioral side of the model. But treat all of these higher layers as fluid. It is not that they might need to change; it is that they will need to change. It is just a question of time.

Finally, there are occasions where the set of core facts in your model is the empty set! Better to work with this reality than fight against it, because entropy is the one immutable fact you can absolutely rely on. It is possibly the only thing you can have at the core of your architecture and not worry about it being invalidated by the arrival of new knowledge or the passage of time.