Featured Post

These days, I mostly post my tech musings on LinkedIn: https://www.linkedin.com/in/seanmcgrath/

Friday, August 13, 2010

Lefty Day

Today is lefty day. Thanks to James Tauber for reminding me.

Personally, I don't mind the right-oriented college desks or the right-oriented scissors or the right-oriented tin opener. What really irks me is not being able to walk into a music shop and pick up the guitars or the banjos or the mandolins...

Thursday, August 12, 2010

Strong math needed always? I don't think so

For some reason I watched this video today entitled "A Day in the Life - Computer Software Engineer". Towards the end it says that it is important to have "a strong grasp of mathematics".

I remember hearing that back in 1982 and it very nearly scared me away from getting involved in computing. It's not that I'm particularly bad at math, but I certainly would not consider myself "strong" in it.

Sometimes I wonder how many young people who would have made very competent developers get scared off by this sort of talk. I have been lucky enough to work with some very, very good developers over the years, and "strong math" has not been a common thread amongst them.

U.S. Senate Rules/Floor Procedures

http://www.senatefloor.us/ is a nice high level view of the U.S. Senate Rules/Floor Procedures.

Monday, August 09, 2010

A good example of a legislative workflow constraint

This is a good example of a legislative workflow constraint.

Many legislative systems split bills into two buckets: metadata and data. Metadata fields for things like long title and bill number are commonplace. So too is the concept of the data itself being an opaque "payload" as far as the metadata-driven workflow checks are concerned.

The difference between the two is oftentimes a side-effect of the data/document duality. To leverage scalar types for indexing/sorting, data from the text of the bill itself gets duplicated into metadata fields.

As soon as data is duplicated like this, consistency becomes an issue. In an effort to deal with this, some try to fully leverage data normalization by shredding bills into chunks in an RDB. That approach fixes one problem - consistency - but introduces another: you now have to worry about re-assembling a bill from the chunks, often while preserving line/page number fidelity. Not easy!
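To make the duplication hazard concrete, here is a minimal sketch. All the names and values (the `metadata` dict, the bill text, the `is_consistent` check) are invented for illustration; real systems carry far more fields, but the failure mode is the same: the copy in the metadata store and the copy in the document text drift apart.

```python
# A bill number stored twice: once as a scalar metadata field
# (handy for indexing/sorting) and once inside the bill text itself.
metadata = {"bill_number": "HB 123"}

# Suppose the document was later amended and renumbered,
# but the metadata field was not updated to match.
bill_text = "House Bill 124\nAn Act to provide for ..."

def is_consistent(meta: dict, text: str) -> bool:
    """Naive drift check: the metadata bill number should appear in the text."""
    number = meta["bill_number"].split()[-1]  # "123"
    return number in text

drift_detected = not is_consistent(metadata, bill_text)
```

Here `drift_detected` comes out `True`: the two copies no longer agree, and nothing in the system says which one is authoritative.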

The answer, in my opinion, is to preserve the sanctity of the document and make sure that any metadata extraction from the document is purely an optimization for workflow engine purposes and is never treated as "normative".
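A rough sketch of what "never normative" can look like in practice: derive the metadata from the document on demand, and treat the result as a disposable cache. The markup and field names below are invented for illustration; real legislative schemas differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical bill markup - real schemas are far richer.
BILL_XML = """
<bill>
  <longTitle>An Act relating to example matters</longTitle>
  <billNumber>HB 123</billNumber>
  <body>Be it enacted...</body>
</bill>
"""

def extract_metadata(doc_xml: str) -> dict:
    """Derive workflow metadata from the document itself.

    The document is the single normative source. The dict returned
    here is purely an optimization (e.g. for indexing/sorting) and
    can be thrown away and regenerated at any time.
    """
    root = ET.fromstring(doc_xml)
    return {
        "long_title": root.findtext("longTitle"),
        "bill_number": root.findtext("billNumber"),
    }

meta = extract_metadata(BILL_XML)
```

Because the extraction is repeatable, any suspected inconsistency is resolved the same way every time: re-run the extraction against the document, and overwrite the cached fields.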