A while back I argued that open data is more important than open source.
Dave Megginson has posted that your data is the next big battle. This is a key point, and it's good to see the debate moving to this important question.
I'm hoping that, as part of this, folks will begin to think about how process models, as opposed to traditional source code, might provide a better way of "open sourcing" key information about how open data is processed to produce particular results, e.g. WYSIWYG renderings of documents.
ODF, I think, is a good forum for this. As ODF adoption grows, the gap between what any ODF-compliant application does when interpreting the data and what is explicit in the data itself will need to be filled.
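To make that gap concrete: the explicit data in an ODF document is trivially reachable, since the file is just a ZIP archive with a content.xml inside. A sketch (using a minimal stand-in document built in memory, so nothing here depends on a real word processor file):

```python
# Sketch: the "explicit" part of an ODF document -- content.xml inside
# a ZIP archive -- is easy to extract. Everything an application adds
# when rendering (layout, pagination, fonts) is NOT in this data.
import io
import zipfile
import xml.etree.ElementTree as ET

TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

def extract_paragraphs(odf_bytes: bytes) -> list[str]:
    """Return the text of each <text:p> element in content.xml."""
    with zipfile.ZipFile(io.BytesIO(odf_bytes)) as z:
        root = ET.fromstring(z.read("content.xml"))
    return ["".join(p.itertext()) for p in root.iter(f"{{{TEXT_NS}}}p")]

# Build a minimal stand-in document so the sketch is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(
        "content.xml",
        '<office:document-content '
        'xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0" '
        f'xmlns:text="{TEXT_NS}">'
        '<office:body><office:text>'
        '<text:p>Open data is easy to reach.</text:p>'
        '<text:p>What a renderer does with it is not.</text:p>'
        '</office:text></office:body>'
        '</office:document-content>',
    )

print(extract_paragraphs(buf.getvalue()))
```

The point of the sketch is the asymmetry: a dozen lines recover the data, but nothing in the data tells you how a compliant application will turn those paragraphs into pixels.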
Traditionally, reference implementations (i.e. traditional source code) have been the way to fill that gap: "running code" is the final arbiter.
Maybe this is as good as it gets? Unfortunately, a full-blown word processor runs to many, many thousands of lines of code, and the semantic devil is buried way down in the details...