I am filing this one under "extremely speculative".
I think it was Douglas Hofstadter's book "I Am a Strange Loop" that first got me thinking about the possible roles of recursion and self-reference in understanding consciousness.
Today - for no good reason - it occurred to me that if the Radical Plasticity Theory is correct, to emulate/re-create consciousness[1] we need to create the conditions for consciousness to arise. Doing that requires arranging a computing system that can observe every aspect of itself in operation.
For most of the history of computing, we have had a layer of stuff that the software could only be dimly aware of, called the hardware.
With virtualization and cloud computing, more and more of that hardware layer is becoming, itself, software and thus, in principle, open to fine-grained examination by the software running on... the software, if you see what I mean.
To take an extreme example, a Unix application could today be written that introspects itself, concludes that the kernel scheduler logic should be changed, writes out the modified source code for a new kernel, re-compiles it, boots a Unix OS image based on it, and transplants itself into a process on that new kernel.
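That loop - observe yourself, rewrite your own logic, rebuild, and carry on running on the result - can be sketched in miniature. This is a toy analogy only, nothing like an actual kernel rebuild; the names (`SCHEDULER_SRC`, `build`) and the string-replacement "decision" are all invented for illustration:

```python
# Toy sketch of a system that holds its own "scheduler" as source code,
# inspects it, rewrites the logic, recompiles, and transplants itself
# onto the new version. (Hypothetical and vastly simplified.)

SCHEDULER_SRC = """
def schedule(tasks):
    # original policy: first-come, first-served
    return list(tasks)
"""

def build(src):
    # "Recompile and boot" a scheduler from its source text.
    namespace = {}
    exec(src, namespace)
    return namespace["schedule"]

scheduler = build(SCHEDULER_SRC)
assert scheduler(["c", "a", "b"]) == ["c", "a", "b"]

# The running system concludes its scheduler logic should change,
# writes out modified source, and rebuilds itself on top of it.
NEW_SRC = SCHEDULER_SRC.replace("return list(tasks)",
                                "return sorted(tasks)")
scheduler = build(NEW_SRC)
assert scheduler(["c", "a", "b"]) == ["a", "b", "c"]
```

The real trick in the blog's scenario, of course, is that the kernel is not a tidy string constant - but virtualized hardware makes the whole stack at least addressable in this way.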
Hmmm.
[1] Emulation versus re-creation of consciousness. Not going there.
4 comments:
What are its goals? How would it determine what should change and how/how much? Would it need to inspect the behavior of other "conscious clouds" and adjust accordingly? How would it balance self-interest vs the interest of cloud society? Or would it? Would it consider its relationship to humans? Would these imperatives be initial inputs from humans? So much more on which to speculate.
I suggest that "arranging a computing system that can observe every aspect of itself in operation" is, strictly speaking, more than is possible for known exemplars of consciousness, at least directly - and therefore can't be a true requirement. For example, there are lots of aspects of human thought that are not observable, especially when you get down to the "hardware" level.
All that said, I would agree that there needs to be a lot more reflexive capability than most systems have at this time before there's likely to be much progress on the consciousness front.
I think about this a lot: the emergent possibilities of greater self-reflection in software. I wonder if a necessary precondition for making it work well would be to also somehow add the *resiliency* of our plasticity: we are prevented (usually) from bricking ourselves by the simple fact that our plasticity is far from 100% effective.