Thursday, June 28, 2007

The Universe as a Computer II

I'd like to take the topic of my last post and tie it back to the very first post I made to this blog.

In my first post I claimed that life is defined as something that resists the 2nd Law of Thermodynamics. Now this is more of a poetic statement than a scientific one so allow me to clarify. The 2nd law is a statistical fact that must be true about any large collection of discrete things (such as atoms and molecules or even the items on top of my desk). It basically says that there is a higher probability of such collections moving to a state of increased disorder (entropy) than increased order. So when I say that life resists the second law I am really stating that life has the property of expending energy to resist decay. It does this, of course, at the expense of those things in its immediate environment (including other life forms) so that on the whole the 2nd law is not violated.
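To make the statistical character of the 2nd law concrete, here is a minimal sketch (in Python, with the particle count and step count chosen arbitrarily) of the classic two-chamber picture: start with every particle on one side, let randomly chosen particles wander, and the ordered state dissolves toward a roughly even split simply because there are vastly more disordered configurations than ordered ones.

    import random

    def drift_toward_disorder(n_particles=1000, n_steps=20000, seed=1):
        """Ehrenfest-style urn model: start with every particle in the left
        chamber and repeatedly move a randomly chosen particle to the other
        chamber. The ordered all-left state is quickly lost because there are
        far more ways to be roughly half-and-half than all on one side."""
        random.seed(seed)
        in_left = [True] * n_particles          # maximally ordered starting state
        for step in range(n_steps):
            i = random.randrange(n_particles)   # pick any particle at random
            in_left[i] = not in_left[i]         # it wanders to the other chamber
            if step % 5000 == 0:
                print(f"step {step:6d}: {sum(in_left)} of {n_particles} in left chamber")
        print(f"final      : {sum(in_left)} of {n_particles} in left chamber")

    drift_toward_disorder()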

Returning to the ideas expressed in my previous post and the New Scientist article, I think it is safe to say that if any law must be true in all possible universes, the 2nd law is as good a candidate as you are going to get. Given this, one need not ask why our universe is so finely tuned to support life. Such a question implicitly assumes that life is that which is composed of atoms and molecules. My definition of life is much more general. It does not require that there be such a thing as electric charge, for instance. It only requires that there be:

1) Some collection of discrete things
2) Some way for those discrete things to interact (i.e. at least one force)
3) Some emergent complex dynamics that can arise due to combinatorial configurations of discrete things and forces (this probably means the force must not be too weak or too strong and that it must vary with distance).

Given such a system, I believe there is a high probability that over large spans of time a configuration could evolve that resists the second law through actions such as replication and metabolism. It may even be inevitable for a much larger class of systems than we can consider simply by permuting the laws or constants of our own universe. For example, there may be deserts of non-life in the immediate vicinity of our universe's configuration of laws and constants but a majestic bounty of life forms in the space of all possible laws.

Here again computers provide a wonderful analogy. Imagine a piece of working software. Almost any mild permutation of that piece of working software will lead to a broken piece of software. In fact, the probability of crashing a program by flipping a single bit in its executable section is fairly high. However, it does not follow from this observation that all working programs must look almost exactly like this particular program. There are an infinite number of amazingly rich and varied programs in the vast space of all possible programs. So too, I believe there is a vast richness of life forms in the space of all possible universes.
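For what it's worth, here is a rough sketch of that analogy in Python. Instead of flipping bits in a compiled executable, it flips single characters in the source of a tiny working program (the program and the mutation scheme are just illustrative choices of mine) and counts how few mutants still run correctly.

    import random

    # A tiny "working program" as source text; we apply one-character mutations
    # and check whether the result still compiles and runs correctly.
    WORKING_PROGRAM = "x = 6 * 7\nassert x == 42\n"

    def mutate(source, rng):
        """Replace one randomly chosen character with another printable character."""
        i = rng.randrange(len(source))
        replacement = chr(rng.randrange(32, 127))
        return source[:i] + replacement + source[i + 1:]

    def still_works(source):
        try:
            exec(compile(source, "<mutant>", "exec"), {})
            return True
        except Exception:
            return False

    rng = random.Random(0)
    trials = 1000
    survivors = sum(still_works(mutate(WORKING_PROGRAM, rng)) for _ in range(trials))
    print(f"{survivors} of {trials} single-character mutants still work")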

"The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations — then so much the worse for Maxwell's equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation."
--Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

The Universe as a Computer

Here is a quote from the latest issue of New Scientist. You can read part of the article here (or the whole thing if you are a subscriber).

There is, however, another possibility: relinquish the notion of immutable, transcendent laws and try to explain the observed behaviour entirely in terms of processes occurring within the universe. As it happens, there is a growing minority of scientists whose concept of physical law departs radically from the orthodox view and whose ideas offer an ideal model for developing this picture. The burgeoning field of computer science has shifted our view of the physical world from that of a collection of interacting material particles to one of a seething network of information. In this way of looking at nature, the laws of physics are a form of software, or algorithm, while the material world - the hardware - plays the role of a gigantic computer.

This is by no means a new idea. However, it is gaining more traction and I believe it will become the prevailing viewpoint in my lifetime, or at least in that of the generation of physicists who grow up immersed in worlds such as Second Life.

The problem I have with the article is that it mentions how we need to explain why our universe is so tuned to support life. It is certainly true that life as we understand it could not arise if some of the fundamental constants of nature were altered just a tad. However, it does not follow that these alternate realities would not support complex systems of which we can have little understanding from our vantage point. I think if Stephen Wolfram's work on NKS showed anything at all, it showed that complex dynamics can arise from quite simple initial ingredients.
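Wolfram's elementary cellular automata make the point nicely. Here is a minimal sketch of Rule 110 (the rule table is the standard one; the grid size, step count and text rendering are arbitrary choices of mine): two states, one nearest-neighbour interaction, and yet rich structure emerges.

    def rule_110_step(cells):
        """Apply Wolfram's Rule 110 to one row of a binary cellular automaton."""
        n = len(cells)
        rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
                (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
        return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

    width, steps = 64, 32
    row = [0] * width
    row[-1] = 1                       # a single live cell as the initial condition
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule_110_step(row)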

Friday, June 22, 2007

Archetypes Redux

In an earlier post I wrote about how archetypes can be specified by a set of prototypical vectors with weights. A better explanation of this setup is that an archetype is a set of vectors and each weight is the fuzzy membership of its vector in the set. Under this setup a vector with the characteristics of a boulder can exist in the ROCK archetype with a fuzzy membership value less than 1.0.
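As a toy illustration of what I mean (the dimensions and membership values below are entirely made up), an archetype under this reading is just a mapping from exemplar vectors to fuzzy membership grades in [0, 1]:

    # Dimensions (mass in kg, diameter in m, hardness on the Mohs scale) and all
    # values are invented purely for illustration.
    ROCK = {
        (0.5, 0.1, 6.0): 1.0,      # an unremarkable fist-sized rock
        (2000.0, 1.5, 6.0): 0.7,   # a boulder: still in the set, but less centrally
        (0.001, 0.005, 6.0): 0.6,  # a tiny pebble
    }

    BOULDER = {
        (2000.0, 1.5, 6.0): 1.0,   # the same boulder is a full member here
    }

    def fuzzy_membership(archetype, vector):
        """Membership grade of an exemplar listed in the archetype (0.0 if absent)."""
        return archetype.get(vector, 0.0)

    print(fuzzy_membership(ROCK, (2000.0, 1.5, 6.0)))     # 0.7
    print(fuzzy_membership(BOULDER, (2000.0, 1.5, 6.0)))  # 1.0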

Saturday, June 16, 2007

Vision Science

I just picked up Vision Science: Photons to Phenomenology by Stephen E. Palmer. Without exaggeration this is the best book on cognitive science I have ever read and the best book on science in general that I have read in a while.

As the title suggests this book covers vision from the physics of photons all the way through to the phenomenology of experience. The chapter on color is one of the most complete treatments I have ever seen in one book.

I think what I like most about the book is that it is authoritative, well researched and scholarly, yet it reads almost as easily as a pop science book. The book carries a hefty price tag ($82.00) but at 800 pages it is well worth it, and you can find used editions at a significant discount.

Archetypes revisited

I partly addressed my displeasure with the prior post on archetypes, so if you are interested you may want to read the latest version.

Monday, June 11, 2007

The role of Archetypes in Semantic modeling

In a previous post I introduced the notion of Semantic Vectors. These are vectors (in the sense of the mathematical notion of a Vector Space) that can be used to model knowledge about the world. It is not yet clear to me how vectors, in and of themselves, can model much of what needs to be modeled in a knowledge-based system (at least without complicating the notion of a vector space so that it only vaguely resembles its mathematical counterpart). This post is about one aspect of this challenge that I have begun working on in earnest. I have some hope that this challenge can be met by the model.

Imagine, if you will, a rock. If my notion of a semantic vector space has any value at all it should be able to model knowledge about a rock. Presumably a rock would be modeled as a vector with explicit dimensions such as mass, density, hardness, etc. At the moment, it is not my intent to propose a specific set of dimensions sufficient to model something like a rock, so this is only meant to give you a rough idea of the vector concept.
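Purely for illustration (the dimensions and numbers below are invented, not a proposal), a particular rock might look something like this in such a space:

    import numpy as np

    # Illustrative semantic dimensions (not a proposed dimension set):
    # mass (kg), density (g/cm^3), hardness (Mohs), rigidity (0 to 1)
    DIMENSIONS = ["mass", "density", "hardness", "rigidity"]

    rock23 = np.array([0.5, 2.7, 6.0, 1.0])       # a particular fist-sized rock
    boulder7 = np.array([2000.0, 2.7, 6.0, 1.0])  # a particular boulder

    def semantic_distance(a, b, scale):
        """Distance in the semantic space; per-dimension scaling keeps a huge-range
        dimension like mass from swamping the others."""
        return float(np.linalg.norm((a - b) / scale))

    print(semantic_distance(rock23, boulder7, scale=np.array([1000.0, 1.0, 1.0, 1.0])))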

When I asked you to imagine a rock, which particular rock did you imagine?

Was it this one?

Or this one? Chances are you had a much more vague idea in your mind. Although the idea you had was vague it was probably not the idea of "Mount Everest" or "the tiniest pebble", even though these have something rocky about them.

A system that purports to model knowledge of specific things must also model knowledge of general things. In fact, most truly intelligent behavior manifests itself as the fluid way we humans can deal with the general.

I use the term archetype to denote what must exist in a semantic model for it to effectively deal with generality.

At the moment, I will not be very specific about what archetypes are but rather I will talk about what they must do.

An archetype must place constraints on what can be the case for an x to be an instance of an archetype X. In other words if you offer rock23 as an instance of archetype ROCK there should be a well defined matching process that determines if this is the case.

An archetype must allow you to instantiate an instance of itself. Thus the ROCK archetype acts as a kind of factory for particular rocks that the system is able to conceive.

An archetype must specify which semantic dimensions are immutable, which are somewhat constrained, and which are totally free. For instance, a rock is rigid, so although rocks can come in many shapes, once a particular rock is instantiated it will not typically distort without breaking into smaller pieces (let's ignore what might happen under extreme pressure or temperature for the moment). In contrast, the archetype for rock would not constrain where the rock can be located. I can imagine few places that you can put a rock where it would cease to be a rock (again, let's ignore places like inside a volcano or a black hole, for now).

An archetype must model probabilities, at least in a relative sort of way. For example, there should be a notion that a perfectly uniform fire engine-red rock is less likely than a grayish-blackish-greenish rock with tiny silverish specks.

Archetypes also overlap. A BOULDER archetype overlaps a ROCK archetype and the system should know that a rock becomes a boulder by the application of the adjective BIG.

An intelligent entity must be able to reason about particular things and general classes of things. It would be rather odd and awkward, in my opinion, if the system had distinctly different ways to deal with specific things and general things. It would be far better if the system had a single, universal representation for both. Certainly, the fluid way in which humans can switch back and forth between the general and the specific lends credence to the existence of a uniform representational system. If a knowledge representation proposal (like my semantic vector concept) fails to deliver these characteristics then it should be viewed as implausible.

I am only just beginning to think in earnest about how the vector model can deal with archetypes. I have some hope but nothing that I am willing to commit to at the moment. Presently I am working with the idea that an archetype is nothing more than a set of pairs consisting of a vector and a weight. The vector provides an exemplar of an element of the archetype and the weight provides some information as to its likelihood. The nice thing about vectors is that, given two of them, a new vector can be produced that lies in the middle. Hence the vector model provides a way of fleshing out a sparsely populated archetype. Further, membership in the archetype can be tested using the distance metric of the semantic space.
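Here is a small sketch of that working idea in Python. The exemplars, weights and similarity scoring are all invented for illustration; the point is only that interpolation between exemplars fleshes out the archetype and the distance metric supports a matching test.

    import numpy as np

    class Archetype:
        """An archetype as a set of (exemplar vector, weight) pairs."""

        def __init__(self, exemplars):
            self.exemplars = [(np.asarray(v, dtype=float), w) for v, w in exemplars]

        def interpolate(self, i, j):
            """Produce a new exemplar lying midway between two existing ones,
            fleshing out a sparsely populated archetype."""
            (vi, wi), (vj, wj) = self.exemplars[i], self.exemplars[j]
            return (vi + vj) / 2.0, (wi + wj) / 2.0

        def membership(self, vector, bandwidth=1.0):
            """Score a candidate by its distance to the nearest exemplar,
            discounted by that exemplar's weight."""
            vector = np.asarray(vector, dtype=float)
            best = 0.0
            for v, w in self.exemplars:
                best = max(best, w * np.exp(-np.linalg.norm(vector - v) / bandwidth))
            return best

    # dimensions: [log10(mass in kg), hardness (Mohs), rigidity]
    ROCK = Archetype([
        ([-0.3, 6.0, 1.0], 1.0),   # a fist-sized rock: a central exemplar
        ([3.3, 6.0, 1.0], 0.6),    # a boulder: a peripheral one
    ])
    ROCK.exemplars.append(ROCK.interpolate(0, 1))   # a mid-sized rock, inferred
    print(round(ROCK.membership([1.5, 6.0, 1.0]), 3))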

The limitation of this approach has to do with the notion of the flexibility of various dimensions that I mentioned above. It would seem that the vector model, in and of itself, does not have an obvious way to represent such constraints. Perhaps this simply means that the model must be expanded but expansion always leads to complexity and a semantic modeler should always prefer economy. There is some hope for a solution here. The basic idea is to provide a means by which constraints can be implied by the vectors themselves but elaboration of this idea will have to wait for a future post.

Saturday, June 9, 2007

Color your consciousness

Is my experience of red the same as yours? Or maybe when you experience red you experience something more like this relative to my experience? Can we ever say anything definitive here? It would seem hopeless.

Here is an interesting paper that adds some color to these ideas.

Wednesday, June 6, 2007

More Powerful Than a Turing Machine


Turing Machines (along with Lambda Calculus and other formalisms) are the standard upon which computability is defined. Anything effectively computable can be computed by a Turing Machine (TM).

From one point of view, the computer I am using right now is less powerful than a TM since my computer does not have infinite memory. However, it has enough memory for any practical purpose so for the most part this is not a concern.

From a more interesting perspective, my computer is much more powerful than a TM. By more powerful I do not mean it is faster (by virtue of using electronics instead of paper tape). Rather, I mean it can "compute" things no mere TM can compute. What things? I am glad you asked!

There is nothing in the formal specification of a TM (or Lambda Calculus) that would lead you to believe a TM can tell you the time of day or keep time if you primed it with the current time of day on its input tape. My computer has no problem with this because it is equipped with a real time hardware clock. The specifications that went into the construction of this clock rely on some sort of fixed frequency reference (like a quartz crystal). Clearly there is no such fixed frequency specification in the definition of a TM.

I could hook a GPS unit to my laptop as many people have. If I did, my laptop would be able to "compute" where it was at any time. Together with its real time clock my laptop would be able to orient itself in both time and space. No mere TM could do that.

If I purchased a true hardware random number generator like this one I could add even more power to my computer, because the best a TM can do is implement a pseudo-random number generator. Hence my computer could presumably perform much better Monte Carlo simulations.
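As a small illustration of the augmentation point, the snippet below reads two things no bare TM specification provides, the wall clock and the operating system's entropy pool (which on many machines is ultimately fed by hardware noise), and then runs the same Monte Carlo estimate of pi with a pseudo-random generator and with OS-supplied randomness. The particular example is mine, nothing deep.

    import os
    import random
    import time

    # Two inputs no bare Turing machine specification provides:
    print("wall-clock time:", time.time())          # a hardware real-time clock
    print("entropy bytes  :", os.urandom(8).hex())  # OS entropy pool

    def estimate_pi(rng, samples=100_000):
        """Monte Carlo estimate of pi: fraction of random points in the unit quarter-circle."""
        inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(samples))
        return 4.0 * inside / samples

    print("pi via PRNG      :", estimate_pi(random.Random(42)))      # algorithmic, TM-computable
    print("pi via OS entropy:", estimate_pi(random.SystemRandom()))  # draws on os.urandom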

By further adding all sorts of other devices like cameras, odor detectors, robotic arms, gyroscopes and the like my computer would be able to do so much more than a TM.

There is nothing really deep here. Turing was only interested in modeling computation so it would have been silly if he accounted for all the peripherals one might attach to a TM or computer in his mathematical model.

However, when you are reading some critique of AI that tries to set limits on what a computer can or can't do based on either Turing's or Gödel's findings, it helps to remember that computers can be augmented. Clearly humans have some built-in capacity to keep time (not as well as a clock, but not as badly as a TM). Humans have a capacity to orient themselves in space. It is also likely that our brains generate real randomness. We should begin to think about how these extra-TM capabilities are harnessed to make us intelligent. We can then teach our computers to follow suit.

Monday, June 4, 2007

Hypnotic Visual Illusion Alters Color Processing in the Brain

The following is an excerpt from this study. I think there are important clues to the secrets of consciousness hiding here.

OBJECTIVE: This study was designed to determine whether hypnosis can modulate color perception. Such evidence would provide insight into the nature of hypnosis and its underlying mechanisms. METHOD: Eight highly hypnotizable subjects were asked to see a color pattern in color, a similar gray-scale pattern in color, the color pattern as gray scale, and the gray-scale pattern as gray scale during positron emission tomography scanning by means of [15O]CO2. The classic color area in the fusiform or lingual region of the brain was first identified by analyzing the results when subjects were asked to perceive color as color versus when they were asked to perceive gray scale as gray scale. RESULTS: When subjects were hypnotized, color areas of the left and right hemispheres were activated when they were asked to perceive color, whether they were actually shown the color or the gray-scale stimulus. These brain regions had decreased activation when subjects were told to see gray scale, whether they were actually shown the color or gray-scale stimuli. These results were obtained only during hypnosis in the left hemisphere, whereas blood flow changes reflected instructions to perceive color versus gray scale in the right hemisphere, whether or not subjects had been hypnotized. CONCLUSIONS: Among highly hypnotizable subjects, observed changes in subjective experience achieved during hypnosis were reflected by changes in brain function similar to those that occur in perception. These findings support the claim that hypnosis is a psychological state with distinct neural correlates and is not just the result of adopting a role.

Friday, June 1, 2007

Hardware Matters

Let's engage in a thought experiment. As a warm-up exercise I would like you to consider the simplest of memory devices, known as a flip-flop (or technically an SR latch).

The purpose of such a latch is to act as an electronic toggle switch such that a logic 1 pulse on the S (SET) input will turn Q to logic 1. A pulse on the R (RESET) input will revert Q to logic 0. I will not delve here into how an SR latch works, but if you know the truth function of a NOR gate you can probably figure it out for yourself. I still remember with some fondness the moment I grokked how such a latch worked on my own, so it is a worthwhile exercise. It is clear, just from looking at the picture, that feedback is central to the latch's operation. There are more sophisticated flip-flops and latches but all require feedback.
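If you want to play with it rather than stare at the picture, here is a toy simulation of the cross-coupled NOR pair (gate updates happen in idealized lock-step, which glosses over real electrical behaviour):

    def nor(a, b):
        """Truth function of a NOR gate."""
        return int(not (a or b))

    def settle(s, r, q, q_bar, max_steps=8):
        """Iterate the two cross-coupled NOR gates until their outputs stop changing."""
        for _ in range(max_steps):
            q_next, q_bar_next = nor(r, q_bar), nor(s, q)
            if (q_next, q_bar_next) == (q, q_bar):
                break
            q, q_bar = q_next, q_bar_next
        return q, q_bar

    q, q_bar = 0, 1                     # start with the latch reset
    q, q_bar = settle(1, 0, q, q_bar)   # pulse SET
    print("after SET  :", q)            # Q = 1
    q, q_bar = settle(0, 0, q, q_bar)   # remove the pulse: the latch remembers
    print("after hold :", q)            # Q is still 1
    q, q_bar = settle(0, 1, q, q_bar)   # pulse RESET
    print("after RESET:", q)            # Q = 0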

So here is the thought experiment. Imagine the wires (black lines) connecting the logic gates growing in length. Since this is a thought experiment we can stipulate that as they grow in length their electrical characteristics remain constant. That is, imagine that resistance, capacitance and inductance do not change. If this bothers you then replace the electrical wires with perfect fibre optics and make the NOR gates work on light pulses rather than electrical ones. The point is that for the purposes of the thought experiment I want to exclude signal degradation.

I maintain that as the wires grow, even without degradation in the signals, there would come a point where the latch would fail to work and would begin to oscillate or behave erratically. The same would be the case if you replaced the simple latch with a better design such as a D flip-flop. The point where the flip-flop would begin to fail is near where the delay due to the finite speed of the signal (limited by the speed of light) becomes comparable to the switching time of the NOR gate. In essence the flip-flop relies on the feedback signals being virtually instantaneous relative to the switching time. In the real world other effects would cause the flip-flop to fail even sooner.
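Here is a sketch of the thought experiment itself, with one time step standing in for the gate switching time, each feedback wire modeled as a lossless delay line, and a SET pulse of fixed width (all the numbers are arbitrary choices of mine). With a short wire the latch settles and remembers; once the loop delay becomes comparable to the pulse width it never settles and just rings.

    from collections import deque

    def nor(a, b):
        return int(not (a or b))

    def run_latch(wire_delay, pulse_width=3, total_steps=80):
        """Simulate the NOR latch with each feedback wire modeled as a lossless
        delay line of `wire_delay` gate-switching times. Returns the trace of Q."""
        q_wire = deque([0] * wire_delay, maxlen=wire_delay)     # carries Q to the Q-bar gate
        qbar_wire = deque([1] * wire_delay, maxlen=wire_delay)  # carries Q-bar to the Q gate
        trace = []
        for t in range(total_steps):
            s = 1 if t < pulse_width else 0                     # a brief SET pulse, then hold
            q = nor(0, qbar_wire[0])                            # R is held at 0 throughout
            q_bar = nor(s, q_wire[0])
            q_wire.append(q)
            qbar_wire.append(q_bar)
            trace.append(q)
        return trace

    for delay in (1, 2, 8):
        tail = run_latch(delay)[-20:]
        print(f"wire delay {delay:2d}: last 20 values of Q -> {tail}")
    # Delay 1 settles at Q = 1; delays comparable to or longer than the pulse
    # width never settle and oscillate indefinitely.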

Thought Experiment 2

Okay, now for a new thought experiment. Before we engage in this experiment you must be of the belief that your brain is what causes you to possess consciousness (where we define consciousness as the phenomenon that allows you to feel what it is like for something to be the case -- e.g. what red is like). If you believe consciousness is caused by pixies or granted to you by the grace of God regardless of the architecture of the brain, then stop reading right now and go here instead.

Still with me? Good.

Now, imagine your brain with its billions of neurons and their associated connections. As before, imagine the connections (axons) growing in length without degradation of their signals. Let these connections grow until your brain is the size of the earth or even the sun. Let the neuron bodies stay the same microscopic size, only the inter-neuron connections should grow and no connections should be broken in the process.

Now there is no way I can prove this at the moment, but I would bet a very large sum that as the brain grew in this way there would come a point where consciousness would degrade and eventually cease to exist. I believe this would be the case for reasons similar to those in our SR latch thought experiment. Time delays would impact the function of the brain in all respects. More specifically, I believe it won't simply be the case that the delays would slow the brain down until it ceased to be a practical information processing device relative to the rest of the real-time world. I believe that consciousness would stop for a much more fundamental reason: the propagation delays relative to the neuronal firing times are crucial to the function of the brain in every respect, just as they are to the function of the flip-flop. Time and consciousness are intimately tied together.

So What's the Point?

As you read about the mind you will come across various other types of thought experiments where the neurons of the brain are replaced by other entities that act as functional stand-ins. For example, a popular depiction is every neuron in the brain being replaced with a person in China. Each person would communicate with a group of other people in a way that was functionally identical to the neurons. The question posed is whether the group as a whole could be conscious (that is a consciousness independent of the individuals and inaccessible to any one individual).

Such experiments assume that consciousness is independent of such notions as timing and spatial locality. To me this is highly improbable and hardly even worth consideration. In fact, when we finally understand the brain in its fullness, I am quite certain it will be the case that there are properties of neurons, neurotransmitters and hormones that are crucial to brain function. Specifically, a brain of silicon chips organized as a conventional computer could not work the same. In short, hardware matters.