Monday 9 April 2007

Another Go at Artificial Intelligence: Hierarchical Temporal Memory

Last week I read this interesting WIRED article about Jeff Hawkins (creator of the Palm Pilot and Treo) and his passion / project / company Numenta. Their objective is no small feat: they aim, and claim, to build software that replicates the human brain. Based on the axiom that the brain's basic working principle consists of passing information through several levels of hierarchically structured layers of neurons, Numenta's objective is to create software that independently recognizes patterns in the outside world (focusing on computer vision). Lower nodes transfer their limited recognition of a pattern to higher nodes, which integrate these sub-patterns into a whole pattern - and this process unfolds over time, hence "temporal" (the litmus test, Hawkins says, is for the computer to be able to differentiate between cats and dogs).
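To make the idea concrete, here is a deliberately simplified sketch in Python - not Numenta's actual HTM algorithm, just a toy illustration of the hierarchy described above. The node structure, the two-level layout, and the labeling scheme are all my own invented assumptions:

```python
# Toy sketch of hierarchical pattern recognition: lower nodes each see
# only part of every input frame and report a compact label upward; a
# higher node integrates those labels into a whole-pattern label over time.
# This is an illustration of the concept, NOT Numenta's HTM implementation.

class Node:
    """Memorizes the patterns it has seen and reports an index for each."""
    def __init__(self):
        self.memory = []  # patterns seen so far, in order of first appearance

    def recognize(self, pattern):
        # Return a compact label for this pattern, learning it if it is new.
        if pattern not in self.memory:
            self.memory.append(pattern)
        return self.memory.index(pattern)

def run_hierarchy(frames, n_children=2):
    """Feed a sequence of frames through a two-level hierarchy."""
    children = [Node() for _ in range(n_children)]
    parent = Node()
    labels = []
    for frame in frames:
        chunk = len(frame) // n_children
        # Lower nodes: limited, local recognition of one slice each.
        sub = tuple(children[i].recognize(frame[i * chunk:(i + 1) * chunk])
                    for i in range(n_children))
        # Higher node: integrates the sub-patterns into a whole pattern.
        labels.append(parent.recognize(sub))
    return labels

# Two "objects" presented repeatedly over time get stable labels:
frames = ["catcat", "dogdog", "catcat", "dogdog"]
print(run_hierarchy(frames))  # → [0, 1, 0, 1]
```

The point of the toy is only the data flow: each child sees a fragment, the parent sees only the children's labels, and repetition over time is what lets the top level assign stable identities.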

Hawkins believes that his program, combined with the ever-faster computational power of digital processors, will also be able to solve massively complex problems by treating them just as an infant’s brain treats the world: as a stream of new sensory data to interpret. Feed information from an electrical power network into Numenta’s system and it builds its own virtual model of how that network operates. And just as a child learns that a glass dropped on concrete will break, the system learns to predict how that network will fail. In a few years, Hawkins boasts, such systems could capture the subtleties of everything from the stock market to the weather in a way that computers now can’t.

Then today I found this 12-minute podcast interview with Hawkins, which led me to check out Numenta's website, where an approximately one-hour video of a talk Hawkins gave at a cognitive computing event explains his concept very nicely (turns out Hawkins is an excellent speaker).

It is obviously an intriguing idea and research area, and naturally brings to mind the likes of R2-D2, Data, HAL 9000 etc. Independent of the feasibility of Hawkins's concept, I wonder how much longer it will take until a system is developed that passes the famous Turing test. Just around the corner? 50 years? 100 years? 500 years? It may seem rather inconceivable, but then - so of course would today's world seem to a person 500 or 1000 years ago.

And - also a common question in most science fiction stories dealing with AI - will such a system ever have self-consciousness, a sense of "I"? In the human brain, the sense of "mind" or "I" is either postulated as an emergent quality of the interplay of the brain's various subsystems (the rational-materialistic school) or - in various spiritual / religious variations - as an (eternal) soul. Assuming the former to be the case, the question is at which level of complexity / interaction such a system would wake up to a sense of "I" and self ... which in turn of course raises ethical questions about the "right to exist" of such an entity etc. ... anyhow, purely academic at this point of course, but fun to ponder.

1 comment:

Rong said...

If you read Hawkins' book on this topic On Intelligence, you'll see that the point is not to pass the Turing test.