22 Feb 2009, 10:05 AM
Prototype #2 is an extension of the first cognitive mapping experiment. As improvements, it has:
- Four rooms
- Four coloured doors
- The ability to remember sequences (a toy sketch follows)
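Purely for illustration, the sequence part could be mocked up in plain Java along these lines (the real thing is meant to live in the fLIF networks, so every name below is made up):

```java
import java.util.ArrayList;
import java.util.List;

// Purely illustrative stand-in for the sequence memory: the bot records
// the (room, door-colour) pairs it passes through, so that a multi-room
// route can later be replayed door by door. Not the fLIF implementation.
public class RouteMemory {
    // One step of a route: leaving a room through a coloured door.
    record Step(String room, String doorColour) {}

    private final List<Step> route = new ArrayList<>();

    // Called each time the bot passes through a door during exploration.
    public void record(String roomLeft, String doorColour) {
        route.add(new Step(roomLeft, doorColour));
    }

    // Replay the remembered door colours from a given room onwards,
    // e.g. ["red", "green"] to get from room1 to room3.
    public List<String> coloursFrom(String startRoom) {
        List<String> colours = new ArrayList<>();
        boolean started = false;
        for (Step s : route) {
            if (s.room().equals(startRoom)) started = true;
            if (started) colours.add(s.doorColour());
        }
        return colours;
    }
}
```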
27 Nov 2008, 12:01 PM
I was able to get the Cognitive Mapping prototype #1 (mentioned in the previous post) working fairly easily. A lot of assumptions were made (such as assuming a perfect vision system was available) in getting the demo working, but the underlying idea of associating landmarks and features to learn, and in turn using that to navigate, worked well (in an oversimplified way, of course).
The CONTROL module is completely symbolic, whereas all other areas are neural networks composed of fLIF neurons.
The vision system is a good example of the height of oversimplification: the RETINA is hardwired to recognize doors, walls and rooms, interfacing directly with the symbolic CONTROL, with no visual cortex whatsoever.
The RETINA has connections to a COLOURS network, which enables it to identify three colours (red, green, blue), which in the simulation are the colours of the doors.
The KNOWN OBJECTS network holds three pre-known concepts (door, wall, room), which get excitation directly from the RETINA whenever it sees one of those objects.
The NEW INSTANCES network is blank to begin with; it is where rooms and door-colour associations are stored as the bot explores the world and spots new rooms.
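To make the layout concrete, here is a toy mock-up of the three networks in plain Java. The real ones are fLIF nets, so every class and field name below is invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Toy stand-in for the three networks described above. The real ones are
// fLIF cell assemblies; plain sets and maps mock their contents here.
public class BotMemory {
    // KNOWN OBJECTS: fixed, pre-known concepts the RETINA can excite.
    static final Set<String> KNOWN_OBJECTS = Set.of("door", "wall", "room");

    // COLOURS: the three colours the RETINA can identify.
    static final Set<String> COLOURS = Set.of("red", "green", "blue");

    // NEW INSTANCES: starts out blank; filled during exploration with
    // previous_room -> (door_colour, current_room) associations.
    final Map<String, Map.Entry<String, String>> instances = new HashMap<>();

    void learn(String previousRoom, String doorColour, String currentRoom) {
        instances.put(previousRoom, Map.entry(doorColour, currentRoom));
    }
}
```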
The first test run went like this:
1. There are only two rooms in the simulation, separated by a red door.
2. The bot starts in room1 in exploration mode.
3. It keeps moving forward until it comes across a door.
4. When it passes through a door, it knows (symbolically) that it just came from a room, passed an X-coloured door and is now in a different room.
5. It then learns this as a new instance and stores it in the INSTANCES net: previous_room + door_color + current_room.
6. There goes the most basic form of spatial mapping!
7. The bot keeps moving until it reaches the end of the next room, then switches to navigation mode (symbolic).
8. In navigation mode, the bot is instructed to go to room1 by explicitly stimulating the pattern corresponding to room1, which it learned and stored in the INSTANCES net.
9. So, when room1 in the INSTANCES net is stimulated, room2 and the door-colour come on, as the bot has learned that association (step 5).
10. The bot records the door-colour that just came on; it now knows that whatever door has that colour leads to the target room.
11. The bot simply goes looking for the target colour (in this test's case, red), passes through it and yes!
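Condensed into code, the whole loop looks roughly like this. This is a sketch reusing the hypothetical BotMemory mock from above; the sensing and movement machinery is hand-waved away:

```java
import java.util.Map;

// Hypothetical sketch of the explore-then-navigate control flow; the
// sensing and movement calls are stand-ins for the RETINA and the
// symbolic CONTROL module, which are not shown in the post.
public class Controller {
    final BotMemory memory = new BotMemory();

    // Exploration (steps 4-5): each time the bot passes through a door,
    // learn previous_room + door_color + current_room as a new instance.
    void onDoorPassed(String previousRoom, String doorColour, String currentRoom) {
        memory.learn(previousRoom, doorColour, currentRoom);
    }

    // Navigation (steps 8-10): "stimulating" the target room's pattern
    // brings up the associated door colour; the bot then just has to
    // look for a door of that colour (step 11).
    String doorColourFor(String targetRoom) {
        Map.Entry<String, String> assoc = memory.instances.get(targetRoom);
        return assoc == null ? null : assoc.getKey(); // e.g. "red" for room1
    }
}
```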
Currently, I am working on a better cognitive mapping prototype, which involves four rooms (it could be N rooms), sequential memory and the removal of as many symbolic links as possible.
17 Nov 2008, 11:03 AM
The human brain has the ability to construct representations of space and store them. This spatial mapping ability is used in many processes; navigation, for instance, uses representations of routes, objects, distances, locations and directions. This is called cognitive mapping. The exact principles of cognitive mapping are unknown; there is, for example, no accurate account of how distance is perceived.
Right now, I am working on building a simulation of a virtual agent in a 3D environment capable of doing basic cognitive mapping. The aim is to get the agent to learn paths and navigate between rooms.
Building the 3D world
I looked around a lot to find a simple 3D engine that would fit my purpose of building a 3D environment comprising a few rooms and doors.
I tried JPCT (Java), which looked promising but completely lacked support resources and good documentation. Crystal Space (C++) I found too big and complex for my purpose; the same goes for JMonkeyEngine (Java). At last, I came across Irrlicht (C++), an amazingly compact and immensely powerful 3D engine that has a huge community, plenty of support resources, good documentation and even a Java binding called Jirr. I have nothing but words of praise for Irrlicht.
After fiddling around a bit with it and modeling my rooms in Anim8or, I finally got the 3D world working.
I am working on the neural side of things, where the agent has:
- A simple vision system
- Basic colour recognition capabilities
- Basic object (room, door) recognition capabilities
- The ability to learn instances (doors, rooms) and associate them in order to navigate.
01 Nov 2008, 03:47 PM
The lack of 'labeled arcs' (definitions of the type of relationship between two concepts in memory) in simulated memories is a problem. It is fairly easy to encode two concepts in CAs and associate them. But what kind of association is it? How do we define the type? Shouldn't a definition somehow form implicitly?
For instance, imagine a wooden table and a chair in a room. We perceive many kinds of association when looking at them.
- They are both pieces of furniture.
- They are both made out of wood.
- They are placed near to each other.
- They are both in the same room.
and so on.
As a small experiment in this direction, a neural network was trained with the patterns food, hungry, not hungry, salivate and lie down.
During the learning phase, hungry was repeatedly presented with food, and salivate was invoked (of course, in real life we don't have to force a dog to salivate when it's hungry and sees food).
Again, not hungry was repeatedly presented with food, and lie down was invoked (assuming that a dog would lie down in spite of the presence of food when it's not hungry).
In the testing phase, when food was presented in the context hungry, the resultant activation was salivate (obviously), and when the same pattern food was presented with not hungry, the resulting activation was lie down.
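For the curious, the same context effect can be caricatured with nothing more than pairwise co-occurrence counts. This sketch is not the fLIF network I actually used, and every name in it is made up:

```java
import java.util.HashMap;
import java.util.Map;

// Toy Hebbian associator: pairwise co-occurrence counts stand in for the
// synaptic weights of the real fLIF/CA network. Illustrative only.
public class Associator {
    private final Map<String, Map<String, Integer>> weights = new HashMap<>();

    // Learning: strengthen the links between all co-presented patterns.
    public void present(String... patterns) {
        for (String a : patterns)
            for (String b : patterns)
                if (!a.equals(b))
                    weights.computeIfAbsent(a, k -> new HashMap<>())
                           .merge(b, 1, Integer::sum);
    }

    // Testing: the response is the pattern most strongly linked to the
    // cues taken together, excluding the cues themselves.
    public String recall(String... cues) {
        Map<String, Integer> activation = new HashMap<>();
        for (String cue : cues)
            weights.getOrDefault(cue, Map.of())
                   .forEach((p, w) -> activation.merge(p, w, Integer::sum));
        for (String cue : cues) activation.remove(cue);
        return activation.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey).orElse(null);
    }
}
```

Present hungry, food, salivate together a few times and not hungry, food, lie down a few times, and recall("food", "hungry") comes back salivate while recall("food", "not hungry") comes back lie down, mirroring the test phase.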
This experiment cannot be compared to Pavlov's dog by any stretch (no, not because of the absence of bells) and bears no resemblance to it except for the dog-food-salivate part, but it was fun and I got to witness cell assembly association.
06 Oct 2008, 07:31 PM
I finally got around to simulating the classic Sharks and Jets experiment using Cell Assemblies (CAs).
The Sharks and Jets experiment [McClelland, 1981] was designed to demonstrate how a network could function as a content-addressable memory system. The information to be encoded was two groups (Sharks and Jets), their members and some of the members' demographic attributes. The goal of the experiment was to store, associate and retrieve patterns in a network.
My CA implementation of the model learned, grouped and associated members of a group. Associations between different patterns were defined as overlapping neurons in the CA network, and they were learnt over a series of cycles. The model was tested by presenting a randomly chosen pattern from those learnt. For instance, when a member of a group was presented, activation was observed in that member and in the areas of his demographic characteristics. When a parent group was presented, activation was observed on a large scale: many members of that group and their attributes ignited, and in many cases one particular member had the most activation, assumed to be the prototypical representation of that group.
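The "associations as overlapping neurons" idea can be caricatured outside the CA dynamics like so. This is a sketch, not the CANT implementation; the names and structure are mine:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Caricature of "associations as overlapping neurons": each pattern owns
// a set of neuron indices, associated patterns are forced to share some,
// and a probe retrieves whatever overlaps its own assembly the most.
public class OverlapNet {
    private final Map<String, Set<Integer>> patterns = new HashMap<>();

    public void addPattern(String name, Set<Integer> neurons) {
        patterns.put(name, new HashSet<>(neurons));
    }

    // The shared neurons themselves encode the association.
    public void associate(String a, String b, Set<Integer> shared) {
        patterns.get(a).addAll(shared);
        patterns.get(b).addAll(shared);
    }

    // Content-addressable probe: rank the stored patterns by how many
    // neurons they share with the probe's assembly.
    public List<String> retrieve(String probe) {
        Set<Integer> active = patterns.get(probe);
        return patterns.keySet().stream()
                .filter(p -> !p.equals(probe))
                .sorted(Comparator.comparingInt(
                        (String p) -> -overlap(patterns.get(p), active)))
                .toList();
    }

    private int overlap(Set<Integer> a, Set<Integer> b) {
        Set<Integer> shared = new HashSet<>(a);
        shared.retainAll(b);
        return shared.size();
    }
}
```

Probing with a member would rank his attributes highest, while probing with a group ranks whichever member shares the most neurons with the group assembly first: roughly the prototypical-member effect described above.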
While the network in the original experiment was specialised to the goal, the CA implementation was conducted based on the general behaviour of CAs (having distance-biased neuron connections). Even though the highly interesting dynamics observed in the original experiment were not recreated in the implementation, the goal of observing associative behaviour in CAs was attained with fair success.
[McClelland, 1981] McClelland, J. L. (1981). Retrieving general and specific information from stored knowledge of specifics. In Proceedings of the Third Annual Conference of the Cognitive Science Society, pages 170–2.
02 Oct 2008, 10:15 AM
I am working on various simulations using the CANT (Connection Association and Networking Technology) model by Christian Huyck, my director of studies.
Overview of CANT - http://www.cwa.mdx.ac.uk/chris/hebb/hebb.html
The Java implementation of the model is available here - http://www.cwa.mdx.ac.uk/CABot/CANT.html
The code has limitations, but it is good enough for experiments up to a certain complexity.
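Since fLIF (fatiguing Leaky Integrate-and-Fire) neurons come up throughout these posts, here is a heavily simplified single-neuron sketch of the idea. The constants and the exact update rule are invented for illustration, so see Chris's overview linked above for the real model:

```java
// Heavily simplified sketch of a fatiguing LIF neuron as I understand
// the idea; all constants are invented for illustration, not CANT's.
public class FLIFNeuron {
    static final double DECAY = 2.0;        // leak: activation halves each step
    static final double THRESHOLD = 4.0;    // base firing threshold
    static final double FATIGUE_UP = 1.0;   // fatigue added when firing
    static final double FATIGUE_DOWN = 0.5; // fatigue recovery otherwise

    double activation = 0.0;
    double fatigue = 0.0;

    // One simulation step: integrate the weighted input, leak, then fire
    // if activation clears the fatigue-raised threshold.
    public boolean step(double input) {
        activation = activation / DECAY + input;
        boolean fires = activation >= THRESHOLD + fatigue;
        if (fires) {
            activation = 0.0;        // reset after spiking
            fatigue += FATIGUE_UP;   // firing makes firing again harder
        } else {
            fatigue = Math.max(0.0, fatigue - FATIGUE_DOWN);
        }
        return fires;
    }
}
```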
28 Apr 2008, 03:09 PM
My undergraduate thesis (unpublished). I proposed and developed a method of using partially supervised algorithms to resolve PP-attachment ambiguity. The training and test data were obtained from the Penn Treebank WSJ corpora, and the semantic data from WordNet. The experiment concluded successfully, with the model attaining a resolution accuracy of 87.23%. The implementation was fairly straightforward; I wrote the program in a mix of Perl and Ruby. I feel the paper is slightly amateurish (very verbose): it is my first academic paper, it had to conform to academic verbosity requirements, and it was done in a hurry to meet university deadlines.
» download pdf