I have been working for about 48 hours (straight, with intermittent cat naps) on my robotic arm, which is equipped with a point light sensor. I have re-programmed it to characterize polygons as well as circles. However, the main purpose of this research project is to develop a general-purpose inquiry engine that decides which experiment the robot should perform to obtain the information it requires. In this case, an experiment is characterized by the location of a potential light measurement.
Here is a link to an older post that references a research paper, a slide presentation, and a video of my talk on this topic.
One thing that has surprised me is just how intelligent the instrument appears. It selects measurements that are obviously the smart measurements to take. What is strange is that I can interpret these measurements in terms of the techniques that I would employ to solve the problem, such as: “It is trying to find the edge of the polygon” or “It is verifying that the center of the polygon is where it thinks it is” or “It is checking to see if there is a vertex at a given position”.
The important fact is that the robot is not using heuristics at all, which is what makes it a powerful and generic plug-and-play system. Instead, the robot maximizes the relevance of the experimental question with respect to the problem that it is programmed to solve (in this case, characterizing a white shape on a black background). Relevance is related to entropy: the inquiry engine samples a set of models from the posterior probability and computes the entropy of the predictions that this set of models makes regarding the outcomes of potential measurements.
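The selection step described above can be sketched in a few lines. This is a minimal illustration, not the actual robot code: it assumes hypothetical `models` (shape hypotheses sampled from the posterior, each with a `predict(x, y)` method returning 1 for a white reading and 0 for black) and a list of candidate measurement locations.

```python
import numpy as np

def select_measurement(models, candidates):
    """Pick the candidate location whose predicted outcome is most uncertain,
    i.e., where the posterior-sampled models disagree the most."""
    best_loc, best_entropy = None, -1.0
    for (x, y) in candidates:
        # Fraction of sampled models predicting a "white" (inside-shape) reading.
        p = np.mean([m.predict(x, y) for m in models])
        # Shannon entropy of the predicted binary outcome, in bits.
        eps = 1e-12  # guard against log(0)
        h = -(p * np.log2(p + eps) + (1.0 - p) * np.log2(1.0 - p + eps))
        if h > best_entropy:
            best_loc, best_entropy = (x, y), h
    return best_loc, best_entropy
```

A location where all sampled models agree carries zero entropy and teaches the robot nothing, so the engine is naturally drawn to the disputed regions, which is exactly why its choices look like "finding the edge" or "checking for a vertex".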
How can an instrument that simply performs computations with entropy select maneuvers which I, who use heuristics (or general problem-solving strategies), easily recognize as intelligent? The only reason I can imagine is that my heuristics work! The heuristics that I employ to solve such problems are excellent approximations to computing relevance via entropy.
An interesting consequence of this is that sometimes the robot chooses a measurement that I cannot interpret as intelligent. However, when I study the results, I always find that these measurements provide a great deal of information, more than the measurement I would have selected.
The system still has two difficulties. The first is that the current algorithm design requires that each new measurement be analyzed in conjunction with all of the past measurements. I could get around this with particle filters, but they are an approximation to the complete inference problem, which I would like to avoid. I am working on a solution to this.
The second issue is that I require a diverse set of probable models to query when performing the entropy calculations. If the set is not sufficiently diverse, the robot does not consider the wide variety of possible solutions when it is planning the next experiment. There are easy ways to ensure that the set of samples is diverse, but they take computation time, which I am aiming to avoid.
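One cheap diagnostic for the diversity problem, offered here only as a hypothetical sketch (the post does not say which check is actually used), is the mean pairwise distance among the sampled models' parameter vectors: if the samples cluster too tightly, the predicted outcomes will agree nearly everywhere and the entropy calculation will underestimate the true uncertainty.

```python
import numpy as np

def sample_diversity(params):
    """Mean pairwise Euclidean distance among parameter vectors (rows of `params`).
    A value near zero signals a degenerate, insufficiently diverse sample set."""
    params = np.asarray(params, dtype=float)
    n = len(params)
    dists = [np.linalg.norm(params[i] - params[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))
```

Even this simple check is quadratic in the number of samples, which illustrates the computation-time trade-off mentioned above.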
In the near future, I will have a video of the robot in operation.