We have proposed a new repetition framework for vision-based behavior imitation by a sequence of multiple humanoid robots, introducing an on-line method for delimiting a time-varying context. This novel approach investigates the ability of a robot “student” to observe and imitate a behavior from a “teacher” robot; the student later changes roles to become the “teacher” for a naïve robot. For the many robots that already use video acquisition systems for their real-world tasks, this method eliminates the need for additional communication capabilities and complicated interfaces. This can reduce the need for human intervention and thus enhance the robots' practical usefulness outside the laboratory. Articulated motions are modeled in a three-layer method and registered as learned behaviors using color-based landmarks. Behaviors were identified on-line after each iteration by inducing a decision tree from the visually acquired data. Error accumulated over time, creating context drift for behavior identification. In addition, identification and transmission of behaviors can occur between robots with differing, dynamically changing configurations. ITI, an on-line decision tree inducer in the C4.5 family, performed well for data that were similar in time and configuration to the training data, but its greedily chosen attributes were not optimized to resist accumulating error or configuration changes. Our novel algorithm, OLDEX, identified context changes on-line, as well as the amount of drift that could be tolerated before compensation was required. OLDEX can thus identify time and configuration contexts for the behavior data. This improved on previous methods, which either separated contexts off-line or could not separate the slowly time-varying context into distinct regions at all. The results demonstrated the feasibility, usefulness, and potential of our unique idea for behavioral repetition and a propagating learning scheme.
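The abstract does not specify OLDEX's internals, so the following is only an illustrative Python sketch of the general idea it describes: monitor the recent error rate of an on-line behavior classifier (such as an incrementally induced decision tree) and flag a context change once accumulated drift exceeds a tolerated threshold. All names and parameters here (ContextDriftMonitor, window, tolerance) are hypothetical and are not taken from the paper.

```python
from collections import deque


class ContextDriftMonitor:
    """Hypothetical sketch: flag a context change when the recent
    misclassification rate of an on-line classifier exceeds a tolerance."""

    def __init__(self, window=50, tolerance=0.2):
        self.window = window        # number of recent predictions tracked
        self.tolerance = tolerance  # error rate tolerated before compensation
        self.errors = deque(maxlen=window)

    def update(self, predicted_behavior, actual_behavior):
        """Record one prediction outcome; return True if drift exceeds the tolerance."""
        self.errors.append(predicted_behavior != actual_behavior)
        if len(self.errors) < self.window:
            return False            # not enough evidence for a decision yet
        error_rate = sum(self.errors) / len(self.errors)
        if error_rate > self.tolerance:
            self.errors.clear()     # treat this point as the start of a new context region
            return True
        return False


# Usage sketch: feed each on-line identification result to the monitor;
# a True result would trigger compensation, e.g. retraining the decision tree
# on recent data (decision_tree, features, and labeled_behavior are hypothetical).
monitor = ContextDriftMonitor(window=50, tolerance=0.2)
# for frame in video_stream:
#     predicted = decision_tree.classify(features(frame))
#     if monitor.update(predicted, labeled_behavior(frame)):
#         decision_tree = retrain_on_recent_data()
```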