RECONFIG is an FP7-ICT9 European Union project, running from 2013 to 2016. The project consortium aims to exploit recent developments in vision, robotics, and control to tackle coordination in heterogeneous multi-robot systems. Such systems promise robustness, by leveraging the complementary capabilities of different agents, and efficiency, by allowing each sub-task to be completed by the most suitable agent.
A key limitation of current multi-robot systems is that the agent composition must be fixed and pre-defined. Moreover, the coordination of heterogeneous multi-agent systems has not yet been considered in manipulation scenarios. We propose a reconfigurable and adaptive decentralized coordination framework for systems of multiple multi-DOF robots. Agent coordination takes place via two types of information exchange: (i) at an implicit level, e.g., when robots are in contact with each other and can sense the contact, and (ii) at an explicit level, using symbols grounded to each embodiment, e.g., when one robot notifies another about the existence of an object of interest in its vicinity.
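The explicit, symbol-level exchange could be sketched as a simple message-passing step; the message fields and names below (SymbolNotification, notify) are illustrative assumptions, not the project's actual interface:

```python
from dataclasses import dataclass

# Hypothetical message format for explicit, symbol-level exchange:
# a robot grounds a symbol (e.g., "chair_3") to its own percept and
# shares it, with a rough position estimate, with a teammate.
@dataclass
class SymbolNotification:
    sender_id: str             # robot that grounded the symbol
    symbol: str                # shared symbol, e.g., "chair_3"
    position: tuple            # (x, y) estimate in a common frame

def notify(inbox: list, sender_id: str, symbol: str, position: tuple) -> None:
    """Deliver an explicit notification to a teammate's inbox."""
    inbox.append(SymbolNotification(sender_id, symbol, position))

# Robot A tells robot B about a chair it has spotted in its vicinity.
robot_b_inbox: list = []
notify(robot_b_inbox, "robot_a", "chair_3", (2.0, 5.0))
print(robot_b_inbox[0].symbol)  # → chair_3
```

The receiving robot can then match the incoming symbol against its own groundings, which is the agreement step described in the example scenario below.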
The substrate of cognitive tasks will be a computer vision front-end that allows for the grounding and subsequent sharing of symbols between agents. In the envisioned framework, coordination is decentralized and tasks are defined at the level of the individual agent. These tasks are then enriched during operation, based on the updated knowledge that individual agents acquire through the two types of information exchange. Control and planning updates take place at two levels: (i) a continuous level, where agent actions are driven by continuous feedback, e.g., changing a contact point while grasping an object, given the contact point of another manipulator, and (ii) a discrete level, where specifications are defined in terms of formal languages such as Linear Temporal Logic (LTL). The overall approach is motivated by the need for increased robustness, heterogeneity, and reconfigurability in future multi-robot setups. An example scenario involves a team of robots that first individually distinguish objects in the environment and then agree that the same symbol represents the same object for each robot. They then use this information to pursue a global objective, e.g., finding all chairs in a room full of chairs and tables and moving them to another room. As the robots obtain more information about the environment (e.g., the existence of a chair that was not spotted before), they update their controllers and plans accordingly.
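At the discrete level, the chair-moving objective could be written as an LTL specification along the following lines; the atomic propositions (grasp_i, moved_i, collision) are illustrative and not the project's actual notation:

```latex
\varphi \;=\; \bigwedge_{i=1}^{n} \Diamond \big( \mathrm{grasp}_i \wedge \Diamond\, \mathrm{moved}_i \big) \;\wedge\; \Box\, \neg\mathrm{collision}
```

Read informally: for each of the $n$ known chairs, eventually grasp chair $i$ and thereafter eventually move it to the target room ($\Diamond$, "eventually"), while always avoiding collisions ($\Box$, "always"). When a previously unseen chair is discovered, $n$ grows and the specification, and hence the discrete plan, is updated accordingly.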