Project details
Interactional motivation
Interactional Motivation (IM) is a form of self-motivation proposed for artificial agents that departs from traditional goal-driven or extrinsic motivation. Instead of valuing outcomes or states of the environment, IM assigns value directly to the agent-environment interactions themselves. This allows the designer to "seed" the system with inborn behavioral preferences associated with primitive interactions, while leaving room for the agent to construct further motivational structures based on its individual history of interaction.
We have modeled interactional motivation through the Enactive Markov Decision Process (EMDP) formalism, which provides a framework for modeling agents in which perception and action remain embedded within sensorimotor schemas rather than being separated as in traditional approaches. The interaction cycle begins with the agent's intended schema, which results in an enacted schema depending on the environment's state. Within this framework, IM is initialized by associating a predefined scalar value (valence, or satisfaction value) with each primitive schema. The value function depends only on the enacted schema and is defined independently of the environment's state. In essence, an interactionally motivated agent operating within an EMDP is driven by the motivation to enact certain interactions rather than to reach certain predefined goals. This approach allows a value system to be defined without reference to specific environmental states, ontologies, or predefined goals, making it suitable for studying autonomous learning in which agents construct their own knowledge and, potentially, their own goals in terms of possibilities of interaction.
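To make this concrete, here is a minimal Python sketch of one EMDP interaction cycle with interactional motivation. The Interaction class, the valence values, and the toy environment rule are our illustrative assumptions, not part of the formalism itself:

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Interaction:
    """A primitive sensorimotor schema: an act coupled with its outcome."""
    action: str
    outcome: str

# Inborn preferences: a scalar valence per primitive interaction,
# defined without reference to environment states or goals.
VALENCE = {
    Interaction("forward", "clear"): 1,   # moving unobstructed feels good
    Interaction("forward", "bump"): -5,   # bumping feels bad
    Interaction("turn", "done"): -1,      # turning is mildly costly
}

def enact(intended: Interaction) -> Interaction:
    """Toy environment: it alone decides which outcome the intended
    schema actually produces, yielding the enacted schema."""
    if intended.action == "forward":
        outcome = "bump" if random.random() < 0.3 else "clear"
    else:
        outcome = "done"
    return Interaction(intended.action, outcome)

# One interaction cycle per step: intend, enact, experience valence.
satisfaction = 0
for step in range(10):
    intended = Interaction("forward", "clear")  # naive fixed intention
    enacted = enact(intended)
    satisfaction += VALENCE[enacted]
print("total satisfaction:", satisfaction)
```

Note that the agent's satisfaction is computed from enacted interactions alone; no state of the toy environment ever appears in the value function.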
Studying the emergence of experience-grounded semantics
Interactional motivation can be used in the framework of non-axiomatic reasoning systems to study the emergence of experience-grounded semantics.
We cast the problem of designing self-motivated agents as a problem of hierarchical online sequence generation driven by interactional motivation. The agent records and models the past sequences of enacted schemas experienced from step 0 to step t-1. At step t, it generates a set of candidate intended sequences, derived from the sequential patterns it has recorded, to try to enact next. Each token of an intended sequence represents a decision the agent intends to make at a future step, together with the expected outcome provided by the environment. The agent selects a particular generated sequence based on interactional motivation and other intrinsic preferences rather than external goals or rewards. The sequence generation model may be based on Long Short-Term Memory networks, Transformers, or a schema mechanism.
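The sketch below illustrates the selection step under strong simplifying assumptions: propose_sequences stands in for the generative model (an LSTM, Transformer, or schema mechanism in the full system) and merely replays the most frequent patterns from the recorded history; both function names are hypothetical:

```python
from collections import Counter

def propose_sequences(history, length=2, k=3):
    """Stub generative model: propose the k subsequences of the given
    length that occurred most often in the history of enacted schemas."""
    patterns = Counter(tuple(history[i:i + length])
                       for i in range(len(history) - length + 1))
    return [list(p) for p, _ in patterns.most_common(k)]

def select_sequence(candidates, valence):
    """Interactional motivation: choose the intended sequence whose
    tokens carry the highest total valence, with no external reward."""
    return max(candidates, key=lambda seq: sum(valence[tok] for tok in seq))

# Toy usage: each token is a (decision, expected outcome) pair.
history = [("fwd", "clear"), ("fwd", "bump"), ("turn", "done"),
           ("fwd", "clear"), ("turn", "done"), ("fwd", "clear")]
valence = {("fwd", "clear"): 1, ("fwd", "bump"): -5, ("turn", "done"): -1}
print(select_sequence(propose_sequences(history), valence))
```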
The semantics of schemas is not predefined by the designer; it emerges from the regularities of interaction that the agent observes and exploits to fulfill its interactional motivation.
Implementing interactionally motivated agents through a schema mechanism
A schema mechanism is a system designed to create, organize, and progressively complexify data structures that represent schemas of interaction, thereby enabling the emergence of increasingly intelligent behaviors. These mechanisms are grounded in theories of knowledge generation and cognitive development originally proposed by Jean Piaget, and are particularly well suited for implementing interactionally motivated agents, as they place interaction at the core of the cognitive process.
The modeler initializes the schema mechanism with inborn interactional preferences, which guide the bottom-up association and development of more complex schemas. As the agent engages with its environment, it learns composite schemas—hierarchical sequences composed of simpler, lower-level schemas—based on its lived experience. Through this continuous interaction, the agent autonomously discovers and exploits environmental regularities to enhance its average level of satisfaction. This leads to the self-directed construction of knowledge, grounded in interaction rather than dependent on predefined goals or externally imposed, state-based reward systems.
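As a rough illustration, the following sketch learns two-step composite schemas from consecutive enacted schemas. The class name, the evidence counter, and the rule that a composite's valence is the sum of its parts' valences are our simplifying assumptions, not a specific published mechanism:

```python
from collections import defaultdict

class SchemaMechanism:
    """Toy bottom-up learner of composite (two-step) schemas."""

    def __init__(self, primitive_valence):
        self.valence = dict(primitive_valence)  # seeded inborn preferences
        self.weights = defaultdict(int)         # evidence per composite
        self.previous = None                    # last enacted schema

    def record(self, enacted):
        """Reinforce the composite made of the previously enacted
        schema followed by the one just enacted."""
        if self.previous is not None:
            composite = (self.previous, enacted)
            self.weights[composite] += 1
            # Assumed rule: a composite's valence sums its parts' valences.
            self.valence.setdefault(
                composite,
                self.valence[self.previous] + self.valence[enacted])
        self.previous = enacted

    def best_composites(self, n=3):
        """The most reliably observed composites: candidates the agent
        can later intend as higher-level schemas."""
        return sorted(self.weights, key=self.weights.get, reverse=True)[:n]
```

Repeatedly intending its best composites is how such an agent exploits regularities to raise its average satisfaction, without any state-based reward signal.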
The agent's ability to formulate new goals stems from the under-determined nature of interactional motivation. This intrinsic motivational framework allows individual experience to shape the emergence of future goals, affording the agent a degree of freedom and adaptability beyond what is typically achievable through goal-driven or reward-based models.
Integrating interactional motivation into the SingularityNET ecosystem
We propose to develop a relatively simple software prototype to demonstrate how interactional motivation can be integrated within a platform of the SingularityNET project. One of the primary candidate platforms under consideration is AIRIS (Autonomous Intelligent Reinforcement Inferred Symbolism).
Building on our existing tutorial on interactional motivation, we will adapt and implement this framework within the selected platform, and design a tailored benchmark to evaluate interactional motivation in this new context. We expect results similar to our current demonstrations, and we will then work with the SingularityNET team to move toward more advanced findings.
Laying the foundations for future advancements
Much like reinforcement learning, interactional motivation enables the modeling of fundamental drives such as seeking nourishment and avoiding harm. A key conceptual distinction, however, is that these drives can be modeled without presupposing a predefined problem space characterized by states and transitions. Furthermore, interactional motivation avoids imposing an a priori ontology of the world on the agent. This makes it particularly well suited to guiding robots in open-world environments, where the number of possible states is virtually infinite and the domain ontology is initially unknown. By avoiding these constraints, interactional motivation offers a framework that can scale with the complexity of real-world scenarios.
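Schematically, the contrast with reinforcement learning can be reduced to the signatures of the two value functions; the stubs below are purely illustrative:

```python
# Reinforcement learning: the reward presupposes a problem space of
# states and transitions known to the designer in advance.
def reward(state, action, next_state) -> float: ...

# Interactional motivation: the valence needs only the enacted
# interaction itself, so no prior ontology of world states is required.
def valence(enacted_interaction) -> float: ...
```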
By modeling basic drives, interactional motivation can be likened to the innate motivations found in animals. It enables the definition of drives aligned with human needs and capable of supporting socially beneficial behaviors like those observed in domesticated animals. To advance this research, we are developing the Petitcat project, an open-source initiative designed to demonstrate how interactional motivation can generate lifelike behaviors in companion robots. We see this as a significant first step toward the development of socially acceptable humanoid robots.
Petitcat is the leading project of PetiteIA. It is an affordable robotics platform built on open-source hardware and powered by a brain-inspired cognitive architecture driven by interactional motivation. Example use cases include a robot playing by learning to push simple objects into a desired position and collaborative interactions between multiple robots as shown in preliminary study videos.
Beyond animal-level cognition, we are laying the foundations to explore how interactional motivation can interface with other forms of motivation in both artificial general intelligence (AGI) and robotics, and how agents might dynamically balance multiple motivational systems. We are open to collaborating with SingularityNET on other platforms, or to investigating how tools from the SingularityNET ecosystem might be integrated into the Petitcat project to enhance its capabilities.