Project details
Introduction
To achieve AGI, we need methods for promoting and assessing a diverse set of capabilities across a broad spectrum of intelligent skills and behaviors. In prior research (Hussain, 2024), I laid out a high-level approach to creating "drivers" for arbitrary components of "intelligence" that would support the development of AGI across multiple AI techniques. The notional framework is intrinsically modular and extensible, and it can be applied at several levels of the process of creating an intelligence – at the level of assessing part of an intelligence, at the level of a single intelligence learning from its world, at the level of multiple intelligences collaborating within the world, at the level of evaluating multiple intelligences to determine their relative goodness, and at an aggregate level across a process. The drivers encompass motivational criteria for more subjective behaviors and goals such as ethics, as well as needs-based goals of embodied AIs and more traditional quantitative assessments against task-specific objective criteria. The drivers can be very specific to the real world, thereby reinforcing human-like behaviors, or they can be specific to an arbitrarily defined world or context, thereby reinforcing "alien" behaviors, as described in the RFP.
Approach
In the proposed research, I will extend my prior research to create a concrete framework that can be used within the PRIMUS Hyperon infrastructure. The proposed work will be primarily conceptual in nature, but will elaborate on how the motivational mechanisms can be used by an AI developer to grow the intelligence capabilities of their systems along multiple dimensions. Since intelligence is multi-faceted and AGI will need to achieve success against many of those facets, the framework will lay out guidelines for how to "grow" the capabilities of an AI over time.
Ultimately, the goal of the framework is to motivate both the human researchers and the AI systems themselves to pursue and achieve significant improvements along multiple dimensions of intelligence. Key to this is understanding and specifying how to adapt the guiding purposes of an AI depending on the current context. It is not enough just to do something effectively - the AI must also do the appropriate set of things in the appropriate circumstances, and adapt as the circumstances change, as consequences are experienced, and as changing priorities reveal themselves.
Impact on Hyperon
The reference paper (Hussain, 2024, link provided) provides the underpinning that will inform the motivational framework to be developed in this effort. While specific prototyping will not be done, the framework will be fleshed out with specific examples of motivational drivers and how to apply them in context. This includes identifying a method for specifying a dynamic "multi-objective" function comprising objective and subjective elements that may change over time and/or across contexts. To ensure that future application to Hyperon is feasible, certain key elements of the motivational framework will be described using the MeTTa language. In particular, the ability to context-switch and change motivations using MeTTa will be explored.
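For illustration only, the sketch below shows the kind of context-switched, multi-element objective function under discussion. It is written in Python rather than MeTTa, and every driver name, weight, and state field is a hypothetical example; the actual specification work in this effort would express these ideas in MeTTa.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch: a "driver" pairs a scoring function (objective or
# subjective) with a weight that can differ per context.
@dataclass
class Driver:
    name: str
    score: Callable[[dict], float]   # maps a world/agent state to a score
    weights: Dict[str, float]        # context name -> weight in that context

def multi_objective(drivers: List[Driver], context: str, state: dict) -> float:
    """Weighted sum of driver scores, with weights selected by context."""
    return sum(d.weights.get(context, 0.0) * d.score(state) for d in drivers)

# Two toy drivers: one objective (task progress), one subjective (user comfort).
drivers = [
    Driver("task_progress", lambda s: s["tasks_done"] / s["tasks_total"],
           weights={"assist": 0.8, "console": 0.2}),
    Driver("user_comfort", lambda s: s["comfort"],
           weights={"assist": 0.2, "console": 0.8}),
]

state = {"tasks_done": 3, "tasks_total": 10, "comfort": 0.4}

# Context switching: the same state is valued differently in each context.
for ctx in ("assist", "console"):
    print(ctx, round(multi_objective(drivers, ctx, state), 3))
```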
Flexible motivational framework
The flexible motivational framework will be designed to be modular and scalable, enabling it to handle diverse AGI systems with varying cognitive architectures and motivational needs. For human-like AGI, it could leverage psychological and sociocultural principles (e.g., Maslow's hierarchy of needs, the theory of planned behavior), whereas for alien intelligences, the system might incorporate entirely different reward mechanisms based on non-human understanding of "value" or "need."
This adaptability is achieved by defining a core motivational structure that can evolve and incorporate new layers of influence depending on the AGI's experiences and interactions. A central concept would be modular motivational drivers (e.g., goal-oriented behavior, curiosity, social bonding, ethical behavior) that can be adjusted in priority or structure as the AGI encounters new environments, experiences new demands/restrictions, and/or develops more sophisticated cognitive abilities.
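As a rough, hypothetical sketch of this modularity (again in Python purely for illustration, with invented driver names and priorities), a motivational core might expose operations for adding, removing, and re-prioritizing drivers as the AGI encounters new environments or develops new abilities:

```python
# Hypothetical sketch of a modular driver registry: drivers can be added,
# removed, or re-prioritized as the agent encounters new demands.
class MotivationalCore:
    def __init__(self):
        self.drivers = {}                 # driver name -> priority (higher = more urgent)

    def add_driver(self, name, priority):
        self.drivers[name] = priority

    def remove_driver(self, name):
        self.drivers.pop(name, None)

    def reprioritize(self, name, priority):
        if name in self.drivers:
            self.drivers[name] = priority

    def ranked(self):
        return sorted(self.drivers.items(), key=lambda kv: -kv[1])

core = MotivationalCore()
core.add_driver("goal_completion", 0.9)
core.add_driver("social_bonding", 0.4)

# Later, a new layer of influence appears (e.g., curiosity after meeting a
# novel environment) and an existing driver is demoted in priority.
core.add_driver("curiosity", 0.7)
core.reprioritize("goal_completion", 0.6)
print(core.ranked())
```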
Detailed Use Cases:
Chatbot Systems:
In chatbot systems, the motivational framework could manifest through adaptive dialogue strategies that align with the user's emotional state, intent, and long-term goals. For example, an emotional chatbot might prioritize empathy-based responses in certain contexts (e.g., calming an upset user), while in other contexts, it could adopt a more informative or action-driven approach (e.g., assisting with a task). As the chatbot interacts more with users, it would dynamically adjust its motivational priorities, learning when to shift focus between providing information, building rapport, or ensuring task completion.
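A minimal, purely illustrative sketch of this kind of adaptive strategy selection follows; the strategy names, weights, and feedback rule are invented for the example and are not part of the planned MeTTa specification.

```python
# Hypothetical sketch: a chatbot blends empathy-, information-, and
# task-oriented response strategies, and nudges the blend from feedback.
def strategy_weights(user_upset: bool, task_pending: bool) -> dict:
    if user_upset:
        return {"empathy": 0.7, "inform": 0.2, "task": 0.1}
    if task_pending:
        return {"empathy": 0.1, "inform": 0.3, "task": 0.6}
    return {"empathy": 0.2, "inform": 0.6, "task": 0.2}

def update_from_feedback(weights: dict, strategy: str, reward: float, lr=0.1) -> dict:
    """Shift weight toward strategies the user responded well to, then renormalize."""
    weights[strategy] += lr * reward
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

w = strategy_weights(user_upset=True, task_pending=True)
w = update_from_feedback(w, "empathy", reward=1.0)
print(w)
```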
Humanoid Robots:
For humanoid robots, the motivational framework would guide complex behavior in social settings, such as prioritizing human well-being, security, or cooperation. In a healthcare environment, for instance, the robot might prioritize actions based on ethical guidelines (e.g., ensuring patient comfort) while also aligning with personal task completion (e.g., delivering medication). It could adapt to environmental changes, such as the arrival of new patients, by adjusting its motivational priorities to focus on urgent care needs, learning when to shift between autonomy (in assisting with routines) and collaboration (working with human staff).
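The event-driven re-prioritization described above could be sketched, very roughly, as a priority queue whose contents are re-ranked when new environmental events arrive. All task names and urgency values below are hypothetical placeholders.

```python
import heapq

# Hypothetical sketch: a care robot keeps a priority queue of duties and
# re-prioritizes when an environmental event (a new urgent patient) arrives.
URGENCY = {"deliver_medication": 2, "assist_routine": 1, "urgent_care": 5}

def schedule(tasks):
    heap = [(-URGENCY[t], t) for t in tasks]   # max-heap via negated urgency
    heapq.heapify(heap)
    return heap

def on_event(heap, event):
    if event == "new_urgent_patient":
        heapq.heappush(heap, (-URGENCY["urgent_care"], "urgent_care"))

heap = schedule(["assist_routine", "deliver_medication"])
on_event(heap, "new_urgent_patient")
while heap:
    _, task = heapq.heappop(heap)
    print(task)   # urgent_care, then deliver_medication, then assist_routine
```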
Scalability and Adaptability
The framework must handle large Atomspaces interacting with dynamically changing environments and adjust motivational priorities based on internal and external factors. The motivational framework's scalability relies on its ability to manage a distributed knowledge structure (e.g., an Atomspace) in which data and motivational states are not only recorded but can evolve in response to real-time environmental shifts. This scalability ensures that large, complex networks of AGIs can adapt to the varying demands of different environments, adjusting priorities on the fly.
The system could include mechanisms like context-aware motivation scaling, where an AGI adjusts its motivation levels based on environmental stimuli (e.g., changes in available resources, the introduction of new tasks, or the presence of novel agents). The framework could also employ self-organizing principles, where the AGI continuously refines its own motivational state, learning from both internal feedback and environmental cues.
For instance, in a large-scale system such as a fleet of autonomous delivery robots, each robot might prioritize different aspects (speed, accuracy, resource conservation) depending on the current task complexity, external factors like weather or traffic conditions, and the overall state of the system.
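A toy sketch of such context-aware motivation scaling for a single delivery robot might look like the following; the stimuli, base weights, and scaling factors are illustrative assumptions only.

```python
# Hypothetical sketch of context-aware motivation scaling: a robot rescales
# its driver weights from environmental stimuli and internal state.
BASE = {"speed": 0.5, "accuracy": 0.3, "energy": 0.2}

def scaled_weights(weather: str, traffic: float, battery: float) -> dict:
    w = dict(BASE)
    if weather == "storm":
        w["accuracy"] *= 2.0                  # careful handling matters more
        w["speed"] *= 0.5
    w["speed"] *= max(0.2, 1.0 - traffic)     # heavy traffic devalues speed
    if battery < 0.3:
        w["energy"] *= 3.0                    # low battery: conserve resources
    total = sum(w.values())
    return {k: round(v / total, 2) for k, v in w.items()}

print(scaled_weights(weather="storm", traffic=0.8, battery=0.25))
print(scaled_weights(weather="clear", traffic=0.1, battery=0.9))
```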
Ethical Alignment
Ethical alignment is a critical consideration for any AGI system, particularly as these entities gain greater autonomy and decision-making power. The motivational framework will integrate ethical reasoning layers that prioritize human-centered values, such as fairness, safety, transparency, and respect for autonomy. This could be implemented using ethical models like utilitarianism (maximizing well-being for all agents), deontology (adhering to moral rules), and virtue ethics (promoting moral character and flourishing).
For example, in a healthcare setting, an AGI might be motivated by a set of ethical considerations that guide it to prioritize patient welfare, while still considering efficiency and cost-effectiveness. The AGI would need to resolve conflicts between these different motivational drives and adjust its behavior to align with overarching ethical guidelines, ensuring its actions remain socially and morally beneficial.
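One possible (hypothetical) way to combine these ethical layers is to treat deontological rules as hard constraints that filter candidate actions, and then rank the survivors by a weighted blend of utilitarian benefit and efficiency; a virtue-ethics term could be folded into the same score. The sketch below uses made-up actions and values purely to illustrate the mechanism.

```python
# Hypothetical sketch: candidate actions are filtered by a deontic rule check
# and then ranked by a blend of utilitarian welfare and efficiency scores.
actions = [
    {"name": "skip_consent_check", "harm": 0.6, "benefit": 0.9, "cost": 0.1,
     "violates_rule": True},
    {"name": "administer_with_consent", "harm": 0.1, "benefit": 0.8, "cost": 0.3,
     "violates_rule": False},
    {"name": "defer_to_nurse", "harm": 0.0, "benefit": 0.5, "cost": 0.2,
     "violates_rule": False},
]

def permissible(a):
    return not a["violates_rule"]           # deontological hard constraint

def utility(a):
    return a["benefit"] - a["harm"]         # crude utilitarian welfare score

def score(a, w_util=0.7, w_cost=0.3):
    return w_util * utility(a) - w_cost * a["cost"]   # welfare vs. efficiency

best = max(filter(permissible, actions), key=score)
print(best["name"])   # administer_with_consent
```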
Foundation for future research
The proposed "driver"-based motivational framework will provide a solid foundation for ongoing research and innovation in AGI. It will facilitate experimentation with various motivational systems, exploring how AGIs can best adapt to complex, dynamic environments and socially interactive contexts. Additionally, the framework's modular design will allow for the continuous integration of new theories, models, and findings from psychology, sociology, AI ethics, and computational neuroscience, ensuring it remains at the cutting edge of AGI research.
Note on References:
"Let's Evolve Intelligence, not Solutions," shows recent work outlining a methodology for achieving AGI by defining and creating a rich array of quantifiable intelligence capabilities and drivers to achieve them across multiple, different AI techniques.
"An abstraction framework for cooperation among agents and people in a virtual world" paper shows past work on a framework for coordinating competing motivations of game AIs.
"POIROT. Integrated learning of web service procedures," paper shows past work on integrating diverse symbolic AI methods using a shared specification language.