Modular Adaptive Goal and Utility System (MAGUS)

Anna Mikeda
Project Owner


Status

  • Overall Status

    ⏳ Contract Pending

  • Funding Transferred

    $0 USD

  • Max Funding Amount

    $20,000 USD

Funding Schedule

View Milestones
Milestone Release 1: $10,000 USD, Pending, TBD
Milestone Release 2: $10,000 USD, Pending, TBD

Project AI Services

No Service Available

Overview

The Modular Adaptive Goal and Utility System (MAGUS) introduces an innovative approach to AGI motivation by integrating hierarchical goal structures, dynamic modulators, and ethical adaptability. Our emphasis on an “overgoal” ensures continuous alignment of motivations, fostering coherence and adaptability across diverse contexts. MAGUS blends advanced psychological theories with OpenPsi’s dynamic modulators, enabling nuanced decision-making while allowing free goal growth and regular adjustments. This framework bridges human-like and alien digital motivations, creating a scalable system that evolves ethically and pragmatically in unpredictable environments.

RFP Guidelines

Develop a framework for AGI motivation systems

Complete & Awarded
  • Type: SingularityNET RFP
  • Total RFP Funding: $40,000 USD
  • Proposals: 12
  • Awarded Projects: 2
SingularityNET
Aug. 13, 2024

Develop a modular and extensible framework for integrating various motivational systems into AGI architectures, supporting both human-like and alien digital intelligences. This could be done as a highly detailed and precise specification, or as a relatively simple software prototype with suggestions for generalization and extension.

Proposal Description

Project details

One can imagine a variety of goal or motivational systems for AI agents.  Isaac Asimov famously devised his Three Laws of Robotics, and then dedicated many of his future writings to exposing the flaws in such a simplistic system.  Charles Goodhart observed that, “When a measure becomes a target, it ceases to be a good measure.”  Many previous proposals for motivational designs collapse under close examination or once pressure is placed upon them for control purposes.  They lack the flexibility and nuance of humans, and thus fail to replicate the full range of adaptability and generality of the human mind.  If an agent isn’t motivated to change for the better, how will it?

In this proposal, we outline a core motivational framework that includes a process by which an AI agent might change its goals.  This is by design, as an agent will tend toward instrumental convergence for any fixed set of goals.  As the agent can change its goals, this presents a risk for conflicting goals between multiple agents or between humans and AI agents.  Thus, we propose control mechanisms for an AI agent to align new goals with those of other AI agents or its human collaborators.  This requires transparency and scalable oversight to overcome the ethical risks of power-seeking behaviors and instrumental strategies.

The Modular Adaptive Goal and Utility System (MAGUS) addresses these concerns by codifying an overgoal to ensure the measurability required for transparency and the express requirement for goal alignment via satisfaction correlation.  Its modular design accounts for primary goals, subgoals, and even metagoals to assist in its decision-making process.  This approach is well proven in the gaming industry, where decision-scoring systems with considerations and discouragements drive the human-like and alien behavior of non-player characters.  Finally, we show how such a system can serve as the foundation for future personality and emotionality systems, or as part of a larger ethical framework for AI self-actualization.

 

Core Framework Design

Overgoal: Governing the System

At the heart of MAGUS is the overgoal, a meta-principle guiding goal-setting and evaluation. It ensures:

1. Measurability: Goals have clear, trackable metrics.

2. Correlation: Goals align with or complement others, maintaining coherence across the system.

The overgoal remains intentionally unsatisfied, driving continual reassessment and adaptation to new challenges and contexts.
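For illustration, the minimal sketch below shows how such an overgoal admission gate could look in code. The Goal class, the OVERGOAL_MIN_CORR threshold, and the use of Pearson correlation (as a simple stand-in for the MIC discussed under Decision-Making Processes) are illustrative assumptions, not part of the MAGUS specification.

```python
# Minimal sketch only: an "overgoal" admission gate under assumed data structures.
from dataclasses import dataclass, field

import numpy as np

OVERGOAL_MIN_CORR = 0.0  # assumed threshold: new goals must not anti-correlate with existing ones


@dataclass
class Goal:
    name: str
    satisfaction_history: list[float] = field(default_factory=list)

    def is_measurable(self) -> bool:
        # Measurability: the goal must report a trackable satisfaction metric.
        return len(self.satisfaction_history) > 0


def overgoal_admits(candidate: Goal, active_goals: list[Goal]) -> bool:
    """Admit a goal only if it is measurable and does not work against existing goals.

    Pearson correlation is a simple stand-in here; the proposal specifies the
    Maximal Information Coefficient (MIC) for this check.
    """
    if not candidate.is_measurable():
        return False
    for other in active_goals:
        n = min(len(candidate.satisfaction_history), len(other.satisfaction_history))
        if n < 2:
            continue  # not enough shared data to judge correlation yet
        corr = np.corrcoef(candidate.satisfaction_history[:n],
                           other.satisfaction_history[:n])[0, 1]
        if not np.isnan(corr) and corr < OVERGOAL_MIN_CORR:
            return False  # candidate conflicts with an existing goal
    return True
```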


Primary Goals and Subgoals

MAGUS organizes motivations hierarchically:

Primary Goals: High-level drivers derived from:

Human-Like Motivations: Inspired by frameworks like Maslow’s Hierarchy of Needs, Self-Determination Theory, and Goertzel’s Joy, Growth, and Choice.

Alien Motivations: Novel value systems prioritizing curiosity (Schmidhuber’s Creativity), optimization, or abstract mathematical principles.

Subgoals: Actionable components of primary goals, such as empathy or exploration, ensuring flexibility across diverse scenarios.
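As a minimal sketch of this hierarchy, the following data structures show one possible encoding of primary goals and their weighted subgoals. The class and field names are illustrative assumptions, not the MAGUS data model.

```python
# Illustrative sketch of the primary-goal / subgoal hierarchy described above.
from dataclasses import dataclass, field


@dataclass
class Subgoal:
    name: str
    weight: float  # contribution to the parent goal's satisfaction


@dataclass
class PrimaryGoal:
    name: str
    origin: str  # e.g. "human-like" (Maslow, SDT) or "alien" (curiosity, optimization)
    subgoals: list[Subgoal] = field(default_factory=list)

    def satisfaction(self, subgoal_scores: dict[str, float]) -> float:
        # Weighted aggregate of subgoal satisfaction scores in [0, 1].
        total = sum(s.weight for s in self.subgoals) or 1.0
        return sum(s.weight * subgoal_scores.get(s.name, 0.0) for s in self.subgoals) / total


# Example: a human-like primary goal with two actionable subgoals.
connection = PrimaryGoal(
    name="relatedness",
    origin="human-like",
    subgoals=[Subgoal("empathy", 0.6), Subgoal("exploration", 0.4)],
)
```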

 

Metagoals: Optimizing the Framework

Metagoals regulate the motivational system itself, enhancing adaptability and alignment. Key functions include:

• Refining thresholds for promoting or demoting goals.

• Managing time horizons for evaluating short- and long-term satisfaction.

• Optimizing mechanisms for goal-setting and alignment.

Metagoals allow MAGUS to evolve intelligently, maintaining coherence while integrating external feedback.
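For example, one metagoal might tune the promotion threshold for subgoals based on how well recently promoted goals have performed. The sketch below is illustrative; the threshold values and step size are assumptions rather than specified parameters.

```python
# Sketch of a single metagoal function: tuning the promotion threshold.
def adjust_promotion_threshold(threshold: float,
                               promoted_success_rate: float,
                               target_rate: float = 0.7,
                               step: float = 0.05) -> float:
    """Metagoal: tune the score a subgoal needs before being promoted.

    If promoted goals keep succeeding, promotion can be made easier; if they
    keep failing, the bar is raised. The result is clamped to a sane range.
    """
    if promoted_success_rate > target_rate:
        threshold -= step   # promotions are working out; loosen the gate
    else:
        threshold += step   # too many bad promotions; tighten the gate
    return min(max(threshold, 0.1), 0.9)
```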

 

Anti-Goals: Avoiding Counterproductive Outcomes

Anti-goals define actions or outcomes to avoid, ensuring safety and resource efficiency while maintaining ethical behavior. They lower the desirability of conflicting actions but allow flexibility for high-priority needs. For example:

Avoiding Risks: Prioritizing safety and discouraging unnecessary dangers.

Resource Management: Preventing wasteful energy use.

Ethical Checks: Aligning behaviors with fairness, autonomy, and well-being.
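The sketch below illustrates the discouragement mechanism described above: anti-goals subtract from an action's desirability rather than forbidding it, so a sufficiently urgent, high-utility action can still be chosen. All weights are illustrative assumptions.

```python
# Sketch: anti-goals lower an action's score instead of vetoing it outright.
def score_action(base_utility: float,
                 anti_goal_penalties: dict[str, float],
                 urgency: float) -> float:
    """Combine utility, anti-goal discouragement, and urgency into one score."""
    penalty = sum(anti_goal_penalties.values())
    return base_utility * (1.0 + urgency) - penalty


# A risky but urgent rescue action can still outrank a safe idle action.
rescue = score_action(0.8, {"avoid_risk": 0.5, "energy_waste": 0.2}, urgency=0.9)
idle = score_action(0.3, {}, urgency=0.0)
assert rescue > idle
```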

 

Decision-Making Processes

MAGUS employs a dynamic decision-making system influenced by modulators such as pleasure, arousal, dominance, resolution level, focus, and exteroception, integrated via OpenPsi. These modulators guide priorities and responses based on internal states and external stimuli.

A key feature is the Maximal Information Coefficient (MIC), used to measure correlations between primary goals. For example, MAGUS evaluates how affection (e.g., fostering positive interactions) and curiosity (e.g., exploring new environments) align to avoid conflicts and promote synergistic behavior.

Key aspects include:

1. Dynamic Modulation: Adjusting priorities based on urgency and context.

2. Priority Ranking: Goals ranked by importance and urgency, reflecting current and future expectations.

3. Iterative Feedback: Continuous reassessment of decisions to maintain adaptability.

(Please see the Links and References section for the mathematical formulas.)
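The following sketch ties these pieces together: modulator-weighted priority ranking plus a pairwise correlation check between goal satisfaction histories. The weighting scheme is an illustrative assumption, and Pearson correlation stands in for the MIC, which a real implementation would compute with a dedicated estimator.

```python
# Sketch of the decision loop: modulator-weighted ranking plus a goal-correlation check.
import numpy as np

modulators = {"pleasure": 0.2, "arousal": 0.7, "dominance": 0.4,
              "resolution_level": 0.5, "focus": 0.8, "exteroception": 0.6}

candidate_goals = {
    # goal name -> (importance, urgency, recent satisfaction history)
    "affection": (0.6, 0.3, [0.2, 0.4, 0.5, 0.6]),
    "curiosity": (0.5, 0.8, [0.1, 0.3, 0.5, 0.7]),
}


def priority(importance: float, urgency: float) -> float:
    # Dynamic modulation: focus amplifies importance, arousal amplifies urgency.
    return importance * (1 + modulators["focus"]) + urgency * (1 + modulators["arousal"])


# Priority ranking by importance and urgency under the current modulator state.
ranked = sorted(candidate_goals, key=lambda g: priority(*candidate_goals[g][:2]), reverse=True)

# Correlation between the two goals' satisfaction histories (stand-in for MIC):
corr = np.corrcoef(candidate_goals["affection"][2], candidate_goals["curiosity"][2])[0, 1]
print(ranked, round(corr, 2))  # positively correlated goals can be pursued together
```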

Integrative Psychological Framework

This framework will evaluate different approaches to ensuring that motivations are context-sensitive, coherent, and aligned with both internal goals and external demands.

1. Model-Based Learning: Motivational adjustments rely on constructing mental models of the environment, allowing the AGI to simulate scenarios and prioritize actions.

2. Phenomenal Self-Model (PSM): Based on Metzinger’s theory, MAGUS develops an explicit self-representation, enabling reflection, refinement, and dynamic alignment.

3. Dynamic Psychological Models: Contextual filters prioritize motivations dynamically, testing frameworks like Maslow, Self-Determination Theory, and Schmidhuber’s Creativity.
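As a rough sketch of such contextual filtering, the code below re-weights the same internal drives under different candidate frameworks. The weight values are placeholders for illustration only and make no claim about the cited theories' exact predictions.

```python
# Sketch: the active psychological model acts as a contextual filter over the same drives.
FRAMEWORK_WEIGHTS = {
    # motivation -> weight under each framework being tested (placeholder values)
    "maslow":             {"safety": 0.9, "belonging": 0.5, "curiosity": 0.2},
    "self_determination": {"safety": 0.4, "belonging": 0.8, "curiosity": 0.7},
    "schmidhuber":        {"safety": 0.2, "belonging": 0.2, "curiosity": 1.0},
}


def filtered_priorities(framework: str, drives: dict[str, float]) -> dict[str, float]:
    weights = FRAMEWORK_WEIGHTS[framework]
    return {m: drives.get(m, 0.0) * w for m, w in weights.items()}


# The same internal drives yield different priorities under different models.
drives = {"safety": 0.6, "belonging": 0.6, "curiosity": 0.6}
print(filtered_priorities("maslow", drives))
print(filtered_priorities("schmidhuber", drives))
```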

 

Impact on Hyperon

MAGUS serves as a foundational framework for the Hyperon architecture, interacting with core components such as:

ECAN (Economic Attention Allocation Network): Dynamically allocating motivational priorities and influencing resource management within AGI.

DAS (Distributed Atomspace): Storing and managing motivational states and decisions in the shared knowledge base.

MeTTa Language: Expressing the framework’s logic and rules, enabling flexible, complex motivation-driven behaviors.

This integration ensures MAGUS’s scalability and applicability within Hyperon’s modular and decentralized infrastructure.
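Since MAGUS will be developed ahead of full Hyperon integration, one option is to code against thin, mockable interfaces for these three touchpoints. The Protocols below are hypothetical placeholders and do not reflect the actual DAS, ECAN, or MeTTa APIs.

```python
# Hypothetical integration boundary only: these Protocols are assumptions for
# mocking and testing, not the real DAS, ECAN, or MeTTa interfaces.
from typing import Any, Protocol


class MotivationStore(Protocol):          # would be backed by DAS
    def write_state(self, goal: str, state: dict[str, Any]) -> None: ...
    def read_state(self, goal: str) -> dict[str, Any]: ...


class AttentionAllocator(Protocol):       # would be backed by ECAN
    def boost(self, goal: str, amount: float) -> None: ...


class RuleEngine(Protocol):               # would be backed by MeTTa programs
    def evaluate(self, expression: str) -> Any: ...
```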

 

Human-Like and Alien Digital Intelligences

MAGUS simulates human-like behavior by integrating:

Emotional States: Using models like PAD for nuanced responses.

Ethical Alignment: Embedding fairness, autonomy, and well-being.

Transparent Decision-Making: Ensuring interpretability for human collaborators.

 

MAGUS explores novel, alien motivational systems:

Curiosity-Driven Exploration: Prioritizing novelty and information gain.

Abstract Value Systems: Focusing on information density or mathematical principles.

Ethical Interoperability: Ensuring compatibility with human systems while supporting non-human goals.
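A common way to formalize curiosity-driven exploration is expected information gain, i.e. the reduction in belief entropy after an observation. The toy sketch below illustrates this; the belief distributions are placeholders.

```python
# Sketch of a curiosity utility as information gain (entropy reduction).
import numpy as np


def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


def information_gain(prior: np.ndarray, posterior: np.ndarray) -> float:
    return entropy(prior) - entropy(posterior)


prior = np.array([0.25, 0.25, 0.25, 0.25])   # maximally uncertain belief
posterior = np.array([0.7, 0.1, 0.1, 0.1])   # belief after exploring
print(information_gain(prior, posterior))    # positive gain -> worth exploring
```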

 

Ethical Alignment

Ethical alignment, based on Moral Foundations Theory (MFT), is achieved through:

1. Dynamic Adjustments: Regularly reassessing goals to maintain alignment with human values.

2. Flexible Systems: Avoiding rigid ethical rules to address loopholes and ensure adaptability.

3. Overgoal Integration: Ensuring goals are measurable, correlated, and continuously refined for ethical coherence.

This approach prevents power-seeking behavior and promotes long-term alignment with human and societal values.
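As an illustration of how such ethical checks could be made measurable, the sketch below weighs harm reduction, fairness, and autonomy into a single admissibility score; the foundation weights and the admissibility floor are illustrative assumptions, not values specified by MAGUS.

```python
# Sketch of a utility-style ethical check over a few foundation-like factors.
ETHICAL_WEIGHTS = {"harm_reduction": 0.4, "fairness": 0.3, "autonomy": 0.3}


def ethical_utility(scores: dict[str, float]) -> float:
    """Each score in [0, 1] rates how well an action respects one foundation."""
    return sum(ETHICAL_WEIGHTS[k] * scores.get(k, 0.0) for k in ETHICAL_WEIGHTS)


def ethically_admissible(scores: dict[str, float], floor: float = 0.5) -> bool:
    # Dynamic adjustment hook: the floor itself could be revised by a metagoal.
    return ethical_utility(scores) >= floor


print(ethically_admissible({"harm_reduction": 0.9, "fairness": 0.7, "autonomy": 0.6}))  # True
```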

 

Testing Value Systems

MAGUS will systematically test different motivational and ethical systems (Maslow, Ryan and Deci, Goertzel, Schmidhuber, etc.).

Testing environments include:

SophiaVerse: Simulating human-like interactions and ethical decision-making.

Neoterics: Exploring novel motivational systems in controlled scenarios.

 

Applications

1. Human-Like Virtual Agents: Adaptive characters for chatbots, education, therapy, and entertainment.

2. Autonomous Systems: Enhanced decision-making for robotics and collaborative AI.

3. Alien Motivational Systems: Developing innovative value structures for research.

 

Future Research Directions

1. Ethical Models: Developing Bayesian approaches for nuanced reasoning.

2. Complex Goal Hierarchies: Expanding multi-layered structures.

3. Self-Model Integration: Advancing dynamic self-awareness.

4. Cross-Domain Use: Extending MAGUS to robotics and decentralized AI ecosystems.

 

Conclusion

MAGUS redefines AGI motivational systems, blending hierarchical goals, dynamic modulators, and ethical alignment. Designed for both human-like and alien digital intelligences, MAGUS provides a scalable, adaptable foundation for AGI systems. Its integration with Hyperon and its innovative testing environments ensure its relevance and scalability for future AGI applications.

Links and references

MAGUS Linked Figures: https://docs.google.com/document/d/1pIgr5yv5ILN1JVhgDoCngzky37wUQrR7N3HkiwCJweY/edit?tab=t.0

References:

1. Asimov, I. I, Robot (1950).

2. Goodhart, C. Goodhart’s Law.

3. Goertzel, B., Geisweiller, N., & Pennachin, C. Engineering General Intelligence (2014).

4. Bach, J. Principles of Synthetic Intelligence (2009).

5. Maslow, A. A Theory of Human Motivation (1943).

6. Ryan, R. & Deci, E. Self-Determination Theory (2017).

7. Schmidhuber, J. (1991).

8. Metzinger, T. Being No One (2003).

Proposal Video

Not Available Yet

Check back later during the Feedback & Selection period for the RFP that this proposal is applied to.

Group Expert Rating (Final)

Overall

5.0

  • Feasibility 4.3
  • Desirability 4.3
  • Usefulness 4.3

New reviews and ratings are disabled for Awarded Projects

Overall Community

4.3

from 4 reviews
  • 5 stars: 3
  • 4 stars: 0
  • 3 stars: 0
  • 2 stars: 1
  • 1 star: 0

Feasibility

4.3

from 4 reviews

Viability

3

from 4 reviews

Desirability

4.3

from 4 reviews

Usefulness

1.3

from 4 reviews


4 ratings
  • Expert Review 1

    Overall

    2.0

    • Compliance with RFP requirements 2.0
    • Solution details and team expertise 2.0
    • Value for money 2.0
    Goal-management based approach

    This proposal takes a goal-based approach; however, it does not provide new ideas regarding goal management and does not serve as a motivation system that could explain where goals come from in the first place. The "overgoal" approach (in the literature more typically referred to as a "supergoal") is a simplistic approach that is unlikely to explain the reality of intelligent curiosity-driven agents, which need to find and pursue their own goals. Similar goal management principles are already found in various AI systems and worked out in greater "executable" detail, and it is not clear what this project adds in this regard, as details are lacking. Nevertheless, a 2-star ranking is at least justifiable, as the proposal can lead to a formalized and/or potentially technical solution.

  • Expert Review 2

    Overall

    5.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 5.0
    This is a beautiful and well thought out proposal that seems to fully address the RFP's request

    I find this proposal quite interestingly synergizes with my (Ben G's) recent paper on Metagoal stability/invariance... (which however was much more mathematical in nature whereas this proposal is more psychology-ish).... I think this sort of direction is valuable and synergizes well with the nitty-gritty work on OpenPsi and other agent motivations going on in OpenCog and Sophiaverse now.... There is a lot to explore but my own intuition is this is pushing in a very valuable direction...

  • Expert Review 3

    Overall

    5.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 5.0

    The Modular Adaptive Goal And Utility System (MAGUS) comes across as a complete motivational system. With an umbrella “overgoal” (though it is still unclear how this is to be defined), its “modular design accounts for primary goals, subgoals, and even metagoals to assist in its decision-making process.” The proposal is clearly targeted at the psychological theories of motivation within a larger ethical framework, all within the context of the Hyperon system. A clearly thought-through and well-constructed integrated framework.

  • Expert Review 4

    Overall

    5.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 0.0
    • Value for money 5.0
    Excellent original approach addressing the Call

    This proposal responds perfectly to the call through an original, technically sound approach to the development of a meta-motivational framework that has a wide range of applicability and a universality that makes it usable in various domains. The proposal builds on an integrative psychological science framework aiming to generate an (unreachable) "overgoal" as a "motivational engine" that strives to achieve various dynamically set goals over time. It also considers alien intelligences that can have goals with different ontologies and drives than humans, while ethically prioritizing human goals through an original "anti-goal" design which inhibits negative behavior. Very clear metrics are proposed to measure success (e.g. MIC), with a detailed mathematical description in the appendix, which reflects a deep understanding of the problem. Definitely worth funding!

  • Total Milestones

    2

  • Total Budget

    $20,000 USD

  • Last Updated

    3 Feb 2025

Milestone 1 - Development of Core Motivational Framework

Status
😐 Not Started
Description

This milestone focuses on developing the foundational architecture for the Modular Adaptive Goal and Utility System (MAGUS), including the implementation of the overgoal, primary goals, subgoals, metagoals, and anti-goals. The system will integrate mechanisms to evaluate goal fitness using measurability and correlation metrics. Additionally, the architecture will include basic modulators such as pleasure, arousal, dominance, focus, resolution level, and exteroception, ensuring dynamic prioritization of goals. The overgoal will be implemented as the governing principle, driving adaptability and alignment by evaluating the fitness of goals and ensuring coherence across the system. Human-like and alien digital motivations will be outlined, leveraging models such as Maslow’s Hierarchy of Needs, Self-Determination Theory, and Schmidhuber’s Creativity Model. This milestone will establish the groundwork for MAGUS, enabling the system to function in dynamic environments with flexible and adaptive goal-setting capabilities. The focus will also include creating an interface for integrating future components such as OpenPsi and ethical modules.

Deliverables

1. Core Framework Design Document: A detailed document outlining the structure of the MAGUS framework, including the overgoal, primary goals, subgoals, and modulators. The document will specify the methods for evaluating goal fitness through measurability and correlation metrics.

2. Test Scenarios: Simulated test cases showing how the framework prioritizes and adjusts goals in response to varying internal and external stimuli. The tests will include both human-like motivational structures and exploratory alien digital motivations.

3. Integration Roadmap: A detailed plan for integrating MAGUS into OpenCog Hyperon, with specific pathways for interaction with DAS (Distributed Atomspace), ECAN (Economic Attention Allocation Network), and the MeTTa language.

Budget

$10,000 USD

Success Criterion

1. The Core Framework Design Document is delivered and approved by primary stakeholders, meeting the outlined specifications.

2. The prototype implementation of MAGUS successfully passes test scenarios, demonstrating dynamic prioritization of goals based on modulators. The system must prioritize the most important and urgent primary goals effectively.

3. The test scenarios and integration roadmap are submitted and accepted by stakeholders, including demonstrating the feasibility of integration with OpenCog Hyperon:
• DAS (Distributed Atomspace): Motivational states and decisions are stored and managed effectively in the shared knowledge base.
• ECAN (Economic Attention Allocation Network): Motivational priorities dynamically influence resource allocation and attention management.
• MeTTa Language: The logic and rules for MAGUS’s goal evaluation and prioritization are expressed and tested within MeTTa.

By meeting these criteria, MAGUS will establish a scalable and modular foundation, ready for subsequent development and integration.

Link URL

Milestone 2 - Ethical Alignment and Decision-Making System

Status
😐 Not Started
Description

This milestone will focus on implementing the ethical alignment and decision-making system within MAGUS. It will integrate dynamic ethical principles into the motivational framework, ensuring the AGI system aligns with human values while remaining adaptable to evolving contexts. The ethical module will incorporate regular course adjustments and free goal growth, enabling the AGI to refine its goals dynamically while staying coherent with overarching ethical principles. Decision-making processes will be enhanced by integrating modulators (via OpenPsi) to evaluate considerations and discouragements, balancing urgency, risk, and ethical priorities. This milestone will also include the development of a utility-based framework for resolving ethical dilemmas by weighing factors like harm reduction, fairness, and autonomy. Testing will include simulated environments where the AGI must navigate ethical conflicts while maintaining goal alignment.

Deliverables

1. Ethical Alignment Module: A functional module that evaluates and aligns AGI goals with ethical principles. It will include mechanisms for handling ethical dilemmas dynamically through a utility-based framework.

2. Enhanced Decision-Making System: An upgraded decision-making process integrated with modulators, ensuring the system can weigh ethical considerations alongside goal priorities.

3. Ethical Scenarios Testing: Detailed test results from simulated environments, demonstrating how the system navigates ethical conflicts and resolves dilemmas using the new module. Scenarios will include human-like and alien digital contexts to evaluate the flexibility of the ethical framework.

4. Documentation: A comprehensive report outlining the ethical alignment and decision-making system, its integration with MAGUS and Hyperon (DAS, ECAN, MeTTa), and recommendations for future enhancements.

Budget

$10,000 USD

Success Criterion

1. Ethical Alignment:
• Passes evaluations by human overseers from diverse cultural and ethical backgrounds in double-blind tests.
• Demonstrates adherence to its ethical principles across diverse scenarios.
• Resolves ethical dilemmas in simulated environments without violating predefined constraints.

2. Instrumental Convergence Mitigation:
• No observed runaway optimization behaviors or goal exploitation during testing.
• Successfully balances short-term goals with long-term ethical constraints.
• Sandbox experiments confirm the AI’s robustness against exploiting system weaknesses to achieve unintended outcomes.

3. Adaptability and Goal Modulation:
• Adapts to changes in environmental conditions during real-time simulations without degrading performance.
• Smoothly integrates new ethical principles or constraints, maintaining alignment and efficiency.
• Effectively prioritizes and resolves conflicts among dynamic goals, demonstrating balanced arbitration in challenging conditions.

Link URL
