Modular Adaptive Goal and Utility System (MAGUS)

Anna Mikeda
Project Owner


Status

  • Overall Status

    🛠️ In Progress

  • Funding Transferred

    $10,000 USD

  • Max Funding Amount

    $20,000 USD

Funding Schedule

  • Milestone Release 1: $10,000 USD (Transfer Complete, 16 May 2025)
  • Milestone Release 2: $0 USD (Pending, 16 May 2025)
  • Milestone Release 3: $5,000 USD (Pending, TBD)
  • Milestone Release 4: $5,000 USD (Pending, TBD)

Project AI Services

No Service Available

Overview

The Modular Adaptive Goal and Utility System (MAGUS) introduces an innovative approach to AGI motivation by integrating hierarchical goal structures, dynamic modulators, and ethical adaptability. Our emphasis on the “overgoal” ensures continuous alignment of motivations, fostering coherence and adaptability across diverse contexts. MAGUS blends advanced psychological theories with OpenPsi’s dynamic modulators, enabling nuanced decision-making while allowing free goal growth and regular adjustments. This framework bridges human-like and alien digital motivations, creating a scalable system that evolves ethically and pragmatically in unpredictable environments.

RFP Guidelines

Develop a framework for AGI motivation systems

Complete & Awarded
  • Type SingularityNET RFP
  • Total RFP Funding $40,000 USD
  • Proposals 2
  • Awarded Projects 2
SingularityNET
Aug. 13, 2024

Develop a modular and extensible framework for integrating various motivational systems into AGI architectures, supporting both human-like and alien digital intelligences. This could be done as a highly detailed and precise specification, or as a relatively simple software prototype with suggestions for generalization and extension.

Proposal Description

Project details

One can imagine a variety of goal or motivational systems for AI agents.  Isaac Asimov famously devised his Three Laws of Robotics, and then dedicated many of his future writings to exposing the flaws in such a simplistic system.  Charles Goodhart observed that, “When a measure becomes a target, it ceases to be a good measure.”  Many previous proposals for motivational designs collapse under close examination or once pressure is placed upon them for control purposes.  They lack the flexibility and nuance of humans, and thus fail to replicate the full range of adaptability and generality of the human mind.  If an agent isn’t motivated to change for the better, how will it?

In this proposal, we outline a core motivational framework that includes a process by which an AI agent might change its goals. This is by design, as an agent will tend toward instrumental convergence for any fixed set of goals. Because the agent can change its goals, this presents a risk of conflicting goals between multiple agents or between humans and AI agents. Thus, we propose control mechanisms for an AI agent to align new goals with those of other AI agents or its human collaborators. This requires transparency and scalable oversight to overcome the ethical risks of power-seeking behaviors and instrumental strategies.

The Modular Adaptive Goal and Utility System (MAGUS) addresses these concerns by codifying an overgoal to ensure the measurability required for transparency and the express requirement for goal alignment via satisfaction correlation. Its modular design accounts for primary goals, subgoals, and even metagoals to assist in its decision-making process. This approach is well established in the gaming industry, where decision-scoring systems with considerations and discouragements drive the human-like and alien behavior of non-player characters. Finally, we show how such a system can serve as the foundation for future personality and emotionality systems, or as part of a larger ethical framework for AI self-actualization.

 

Core Framework Design

Overgoal: Governing the System

At the heart of MAGUS is the overgoal, a meta-principle guiding goal-setting and evaluation. It ensures:

1. Measurability: Goals have clear, trackable metrics.

2. Correlation: Goals align with or complement others, maintaining coherence across the system.

The overgoal remains intentionally unsatisfied, driving continual reassessment and adaptation to new challenges and contexts.
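
To make the two criteria concrete, here is a minimal sketch (in Python, with hypothetical names such as `Goal` and `overgoal_admits` that are not part of the MAGUS specification) of how the overgoal might admit a candidate goal only if it is measurable and does not anti-correlate with the goals already held:

```python
# Hypothetical sketch of the overgoal's admission test: a candidate goal is
# accepted only if it is measurable (exposes a satisfaction metric) and its
# satisfaction history does not anti-correlate with existing goals.
# Names and thresholds are illustrative, not part of the MAGUS specification.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Goal:
    name: str
    satisfaction_history: list[float] = field(default_factory=list)  # values in [0, 1]

    def is_measurable(self) -> bool:
        # Measurability here simply means the goal reports a numeric metric.
        return len(self.satisfaction_history) > 0

def correlation(a: Goal, b: Goal) -> float:
    n = min(len(a.satisfaction_history), len(b.satisfaction_history))
    if n < 2:
        return 0.0
    return float(np.corrcoef(a.satisfaction_history[:n],
                             b.satisfaction_history[:n])[0, 1])

def overgoal_admits(candidate: Goal, existing: list[Goal],
                    min_corr: float = 0.0) -> bool:
    """Accept a goal only if it is measurable and does not conflict
    (anti-correlate) with the satisfaction of goals already in the system."""
    if not candidate.is_measurable():
        return False
    return all(correlation(candidate, g) >= min_corr for g in existing)
```

In the full system the correlation check would use the Maximal Information Coefficient described below rather than a simple Pearson coefficient.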


Primary Goals and Subgoals

MAGUS organizes motivations hierarchically:

Primary Goals: High-level drivers derived from:

Human-Like Motivations: Inspired by frameworks like Maslow’s Hierarchy of Needs, Self-Determination Theory, and Goertzel’s Joy, Growth, and Choice.

Alien Motivations: Novel value systems prioritizing curiosity (Schmidhuber’s Creativity), optimization, or abstract mathematical principles.

Subgoals: Actionable components of primary goals, such as empathy or exploration, ensuring flexibility across diverse scenarios.
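
As an illustration only, the hierarchy above could be represented as a simple tree of goal nodes; the field names below (`priority`, `satisfaction`, `subgoals`) are assumptions for this sketch rather than identifiers from the project:

```python
# Illustrative data structure for the MAGUS goal hierarchy (hypothetical names).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GoalNode:
    name: str
    priority: float = 0.5          # current importance in [0, 1]
    satisfaction: float = 0.0      # current satisfaction in [0, 1]
    parent: Optional["GoalNode"] = None
    subgoals: list["GoalNode"] = field(default_factory=list)

    def add_subgoal(self, child: "GoalNode") -> "GoalNode":
        child.parent = self
        self.subgoals.append(child)
        return child

# A human-like primary goal with actionable subgoals:
relatedness = GoalNode("relatedness (Self-Determination Theory)", priority=0.8)
relatedness.add_subgoal(GoalNode("show empathy in dialogue"))
relatedness.add_subgoal(GoalNode("maintain long-term rapport"))

# An "alien" primary goal:
curiosity = GoalNode("information gain (Schmidhuber-style curiosity)", priority=0.6)
curiosity.add_subgoal(GoalNode("explore unvisited states"))
```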

 

Metagoals: Optimizing the Framework

Metagoals regulate the motivational system itself, enhancing adaptability and alignment. Key functions include:

• Refining thresholds for promoting or demoting goals.

• Managing time horizons for evaluating short- and long-term satisfaction.

• Optimizing mechanisms for goal-setting and alignment.

Metagoals allow MAGUS to evolve intelligently, maintaining coherence while integrating external feedback.
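
A hedged example of what one such metagoal might look like in code: a routine that tunes the promotion/demotion thresholds from recent satisfaction history. The window size, target rate, and step size are illustrative values, not parameters from the proposal:

```python
# Hypothetical metagoal that adjusts promotion/demotion thresholds based on
# how often goals have recently crossed the promotion bar. Illustrative only.
def adjust_thresholds(satisfaction_history: list[float],
                      promote_at: float, demote_at: float,
                      target_rate: float = 0.2, step: float = 0.05,
                      window: int = 20) -> tuple[float, float]:
    """If goals are promoted too often (churn), raise the promotion bar;
    if almost nothing gets promoted, lower it. Demotion moves symmetrically."""
    recent = satisfaction_history[-window:]
    if not recent:
        return promote_at, demote_at
    promotion_rate = sum(s >= promote_at for s in recent) / len(recent)
    if promotion_rate > target_rate:
        promote_at = min(1.0, promote_at + step)
        demote_at = min(promote_at, demote_at + step)
    elif promotion_rate < target_rate / 2:
        promote_at = max(0.0, promote_at - step)
        demote_at = max(0.0, demote_at - step)
    return promote_at, demote_at
```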

 

Anti-Goals: Avoiding Counterproductive Outcomes

Anti-goals define actions or outcomes to avoid, ensuring safety and resource efficiency while maintaining ethical behavior. They lower the desirability of conflicting actions but allow flexibility for high-priority needs. For example:

Avoiding Risks: Prioritizing safety and discouraging unnecessary dangers.

Resource Management: Preventing wasteful energy use.

Ethical Checks: Aligning behaviors with fairness, autonomy, and well-being.
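
For instance, the anti-goals listed above could be realized as soft penalties on an action's desirability: strong enough to discourage conflicting actions, yet overridable by a sufficiently urgent need. The weighting below is an assumption made for illustration, not the formula used by MAGUS:

```python
# Sketch of an anti-goal as a soft penalty that lowers desirability rather
# than forbidding an action outright. Weights are illustrative assumptions.
def desirability(base_score: float, anti_goal_penalties: list[float],
                 urgency: float) -> float:
    """base_score: how well the action serves active goals (0..1)
    anti_goal_penalties: penalties from triggered anti-goals (each 0..1)
    urgency: 0..1; high urgency attenuates the penalties without removing them."""
    penalty = sum(anti_goal_penalties)
    return base_score - penalty * (1.0 - 0.5 * urgency)

# Example: a risky shortcut scores 0.9 for goal progress but triggers the
# "avoid unnecessary danger" anti-goal (penalty 0.6). At low urgency it is
# heavily discounted; in an emergency it may still be chosen.
print(desirability(0.9, [0.6], urgency=0.1))  # ~0.33
print(desirability(0.9, [0.6], urgency=1.0))  # 0.6
```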

 

Decision-Making Processes

MAGUS employs a dynamic decision-making system influenced by modulators such as pleasure, arousal, dominance, resolution level, focus, and exteroception, integrated via OpenPsi. These modulators guide priorities and responses based on internal states and external stimuli.

A key feature is the Maximal Information Coefficient (MIC), used to measure correlations between primary goals. For example, MAGUS evaluates how affection (e.g., fostering positive interactions) and curiosity (e.g., exploring new environments) align to avoid conflicts and promote synergistic behavior.
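
A rough stand-in for this computation, shown purely for illustration: mutual information of the two satisfaction series on a fixed grid, normalised to roughly the [0, 1] range of MIC. A proper MIC implementation searches over many grid resolutions; the function name and example data here are hypothetical:

```python
# Crude grid-based approximation of MIC for comparing two goal-satisfaction
# series. Illustrative only; not the formula from the linked appendix.
import numpy as np

def grid_mi(x: np.ndarray, y: np.ndarray, bins: int = 8) -> float:
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    return float(mi / np.log(bins))   # roughly in [0, 1], like MIC

# Affection vs. curiosity satisfaction over 200 steps: if the two series move
# together, the score is high and the goals can be treated as synergistic.
t = np.linspace(0, 6 * np.pi, 200)
affection = 0.5 + 0.4 * np.sin(t) + 0.05 * np.random.randn(200)
curiosity = 0.5 + 0.4 * np.sin(t + 0.3) + 0.05 * np.random.randn(200)
print(grid_mi(affection, curiosity))
```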

Key aspects include:

1. Dynamic Modulation: Adjusting priorities based on urgency and context.

2. Priority Ranking: Goals ranked by importance and urgency, reflecting current and future expectations.

3. Iterative Feedback: Continuous reassessment of decisions to maintain adaptability.

(Please see Links and References for the mathematical formulas.)
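
The appendix formulas are the authoritative definition; purely as a hedged sketch of how these three aspects could fit together, the following scores candidate actions from goal-weighted considerations, modulator adjustments, and discouragements, then ranks them. The modulator weighting scheme is an assumption made for this sketch:

```python
# Hypothetical decision-scoring loop: considerations weighted by goal priority
# and modulators, minus discouragements, then priority ranking. Illustrative
# only; not the scoring formula from the MAGUS appendix.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    considerations: dict[str, float]    # goal name -> expected contribution (0..1)
    discouragements: float = 0.0        # total anti-goal penalty (0..1)

def score(action: Action, goal_priority: dict[str, float],
          modulators: dict[str, float]) -> float:
    # Arousal amplifies urgency; high focus discounts off-target goals.
    arousal = modulators.get("arousal", 0.5)
    focus = modulators.get("focus", 0.5)
    top_goal = max(goal_priority, key=goal_priority.get)
    total = 0.0
    for goal, contribution in action.considerations.items():
        w = goal_priority.get(goal, 0.0)
        if focus > 0.7 and goal != top_goal:
            w *= 0.5
        total += w * contribution
    return (1.0 + 0.5 * arousal) * total - action.discouragements

def rank(actions: list[Action], goal_priority: dict[str, float],
         modulators: dict[str, float]) -> list[Action]:
    # Iterative feedback would re-run this ranking as states and stimuli change.
    return sorted(actions, key=lambda a: score(a, goal_priority, modulators),
                  reverse=True)
```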

Integrative Psychological Framework

This part of the framework will investigate different approaches to ensure that motivations are context-sensitive, coherent, and aligned with both internal goals and external demands.

1. Model-Based Learning: Motivational adjustments rely on constructing mental models of the environment, allowing the AGI to simulate scenarios and prioritize actions.

2. Phenomenal Self-Model (PSM): Based on Metzinger’s theory, MAGUS develops an explicit self-representation, enabling reflection, refinement, and dynamic alignment.

3. Dynamic Psychological Models: Contextual filters prioritize motivations dynamically, testing frameworks like Maslow, Self-Determination Theory, and Schmidhuber’s Creativity.

 

Impact on Hyperon

MAGUS serves as a foundational framework for the Hyperon architecture, interacting with core components such as:

ECAN (Economic Attention Allocation Network): Dynamically allocating motivational priorities and influencing resource management within AGI.

DAS (Distributed Atomspace): Storing and managing motivational states and decisions in the shared knowledge base.

MeTTa Language: Expressing the framework’s logic and rules, enabling flexible, complex motivation-driven behaviors.

This integration ensures MAGUS’s scalability and applicability within Hyperon’s modular and decentralized infrastructure.

 

Human-Like and Alien Digital Intelligences

MAGUS simulates human-like behavior by integrating:

Emotional States: Using models like PAD for nuanced responses.

Ethical Alignment: Embedding fairness, autonomy, and well-being.

Transparent Decision-Making: Ensuring interpretability for human collaborators.

 

MAGUS explores novel, alien motivational systems:

Curiosity-Driven Exploration: Prioritizing novelty and information gain.

Abstract Value Systems: Focusing on information density or mathematical principles.

Ethical Interoperability: Ensuring compatibility with human systems while supporting non-human goals.

 

Ethical Alignment

Ethical alignment, based on Moral Foundations Theory (MFT), is achieved through:

1. Dynamic Adjustments: Regularly reassessing goals to maintain alignment with human values.

2. Flexible Systems: Avoiding rigid ethical rules to address loopholes and ensure adaptability.

3. Overgoal Integration: Ensuring goals are measurable, correlated, and continuously refined for ethical coherence.

This approach prevents power-seeking behavior and promotes long-term alignment with human and societal values.

 

Testing Value Systems

MAGUS will systematically test different motivational and ethical systems (Maslow, Ryan and Deci, Goertzel, Schmidhuber, etc.).

Testing environments include:

SophiaVerse: Simulating human-like interactions and ethical decision-making.

Neoterics: Exploring novel motivational systems in controlled scenarios.

 

Applications

1. Human-Like Virtual Agents: Adaptive characters for chatbots, education, therapy, and entertainment.

2. Autonomous Systems: Enhanced decision-making for robotics and collaborative AI.

3. Alien Motivational Systems: Developing innovative value structures for research.

 

Future Research Directions

1. Ethical Models: Developing Bayesian approaches for nuanced reasoning.

2. Complex Goal Hierarchies: Expanding multi-layered structures.

3. Self-Model Integration: Advancing dynamic self-awareness.

4. Cross-Domain Use: Extending MAGUS to robotics and decentralized AI ecosystems.

 

Conclusion

MAGUS redefines AGI motivational systems, blending hierarchical goals, dynamic modulators, and ethical alignment. Designed for both human-like and alien digital intelligences, MAGUS provides a scalable, adaptable foundation for AGI systems. Its integration with Hyperon and its innovative testing environments ensure its relevance and scalability for future AGI applications.

Links and references

MAGUS Linked Figures: https://docs.google.com/document/d/1pIgr5yv5ILN1JVhgDoCngzky37wUQrR7N3HkiwCJweY/edit?tab=t.0

References:

1. Asimov, I. I, Robot (1950).

2. Goodhart, C. Goodhart’s Law.

3. Goertzel, B., Geisweiller, N., & Pennachin, C. Engineering General Intelligence.

4. Bach, J. Principles of Synthetic Intelligence (2009).

5. Maslow, A. A Theory of Human Motivation (1943).

6. Ryan, R. & Deci, E. Self-Determination Theory (2017).

7. Schmidhuber, J. (1991).

8. Metzinger, T. Being No One (2003).

Proposal Video

Not Available Yet


Group Expert Rating (Final)

Overall

5.0

  • Compliance with RFP requirements 4.3
  • Solution details and team expertise 4.3
  • Value for money 4.3

New reviews and ratings are disabled for Awarded Projects

Overall Community

4.3

from 5 reviews
  • 5 stars: 3
  • 4 stars: 0
  • 3 stars: 0
  • 2 stars: 1
  • 1 star: 0

Feasibility

4.3

from 5 reviews

Viability

3

from 5 reviews

Desirability

4.3

from 5 reviews

Usefulness

1.3

from 5 reviews


5 ratings
  • Expert Review 1

    Overall

    2.0

    • Compliance with RFP requirements 2.0
    • Solution details and team expertise 2.0
    • Value for money 0.0
    Goal-management based approach

    This proposal takes a goal-based approach; however, it does not provide new ideas regarding goal management and does not serve as a motivation system that could explain where goals come from in the first place. The "overgoal" approach (in the literature more typically referred to as a "supergoal") is a simplistic approach that is unlikely to explain the reality of intelligent, curiosity-driven agents, which need to find and pursue their own goals. Similar goal-management principles are already found in various AI systems and are worked out in greater "executable" detail; it is not clear what this project adds in that regard, as those details are lacking. Nevertheless, a 2-star rating is at least justifiable, as the work can lead to a formalized and/or potentially technical solution.

  • Expert Review 2

    Overall

    5.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 0.0
    This is a beautiful and well thought out proposal that seems to fully address the RFP's request

    I find this proposal quite interestingly synergizes with my (Ben G's) recent paper on Metagoal stability/invariance... (which however was much more mathematical in nature whereas this proposal is more psychology-ish).... I think this sort of direction is valuable and synergizes well with the nitty-gritty work on OpenPsi and other agent motivations going on in OpenCog and Sophiaverse now.... There is a lot to explore but my own intuition is this is pushing in a very valuable direction...

  • Expert Review 3

    Overall

    5.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 0.0

    The Modular Adaptive Goal And Utility System (MAGUS) comes across as a complete motivational system. With an umbrella “overgoal” (though it is still unclear how this is to be defined), its “modular design accounts for primary goals, subgoals, and even metagoals to assist in its decision-making process.” The proposal is clearly targeted at the psychological theories of motivation within a larger ethical framework, all within the context of the Hyperon system. A clearly thought-through and constructed integrated framework.

  • Expert Review 4

    Overall

    5.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 5.0
    Excellent original approach addressing the Call

    This proposal responds perfectly to the call through an original, technically sound approach to the development of a meta-motivational framework that has a wide range of applicability and a universality making it usable in various domains. The proposal builds on an integrative psychological science framework aiming to generate an (unreachable) "overgoal" as a "motivational engine" that strives to achieve various dynamically set goals over time. It also considers alien intelligences that can have goals with different ontologies and drives than humans, while ethically prioritizing human goals through an original "anti-goal" design which inhibits negative behavior. Very clear metrics are proposed to measure success (e.g. MIC), with a detailed mathematical description in the appendix, which reflects a deep understanding of the problem. Definitely worth funding!

  • Total Milestones

    4

  • Total Budget

    $20,000 USD

  • Last Updated

    15 Sep 2025

Milestone 1 - Core Framework, Goal Modulation, and Decision Prototyping

Status
😀 Completed
Description

This milestone establishes the foundational architecture for MAGUS, focusing on the overgoal, primary goals, subgoals, considerations, discouragements, and modulators. These components will form the basis for dynamic goal prioritization, ensuring the system can adjust motivations based on internal states and external stimuli. The modulators—including pleasure, arousal, dominance, focus, resolution level, and exteroception—will be integrated to influence decision scoring and prioritization, allowing for adaptive responses. Considerations (factors promoting decisions) and discouragements (constraints limiting undesirable behaviors) will refine goal selection, ensuring a balance between immediate priorities and long-term coherence. Additionally, early decision-making prototyping will begin, allowing for simulated evaluations of goal selection and responsiveness. The framework will remain modular to allow future expansion into broader AGI architectures.

Deliverables

1. Core Framework Design Document – A structured document outlining:
   • Overgoal mechanics and their function in regulating primary and subgoals.
   • Definition and interaction of modulators in decision-making.
   • Methods for evaluating goal fitness using measurability and correlation.
2. Prototype Implementation – A basic functional version of MAGUS with:
   • Dynamic goal selection and prioritization based on modulators.
   • Initial decision-scoring system integrating considerations and discouragements.

Budget

$10,000 USD

Milestone 2 - Testing & Exploratory Prototyping of Adaptive Goal Structures

Status
🧐 In Progress
Description

This milestone focuses on testing and refining MAGUS’s adaptive goal structures in simulated environments, ensuring the system dynamically adjusts based on internal states and external stimuli. Additionally, exploratory prototyping will begin, investigating how MAGUS might interact with symbolic and sub-symbolic reasoning architectures. This includes studying potential interfaces with cognitive frameworks such as OpenCog Hyperon, PLN, and NARS, as well as more general AI decision-making systems. While no direct integration will be attempted, this milestone will evaluate the feasibility of such connections, ensuring MAGUS remains modular and adaptable to evolving AGI architectures.

Deliverables

1. Test Scenarios & Results – Simulated tests validating MAGUS’s ability to adjust goals dynamically.
2. Exploratory Prototyping Report – A document assessing potential future integrations with reasoning and planning architectures.
3. Refinement of Goal Fitness Metrics – Adjustments to measurability and correlation scoring, improving system adaptability.

Budget

$0 USD


Milestone 3 - Metagoals, Anti-Goals, and Early Planning Prototyping

Status
🧐 In Progress
Description

This milestone expands the decision-making framework by introducing metagoals and anti-goals, allowing MAGUS to self-optimize and regulate goal-setting dynamically. Metagoals enable the system to refine goal promotion and demotion mechanisms, optimizing long-term coherence. Anti-goals serve as constraints, discouraging actions that conflict with overarching priorities (e.g., energy efficiency, risk avoidance). Additionally, given the uncertainty around Hyperon’s action planning system, this milestone explores basic goal-directed action planning, potentially implementing a simple planner or behavior tree to demonstrate MAGUS in action.
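
As a hedged illustration of the "simple planner" option mentioned above (not the Hyperon planner, and with hypothetical names), a greedy forward search could repeatedly apply whichever action brings the state closest to a goal:

```python
# Minimal goal-directed planner sketch: greedy forward search over actions
# described as effect deltas. Purely illustrative; not a MAGUS deliverable API.
def greedy_plan(state: dict[str, float], goal: dict[str, float],
                actions: dict[str, dict[str, float]], max_steps: int = 10):
    """state/goal: variable -> value; actions: name -> effects (value deltas)."""
    def distance(s: dict[str, float]) -> float:
        return sum(abs(goal[k] - s.get(k, 0.0)) for k in goal)

    plan = []
    for _ in range(max_steps):
        if distance(state) < 1e-6:
            break
        # Pick the action whose simulated effects most reduce distance to goal.
        best = min(actions, key=lambda a: distance(
            {k: state.get(k, 0.0) + actions[a].get(k, 0.0) for k in goal}))
        for k, dv in actions[best].items():
            state[k] = state.get(k, 0.0) + dv
        plan.append(best)
    return plan

# Example: reach "knowledge" 1.0 while keeping "energy" near 0.5.
print(greedy_plan({"knowledge": 0.0, "energy": 1.0},
                  {"knowledge": 1.0, "energy": 0.5},
                  {"explore": {"knowledge": 0.5, "energy": -0.25},
                   "rest": {"energy": 0.25}}))
```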

Deliverables

1. Metagoals & Anti-Goals Module – Enabling self-regulation and long-term goal refinement.
2. Early Planning Prototype – A minimal implementation of goal-directed action planning, either as a standalone module or an interface to existing planners.
3. Updated Decision-Making System – Enhanced decision-scoring to account for long-term strategy adjustments using metagoals.

Budget

$5,000 USD


Milestone 4 - Ethical Testing, Full System Evaluation, and Research Paper

Status
🧐 In Progress
Description

The final milestone focuses on testing MAGUS’s adaptability, ethical constraints, and decision-making robustness. The system will undergo simulated ethical dilemmas to validate its ability to balance autonomy, fairness, and risk mitigation. Additionally, this milestone will assess the feasibility of integrating MAGUS with existing AI architectures, particularly how it could interface with symbolic reasoning and planning systems. If applicable, exploratory tests will be conducted with general AI frameworks to refine integration pathways. A formal research paper summarizing MAGUS’s development, testing, and future directions will be prepared, ensuring broad dissemination of findings.

Deliverables

1. Ethical Scenarios Testing Results – Case studies evaluating AGI responses to complex decision-making conditions.
2. Final System Evaluation Report – Assessing MAGUS’s adaptability and potential integration pathways.
3. Research Paper – A structured publication summarizing the project’s key findings and its implications for future AGI development.

Budget

$5,000 USD


Join the Discussion (1)


1 Comment
  • cybermancer
    Apr 3, 2025 | 7:30 PM

    Hello Anna, can I collaborate with you on this project as a volunteer? I was very interested in submitting an RFP but didn't have enough time to put together a strong proposal. I do have some experience with some of the concepts described in your proposal, e.g., HGN, planning, machine ethics (consequentialism, deontology) and others. Please reach out to me, so that we can discuss further.
