
Utilize LLMs for modeling within MOSES

RFP Owner: SingularityNET


Utilize LLMs for modeling and generation within MOSES (Meta-Optimizing Semantic Evolutionary Search)

  • Type SingularityNET RFP
  • Total RFP Funding $150,000 USD
  • Proposals 9
  • Awarded Projects 1

Overview

  • Est. Complexity

    💪 50 / 100

  • Est. Execution Time

    ⏱️ 6 Months

  • Proposal Winners

    🏆 Multiple

  • Max Funding / Proposal

    $80,000 USD

RFP Details

Short summary

This RFP invites proposals to explore the integration of LLMs into the MOSES evolutionary algorithm. Researchers can pursue one of several approaches, including generation modeling, fitness function learning, fitness estimation, and investigation of domain-independent “cognitively motivated” fitness functions, or they can propose new, innovative ways to leverage LLMs to enhance MOSES's capabilities within the OpenCog Hyperon framework.

Main purpose

The purpose of this RFP is to pioneer the use of LLMs in conjunction with MOSES to improve program generation, cross-domain learning, generic fitness functions, and fitness evaluation. The goal is to fund research that advances the intersection of LLMs and evolutionary algorithms, contributing to the broader development of Artificial General Intelligence (AGI) within SingularityNET's ecosystem.

Long description

Context and background: 

SingularityNET Foundation, in collaboration with other partners such as the OpenCog Foundation and TrueAGI, is working toward a scalable implementation of the Hyperon AGI framework running on decentralized infrastructure, and toward implementation of the PRIMUS cognitive architecture within this framework.

Hyperon and PRIMUS are complex systems involving multiple components, which need to demonstrate appropriate functionalities both individually and in combination. This RFP aims to address a portion of this overall need, via funding the initial iteration of one significant component of PRIMUS within Hyperon.

MOSES (Meta-Optimizing Semantic Evolutionary Search) is an evolutionary algorithm designed to evolve computer programs by optimizing them toward a specific goal or fitness function. MOSES uses a probabilistic modeling technique called Estimation of Distribution Algorithms (EDA) to build and refine populations of programs. It searches for optimal or near-optimal solutions by exploring the space of program structures in a more guided and efficient way, leveraging probabilistic models to predict and generate promising new program candidates. This makes it suitable for complex, symbolic tasks in the pursuit of AGI.
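For intuition, the EDA step described above can be sketched in a few lines of Python. The toy example below evolves bitstrings toward a OneMax fitness; the bitstring representation and all names are illustrative stand-ins for MOSES's actual program trees and models, not its real implementation.

```python
import random

# Toy EDA-style loop: build a probabilistic model from the fittest
# individuals, then sample the next generation from that model.

def fitness(program):
    # Toy fitness: count of 1-bits (OneMax). A real MOSES fitness is
    # problem-specific and evaluates evolved program trees.
    return sum(program)

def build_model(population):
    # Estimate per-position probability of a 1 from the fittest half:
    # the "estimation of distribution" step.
    elite = sorted(population, key=fitness, reverse=True)[: len(population) // 2]
    n = len(elite[0])
    return [sum(p[i] for p in elite) / len(elite) for i in range(n)]

def sample(model):
    # Generate a new candidate from the learned distribution.
    return [1 if random.random() < p else 0 for p in model]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(40)]
for _ in range(30):
    model = build_model(population)
    population = [sample(model) for _ in range(40)]

best = max(population, key=fitness)
print(fitness(best))
```

It is this model-building-and-sampling step that the approaches below propose to augment or replace with an LLM.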

The goal of this project is to utilize LLMs for modeling and generation within MOSES. There are at least three points in the MOSES algorithm at which this could be accomplished.

  1. Generation Modeling: Given a single fitness function, MOSES evolves a population of programs to fulfill it using genetic programming. If the program population is large enough, one could fine-tune an LLM on that population of MOSES programs and then query the fine-tuned model for the most promising program extensions. This could replace the probabilistic modeling step, which in the case of MOSES is the EDA portion of the algorithm.
  2. Fitness Functions: One could also apply the modeling across fitness functions. Given multiple fitness functions from some domain, one could fine-tune the neural net on all of them, enabling cross-learning between them. An example of how this could be used is as follows:
    1. Suppose you need to solve 100 different genomics problems or thousands of different financial prediction problems. In combination with a long-term memory, could one learn patterns about what works that could be abstracted across different fitness functions within such domains?
    2. Additionally, evolutionary processes in cognitive systems can benefit from investigations into cognitively motivated, domain-independent fitness functions, such as evaluating the population according to its long-term success or its contribution to generating correct predictions, and/or to the agent achieving relevant goals.
  3. Fitness Estimation: Use a neural network to give a better or more efficient estimate of fitness. The neural network used for such fitness estimation need not be generative.
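To make approach 1 concrete, the sketch below shows where a fine-tuned model would slot into the evolutionary loop. `query_llm` is a hypothetical stub (here just a mutation of a fit parent) standing in for a real call to an LLM fine-tuned on the serialized program population; the bitstring fitness is likewise a toy placeholder.

```python
import random

# Sketch of Generation Modeling: replace the EDA sampling step with
# queries to a model fine-tuned on the current program population.

def query_llm(population):
    # Hypothetical stand-in: a real implementation would prompt a
    # fine-tuned LLM with serialized programs and parse its suggested
    # extensions. Stubbed as a one-bit mutation of a fit parent so
    # the loop is runnable.
    parent = max(population, key=fitness)
    child = parent[:]
    i = random.randrange(len(child))
    child[i] = 1 - child[i]
    return child

def fitness(program):
    return sum(program)  # toy OneMax fitness

random.seed(1)
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(100):
    candidate = query_llm(population)
    # Steady-state replacement: keep the candidate if it beats the
    # current weakest individual.
    weakest = min(population, key=fitness)
    if fitness(candidate) > fitness(weakest):
        population.remove(weakest)
        population.append(candidate)

print(max(fitness(p) for p in population))
```

The design question a proposal would need to answer is what the model is actually trained on and how its suggestions are decoded back into valid program structures.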

Collaboration:

This RFP may be followed by subsequent RFPs for applications that leverage Hyperon/PRIMUS to carry out various applications, and that aim to guide Hyperon/PRIMUS systems in cognitive development toward beneficial AGI.

RFP Expected Outcomes:

The primary outcomes of this RFP should demonstrate progress in integrating LLM-based techniques into the MOSES framework, enhancing its capabilities in modeling and generation. Specific expected results include:

  • Demonstration of LLM-Enhanced Modeling: Successful use of LLMs to fine-tune models based on the population of programs evolved in MOSES. These models should suggest promising extensions for programs or identify patterns that go beyond the current probabilistic modeling (EDA). What is important here is to measure whether, and by how much, LLM-integrated MOSES performs better than the current EDA-based MOSES.
  • Fitness Function Cross-Learning: Application of LLMs across multiple fitness functions to abstract patterns that could improve solutions for various problem domains (e.g., genomics or financial prediction). Demonstrate how cross-domain learning enhances MOSES’s efficiency. Domain-independent fitness functions can also be considered here, such as the prediction success of programs and goal-achievement competence when the programs in the population are used for decision making.
  • Improved Fitness Estimation: Development of an LLM-based neural network for predicting the fitness of program candidates, aiming to reduce the computational costs of fitness evaluations. Since program fitness is problem/context-dependent, there is a challenge to ensure that the LLM retains generality and flexibility, perhaps by incorporating the task description as part of the prompt. This network does not need to be generative but should outperform current estimation methods.
  • Technical Implementation Milestones: Delivery of functional code that integrates LLMs into the MOSES framework in at least one of the proposed methods (modeling, fitness functions, or fitness estimation).
  • Benchmarking Results: Comparative performance benchmarks against the current MOSES implementation, showing the efficiency gains from LLM integration, such as reduced time for program evolution, better performance with smaller populations, or improvements in fitness outcomes.
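As a rough illustration of the fitness-estimation outcome above, the sketch below trains a cheap surrogate on already-evaluated candidates and uses it to screen a large batch, so the expensive "true" fitness is only computed for a shortlist. The hand-rolled linear surrogate and toy fitness are stand-ins for the discriminative neural network the RFP describes.

```python
import random

def true_fitness(program):
    # Stand-in for an expensive evaluation (e.g. running the program).
    return sum(program)

def fit_surrogate(samples):
    # One-pass, mean-centered per-position weights from evaluated
    # (program, fitness) pairs; a placeholder for training a neural
    # fitness estimator.
    mean_f = sum(f for _, f in samples) / len(samples)
    n = len(samples[0][0])
    weights = [0.0] * n
    for program, f in samples:
        for i, bit in enumerate(program):
            weights[i] += (f - mean_f) * (1 if bit else -1)
    return weights

def estimate(weights, program):
    # Cheap surrogate score: sum of weights at the 1-positions.
    return sum(w for w, bit in zip(weights, program) if bit)

random.seed(2)
evaluated = [(p, true_fitness(p))
             for p in [[random.randint(0, 1) for _ in range(16)]
                       for _ in range(30)]]
weights = fit_surrogate(evaluated)

# Screen 100 random candidates with the surrogate; evaluate only the top 5.
candidates = [[random.randint(0, 1) for _ in range(16)] for _ in range(100)]
shortlist = sorted(candidates, key=lambda p: estimate(weights, p),
                   reverse=True)[:5]
best = max(shortlist, key=true_fitness)
print(true_fitness(best))
```

A benchmark for this outcome would compare the fitness reached per true evaluation with and without the surrogate screen.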

Functional Requirements

Must Have:

  • Integration of LLMs with MOSES
    The solution must integrate LLMs with MOSES, achieving one of the following or proposing a novel approach:
    • Generation Modeling: Fine-tune an LLM on the population of MOSES programs and use it to suggest promising program extensions, potentially replacing the probabilistic modeling (EDA) component.
    • Fitness Function Cross-Learning: Apply LLM modeling across multiple fitness functions from different problem domains, leveraging cross-learning to abstract useful patterns across domains.
    • Fitness Estimation: Use a neural network to efficiently estimate program fitness in MOSES, improving over traditional methods. The model can be discriminative and does not need to be generative.
    • Alternative LLM Integration: Proposers may suggest another innovative approach for integrating LLMs into the MOSES framework that demonstrates significant benefits. For example, generated programs could make explicit predictions, with a running record of how often each predicted correctly or incorrectly; a PLN or NAL truth value could then be derived from this record. This could make it easier to decide which individuals in the population to keep, namely those with the highest truth values and strongest goal relevance. Alternatively, if the models can invoke actions, prediction success can measure how likely the generated procedures are to achieve (or contribute to) goal achievement.
  • Reproducibility
    The solution must provide sufficient documentation and code to ensure that the experiments and results can be reproduced by other researchers.
  • Compatibility with MOSES
    The solution must work seamlessly within the MOSES framework, regardless of the specific method chosen for LLM integration. It should enhance the evolutionary process in some demonstrable way, for example by increasing the fitness of the population more effectively or by performing better with smaller populations.
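The truth-value bookkeeping mentioned under "Alternative LLM Integration" can be sketched as follows, using a NAL-style evidence-based truth value (strength = correct/total, confidence = total/(total + k) with evidential horizon k) and its expectation formula to rank which individuals to retain. The prediction records here are hypothetical.

```python
K = 1  # NAL evidential horizon (default 1)

def truth_value(correct, total, k=K):
    # (strength, confidence) from a running record of predictions.
    if total == 0:
        return (0.5, 0.0)  # no evidence yet
    return (correct / total, total / (total + k))

def expectation(tv):
    # NAL expectation: confidence-weighted pull of strength toward 0.5.
    s, c = tv
    return c * (s - 0.5) + 0.5

# Hypothetical prediction records for three evolved programs:
# (correct predictions, total predictions).
records = {
    "prog_a": (18, 20),
    "prog_b": (3, 4),
    "prog_c": (40, 80),
}

# Rank individuals by expectation to decide which to keep.
ranked = sorted(records,
                key=lambda p: expectation(truth_value(*records[p])),
                reverse=True)
print(ranked)  # → ['prog_a', 'prog_b', 'prog_c']
```

Note how prog_c, despite much more evidence, ranks last: its strength of 0.5 carries no predictive signal, which is exactly the pruning behavior the bullet describes.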

Should Have:

  • Improved Efficiency
    If focusing on fitness estimation or program generation, the LLM should improve the efficiency of these tasks compared to traditional approaches, either by reducing computational costs or by improving the quality of the results.
  • Cross-Domain Learning (if applicable)
    If the approach involves cross-learning between multiple fitness functions, the LLM should demonstrate the ability to transfer knowledge between different problem domains, such as genomics and financial prediction, improving the solution's generality.
  • Enhancement of Probabilistic Modeling (if applicable)
    If using generation modeling, the LLM should enhance or replace the existing probabilistic modeling (EDA) within MOSES, providing superior performance in predicting program extensions.

Could Have:

  • Dynamic Adaptation:
    The LLM could dynamically adapt its predictions or suggestions as more data is gathered through the evolutionary process in MOSES.
  • Evidence Collection:
    Individuals/Programs of the population could be evaluated according to problem-independent lifelong fitness metrics such as the ability to predict correctly or to bring about (or contribute to reaching) relevant outcomes over time.
  • Transfer Learning:
    The LLM could leverage transfer learning techniques to apply knowledge gained from one task or domain to another, minimizing the need for extensive retraining on new tasks.

Won’t Have (Yet):

  • Scalability for Production:
    Scalability for large-scale or production environments is not required. The focus of this RFP is on pioneering research and demonstrating the feasibility of integrating LLMs into MOSES, not on ensuring scalability.

Non-functional Requirements

  • The solution should integrate with the MOSES framework
  • Recommended to use Python, C++, or MeTTa, but other languages or frameworks can be proposed if justified for the experiment
  • A modular design is encouraged, ideally supporting usage from MeTTa for easier integration with other Hyperon components
  • Fine-tuning data should be high quality where possible, but experimental use of available data is acceptable for early-stage exploration
  • Comprehensive documentation is required, covering installation, configuration, usage, and guidelines for extending or adjusting the LLM integration
  • Testing and validation should ensure the LLM integration works as expected, with comparisons to traditional MOSES methods in domains suitable for MOSES.
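The testing-and-validation requirement implies a comparative harness along these lines: run a baseline EDA-style generator and a guided variant on the same toy task over repeated seeds, then compare best-fitness statistics. Everything here (the OneMax task, the stubbed "guidance") is an illustrative stand-in, not the real MOSES code or a real LLM.

```python
import random
import statistics

def fitness(program):
    return sum(program)  # toy OneMax task

def run_trial(guided, seed, gens=20, size=30, n=16):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: size // 2]
        if guided:
            # Stub for LLM guidance: bias sampling toward the single
            # best individual instead of the elite marginals.
            probs = [0.8 if bit else 0.2 for bit in max(elite, key=fitness)]
        else:
            # Baseline EDA: per-position marginals of the elite.
            probs = [sum(p[i] for p in elite) / len(elite) for i in range(n)]
        pop = [[1 if rng.random() < q else 0 for q in probs]
               for _ in range(size)]
    return fitness(max(pop, key=fitness))

# Same seeds for both variants, so the comparison is paired.
baseline = [run_trial(False, s) for s in range(10)]
guided = [run_trial(True, s) for s in range(10)]
print(statistics.mean(baseline), statistics.mean(guided))
```

A real benchmark would additionally report evaluation counts and wall-clock time, per the expected outcomes above.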

Main evaluation criteria

Feasibility of the proposed approach and of the proposer’s ability to deliver results in the given timeframe

  • Does the proposal meet the requirements and advance the objectives of the RFP?

Pre-existing R&D

  • Has the team previously done similar or related research or development work in other platforms / languages / contexts?

Team competence

  • Does the team have relevant skills?
  • Does the team have knowledge of evolutionary algorithms and a sufficient understanding of MOSES in particular?

Cost

  • Does the proposal offer good value for money in terms of progressing capabilities of MOSES for future use in Hyperon?

Timeline

  • Does the proposal include a set of clearly defined milestones?

Other resources

Hyperon and related AI platforms are evolving quickly! This is a bit of a moving target, but the internal SingularityNET team will be available for help and expert advice where needed. Also included:

  • SingularityNET technology links
  • Educational materials and resources for learning MeTTa
  • SingularityNET holds MeTTa study group calls every other week. Proposers are welcome to attend for support from our researchers and community.
  • Recurring Hyperon study group calls for the community are currently being planned. These will cover MOSES, ECAN, PLN, and other key components of the OpenCog Hyperon and PRIMUS cognitive architectures.
  • Access to the SingularityNET World Mattermost server, with a dedicated channel for discussion and support among the RFP-winning teams and SingularityNET resources.
  • https://voyager.minedojo.org/

RFP Status

Completed & Awarded

The community and public are invited to view the full proposals and give feedback. During this time, the RFP committee will be conducting its formal selection process to award winning proposals.

View Awarded Projects
9 proposals
EXPERT REVIEW 4.6

Integrating LLMs into MOSES Framework

  • Type SingularityNET RFP
  • Funding Request n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
Nishant
Nov. 5, 2024
EXPERT REVIEW 4.0

LLM + Genetic Optimizing Multi Objective Functions

  • Type SingularityNET RFP
  • Funding Request n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
Luke Mahoney (MLabs)
Dec. 4, 2024
EXPERT REVIEW 3.6

Question of Semantics

  • Type SingularityNET RFP
  • Funding Request n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
evolveai
Dec. 6, 2024
EXPERT REVIEW 3.3

Proposal to utilise LLMs for Modeling in MOSES

  • Type SingularityNET RFP
  • Funding Request n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
aasavravi1234
Dec. 6, 2024
EXPERT REVIEW 3.0

A Logic-based Natural Language Interface (LNLI)

  • Type SingularityNET RFP
  • Funding Request n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
SelcukTopal
Nov. 25, 2024
EXPERT REVIEW 2.6

Code summary writing

  • Type SingularityNET RFP
  • Funding Request n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
Tofara Moyo
Dec. 4, 2024
EXPERT REVIEW 1.6

Decentralized AGI-Powered Trading

  • Type SingularityNET RFP
  • Funding Request n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
Terrence Hooi
Nov. 15, 2024
EXPERT REVIEW 1.0

par

  • Type SingularityNET RFP
  • Funding Request n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
ArielPaddock
Oct. 31, 2024
EXPERT REVIEW 1.0

PAIDX

  • Type SingularityNET RFP
  • Funding Request n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
saint david
Oct. 24, 2024
1 Project

Direct the MOSES Evolutionary Exploration via LLMs

  • Type SingularityNET RFP
  • Funding Awarded n/a
  • RFP Guidelines Utilize LLMs for modeling within MOSES
Patrick Nercessian
Dec. 8, 2024
