Relat.ing: BGI + Psychotechnology

Chris Ewald
Project Owner

Expert Rating

n/a
  • Proposal for BGI Nexus 1
  • Funding Request $50,000 USD
  • Funding Pools Beneficial AI Solutions
  • Total 6 Milestones

Overview

Relat.ing is reimagining online interpersonal connection by pioneering a human-first video platform supported by AI—enhancing relationality while ethically gathering social and emotional intelligence for future benevolent AI applications. Built for intimate gatherings, not enterprises, relat.ing aims to foster the optimal conditions for human connection, while sharing transformative psychotechnologies from ancient lineages and post-conventional cultures. Open-source and interoperable with the SingularityNET ecosystem, this space will serve as social infrastructure and an ethical data source for Human+AI co-evolution. This is the killer app SingularityNET needs to give BGI grounded roots.

Proposal Description

How Our Project Will Contribute To The Growth Of The Decentralized AI Platform

We're building a bridge between human wisdom & BGI - an evolutionary cultural sandbox where meaningful relational exploration shapes the future of collective intelligence. 

Relat.ing nurtures natural patterns of connection while gathering curated data for BGI creation, creating a mutual feedback loop that fosters both immediate human flourishing and the development of true BGI. 

This living laboratory of Human+AI co-creation guides technology toward deeper understanding of relationship.

Our Team

Chris Ewald - CTO and cofounder of Interbe.ing. Relational practice wizard.

Daniel Lindenberger - Director of Thaumazo and a core-group alumnus of Group Works (a group-dynamics pattern language).

Christine Francis - Mystic earth creature turned experience designer.

Ben Smith - Civilization Research Institute curriculum developer & post-conventional community organizer.

Ufuk Çetincan & Elif Kavalci - Brand design, illustration, UI/UX, and art direction. Co-founders of Thinker&Tailor.

AI services (New or Existing)

Speech Emotion Recognition

How it will be used

We will send live, chunked speech data from inside a video meeting to this model and display its results inside our video platform, running alongside the conversation.

English Speech Recognition

How it will be used

We will send chunked audio data to this model for audio/video meeting transcriptions.

The core problem we are aiming to solve

Our most intimate online gatherings are trapped in corporate software designed for business needs, not human connection. Transformative wisdom for human relating exists in post-conventional communities, but it remains inaccessible to most people—creating a growing crisis of digital disconnection and shallow online interaction.

Even more concerning: as AI systems learn from our current digital behavior, they're being trained on impoverished models of human connection. Without access to deeper patterns of relating, AGI risks developing blindspots about healthy human connection.

Relat.ing offers a different path: imagine our most meaningful moments shaping the future of intelligence.

 

Our specific solution to this problem

Relat.ing is a video platform that simultaneously solves the immediate need for better online gathering spaces designed for human connection & wiser relating, while building essential infrastructure for ethically gathered data that fuels socially intelligent AI.

Starting with the livekit.io video platform infrastructure, we will add novel elements that foster connection and point users toward better relational skills: softer video layouts well-suited for relationality, surfacing of group pattern dynamics from the Group Works deck, and AI agents that interoperate easily with the SingularityNET ecosystem.

A backend toolkit will allow end users to perform semantic & vector searching of their transcript history and provide access to user-customizable and predefined AI prompts for transcript analysis. These and other features allow more people to use advanced AI workflows to gain valuable insights from past conversations.
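A minimal sketch of what the semantic search could look like under the hood, assuming transcript segments are stored alongside embeddings from some external model (the types and function names here are ours, not a defined API):

```typescript
// Sketch: rank transcript segments by cosine similarity between a query
// embedding and stored segment embeddings. Embeddings would come from an
// external model; this only shows the ranking step.

type Segment = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  const denom = Math.sqrt(na) * Math.sqrt(nb);
  return denom === 0 ? 0 : dot / denom;
}

// Return the topK segments most semantically similar to the query embedding.
function semanticSearch(query: number[], segments: Segment[], topK = 3): Segment[] {
  return [...segments]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, topK);
}
```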

Export of video, audio, and relationally annotated transcript data to SingularityNET models will be streamlined and easy. This unlocks a curated relational dataset that can be leveraged to create future BGI.

There are countless ways this foundation can be built upon to foster the co-evolution of Human+AI systems. Better human relating feeds into better data for the creation of BGI. Better BGI feeds into AI agents that help humans relate better. Let’s start the flywheel on this upward benefit spiral.

Project details

The origin story of Relat.ing & Team-Project fit

The idea for relat.ing came out of Chris's experience with Interbe.ing, a global community of meditators who do participatory relational practices together online. In that context, the go-to tool, Zoom, was found to introduce undesirable impediments to fostering interpersonal connection; its enterprise vibes significantly detract from genuine human connection.

Looking around and seeing most other intimate online gatherings also using Zoom, we asked: why are we using enterprise tools for our most intimate online video gathering spaces? We can do better than this.

The truth is, online video platforms are inherently designed to prioritize enterprise customers over intimate gatherings. The reason is simple: these companies raise tens to hundreds of millions of dollars in VC funding, and the investors behind them demand returns. As a result, video platforms inevitably cater to enterprise needs, often at the expense of fostering genuine, intimate human connections.

A bootstrapped company that raises small amounts of funding from aligned, community-driven sources like deepfunding.ai is able to keep individuals as its core customers. Today, anyone with a device can connect in online video spaces. Online coaches, virtual conferences, relational-practice and spiritual communities, and 1-1 meetings are just a few examples of spaces that wish to connect more intimately.

In the liminal metaverse called Vibecafe, Chris met Ben Smith. Ben is an integral member of many post-conventional communities who are particularly appropriate as early adopters and test users of this tool. Some of these communities include Limicon, Monastic Academy, Emergent Commons, VillageCo, The Fractal Community, Civilization Research Institute and other metamodern communities. Relat.ing is planned to be built in relationship with these communities to meet their needs in ways that current video platforms do not.

Christine met Ben Smith at Limicon 2024 at the moonshot workshop that Ben hosted. Christine has been deeply involved in many transformative culture projects as she has personally walked the path of healing, transformation, and wisdom gathering for many years.

Through the metamodernism community, Chris met Daniel Lindenberger, who was a core team member of Group Works: The Group Pattern Language Project. In this project, the Group Works team identified and made explicit core patterns that exist in real-time group dynamics. These patterns use appreciative inquiry to surface core design principles which make group interactions connective, meaningful, and effective. When one has learned to perceive these patterns in real-time, a radical upleveling of group coherence and flow is unlocked.

By building the Group Works Deck directly into an online video platform we both center the primacy of functional group dynamics and human relating, and provide a clear, concrete tool for capacity building in this regard. Because this tool is methodology independent and emphasizes patterns that are easy for anyone to recognize, it becomes easy for groups to embody the patterns described.

Open source supported by a sustainable business

Given the vastly expansive potential and variety of skills and expertise in these areas, we feel it is important for the relat.ing platform to be open source. Experts skilled in various practices are able to contribute their expertise to the commons. A discovery area is envisioned where end-users can find and explore new useful, fun, or developmental practices.

In addition to releasing relat.ing as open-source software for all to freely use, an intention of this grant is to bootstrap a sustainable business that can support the operational costs and ongoing development of the software beyond the scope of this grant. We have identified "online coaching" as the clearest and most straightforward initial target market ($3.2B market size in 2022, 14% CAGR).

Making advanced AI workflows available to end users

Relat.ing will feature a backend toolkit that allows end users to perform semantic & vector searching of their transcript history. Users will be able to bring their own (BYO) API keys from various AI providers to analyze their past conversations. Powerful and insightful predefined AI prompts will be offered, and users can create their own AI prompt templates with variables available for transcripts and other meeting context. These and other features allow more people to use advanced AI workflows to gain valuable insights from past conversations.
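As an illustration of the user-definable templates, variable injection might work roughly like this (the `{{variable}}` syntax and variable names are assumptions for the sketch, not a finalized format):

```typescript
// Sketch: inject transcript and meeting context into a user-defined
// prompt template. Unknown variables are left untouched so a typo in a
// template is visible rather than silently dropped.

function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in vars ? vars[name] : match
  );
}

// Example: a predefined analysis prompt with a transcript variable.
const prompt = renderTemplate(
  "Summarize the key relational moments in this meeting:\n{{transcript}}",
  { transcript: "Alice: I felt heard today. Bob: Same here." }
);
```

The rendered prompt would then be sent to whichever AI provider the user has configured with their own API key.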

Providing value to BGI

Export of video, audio, and relationally annotated transcript data to SingularityNET models will be streamlined and easy. This unlocks a curated relational dataset that can be leveraged to create future BGI, creating a mutual feedback loop that fosters both immediate human flourishing and the development of true BGI. 
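One possible shape for such an export record, sketched here with assumed field names (this is not a defined SingularityNET schema; it only illustrates how relational annotations could travel alongside the transcript):

```typescript
// Sketch: a relationally annotated transcript record prepared for export.
// All field names are illustrative assumptions.

interface PatternAnnotation {
  card: string;                         // Group Works card name
  mode: "appreciating" | "noticing";    // how the moment was tagged
  timestampMs: number;                  // offset into the meeting
}

interface TranscriptExport {
  meetingId: string;
  startedAt: string;                    // ISO 8601 timestamp
  utterances: { speaker: string; text: string; timestampMs: number }[];
  annotations: PatternAnnotation[];
}

function toExportJson(record: TranscriptExport): string {
  return JSON.stringify(record, null, 2);
}
```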

Inviting users into a co-creative process with AI, including them as an integral ingredient in the data gathering, sets a standard for ethical AI integration into often exclusively human centric relationality exercises, opening the door for AI augmented human co-evolution.

Ethics is the core

This project champions wise human relat.ing at its very core, and seeks to be a grounded launch point for the development of future BGI systems. By focusing on relational skills and using post-conventional relational patterns, this project aims for BGI systems to foster only the wisest and most skilled aspects of human relating and social intelligence. Informed by a Buddhist view of ethics, we hold that spontaneous and playful action rooted in heart-centered connection is truly the highest form of embodied ethics. Insofar as our platform promotes spontaneous and heart-centered play between people, it is also encouraging this example of ethical behavior.

The potential of the Cultural Practice Evolution Sandbox

The Group Works Deck is just the start, and a grounded beginning of what can be built into an online video platform that focuses on fostering wiser interpersonal relating. Additional interactions that we can build into this video platform include the following:

  • Active Listening Challenges

  • Perspective Taking Games

  • Conflict Navigation

  • Embodied Decision Making

  • Trust Building Exercises

  • Emotional Intelligence Training

  • Coherence Building

  • Attunement Adventures

  • Collective Intelligence Challenges

  • Developmental Relational Practices

  • Community Weaving

  • Somatic Intelligence Practices

  • Authenticity Practices


Each practice could have:

- Progressive difficulty levels

- Clear success metrics

- Gamified reward systems

- Team challenges

- Achievement tracking

- Community recognition elements

The above lists point to the potential of what can be unlocked within a dedicated environment we are calling the “Cultural Practice Evolution Sandbox”. It is an evolving space that supports people to deepen their connections with each other by using one of the many practices included with relat.ing. Future AI guidance is envisioned that may point people to the best-fit practices for their specific needs.

The Cultural Practice Evolution Sandbox will be enabled by the underlying technology: by leveraging livekit.io, we can bring AI agents into a video call context, where they can instruct and/or facilitate any of the above interactions.
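A simplified sketch of the shape such an in-call facilitator agent could take (this is a stand-in for illustration, not the actual LiveKit agents API; the class and method names are hypothetical):

```typescript
// Sketch: an agent that walks a group through a practice step by step.
// `say` would route text (or synthesized speech) into the live call.

interface PracticeStep { prompt: string; durationSec: number }

class FacilitatorAgent {
  private stepIndex = 0;

  constructor(private steps: PracticeStep[],
              private say: (text: string) => void) {}

  // Deliver the next instruction; returns false once the practice is done.
  next(): boolean {
    const step = this.steps[this.stepIndex];
    if (!step) return false;
    this.say(step.prompt);
    this.stepIndex++;
    return true;
  }
}
```

Any of the practices listed above could be encoded as a sequence of such steps, with the agent pacing the group through them.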

 

Existing resources

Livekit.io for video infrastructure: https://livekit.io

Group Works deck for our core relationality upgrade: https://groupworksdeck.org

 

Open Source Licensing

Relat.ing - AGPL (Affero GPL)

Livekit.io - Apache License 2.0

Group Works deck - Creative Commons Attribution-Share Alike 3.0

Additional videos

Dan Siegel on the importance of relational practices: https://youtu.be/uo8Yo4UE6g0

Was there any event, initiative or publication that motivated you to register/submit this proposal?

A personal referral

Describe the particulars.

Ben Smith was referred to this grant by Holly Denman in the Metamodern Spirituality Facebook group.

Proposal Video

Placeholder for Spotlight Day pitch presentations. Videos will be added by the DF team when available.

  • Total Milestones

    6

  • Total Budget

    $50,000 USD

  • Last Updated

    24 Feb 2025

Milestone 1 - Project Setup

Description

Corporate structure is set up. The team is organized and ready to collaborate using shared software. Brand identity is created.

Deliverables

- Set up corporate structure using Clerky or Stripe Atlas
- Organization software set up
- Initial brand identity
- Project scoping and organization completed

Budget

$9,000 USD

Success Criterion

All deliverables completed. Team is happy with the brand identity.

Milestone 2 - Video conferencing minimum feature set

Description

This milestone builds the features necessary to easily host a video call and have guests join. Livekit.io will do much of the heavy lifting here.

Deliverables

The features include the following:
- User account sign-up and login
- Basic user profile with avatars
- Ability to create or host a video call
- Shareable links for others to join a video call or add to a calendar invite
- Video call features (mostly already provided by LiveKit):
  - Waiting rooms with mic and camera device selection and testing
  - Mic and camera toggles
  - Background noise reduction
  - Screen sharing

Budget

$8,000 USD

Success Criterion

The core team and various close test users are able to reliably and delightfully use relat.ing as a replacement for zoom.

Milestone 3 - Layouts and breakouts

Description

The first set of features that moves us beyond the corporate grid and into a more relational style is a set of new video layouts.

Deliverables

- Breakout room functionality
- Novel video call layouts:
  - Circles: all speakers arranged in a circle with circular video feeds
  - Circumambulating circle: all speakers slowly circling around a user-selectable central object (a campfire image is the built-in default)
  - Popcorn: speakers arranged in a semi-random circular style evoking "popcorn"
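The "Circles" layout reduces to simple geometry; as a purely illustrative sketch, the tile placement could be computed like this (coordinates relative to the circle's center):

```typescript
// Sketch: place N circular video tiles evenly around a center point,
// starting with the first tile at the top.

function circleLayout(n: number, radius: number): { x: number; y: number }[] {
  return Array.from({ length: n }, (_, i) => {
    const angle = (2 * Math.PI * i) / n - Math.PI / 2; // first tile at top
    return { x: radius * Math.cos(angle), y: radius * Math.sin(angle) };
  });
}
```

The circumambulating variant would simply add a slowly increasing offset to each tile's angle over time.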

Budget

$6,000 USD

Success Criterion

Features implemented, tested, and found to work reliably.

Milestone 4 - Group Works deck

Description

Incorporate Group Works into the application.

Deliverables

- Surfacing of Group Works deck cards via a radial menu (1-2 days)
- Tag a given moment with a card, either appreciating or noticing (1-2 days)
- AI capacity to surface a given card via an AI agent, proof of concept (1 week)
- Shared storyboarding for exploring cards (2 weeks)

Budget

$9,000 USD

Success Criterion

Features implemented, tested, and found to work reliably.

Milestone 5 - Meeting history toolbox

Description

In this milestone we will build out the backend of the user account call history.

Deliverables

- Search tools:
  - Semantic search
  - GraphRAG search
- AI processing:
  - BYO API key configurable in settings
  - Great predefined prompts for analyzing past conversations
  - User-definable prompt templates where transcripts and other meeting context can be injected as variables

Budget

$9,000 USD

Success Criterion

Features implemented, tested, and found to work reliably.

Milestone 6 - SingularityNET Interop

Description

SingularityNET Interop

Deliverables

- Configure SingularityNET account credentials in user settings
- Export audio, video, or transcript data to various SingularityNET models
- Template for using a SingularityNET model as an in-meeting AI agent, with the speech emotion recognition model as the proof of concept: https://beta.singularitynet.io/servicedetails/org/naint/service/speech-emotions

Budget

$9,000 USD

Success Criterion

Features implemented, tested, and found to work reliably.

Join the Discussion (4)

    Sky Yap
    Mar 9, 2025 | 1:11 PM

    Hey, I'm really intrigued by the concept behind Relat.ing, but I'm having a hard time visualizing how this new kind of video platform will actually work. Could you elaborate on how you're planning to build it? For example, what does the user experience look like compared to traditional video apps? How do the AI components—such as speech emotion recognition and transcript analysis—integrate into the platform to enhance genuine human connection? I really have a hard time visualizing how it works. Can you help me understand your vision?

      Chris Ewald
      Mar 16, 2025 | 10:08 AM

      Sure. I'll answer your questions one at a time.

      > Could you elaborate on how you're planning to build it?

      For the implementation, this will be a custom React web application built on top of livekit.io, which provides reliable open-source video call infrastructure for both the frontend and backend. I'd be happy to expand on more technical details if desired.

      > For example, what does the user experience look like compared to traditional video apps?

      The user experience can be separated into two main categories: the "in-call" UI/UX and the "backend account" UI/UX. For the in-call UI/UX, we intend to use what we are calling video progressive enhancement. This extends the core concept of Progressive Enhancement to mean that, at base, we mimic the core in-call functionality of Zoom. From that foundation, features can be flexibly added to "enhance" the video experience. At the least enhanced level, relat.ing will feel a lot like Zoom with some additional video call layout options. The first additional layouts identified are "Campfire" and "Popcorn". In Campfire, call participants sit in a circle around a campfire (the center image will be customizable). In Popcorn, call participants are distributed somewhat randomly in circular video containers. This scope of work is intended to be completed by milestone 3.

      In the next milestone, we will implement the Group Works deck features as the primary enhancement. The Group Works deck is a comprehensive and insightful pattern language that elucidates distinct characteristics of human group dynamics. By recognizing these qualities and patterns in real time, individuals can significantly enhance their ability to facilitate and engage effectively with groups. We intend to let a user surface a pattern (a card with its heart description) from behind a menu, and to tag a moment in the meeting transcript with a pattern. This will surface group dynamics to AI, enabling new possibilities for AI analysis and fine-tuning. A stretch goal is real-time analysis of a conversation with live surfacing of patterns.

      In the second category, we have the backend account features. Here the user will be able to configure their account and view their call history. On the call history page, a user can view a log of their past conversations with full transcripts and search their entire call history using both text and vector-based searching, so search works for both keywords and semantic meaning. This page will enable easy-to-use AI analysis of past calls. We intend to provide excellent built-in prompt templates for analyzing conversations in ways that yield great insights and better relating. We also wish to provide an AI prompt template builder, so users can write their own prompts for conversation analysis, with call transcripts and other call metadata available as template variables.

      > How do the AI components—such as speech emotion recognition and transcript analysis—integrate into the platform to enhance genuine human connection?

      In general, one goal of relat.ing is to support bring-your-own (BYO) API keys for all AI services. Rather than locking users into paying extra for a particular AI service, we wish for users to have a choice in the AI services they use. The transcription service use is straightforward: it will generate textual transcripts available in a user's call history, with the transcription service configurable in the user account settings. In addition, we would like conversation transcripts with Group Works deck annotations to be easily sent to other models in the SingularityNET ecosystem.

      The emotion recognition aspect is intended to be implemented as a LiveKit agent. This means it will join an ongoing call and display the output of the emotion recognition AI service as text in a box. We don't intend this to be a hugely useful feature in itself; rather, its major value will be in creating a code template for any future SingularityNET AI services. By creating the initial code template and bi-directional communication between livekit.io agents and SingularityNET AI services, we hope to open up a new, easy consumption path for SingularityNET AI services. Ideally, this will inspire new unique AI services created for this context.

      I hope this answers your questions well. Please let me know if anything is unclear or can be expanded on.

        Sky Yap
        Mar 17, 2025 | 9:03 AM

        Thank you for your detailed explanation. I now have a clearer understanding of your proposal and its objectives. Integrating features like the "Campfire" and "Popcorn" layouts adds a meaningful human touch to the platform. Wishing you the best with this innovative project!

      Chris Ewald
      Mar 16, 2025 | 10:19 AM

      A well-formatted version of the above comment can be found here.

Expert Ratings

Reviews & Ratings

    No Reviews Available

    Check back later by refreshing the page.
