
Chris Ewald
Project Owner
Chris is the core organizer of this project and will be the main technology developer. He brings his expertise in technology, project management, and relational practice to animate this project.
Relat.ing is reimagining online interpersonal connection by pioneering a human-first video platform supported by AI—enhancing relationality while ethically gathering social and emotional intelligence for future benevolent AI applications. Built for intimate gatherings, not enterprises, relat.ing aims to foster the optimal conditions for human connection, while sharing transformative psychotechnologies from ancient lineages and post-conventional cultures. Open-source and interoperable with the SingularityNET ecosystem, this space will serve as social infrastructure and an ethical data source for Human+AI co-evolution. This is the killer app SingularityNET needs to give BGI grounded roots.
We are going to send live chunked speech data from inside a video meeting to this model and show its results inside our video platform, running alongside the conversation.
We will send chunked audio data to this model for audio/video meeting transcription.
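As a rough sketch of this chunked-audio flow, the browser's standard MediaRecorder API can slice a live MediaStream into timed chunks; the endpoint URL and response shape below are placeholders, not the actual service contract.

```ts
// A minimal sketch of streaming chunked audio from an in-browser call to an
// inference endpoint. The endpoint and the response shape are assumptions.
async function streamAudioChunks(stream: MediaStream, endpoint: string) {
  const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });

  recorder.ondataavailable = async (event: BlobEvent) => {
    if (event.data.size === 0) return;
    // POST each chunk; a production build would batch, retry, and authenticate.
    const res = await fetch(endpoint, { method: 'POST', body: event.data });
    const result = await res.json(); // e.g. an emotion label or transcript segment
    console.log('model output', result);
  };

  recorder.start(5000); // emit a chunk roughly every 5 seconds
  return recorder;
}
```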
Corporate structure is set up. Team is organized and ready to collaborate with software. Brand identity is created.
- Set up corporate structure using Clerky or Stripe Atlas
- Organization software set up
- Initial brand identity
- Project scoping and organization completed
$9,000 USD
All deliverables completed. Team is happy with the brand identity.
The first milestone will be to build the features necessary to easily host a video call and have guests join. Livekit.io will do much of the heavy lifting here.
The features include the following (see the sketch after this list):
- User account sign-up and login
- Basic user profile with avatars
- Ability to create or host a video call
- Shareable links for others to join a video call or add to a calendar invite
- Video call features (mostly already provided by LiveKit):
  - Waiting rooms with mic and camera device selection and testing
  - Video functionality such as mic and camera toggles
  - Background noise reduction
  - Screen sharing
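As an illustration of how LiveKit carries the in-call basics, here is a minimal sketch using the livekit-client SDK; the token-minting backend is assumed, not shown.

```ts
import { Room, RoomEvent } from 'livekit-client';

// Join a call and toggle devices with the livekit-client SDK.
// serverUrl and token would come from our backend, which mints
// LiveKit access tokens per room (not shown here).
async function joinCall(serverUrl: string, token: string): Promise<Room> {
  const room = new Room();
  await room.connect(serverUrl, token);

  // Mic/camera toggles map directly onto LiveKit's local participant API.
  await room.localParticipant.setMicrophoneEnabled(true);
  await room.localParticipant.setCameraEnabled(true);

  room.on(RoomEvent.ParticipantConnected, (participant) => {
    console.log(`${participant.identity} joined`);
  });

  return room;
}
```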
$8,000 USD
The core team and various close test users are able to reliably and delightfully use relat.ing as a replacement for Zoom.
The first features that move us beyond the corporate grid and into a more relational style are new video layouts.
- Breakout room functionality
- Novel video call layouts (see the layout-math sketch after this list):
  - Circles: all speakers arranged in a circle with circular video feeds
  - Circumambulating circle: all speakers slowly circling around a user-selectable central object; a campfire image is the built-in default
  - Popcorn: speakers arranged in a semi-random circular style evoking “popcorn”
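The layout geometry itself is simple. Here is a sketch of the “Circles” placement math; container and tile sizes are assumptions.

```ts
// Place n circular video tiles evenly around a ring centered in the container.
interface TilePosition { x: number; y: number }

function circleLayout(n: number, width: number, height: number, tileRadius: number): TilePosition[] {
  const cx = width / 2;
  const cy = height / 2;
  // Ring radius leaves room for the tiles themselves.
  const ring = Math.min(width, height) / 2 - tileRadius;

  return Array.from({ length: n }, (_, i) => {
    const angle = (2 * Math.PI * i) / n - Math.PI / 2; // start at 12 o'clock
    return { x: cx + ring * Math.cos(angle), y: cy + ring * Math.sin(angle) };
  });
}

// The circumambulating variant would add a slowly increasing phase offset to
// `angle` on each animation frame, orbiting the tiles around the center image.
```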
$6,000 USD
Features implemented, tested, and found to work reliably.
Incorporate Group Works into the application.
- Surfacing of Group Works deck cards via a radial menu (1-2 days)
- Tag a given moment with a card, either appreciating or noticing (1-2 days); see the data-model sketch after this list
- AI capacity to surface a given card via an AI agent, as a proof of concept (1 week)
- Shared storyboarding for exploring cards (2 weeks)
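To make the tagging feature concrete, here is one possible data model for a tagged moment; all field names are assumptions, since the proposal only specifies tagging a transcript moment with a card as “appreciating” or “noticing”.

```ts
// Hypothetical model for a Group Works card tag attached to a call moment.
type TagKind = 'appreciating' | 'noticing';

interface CardTag {
  cardName: string;    // e.g. a pattern name from the Group Works deck
  kind: TagKind;
  callId: string;
  timestampMs: number; // offset into the call recording/transcript
  taggedBy: string;    // participant identity
}

function tagMoment(tags: CardTag[], tag: CardTag): CardTag[] {
  // Tags are append-only so later AI analysis can join them against the
  // transcript by callId + timestampMs.
  return [...tags, tag];
}
```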
$9,000 USD
Features implemented, tested, and found to work reliably.
In this milestone, we will build out the backend of the user account call history.
- Search tools:
  - Semantic search
  - GraphRAG search
- AI processing:
  - BYO API key, configurable in settings
  - Great predefined prompts for analyzing past conversations
  - User-definable prompt templates where transcripts and other meeting context can be injected as variables (see the sketch after this list)
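A minimal sketch of the template-variable injection, assuming a {{variable}} syntax; the variable names are illustrative, not a fixed schema.

```ts
// Substitute {{name}} placeholders in a user-defined prompt template.
function renderPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, name) =>
    name in vars ? vars[name] : `{{${name}}}` // leave unknown variables intact
  );
}

const template =
  'Review this conversation between {{participants}} and point out moments ' +
  'of deep listening:\n\n{{transcript}}';

const prompt = renderPrompt(template, {
  participants: 'Ana, Ben',
  transcript: '…full call transcript here…',
});
```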
$9,000 USD
Features implemented, tested, and found to work reliably.
SingularityNET Interop
- Configure SingularityNET account credentials in user settings
- Export audio, video, or transcript data to various SingularityNET models
- Template for using a SingularityNET model as an in-meeting AI agent (see the sketch after this list)
  - Using the speech emotion recognition model as the proof of concept: https://beta.singularitynet.io/servicedetails/org/naint/service/speech-emotions
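A sketch of the agent's proof-of-concept loop. The callSnetService wrapper is hypothetical: it stands in for the real SingularityNET invocation (via the snet-sdk or a gateway) against the naint/speech-emotions service linked above, whose exact request/response shape is not specified here.

```ts
// Assumed output shape for the speech emotion recognition service.
interface EmotionResult { emotion: string; confidence: number }

// Hypothetical wrapper around the SingularityNET service invocation;
// the real template would handle credentials, channels, and encoding.
declare function callSnetService(
  orgId: string,
  serviceId: string,
  audioChunk: Blob
): Promise<EmotionResult>;

// One tick of the in-meeting agent: send a chunk, surface the result
// as plain text in the call UI.
async function emotionAgentTick(audioChunk: Blob, display: (text: string) => void) {
  const result = await callSnetService('naint', 'speech-emotions', audioChunk);
  display(`${result.emotion} (${Math.round(result.confidence * 100)}%)`);
}
```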
$9,000 USD
Features implemented, tested, and found to work reliably.
Sky Yap
Mar 9, 2025 | 1:11 PM
Hey, I'm really intrigued by the concept behind Relat.ing, but I'm having a hard time visualizing how this new kind of video platform will actually work. Could you elaborate on how you're planning to build it? For example, what does the user experience look like compared to traditional video apps? How do the AI components—such as speech emotion recognition and transcript analysis—integrate into the platform to enhance genuine human connection? I really have a hard time visualizing how it works. Can you help me understand your vision?
Chris Ewald
Project Owner Mar 16, 2025 | 10:08 AM
Sure. I’ll answer your questions one at a time.

> Could you elaborate on how you're planning to build it?

For the implementation, this will be a custom React web application that builds on top of livekit.io, which provides reliable open-source video call infrastructure for both frontend and backend. I’d be happy to expand on more technical details if desired.

> For example, what does the user experience look like compared to traditional video apps?

The user experience can be separated into two main categories: the “in-call” UI/UX and the “backend account” UI/UX.

For the in-call UI/UX, we intend to use what we are calling video progressive enhancement. This extends the core concept of Progressive Enhancement: at base, we mimic the core in-call functionality of Zoom, and from that foundation, features can be flexibly added to “enhance” the video experience. At the least enhanced foundational level, relat.ing will feel a lot like Zoom with simply some additional video call layout options. The first additional layout options identified are “Campfire” and “Popcorn”. In Campfire, call participants will be in a circle around a campfire (by default; the center image will be customizable). In Popcorn, call participants will be distributed somewhat randomly in circular video containers. This scope of work is intended to be completed by milestone 3.

In the next milestone, we will implement the Group Works deck features as the primary enhancement. The Group Works deck is a comprehensive and insightful pattern language that elucidates distinct characteristics of human group dynamics. By recognizing these qualities and patterns in real time, individuals can significantly enhance their ability to facilitate and engage effectively with groups. We intend to allow a user to surface a pattern (a card with its heart description) from behind a menu, and to let users tag a moment in the meeting transcript with a pattern. This will surface group dynamics to AI, enabling new possibilities for AI analysis and fine-tuning. A stretch goal is to enable real-time analysis of a conversation and surfacing of patterns.

In the second category, we have the backend account features. Here the user will be able to configure their account and view their call history. On the call history page, a user will be able to view a log of their past conversations with full transcripts. A user can search their entire call history using both text- and vector-based search, so search works for both keywords and semantic meaning. This page will enable easy-to-use AI analysis of past calls. We intend to provide excellent built-in prompt templates for analyzing conversations in a way that elicits great insights and better relating. We also wish to provide an AI prompt template builder, so users can write their own prompts for conversation analysis, with call transcripts and other call metadata available as template variables.

> How do the AI components—such as speech emotion recognition and transcript analysis—integrate into the platform to enhance genuine human connection?

In general, one goal of relat.ing is to provide bring-your-own (BYO) API keys for all AI services. Rather than locking users into paying extra for a traditional AI service, we wish for users to have choice in the AI services they use. The transcription use is straightforward: the service will generate textual transcripts that are available in a user’s call history, and which transcription service is used is intended to be configurable in the user account settings. In addition, we would like to allow conversation transcripts with Group Works deck annotations to be easily sent to other models in the SingularityNET ecosystem.

The emotion recognition aspect is intended to be implemented as a LiveKit agent. This means it will join an ongoing call and display the output of the emotion recognition AI service as text in a box. We don’t intend this to be a hugely useful feature in itself; rather, its major value will be in creating a code template for any future SingularityNET AI services. By creating the initial code template and bi-directional communication between livekit.io agents and SingularityNET AI services, we hope to open up a new use case for the easy consumption of SingularityNET AI services. Ideally, this will inspire new, unique AI services to be created for this context.

I hope this answers your questions well. Please let me know if anything is unclear here or can be expanded on.
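To make that agent flow concrete, here is a minimal TypeScript sketch of how an agent could push model output into the call using LiveKit data messages; the 'emotion' topic name and message shape are assumptions, not the final protocol.

```ts
import { Room, RoomEvent } from 'livekit-client';

// Agent side: publish the latest model output to everyone in the room.
async function publishEmotion(room: Room, emotion: string) {
  const payload = new TextEncoder().encode(JSON.stringify({ emotion }));
  await room.localParticipant.publishData(payload, { reliable: true, topic: 'emotion' });
}

// Client side: render incoming emotion messages in the on-screen text box.
function watchEmotions(room: Room, display: (text: string) => void) {
  room.on(RoomEvent.DataReceived, (payload, _participant, _kind, topic) => {
    if (topic !== 'emotion') return;
    const { emotion } = JSON.parse(new TextDecoder().decode(payload));
    display(emotion);
  });
}
```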
Sky Yap
Mar 17, 2025 | 9:03 AM
Thank you for your detailed explanation. I now have a clearer understanding of your proposal and its objectives. Integrating features like the "Campfire" and "Popcorn" layouts adds a meaningful human touch to the platform. Wishing you the best with this innovative project!
Chris Ewald
Project Owner Mar 16, 2025 | 10:19 AM
A well-formatted version of the above comment can be found here.