Valuing Invisible Flows for Collective Evolution

Shikhar Agarwal
Project Owner

Expert Rating

n/a
  • Proposal for BGI Nexus 1
  • Funding Request $50,000 USD
  • Funding Pools Beneficial AI Solutions
  • Total 4 Milestones

Overview

Seeding transition pathways to a post-polycrisis world requires radical and safe experimentation. As change-makers, we need a way to evaluate actions, track impact, and communicate learnings without sacrificing systemic complexity. We, as Prisma, organise action-learning journeys, and we are attempting to integrate experiential learning and collective journaling with AI-enabled semantic query & insights generation. Our intention is to generate a multi-dimensional timeline of shifts in thinking and being - which will contribute to regenerative intervention design, feedback on real-time systems change, and enhanced collective intelligence. Using this, any purposeful group can evolve itself.

Proposal Description

How Our Project Will Contribute To The Growth Of The Decentralized AI Platform

For us, ethics and safety are central: user data is processed anonymously and never stored in raw form. Analysis is contextual, weighted by dynamic benchmarks that adjust based on stakeholder feedback and evolving conceptual frameworks, generating insights responsibly and with bias mitigation built in. Eventually, data analysis will be decentralized, leveraging blockchain-based computation and a decentralized ledger framework to guarantee privacy, ownership, and transparency.

Our Team

Delfi: designed evolutionary learning journeys for 2,000 Argentinian schools. Tabs: worked on telemetry data quality in Formula 1, was the second hire at a renewable energy startup using AI for grid analytics, and wrote a dissertation applying NLP to evolutionary topic models. Shik: a decade of experience facilitating and founding climate justice collectives; his dissertation studied the economic resilience of rural Indian communities. Mercy: global coordinator for community learning hubs across Africa, embedded in blockchain ecosystems.

AI services (New or Existing)

Textual Emotion Recognition

How it will be used

This service will be used for sentiment analysis of journal entries.

The Temporal Insight Generator (TIG)

Type

New AI service

Purpose

The TIG service is designed to analyze temporally evolving datasets of audio transcriptions or text using a single statistical model. It enables insight generation by identifying trends, shifts, and emergent patterns over time, transforming raw chronological text data into structured analytical outputs. By combining natural language input with statistical time-series analysis, TIG aims to provide context-aware reflections, making it valuable for applications such as participatory programmes.

AI inputs

  • A natural language query specifying a focus of analysis (e.g., “How has sentiment around topic X evolved?”).

  • A dataset of transcriptions or text entries with associated timestamps.

  • A predefined statistical model (e.g., topic frequency over time, or sentiment trend analysis).

AI outputs

  • A structured time-series representation of the requested insight (e.g., a JSON object of time-stamped values).

  • A natural language summary contextualizing the trend, key shifts, and potential interpretations.

  • (Optional) A visualization-ready dataset for UI integration.
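As an illustration of the output shapes described above, the sketch below aggregates timestamped sentiment scores into a time-series plus a one-line natural language summary. The function name, bucketing scheme, and return format are assumptions for demonstration, not a fixed TIG specification.

```python
from collections import defaultdict
from datetime import datetime

def sentiment_trend(entries, bucket="%Y-%m-%d"):
    """Aggregate (iso_timestamp, score) pairs into a time series.

    `bucket` is a strftime pattern defining the aggregation
    granularity (daily by default).
    """
    buckets = defaultdict(list)
    for ts, score in entries:
        key = datetime.fromisoformat(ts).strftime(bucket)
        buckets[key].append(score)
    series = [
        {"period": k, "mean_sentiment": round(sum(v) / len(v), 3)}
        for k, v in sorted(buckets.items())
    ]
    rising = (len(series) > 1
              and series[-1]["mean_sentiment"] > series[0]["mean_sentiment"])
    summary = (f"Sentiment across {len(series)} periods appears "
               f"{'rising' if rising else 'flat or falling'}.")
    # Structured series for the UI, plus a contextualizing summary.
    return {"series": series, "summary": summary}
```

A real TIG call would additionally resolve the natural language query to a model choice (topic frequency vs. sentiment trend); this sketch hard-codes the sentiment case.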

Named Entity Recognition

How it will be used

This service will assist in tagging and categorizing journal entries.

Abstractive Summarisation

How it will be used

This service will assist in generating insights from journal entries.

The core problem we are aiming to solve

Despite decades of technological innovation, technology and deep collective transformation remain largely divorced. Countless hours of labour by changemakers yield little proof of measurable systemic impact. Changemaking frameworks continue to struggle with a lack of real-time contextual information.

Collectively, this represents a deficit in tools that enable new ways of knowing and being - key for systems change.

We need to be able to see real-time shifts in governance, culture, economics, sustainability and consciousness.

What if there was a tool that: 

  1. visualizes data-backed insights from experimentation
  2. tracks patterns over time; and
  3. connects personal development with collective evolution?

 

Our specific solution to this problem

We’re building an AI-powered bot array that enables any initiative to capture and synthesize emergent insights from facilitated real-world experiments in action-learning. Designed as a proof of concept, it demonstrates how reflective practices can be integrated into systemic evaluation processes through lightweight AI-driven tools.

Core Functionalities:

  1. Prompting & Data Generation:

    • Facilitator-driven or AI-triggered journaling & feedback prompts delivered via Telegram, with input accepted in voice or text format.

    • Prompts address facilitation elements (e.g., addressing collective psycho-emotional blockages)

    • Prompts are tied to daily agendas

  2. Multi-Dimensional Analysis:

    • Inputs are tagged under multi-capitals (social, economic, natural, cultural, etc.), regenerative design (living systems, nested wholes, potential thinking, etc.) and facilitation methodology (processes, outcomes, decisions, etc.)

    • Analysis is done using tagging, sentiment analysis, clustering, topic evolution, and statistical analysis to yield trends, patterns and depth in stakeholder and capital-specific development trajectories.

  3. Insights Generation:

    • The aim of insight generation is to create an exploration interface that supports pattern recognition, enabling reflective engagement through intuitive, context-aware visualizations and adaptive responses.

This proof of concept will demonstrate how group dynamics can be made visible, while laying the foundation for more elaborate analysis and synthesis.

Project details

Overview

The bot system is a voice-first analytics pipeline that captures Telegram voice notes, transcribes them, generates vector embeddings, and presents insights via a timeline UI. It prompts reflection, analyzes participation, and synthesizes insights for participants, facilitators, and evaluators.

The system has three stages: Input Protocol, Preparation, and Insight Generation. We've previously tested practice-based Telegram bots, and the front-end is already in progress. Development will proceed step by step, with the most effort focused on data preparation.

Insights are dynamically generated through UI interactions, enabled by clustering, tagging, and topic evolution. Users request insights via natural language input, combining LLM-based semantic search with statistical analytics. A set of defined “lenses” (statistical methods) will refine timeline entries for relevance.

1. Input Protocol: Ingest, Processing and Question APIs

The timelining bot can be added to multiple group chats, distinguishing between circles of purpose and trust (e.g., facilitator circle, hub team, participant groups). Vercel functions handle API services, ensuring seamless connections and concurrency. 

Purpose

To design the data solicitation, capturing and storage mechanism.

Core Capabilities

  • Capturing inputs via Telegram (primarily voice) and storing them.

  • Designing questions in alignment with the daily agenda (which will later be used to contextualise the reflections by the content of the day: workshops, excursions, visitors etc.)

  • Sending regularly scheduled or facilitator-driven questions.

1.1 Ingest & Processing System

Overview

The bot will collect participant entries, enabling tailored user experiences and meaningful insights.

Implementation Approach

The first priority is a simple and reliable ingest system, which will begin development before grant approval. Grant funding will support post-processing, including vector embeddings, response generation, and timeline contextualization.

Ingest Workflow

  1. Users send voice notes in the timelining Telegram chat.

  2. The bot detects messages and sends them to the ingest API.

  3. The ingest API saves audio and metadata to AWS S3.

  4. The processing API transcribes the audio, generates vector embeddings, and stores them in PostgreSQL.

  5. Data is secured for insights and retrieval.
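The ingest workflow above can be sketched as a pure function that derives an S3 object key and a metadata row from an incoming Telegram voice message. The field names follow the Telegram Bot API message shape; the key layout and metadata schema are illustrative choices, not the project's fixed design.

```python
from datetime import datetime, timezone

def build_ingest_record(message):
    """Derive an S3 key and metadata dict from a Telegram voice
    message (Bot API field names; key layout is illustrative)."""
    chat_id = message["chat"]["id"]
    user_id = message["from"]["id"]
    sent_at = datetime.fromtimestamp(message["date"], tz=timezone.utc)
    file_id = message["voice"]["file_unique_id"]
    # Partition audio by group chat and day so one journey's notes
    # stay together and are cheap to list.
    s3_key = f"voice/{chat_id}/{sent_at:%Y/%m/%d}/{file_id}.ogg"
    metadata = {
        "chat_id": chat_id,
        "user_id": user_id,
        "sent_at": sent_at.isoformat(),
        "duration_s": message["voice"].get("duration"),
        "s3_key": s3_key,
    }
    return s3_key, metadata
```

The actual upload (boto3 `put_object`) and the database insert would wrap this function inside the Vercel ingest handler.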

System Overview

Telegram Chat Interface

  • User Interface: Users interact via Telegram voice notes.

  • Bot Functionality: The bot listens for new messages and sends data to the API.

Ingest Service (timelining_ingest)

  • Functionality: Receives data, downloads audio, and stores it with metadata in AWS S3.

  • Hosting: Hosted on Vercel Functions for scalability.

Processing Service (timelining_process)

  • Transcription: Converts voice notes into text.

  • Vector Embeddings: Uses models like BERT or OpenAI for semantic search.

  • Data Storage: Transcriptions and embeddings stored in PostgreSQL with vector extensions.

  • Large File Handling: Splits large audio files to ensure proper processing.

  • Hosting: Hosted on Vercel or another serverless platform.

Data Storage

  • AWS S3: Stores raw audio files and metadata.

  • PostgreSQL (Vector Extension): Stores transcriptions and embeddings for efficient retrieval.

Key Components

  • timelining_bot: Listens for voice notes and updates the API.

  • timelining_ingest: Processes voice notes and stores them in S3.

  • timelining_process: Transcribes, generates vector embeddings, and stores data in PostgreSQL.

  • AWS S3 Bucket: Holds raw audio and metadata.

  • PostgreSQL Database: Manages transcriptions and embeddings for semantic search.

1.2 Question System

Overview

Facilitators can send reflective questions via the existing Telegram bot, which stores them in the PostgreSQL database alongside participant responses. These questions can be scheduled, manually triggered, or contextually generated based on ongoing discussions.

Workflow

  1. Facilitator sends a question in the timelining Telegram chat using a predefined format or command (e.g., /ask What inspired you today?).

  2. Bot detects the question and forwards it to the timelining_questions API.

  3. API validates and stores the question in the PostgreSQL database with metadata (timestamp, facilitator ID, group ID, optional tags).

  4. Scheduled or AI-driven questions can also be added to the database and posted automatically at set intervals.

  5. Participants respond, and their answers are linked to the corresponding question for contextual analysis.
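A minimal sketch of step 1–3, assuming the `/ask` command format mentioned above; the record fields mirror the metadata listed (timestamp, facilitator ID, group ID, optional tags), but the exact schema is hypothetical.

```python
import re
from datetime import datetime, timezone

# Matches "/ask <question text>"; DOTALL lets questions span lines.
ASK_PATTERN = re.compile(r"^/ask\s+(?P<text>\S.*)$", re.DOTALL)

def parse_ask_command(raw, facilitator_id, group_id, tags=None):
    """Turn a `/ask ...` chat command into a question record ready
    for storage; returns None for non-matching messages."""
    match = ASK_PATTERN.match(raw.strip())
    if not match:
        return None  # not an /ask command; the bot ignores it
    return {
        "question": match.group("text").strip(),
        "facilitator_id": facilitator_id,
        "group_id": group_id,
        "tags": tags or [],
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
```

Scheduled or AI-driven questions would bypass the parser and be written to the same table directly.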

2. Preparation: Data Tagging, Analysis & LLM Interfacing

Purpose

To generate further data layers that can be used by conceptual & statistical models during the insight-generating stage.

Core Capabilities

The data preparation stage enriches raw transcriptions by generating structured data layers that enhance statistical analyses. 

  • Clustering & Grouping: Identifies patterns in conversations, grouping related voice notes by themes or participant engagement.

  • Tagging & Annotation: Applies metadata tags (e.g., topics, sentiment, key terms) to enable refined filtering and retrieval.

  • Topic Evolution Tracking: Detects how themes develop over time, mapping the trajectory of discussions.
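The topic-evolution layer can be approximated very simply: count tag occurrences per time bucket and watch themes rise and fall across the journey. This is a deliberately minimal sketch; the actual system would layer clustering and topic modelling on top.

```python
from collections import Counter, defaultdict
from datetime import datetime

def topic_evolution(tagged_entries):
    """Count tag occurrences per ISO week.

    `tagged_entries` is a list of (iso_timestamp, [tags]) pairs;
    returns {"YYYY-Www": {tag: count, ...}, ...} sorted by week.
    """
    per_bucket = defaultdict(Counter)
    for ts, tags in tagged_entries:
        year, week, _ = datetime.fromisoformat(ts).isocalendar()
        per_bucket[f"{year}-W{week:02d}"].update(tags)
    return {k: dict(c) for k, c in sorted(per_bucket.items())}
```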

 

Analysis Framework

The analytical logic for the bot is deeply rooted in the multi-capitals framework, which serves as the primary lens for tagging, analysing, and interpreting journaling data and other content. This framework underpins the evaluation of flows of capital—both tangible and intangible—across systemic, group and individual levels. By layering the analysis, starting with flows of capital, moving through regenerative systems shifts, group facilitation methodology, and down to individual layers, the bot provides a comprehensive understanding of personal development, co-creation efficacy, and systemic change.


The multi-capitals framework identifies various forms of capital that individuals, groups, and systems draw upon and build through their interactions as part of the Action Learning Journey. These include:

  1. Natural Capital: Ecological assets such as ecosystems, biodiversity, and natural resources.

  2. Social Capital: Relationships, networks, trust, and social cohesion.

  3. Cultural Capital: Shared values, traditions, knowledge systems, and cultural practices.

  4. Human Capital: Skills, knowledge, emotional intelligence, and personal development.

  5. Economic Capital: Financial resources and economic assets.

  6. Built Capital: Physical infrastructure such as buildings, tools, and technology.

  7. Political Capital: Influence within decision-making processes and governance structures.

  8. Spiritual Capital: Intangible values, beliefs, and sense of purpose that guide individuals and communities.

 

Implementation Approach

The system facilitates a comprehensive, AI-aided exploration of journaling data using a multi-capitals framework, which analyses tangible and intangible flows of capital (such as social, cultural, and economic) across individual, group, and systemic levels. Users submit natural language queries, which are semantically matched to voice notes through vector embeddings. The voice note transcripts are analyzed for trends and changes in capital flows over time, and insights are presented alongside relevant voice notes in a timeline UI.

1. User Query Input:
Users enter queries in natural language, which are parsed and analysed using NLP techniques to identify key entities and context.

2. Semantic Search:
Queries are transformed into vectors, which are matched with relevant voice notes based on semantic similarity, not just keywords.

3. Statistical Analysis on Retrieved Data:
Statistical models identify trends, patterns, and shifts over time, including sentiment analysis, topic modelling, entity evolution, and trend detection. Voice notes are pre-tagged with themes, and dynamic tagging highlights relevant notes.

4. Generating Insights:
The system synthesizes insights from sentiment analysis, topic modelling, and trend detection, providing high-level summaries such as "Trust evolved from positive to critical discussions by mid-period" or "Social capital surged after the opening ceremony."

5. Handling Unsupported Queries:
The system validates queries before processing, providing a clear response for unsupported requests and prompting users for more specific queries.

6. Presenting the Timeline and Voice Notes:
Insights are displayed in the timeline UI, allowing users to filter by query, sentiment, and tags. Users can explore specific voice notes and view metadata such as sentiment scores and associated tags.
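The semantic search in step 2 can be sketched as cosine-similarity ranking of the query embedding against stored note embeddings. In production this ranking would be pushed down to PostgreSQL's pgvector extension (its `<=>` cosine-distance operator); here it is done in-process for clarity, with illustrative two-dimensional vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm if norm else 0.0

def semantic_search(query_vec, notes, top_k=3):
    """Rank voice-note embeddings by similarity to the query
    embedding and return the top_k notes with a score attached."""
    scored = [(cosine(query_vec, n["embedding"]), n) for n in notes]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [n | {"score": round(s, 3)} for s, n in scored[:top_k]]
```

Real embeddings would be high-dimensional vectors from BERT or an OpenAI embedding model, as noted in the processing-service description.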

3. Insight Generation & Timeline UI

Purpose

A single interactive interface will allow users to browse, search, and interpret voice notes along a timeline. 

Core Capabilities

The interface provides:

  1. A visual timeline of voice notes, mapped chronologically.

  2. Search & filtering tools to surface relevant voice notes based on natural language queries.

  3. AI-assisted insights (statistical analysis, topic trends, sentiment shifts).

  4. Multi-layered context views (raw audio, transcription, metadata, and analysis).

This ensures AI helps illuminate patterns, but the final interpretation remains with the human user.

Design Principles

  • Preserving Context:
    Provide multi-layered views with audio, transcripts, metadata, sentiment trends, and key topics. Enable semantic and keyword-based searches.

  • Retaining Nuance:
    Show sentiment distributions and multiple tags for entries.

  • AI as a Tool, Not Authority:
    Ensure AI is explainable with reasoning and alternative tags. Users can override AI tags to improve relevance.

  • Balancing AI and Human Input:
    Allow comparison of NLP models and AI uncertainty, empowering users to explore insights interactively.

  • Supporting Human Exploration:
    Provide timeline visualizations, statistical overlays, and multi-modal queries for exploring trends over time.

  • Transparency and Iteration:
    Display raw data with AI as optional support, encouraging users to refine insights through comparisons and adjustments.

  • Human-Driven Interpretation:
    Final insights are generated by users, not AI.

Given the complexity of our analytical model, we are assuming that retrieval-augmented generation (RAG) frameworks will also feature in the development.

Needed resources

We will consider hiring an AI Developer for the processing stage, to make the best use of the data collected. Specifically, we want to make the evolution of dynamics visible over time, ideally generating a unique timeline for each query.

Existing resources

  1. Existing Prisma evaluation front-end (images attached above and below)

  2. Prior work on Telegram bots, on a previous action-learning journey: https://docs.prisma.events/Context%20%26%20Narrative/Replace%20Academy%20Case%20Study/

  3. We have received funding from Catalyst Fund 13 to organise the next action-learning journey in Accra, Ghana: https://projectcatalyst.io/funds/13/cardano-open-ecosystem/wada-hub-hackathon-a-local-community-catalyst

  4. Existing Vercel subscription for current tooling

  5. Existing Regenerative Frameworks from the Regenesis Institute and Multi-Capital Methodology Development from Shikhar’s thesis.

This was demoed at a previous event we facilitated, in association with another aligned partner.

 

Links and references

  1. Our website
  2. Our documentation site
  3. On Ground Hackathon Host in Ghana - ARC
  4. Our Regenerative Knowledge & Practice partners - The Regenesis Institute
  5. WADA - Our African Blockchain & AI Development Network Partner
  6. Shikhar's Thesis using the Multi-Capital Methodology (Link)
  7. Regenerative Tech Partner
  8. Tobias' Dissertation 'A Novel Approach to Topic Evolution' (Link)
  9. Advisors
    1. Juvid Aryaman (Data Scientist)
    2. Ayrton San Joaquin (AI Ethics & Governance)
    3. Jessica Groopman (Regen Tech) 

Was there any event, initiative or publication that motivated you to register/submit this proposal?

A personal referral

Proposal Video

Placeholder for Spotlight Day pitch presentations. Videos will be added by the DF team when available.

  • Total Milestones

    4

  • Total Budget

    $50,000 USD

  • Last Updated

    24 Feb 2025

Milestone 1 - Ingest System Development & Initial UI Setup

Description

Prepare the system for data collection by developing the ingest pipeline for Telegram voice notes and ensuring basic UI functionality for early testing. This milestone ensures that stakeholders can immediately start using the system during the action learning journey in Ghana.

Deliverables

  1. Telegram bot for voice note ingestion, with metadata storage in PostgreSQL and file storage in AWS S3.

  2. Initial front-end UI that allows users to see ingested voice notes and confirm uploads.

  3. Documentation for onboarding community members and developers to test the system.

Budget

$6,000 USD

Success Criterion

  1. Back-end Development:
    a. Successfully capture and store at least 30 voice notes from test groups.

  2. Front-end Development:
    a. Basic UI allows users to see their uploaded voice notes and metadata.
    b. Users receive confirmation upon successful upload.

  3. Testing & Stakeholder Feedback:
    a. System tested with 3 Telegram groups (20+ users).
    b. At least 15 participants from the hackathon successfully submit voice notes.
    c. Feedback collected from lead impulses to refine the ingestion process.

Milestone 2 - Processing System Development & Expanded UI

Description

Implement voice-to-text transcription and vector embedding for searchability. Expand the UI to display transcribed text and allow simple filtering.

Deliverables

  1. API for transcribing voice notes to text.

  2. Vector embeddings stored in PostgreSQL for retrieval.

  3. UI update allowing users to view transcriptions.

Budget

$9,000 USD

Success Criterion

  1. Back-end Development:
    a. 95% transcription accuracy on clear voice samples.
    b. Successfully generate vector embeddings for 100+ voice notes.

  2. Front-end Development:
    a. UI displays transcripts alongside original voice notes.
    b. Users can filter transcriptions by date and sender.

  3. Testing & Stakeholder Feedback:
    a. 10+ test users successfully retrieve transcriptions.
    b. Initial stakeholder feedback collected to refine transcription output.

Milestone 3 - Preparing System (Tagging, Clustering, Sentiments)

Description

Implement automated tagging, clustering, and sentiment analysis to add structure to the data and improve searchability. This enables real-time insight generation during the hackathon.

Deliverables

  1. Automated tagging system for voice note transcripts.

  2. Clustering algorithm for grouping related voice notes.

  3. Sentiment analysis integration.

Budget

$18,500 USD

Success Criterion

  1. Back-end Development:
    a. Tags generated for 400+ voice notes with 90%+ accuracy.
    b. Clustering algorithm groups related notes with 80%+ accuracy.

  2. Front-end Development:
    a. UI updates allow users to filter by topic and sentiment.

  3. Testing & Stakeholder Feedback:
    a. 20+ test users use the filtering system successfully.
    b. Stakeholder review confirms usefulness of clustering and sentiment analysis.

Milestone 4 - Insight Generation & Full UI

Description

Develop a natural language query system that enables real-time insight retrieval. Implement statistical models to identify discussion trends and generate actionable reports.

Deliverables

  1. Query processing system for retrieving insights.

  2. LLM-generated summaries of discussion trends.

  3. Final UI with full navigation and interactive elements.

Budget

$16,500 USD

Success Criterion

  1. Back-end Development:
    a. Query system achieves positive responses on 3 main test query types.
    b. Statistical model selection successfully generates trend insights.

  2. Front-end Development:
    a. Users can enter queries and receive structured insights.
    b. Timeline UI enables navigation of discussion trends.

  3. Testing & Stakeholder Feedback:
    a. Thorough review with hackathon participants post-intensive.
    b. Final stakeholder review validates system readiness for second iteration.
