Milestone | Amount | Status | Date
Milestone Release 1 | $5,000 USD | Transfer Complete | TBD
Milestone Release 2 | $10,000 USD | Transfer Complete | TBD
Milestone Release 3 | $10,000 USD | Transfer Complete | TBD
Milestone Release 4 | $10,000 USD | Transfer Complete | TBD
Milestone Release 5 | $10,000 USD | Transfer Complete | TBD
Milestone Release 6 | $10,000 USD | Transfer Complete | TBD
Milestone Release 7 | $10,000 USD | Transfer Complete | TBD
Milestone Release 8 | $11,000 USD | Transfer Complete | TBD
Milestone Release 9 | $12,500 USD | Transfer Complete | TBD
Milestone Release 10 | $12,500 USD | Transfer Complete | TBD
Milestone Release 11 | $11,500 USD | Transfer Complete | TBD
Milestone Release 12 | $15,000 USD | Pending | TBD
Milestone Release 13 | $22,500 USD | Pending | TBD
"This milestone is primarily utilitarian as we focus on speech-to-text onboarding for student input, then implementation. We are onboarding two services: Google Cloud services for immediate use, and the open-source Whisper model for long-term use. We are testing transcription against a Kaggle dataset of .wav voice recordings, and the results are looking good. We intend to use this for voice commands such as "next question", "I choose answer A", and "please repeat" to keep the scope manageable. In combination with NLP, and perhaps the recent remarkable advances in LLMs, we see possibilities for later development of free conversation with avatars trained on the study content.

On the avatar side, the quality of the synthetic voice is important for student comfort and clarity. For text-to-speech, we have connected the avatars to Azure voices, which provide more than 200 synthetic voices across many languages and accents with a quite naturalistic sound. Like other services, these voices accept SSML tags to speed up or slow down speech, emulate prosody, and emphasize text.

At the midpoint of metaverse development, it has become important to refactor out old models and content. The software suite used for avatar realism and lip-sync has also improved quite impressively, so our original models require updating. We are encountering overlapping compatibility issues that impact avatar realism, so we are taking this opportunity to refactor before advancing into the ASL module, which will focus on avatar non-verbal communication. Avatars now speak using Azure voices while delivering tests, presenting content relevant to the site, or reading other dynamic text. For situations where the text will not change, we can pre-animate the lip-sync using the superior software, then import those keyframes into Unity.

When using text-to-speech, we use a blend of facial animation plus 'live' lip-sync that attempts to follow the waveform shape and match reasonable lip-sync letter forms in real time. We have also added a guide for new students in the form of a floating, round UAV probe. This virtual UAV can provide aerial views of the site and guide the student to the next avatar and dialogue."
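The SSML controls mentioned above (rate, prosody, emphasis) map onto standard SSML elements. A minimal sketch of assembling such a payload, assuming a hypothetical helper and an illustrative voice name rather than the project's actual code:

```python
# Minimal sketch: wrapping avatar dialogue in SSML for a neural TTS voice.
# build_ssml and the default voice name are illustrative assumptions.

def build_ssml(text: str, voice: str = "en-US-JennyNeural",
               rate: str = "medium", emphasis=None) -> str:
    """Wrap dialogue text in SSML with optional prosody rate and emphasis."""
    body = text
    if emphasis:
        # <emphasis> stresses the enclosed text (levels: reduced/moderate/strong)
        body = f'<emphasis level="{emphasis}">{body}</emphasis>'
    # <prosody rate="..."> speeds up or slows down delivery
    body = f'<prosody rate="{rate}">{body}</prosody>'
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">{body}</voice>'
        '</speak>'
    )

ssml = build_ssml("Please choose your answer.", rate="slow", emphasis="moderate")
```

The resulting string is what a TTS service such as Azure's would accept in place of plain text, letting the avatar slow down and stress key phrases during a lesson.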
Steady. We would always like to be further ahead; the Christmas/New Year's period and securing ML talent consumed time.
Advancing quickly on the immersive project front; overall progress is steady. Machine-learning experts are in high demand, so we are a little delayed there. We are finding 3D modelling talent among new graduates, which gives them an early opportunity to work and gives us faster model prep than we could manage ourselves.
Very well. We are accomplishing more than expected at this early stage, creating high-quality worlds and avatars and writing many new documents to support lesson creation.
We have completed milestones M-1 and M-0: updated the X2-series reactors in a shipping-container build, created instructional avatars (realistic humanoids), created the first series of scripts, updated avatar clothing and uniforms, and created a downloadable app for laptops, tablets, and VR (Quest 2).
We propose an adaptive AI service API to improve sales and study within virtual worlds by reducing UI and UX friction through natural language processing, behavioral avatar AI, and accessibility intelligence. A partnership between Carbix Corporation and Xyris XR Design provides business acumen, advanced complex technology, a library of existing metaverse projects, and interactive avatars, giving us a solid foundation for the success of this project.
New reviews and ratings are disabled for Awarded Projects
Signed Contract
$5,000 USD
1. Scene updates for the facility and detailed lesson scripts for current equipment; additional avatar dialogue scripts for X2 and CCU provided in text or spreadsheet form.
2. VR headset models reference list; immersive scene model updates; 2 new avatars (female); corrections to avatar posture.
3. Desktop .exe recompilation of the Geofloat scene.
4. Additional PCVR (tethered Oculus Quest) VR-desktop .exe compilation.
$10,000 USD
1. Nature-based scenes of cement plants and the X2-CCU installation; training scripts/syllabi for the facility layout for avatars; pop-up guidance for users; compiled Windows .exe.
2. New project scene with Iceland geothermal and embedded training scripts; compiled Windows .exe.
$10,000 USD
1. Lean strategy doc (adaptive learning, learning-graph ontology examples, web-facing lesson-creator UI proposals).
2. Two dynamic models for CCU: reactor and ID (induced draft) fan system; PC and PCVR .exe builds of the new CCU models.
$10,000 USD
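The learning-graph ontology named in this milestone can be pictured as lessons linked by prerequisite edges, with recommendations drawn from the frontier of lessons whose prerequisites are met. A minimal sketch, with hypothetical lesson names (the actual ontology is in the project's strategy doc):

```python
# Toy lesson graph: each lesson maps to its prerequisite lessons.
# Lesson names here are illustrative, not the project's real content.
LESSON_GRAPH = {
    "reactor-basics":    [],
    "ccu-overview":      ["reactor-basics"],
    "id-fan-system":     ["ccu-overview"],
    "geothermal-siting": ["ccu-overview"],
}

def recommend(mastered):
    """Return lessons not yet mastered whose prerequisites are all met."""
    return sorted(
        lesson for lesson, prereqs in LESSON_GRAPH.items()
        if lesson not in mastered and all(p in mastered for p in prereqs)
    )
```

Starting from an empty mastery set, only the root lesson is recommended; as the student progresses, the frontier advances through the graph.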
SNET API delivering lesson graphs and Recommendations (rev 1 MVP). Please refer to attachment “Anticipated API Overview – Module 1 Adaptive Learning”. VR grab/manipulate object functionality.
$10,000 USD
1. Add X2-CCU for geothermal with nature and technical build scenes.
2. New .exe of the cement and geothermal scenes utilizing Carbix models.
3. Improvement of adaptive-learning recommendation weighting algorithms.
4. SNET API Adaptive Learning rev 2 MVP; please refer to attachment “Anticipated API Overview – Module 1 Adaptive Learning”.
$10,000 USD
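One simple shape the recommendation weighting mentioned above could take: each topic carries a review weight that rises when the student answers incorrectly and decays when they answer correctly, and the highest-weight topics are surfaced first. This is a hedged sketch with illustrative constants, not the project's actual algorithm:

```python
# Per-topic review weights: a miss boosts the weight (topic needs review),
# a hit decays it. The boost/decay constants are illustrative assumptions.

def update_weight(weight, correct, boost=0.3, decay=0.5):
    """Raise a topic's review weight on a miss, decay it on a hit."""
    return weight * decay if correct else min(1.0, weight + boost)

def rank_topics(weights):
    """Most review-worthy (highest-weight) topics first."""
    return sorted(weights, key=weights.get, reverse=True)

w = {"reactor": 0.2, "id-fan": 0.2}
w["reactor"] = update_weight(w["reactor"], correct=False)  # missed question
w["id-fan"] = update_weight(w["id-fan"], correct=True)     # answered correctly
```

After one miss on "reactor" and one hit on "id-fan", the ranking puts "reactor" first, which is the behavior a weighting revision would tune.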
Lean strategy doc (integration of an open-source speech-recognition library and a linguistic tagging library). Speech-to-text SNET API rev 1 MVP. Please refer to attachment “Anticipated API Overview – Module 2 Speech Recognition”.
$10,000 USD
Command recognition (e.g. “Please repeat”, “Tell me more”, “I got it”). Avatar text responses returned via the API, which further the lesson guidance and confirm user input. Conversational avatar SNET API rev 2 MVP. Please refer to attachment “Anticipated API Overview – Module 2 Speech Recognition”.
$11,000 USD
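Because the command vocabulary above is deliberately small, recognition can reduce to normalizing the speech-to-text transcript and looking for a known phrase inside it. A minimal sketch under that assumption (the command codes are hypothetical, not the API's actual schema):

```python
# Map each spoken phrase to a command code the lesson engine can act on.
# Phrases come from the milestone text; the codes are illustrative.
COMMANDS = {
    "please repeat": "REPEAT",
    "tell me more":  "ELABORATE",
    "i got it":      "CONFIRM",
    "next question": "NEXT",
}

def recognize(transcript):
    """Return the command code whose phrase appears in the transcript."""
    # Lowercase and collapse whitespace so STT formatting doesn't matter.
    text = " ".join(transcript.lower().split())
    for phrase, code in COMMANDS.items():
        if phrase in text:
            return code
    return None
```

Substring matching tolerates filler words around the command ("um, tell me more") while anything outside the vocabulary falls through to `None`, which the avatar can answer with a clarification prompt.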
Document: project strategy plan for ASL recognition, focusing on interpreting lesson-plan phrases, confirmation commands, static letters, and dynamic phrases. ML solution description and desired outcomes.
$12,500 USD
Motion-capture library of the alphabet and lesson content, with recorded video of a human ASL translator (including facial expressions). Initial machine-learning implementation. A test showing responsiveness of the system to webcam ASL input under ideal lighting conditions, with finger markers and controlled speed of the human translator's arm motion.
$12,500 USD
ASL-to-Text SNET API rev 1 MVP. Please refer to attachment “Anticipated API Overview – Module 3 ASL Recognition”. Functionality: ASL-to-text; learner confirmation commands; a pipeline from ASL-to-text results to linguistic tagging; piping of results through the adaptive-learning graph; the graph returning a lesson recommendation and avatar conversation text (a round trip of lesson delivery / student choice / ASL recognition / return of a lesson recommendation via the SNET API).
$11,500 USD
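The round trip described in that milestone chains several stages: recognized text is tagged, routed through the lesson graph, and returned as a recommendation plus avatar dialogue. A toy end-to-end sketch, where every stage is a deliberately crude stand-in for the real component (tagger, graph, and response template are all illustrative assumptions):

```python
# Stand-in pipeline for: ASL-to-text -> linguistic tagging ->
# adaptive-learning graph -> lesson recommendation + avatar text.

def tag(text):
    """Stand-in linguistic tagger: crude stop-word filtering."""
    stop = {"i", "the", "a", "to", "choose", "please"}
    return [w for w in text.lower().split() if w not in stop]

def next_lesson(keywords, graph):
    """Pick the first lesson whose topic list matches a tagged keyword."""
    for lesson, topics in graph.items():
        if any(k in topics for k in keywords):
            return lesson
    return "review-current"  # fallback when nothing matches

def round_trip(asl_text, graph):
    """Full round trip: recognized text in, recommendation + dialogue out."""
    keywords = tag(asl_text)
    lesson = next_lesson(keywords, graph)
    return {"lesson": lesson, "avatar_text": f"Let's continue with {lesson}."}

# Illustrative graph mapping lessons to topic keywords.
graph = {"id-fan-basics": ["fan", "draft"], "reactor-intro": ["reactor"]}
result = round_trip("I choose the reactor", graph)
```

The point of the sketch is the data flow, not any one stage: each component can be swapped for the real ASL recognizer, tagger, or graph service without changing the round-trip shape.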
Post-development: marketing on SNET, primarily for API calls.
$15,000 USD
Post-development: marketing on SNET, primarily for API calls and user-engagement/focus-group metrics.
$22,500 USD
© 2024 Deep Funding