The problems with execution of the Risk Assessment app on the SNET AI Marketplace have been resolved. The Risk-Aware Data Generator has now launched on the marketplace; however, its execution is currently experiencing problems, and we are working with the SNET team to debug them.
During November our projects stalled on the integration with SingularityNET. Our approach to resolving this problem has been two-fold: we reached out to SNET and secured a commitment of support from Serguey Shalyapin, and a newly hired data scientist has agreed to review the issues.
Photrek will provide a Coupled Variational Autoencoder as a service for the SingularityNet community. This algorithm will enable the learning of risk-aware models that can generate robust, accurate simulations and forecasts.
Photrek provides services to improve machine intelligence for complex systems. Photrek's current projects include environmental detection systems, decentralized governance, and development of risk-aware machine learning algorithms.
Problem Description
SingularityNet applications, such as financial trading, sustainability, and health and longevity, require training from voluminous, reliable datasets. Often, the relevant datasets suffer from gaps and incomplete records. There is a need to fill these gaps with accurate simulations based on models that are aware of the applications' underlying risks.
Solution Description
Photrek will use a Coupled Variational Autoencoder (C-VAE) machine learning (ML) algorithm to create risk-aware datasets. The algorithm comprises three components: a variational autoencoder capable of learning probabilistic models; a Coupled Evidence Lower Bound (C-ELBO) that generalizes the log-likelihood and divergence metrics with a tunable risk tolerance; and a dynamic time-series model.
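The role of the C-ELBO component can be illustrated with a minimal sketch of the training objective. This is not the project's implementation; the exact form and sign convention of the coupled logarithm here are assumptions for illustration, chosen so that increasing the coupling amplifies the cost of low-likelihood reconstructions.

```python
import numpy as np

def coupled_log(x, kappa=0.0):
    """Illustrative coupled logarithm: reduces to np.log at kappa = 0.
    For kappa > 0 the value diverges faster as x -> 0, so low-likelihood
    (tail) events incur a heavier penalty."""
    if kappa == 0.0:
        return np.log(x)
    return (1.0 - np.asarray(x, dtype=float) ** -kappa) / kappa

def coupled_elbo(log_likelihood, kl_divergence, kappa=0.0):
    """Sketch of a coupled Evidence Lower Bound: the coupled-log of the
    reconstruction likelihood minus the divergence term. At kappa = 0
    this is the standard ELBO."""
    likelihood = np.exp(np.asarray(log_likelihood, dtype=float))
    return np.mean(coupled_log(likelihood, kappa)) - kl_divergence
```

The tunable risk tolerance enters through `kappa`: at zero the objective is the familiar ELBO, and raising it shifts training effort toward the worst-reconstructed samples.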
Project Benefits for SNET AI platform
The Photrek team will host the Coupled Variational Autoencoder algorithm on the SingularityNet marketplace. The C-VAE service will provide the SNET community with the ability to learn risk-aware probabilistic models. These models will have interpretable features, be able to generate datasets and classify objects, and be tunable to a desired degree of risk tolerance. The Photrek team held problem-identifying meetings with SingularityDAO, SingularityNet, and Rejuve. These discussions identified the ability to fill gaps in real-world data as a crucial issue in improving the pipeline of machine-learning forecasting capability. For instance, SingularityDAO continuously collects market data, but there are often gaps in the data. The Rejuve team requires the integration of cohort datasets with differing degrees of completeness. And the sustainability team seeks to ensure that the complexity of climate models does not lead to overfitting and thus over-confident forecasting. The Photrek Coupled VAE algorithm will provide simulated interpolation of these gaps, tunable to the degree of risk desired for the model.
Competitive Landscape
Within ML/AI, two principal technologies have been used as data generators: the Variational Autoencoder (VAE) and the Generative Adversarial Network (GAN). While GAN technology has been highly successful on metrics of generator performance, like many deep-learning methodologies the resulting networks are difficult to interpret. In contrast, the VAE produces a probabilistic model whose dimensions can be used to control semantic features. To enhance the independence of the features, Google DeepMind developed the beta-VAE, which strengthens the relative weight of the divergence of the probabilistic model from a prior with independent dimensions. While this technique has been successful in improving “disentanglement”, it trades off against reconstruction performance. The coupled VAE provides a tunable improvement in both the latent probability divergence and the reconstruction. Rejuve's Deborah Duong explained that she is currently using the beta-VAE technique, so this will serve as a good test case for Photrek to demonstrate its competitive advantage.
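For context, the beta-VAE objective described above simply upweights the divergence term of the standard negative ELBO; a minimal sketch (variable names are illustrative, not any library's API):

```python
import numpy as np

def beta_vae_loss(recon_log_likelihood, kl_divergence, beta=4.0):
    """beta-VAE training loss: negative reconstruction log-likelihood
    plus a beta-weighted KL divergence. beta > 1 pushes the posterior
    toward the independent-dimension prior (better disentanglement),
    at the cost of reconstruction quality; beta = 1 is a standard VAE."""
    return -np.mean(np.asarray(recon_log_likelihood, dtype=float)) + beta * kl_divergence
```

This makes the trade-off visible: the only knob is the weight on the divergence term, whereas the coupled VAE instead reshapes the likelihood cost itself.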
Marketing & Competition
Robust, accurate models that can be used to generate data and forecasts are crucial for a variety of markets. Through the NSF Cornell I-Corps Customer Discovery program in Summer 2021, we engaged over 30 industry leaders to determine “pain points” in business forecasting. A common theme was the need to address datasets with gaps, a demand this proposal works to satisfy. As one example, Photrek recently completed an analysis of the severe weather forecasting market. The global weather forecasting systems market is anticipated to grow to $4B to $10B by 2028, at a Compound Annual Growth Rate (CAGR) of 5% to 8%, according to Emergen of British Columbia and Grand View Research of San Francisco, respectively (Grand View Research, 2020; Emergen, 2021). Both marketing reports cite big-data analytics and the development of Internet-of-Things (IoT) and AI capabilities as drivers of this growth. However, they indicate inconsistent and incomplete data, along with the complexities of modeling and forecasting, as limiting factors.
To initiate our marketing efforts within the SingularityNET community, Photrek discussed modeling, data generation, and forecasting needs with three teams within the community: the Climate Sustainability team (Matt Ikle), the SingularityDAO team working on financial forecasting (Nejc Znidar), and the Rejuve team working on longevity (Deborah Duong). A common thread in these discussions was the challenge of training ML algorithms with sporadic datasets. In May, Photrek is scheduled to brief Ben Goertzel on the DC-VAE technology, and we will be keen to leverage his guidance on market needs for robust models of complex data.
Needed Resources
The Photrek team expects to collaborate closely with the SingularityNet developers on the integration requirements. We have included in our costs an additional AI programmer to work with the team members identified. Once the integration is complete, we will work closely with the SingularityNet marketing team to promote the application.
Long Description
Time-series data often suffer from missing values, with data either missing completely at random (MCAR) or, more critically, missing not at random (MNAR). We use the generative capabilities of Variational Autoencoders (VAEs) to fill in (impute) these missing data. In particular, we will apply VAE-based techniques developed through our research into coupled systems, and implement emerging methods using dynamical-systems approaches in VAEs.
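A common way a trained VAE is used for imputation can be sketched as follows. The `encode` and `decode` callables are hypothetical placeholders for the trained networks, not the project's API; the loop initializes the gaps, then repeatedly reconstructs and refreshes only the missing entries.

```python
import numpy as np

def impute_missing(x, mask, encode, decode, n_iter=10):
    """Iterative VAE imputation sketch. `mask` is True where data are
    observed. Gaps start at the observed mean; each pass reconstructs
    the series through the VAE and copies the reconstruction into the
    gaps, while observed values are always kept fixed."""
    x = np.asarray(x, dtype=float)
    x_filled = np.where(mask, x, np.mean(x[mask]))  # initialize gaps
    for _ in range(n_iter):
        x_recon = decode(encode(x_filled))          # reconstruct full series
        x_filled = np.where(mask, x, x_recon)       # refresh only the gaps
    return x_filled
```

With stand-in networks, e.g. `impute_missing([1.0, float("nan"), 3.0], np.array([True, False, True]), lambda x: x, lambda z: np.full_like(z, 5.0))`, the observed entries survive unchanged and only the gap is filled by the decoder.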
Figure 1: Basic Autoencoder
These ideas have grown out of research originally conducted in commercial and academic settings. Working with Constantino Tsallis, External Faculty Fellow of the Santa Fe Institute, and Thistleton as consultants, Nelson and his team demonstrated the use of a generalized entropy to measure hidden structure in complex signals (Marsh and Nelson, 2005), invented the generalized Box-Muller method to generate q-Gaussians, which are equivalent to the coupled Gaussians described in the current work (Thistleton et al., 2007), and contributed to the understanding of the origins of long-range correlations. Building upon this research, Nelson developed methods leading to improvements in probability forecasting algorithms. He continued the development of these ideas while conducting fundamental research as a Research Professor at Boston University, proving the role of NSC in modeling the statistics of nonlinear systems (Nelson et al., 2017), discovering new statistical estimators for heavy-tail distributions (Nelson, 2020), and prototyping the coupled VAE algorithm.
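The generalized Box-Muller method cited above (Thistleton et al., 2007) can be sketched as follows. It mirrors the published construction, replacing the logarithm in the classical Box-Muller transform with a q-logarithm evaluated at q' = (1 + q) / (3 − q); at q = 1 it reduces to the standard Gaussian case.

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm; reduces to np.log as q -> 1."""
    if q == 1.0:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, size, rng=None):
    """Generalized Box-Muller: draws standard q-Gaussian deviates (q < 3)
    from two uniform streams, using q' = (1 + q) / (3 - q) in the q-log."""
    rng = np.random.default_rng() if rng is None else rng
    u1, u2 = rng.random(size), rng.random(size)
    q_prime = (1.0 + q) / (3.0 - q)
    return np.sqrt(-2.0 * q_log(u1, q_prime)) * np.cos(2.0 * np.pi * u2)
```

For q > 1 the samples are heavy-tailed (the coupled-Gaussian regime relevant to this proposal); for q = 1 the routine is exactly the classical Box-Muller transform.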
A Python library integrated with Google's TensorFlow (and extendable to PyTorch) has been developed to support this work. The library includes Nonlinear Statistical Coupling (NSC) (Clements et al.), a collection of functions, in both Python and Mathematica, for modeling the generalization of information theory to nonlinear complex systems, and Coupled-VAE (C-VAE) (Chen et al.), which contains the Python software for learning robust models. The project supported under this solicitation will make these techniques available to the SingularityNet community.
The Dynamic Coupled VAE (DC-VAE) will learn dynamic models of time series events with adjustable levels of risk tolerance. This technology builds on the Coupled VAE algorithm to combine deep learning, probabilistic programming, and complex systems theory to learn complex models (deep learning) while maintaining interpretability (probabilistic programming) and strengthening robustness against rare events (complex systems theory). The work supported under this solicitation will complete offline prototype design and testing. We will propose a subsequent project which will integrate this capability into the SingularityNet framework.
We are confident that the DC-VAE approach will learn robust, accurate forecasts, based upon studies we have completed on image-processing tasks with the coupled-VAE algorithm. A VAE uses neural networks to encode and decode a probabilistic layer that stores an interpretable model. We have obtained improvements in the reconstruction of MNIST images as the coupling value of the negative Evidence Lower Bound (ELBO) is increased. This raises the cost of low-likelihood events and drives the training to learn a model that reduces the extremes of the reconstructed log-likelihood. The result is an improvement in both the accuracy, measured by the geometric mean of the likelihoods (a translation of the information-theoretic average log-likelihood), and the robustness, measured by a generalized mean (a generalization of that information-theoretic metric). The decisiveness, measured by the arithmetic mean of the likelihoods, is sensitive to the best-performing likelihoods and is closely related to the classification performance of a decision algorithm.
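The three metrics named here are all generalized (power) means of the reconstruction likelihoods, differing only in their exponent; a minimal sketch, with the −2/3 robustness exponent taken from the figure caption below:

```python
import numpy as np

def generalized_mean(p, r):
    """Power mean of likelihoods p with exponent r; r -> 0 gives the
    geometric mean (via the mean of the logs)."""
    p = np.asarray(p, dtype=float)
    if r == 0.0:
        return np.exp(np.mean(np.log(p)))
    return np.mean(p ** r) ** (1.0 / r)

def likelihood_metrics(p):
    """Decisiveness, Accuracy, and Robustness as power means of the
    per-sample likelihoods, per the exponents stated in the text."""
    return {
        "decisiveness": generalized_mean(p, 1.0),       # arithmetic: best cases
        "accuracy": generalized_mean(p, 0.0),           # geometric: avg log-likelihood
        "robustness": generalized_mean(p, -2.0 / 3.0),  # weights the worst cases
    }
```

Because the exponents are ordered, robustness ≤ accuracy ≤ decisiveness for any spread of likelihoods, which is why robustness is the metric most sensitive to rare, poorly reconstructed events.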
The generalized information-theoretic tools used in this project have achieved success by highlighting inadequacies in existing forecasting methods (Nelson and Brooks, 2020). In prior work, Nelson uncovered similar issues of over-confidence (Tgavalekos et al. 2010) in otherwise highly sophisticated detection systems.
Figure 3: Reconstruction Histogram of Coupled-VAE from (Cao et al.)
Shown are histograms of the likelihood of a reconstructed image. On the left is the standard VAE (coupling = 0) with corrupted shot-noise input. On the right is the Coupled VAE (coupling = 0.1), which shows many orders of magnitude of improvement in the Robustness and Accuracy. The Accuracy (blue line) is the geometric mean of the likelihoods and is a translation of the information-theoretic metric, the average log-likelihood. The Robustness (green, −2/3 generalized mean) and Decisiveness (red, arithmetic mean) are translations of generalized information-theoretic metrics and are sensitive to the worst and best likelihoods, respectively.
Review For: Risk-Aware Data Generation for SingularityNet Applications
Expert Review
Rating Categories
Reviews and Ratings in Deep Funding are structured in 4 categories. This will ensure that the reviewer takes all these perspectives into account in their assessment and it will make it easier to compare different projects on their strengths and weaknesses.
Overall (Primary): This is an average of the 4 perspectives. At the start of this new process, we are assigning an equal weight to all categories, but over time we might change this and make some categories more important than others in the overall score. (This may even be done retroactively).
Feasibility (secondary)
This represents the user's assessment of whether the proposed project is theoretically possible and if it is deemed feasible. E.g. A proposal for nuclear fission might be theoretically possible, but it doesn’t look very feasible in the context of Deep Funding.
Viability (secondary)
This category is somewhat similar to Feasibility, but it interprets the feasibility against factors such as the size and experience of the team, the budget requested, and the estimated timelines. We could frame this as: “What is your level of confidence that this team will be able to complete this project and its milestones in a reasonable time, and successfully deploy it?”
Examples:
A proposal that promises the development of a personal assistant that outperforms existing solutions might be feasible, but if there is no AI expertise in the team the viability rating might be low.
A proposal that promises a new Carbon Emission Compensation scheme might be technically feasible, but the viability could be estimated low due to challenges around market penetration and widespread adoption.
Desirability (secondary)
Even if the project team succeeds in creating a product, there is the question of market fit. Is this a project that fulfills an actual need? Is there a lot of competition already? Are the USPs of the project sufficient to make a difference?
Example:
Creating a translation service from, say, Spanish to English might be possible, but it's questionable whether such a service would be able to get a significant share of the market.
Usefulness (secondary)
This is a crucial category that aligns with the main goal of the Deep Funding program. The question to be asked here is: “To what extent will this proposal help to grow the Decentralized AI Platform?”
For proposals that develop or utilize an AI service on the platform, the question could be “How many API calls do we expect it to generate” (and how important / high-valued are these calls?).
For a marketing proposal, the question could be “How large and well-aligned is the target audience?” Another question is related to how the budget is spent. Are the funds mainly used for value creation for the platform or on other things?
Examples:
A metaverse project that spends 95% of its budget on the development of the game and only 5% on the development of an AI service for the platform might expect a low ‘usefulness’ rating here.
A marketing proposal that creates t-shirts for a local high school would get a lower ‘usefulness’ rating than a marketing proposal with a viable plan for targeting highly esteemed universities in a scalable way.
An AI service that is fully dedicated to a single product does not take advantage of the purpose of the platform. If the same service were offered and useful to other parties, this should increase the ‘usefulness’ rating.