NeuralProphet is an open-source time-series forecasting library. It is a hybrid generalized additive model that combines the simplicity and usability of Facebook's Prophet with the performance and flexibility of a neural network. As of writing this proposal, NeuralProphet has surpassed 1.35 million downloads and 2,750 GitHub stars. The team behind this proposal are also the core developers and maintainers of NeuralProphet.
This service acts as an API wrapper around the underlying NeuralProphet library, which includes the NeuralProphet model class. The wrapper automatically configures the hyperparameters passed into the NeuralProphet model based on the user's input data, and preprocesses that data wherever needed. All the user needs to do is submit their data directly to this service API, bypassing the need to manually write Python code against the NeuralProphet library. The API service still allows users to customize certain hyperparameters, such as known seasonal patterns or events, if necessary.
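As an illustration of the kind of auto-configuration the wrapper performs, the sketch below infers the data frequency from the timestamps alone. The function name and heuristic thresholds are illustrative assumptions, not the service's actual code:

```python
from datetime import datetime, timedelta

def infer_freq(timestamps):
    """Guess the data frequency from the median gap between timestamps.
    Purely illustrative heuristic, not the service's actual logic."""
    gaps = sorted(
        (b - a).total_seconds()
        for a, b in zip(timestamps, timestamps[1:])
    )
    median_gap = gaps[len(gaps) // 2]
    if median_gap >= 86400:
        return "D"    # daily or coarser
    if median_gap >= 3600:
        return "H"    # hourly
    return "min"      # sub-hourly

# Five consecutive days of data are detected as daily frequency.
ts = [datetime(2024, 1, 1) + timedelta(days=i) for i in range(5)]
print(infer_freq(ts))  # D
```

In the real service, inferred values like this would be passed to the NeuralProphet model as hyperparameters, with user-supplied overrides taking precedence.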
The NeuralProphet library was created by Oskar T. and is open-sourced under his GitHub account. Lead core developers and maintainers include Kevin R.C. and Richard S. Please see the Related links section at the end for more details.
As for this service, the project lead is Kevin R.C., with Oskar T. and Richard S. as the technical advisers. Please see the Team section for more details.
Solution Description
NeuralProphet is a generalized additive model with the following components: trend, seasonality, events/holidays, autoregression, and lagged regressors/covariates. Its autoregression and lagged-regressor/covariate components use an underlying neural network model called AR-Net, implemented in PyTorch Lightning. Its other components use more statistical methods (e.g., Fourier terms for seasonality), hence the hybrid nature.
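To make the statistical side of the hybrid concrete, the seasonality component is built from Fourier terms. The minimal sketch below shows how such features can be generated for a timestamp; it illustrates the general technique, not NeuralProphet's internal implementation:

```python
import math

def fourier_features(t, period, order):
    """Return 2*order Fourier terms (sin/cos pairs) for time index t
    with the given seasonal period, as used in GAM-style seasonality."""
    feats = []
    for k in range(1, order + 1):
        arg = 2.0 * math.pi * k * t / period
        feats.append(math.sin(arg))
        feats.append(math.cos(arg))
    return feats

# Weekly seasonality (period=7) at order 3 yields 6 features per timestamp.
row = fourier_features(t=10, period=7.0, order=3)
print(len(row))  # 6
```

The seasonal effect is then a learned linear combination of these features, which is what keeps this component interpretable relative to the neural AR-Net components.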
Project Benefits for SNET AI platform
This NeuralProphet service will be utilized by the SYBIL: The General-Purpose Forecaster service as its hybrid base model. See our separate SYBIL proposal in DF2 - Pool A [New projects].
Aside from SYBIL, the NeuralProphet service can also stand alone as a forecasting service and a capable alternative to the existing Time-Series Analysis service. Although further research and testing are needed, we expect the NeuralProphet service's comparative strengths to include the following:
Automatic preprocessing steps
Handling multivariates as lagged and future regressors
Longer forecasting horizons
Support and maintenance directly from the NeuralProphet developers
Service License Info
The NeuralProphet library is open-sourced under the MIT License - link
This service will be open-sourced under the Apache 2.0 License.
Review For: Onboard NeuralProphet: a hybrid time-series forecasting library
Expert Review
Rating Categories
Reviews and Ratings in Deep Funding are structured in 4 categories. This will ensure that the reviewer takes all these perspectives into account in their assessment and it will make it easier to compare different projects on their strengths and weaknesses.
Overall (Primary) This is an average of the 4 perspectives. At the start of this new process, we are assigning an equal weight to all categories, but over time we might change this and make some categories more important than others in the overall score. (This may even be done retroactively).
Feasibility (secondary)
This represents the user's assessment of whether the proposed project is theoretically possible and if it is deemed feasible. E.g. A proposal for nuclear fission might be theoretically possible, but it doesn’t look very feasible in the context of Deep Funding.
Viability (secondary)
This category is somewhat similar to Feasibility, but it interprets the feasibility against factors such as the size and experience of the team, the budget requested, and the estimated timelines. We could frame this as: “What is your level of confidence that this team will be able to complete this project and its milestones in a reasonable time, and successfully deploy it?”
Examples:
A proposal that promises the development of a personal assistant that outperforms existing solutions might be feasible, but if there is no AI expertise in the team the viability rating might be low.
A proposal that promises a new Carbon Emission Compensation scheme might be technically feasible, but the viability could be estimated low due to challenges around market penetration and widespread adoption.
Desirability (secondary)
Even if the project team succeeds in creating a product, there is the question of market fit. Is this a project that fulfills an actual need? Is there a lot of competition already? Are the USPs of the project sufficient to make a difference?
Example:
Creating a translation service from, say, Spanish to English might be possible, but it's questionable whether such a service would be able to get a significant share of the market.
Usefulness (secondary)
This is a crucial category that aligns with the main goal of the Deep Funding program. The question to be asked here is: “To what extent will this proposal help to grow the Decentralized AI Platform?”
For proposals that develop or utilize an AI service on the platform, the question could be “How many API calls do we expect it to generate” (and how important / high-valued are these calls?).
For a marketing proposal, the question could be “How large and well-aligned is the target audience?” Another question is related to how the budget is spent. Are the funds mainly used for value creation for the platform or on other things?
Examples:
A metaverse project that spends 95% of its budget on the development of the game and only 5% on the development of an AI service for the platform might expect a low 'usefulness' rating here.
A marketing proposal that creates t-shirts for a local high school would get a lower 'usefulness' rating than a marketing proposal that has a viable plan for targeting highly esteemed universities in a scalable way.
An AI service that is fully dedicated to a single product does not take advantage of the purpose of the platform. If the same service were offered and useful to other parties, this should increase the 'usefulness' rating.
Total Milestones
5
Total Budget
$14,000 USD
Last Updated
17 Jun 2024
Milestone 1 - Develop wrapper code
Status
😀 Completed
Description
Develop wrapper code for the NP service to preprocess input data and execute the underlying NP model by passing in the necessary parameters.
Integrate the wrapper code and library as an AWS or other cloud service, handling concerns such as time-outs and latency from multiple concurrent users.
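One common pattern for the time-out handling mentioned above is to run each forecast job in a bounded worker pool and fail fast when it exceeds a deadline. The sketch below is illustrative only, not the project's implementation:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Bound the number of concurrent model runs to protect the host.
executor = ThreadPoolExecutor(max_workers=4)

def run_with_timeout(fn, timeout_s, *args):
    """Run a (potentially slow) forecast job, failing fast on time-out
    so one slow request cannot stall other users."""
    future = executor.submit(fn, *args)
    try:
        return {"ok": True, "result": future.result(timeout=timeout_s)}
    except TimeoutError:
        future.cancel()
        return {"ok": False, "error": "forecast timed out"}

print(run_with_timeout(lambda x: x * 2, 1.0, 21))  # {'ok': True, 'result': 42}
```

A production deployment would typically push this behind an API gateway with request queuing, but the bounded-pool-plus-deadline idea is the core of it.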
New reviews and ratings are disabled for Awarded Projects
Expert Review (anonymous)
Final Group Rating