kounchev
Project Owner. PI and project developer; delivers the theoretical results in the area of polyharmonic interpolators and their algorithmic implementation.
The present project focuses on the core algorithm of LLMs. We reconsider the Transformer part of the LLM architecture by replacing the standard feed-forward networks (FFNs) with a different type of multivariate interpolator, the polyharmonic interpolator (PHI). This would greatly increase the flexibility of the Transformer architecture, the speed and capacity of the training and embedding phase, and the efficiency of inference. PHIs leverage the well-established multivariate theory of interpolation by solutions, or piecewise solutions, of elliptic equations. A major advantage of the PHI paradigm is the interpretability of its parameters.
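For background, here is a minimal sketch of the standard mathematics behind polyharmonic interpolation (textbook material on polyharmonic functions, not the project's specific construction):

```latex
% A function u is polyharmonic of order p on a domain \Omega when it
% satisfies the iterated Laplace equation. A PHI matches given data
% f_j on interfaces \Gamma_j (e.g., concentric hyperspheres or
% parallel hyperplanes) while solving the elliptic equation
% piecewise between the interfaces.
\[
  \Delta^{p} u = 0 \quad \text{in } \Omega \setminus \bigcup_{j} \Gamma_j,
  \qquad
  u\big|_{\Gamma_j} = f_j, \quad j = 1, \dots, N .
\]
```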
New AI service
We aim to deliver a substantial improvement to LLMs in the speed and computational efficiency of the learning/training phase, as well as in the interpretability of the parameters.
Input: LLM with a standard Transformer architecture.
Output: LLM with the innovative PHI-Transformer architecture.
Milestone 1
Description: We construct the PHI interpolators, first for spherically symmetric interfaces and boundaries, and second for translation-invariant hyperplane interfaces and boundaries, including a description of the basis-function models (spherical harmonics and exponential functions).
Deliverable: Algorithms for constructing the basis functions on the interface surfaces (hyperspheres) and the PHI parameters in the spherically symmetric case; an analogous construction of the basis functions on hyperplanes and the related PHI parameters. A worked sketch of the hypersphere case follows this milestone.
Budget: $14,000 USD
Success criterion: Successful tests for special interfaces and data.
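The following is a hypothetical illustration of this deliverable (the function name, the truncation order n_max, and the least-squares fit are our choices, not the project's algorithm): it evaluates a truncated spherical-harmonic basis on a sphere and fits sampled data to it, producing the kind of interface coefficients a PHI would propagate radially between concentric spheres.

```python
# Illustrative sketch (not the project's code): a spherical-harmonic
# basis on a sphere, the interface basis Milestone 1 describes for
# the spherically symmetric case.
import numpy as np
from scipy.special import sph_harm  # Y_n^m(theta, phi)

def spherical_harmonic_basis(theta, phi, n_max):
    """Evaluate all Y_n^m with n <= n_max at points (theta, phi).

    theta: azimuthal angles in [0, 2*pi); phi: polar angles in [0, pi].
    Returns a complex array of shape (num_points, num_basis).
    """
    cols = []
    for n in range(n_max + 1):
        for m in range(-n, n + 1):
            cols.append(sph_harm(m, n, theta, phi))
    return np.stack(cols, axis=-1)

# Least-squares fit of samples f on the sphere to the truncated basis;
# the coefficients are the interface data on one hypersphere.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
phi = np.arccos(rng.uniform(-1.0, 1.0, 200))       # uniform on the sphere
f = np.cos(phi) + 0.5 * np.sin(theta) * np.sin(phi)  # sample data
B = spherical_harmonic_basis(theta, phi, n_max=4)
coeffs, *_ = np.linalg.lstsq(B, f.astype(complex), rcond=None)
```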
Milestone 2
Description: We will provide research demonstrating the advantage of the PHI paradigm over the usual NN paradigm, supported by extensive experiments; an illustrative comparison setup is sketched after this milestone.
Deliverable: Results of experiments demonstrating the advantage of PHI over standard NNs for interpolation purposes.
Budget: $14,000 USD
Success criterion: The experiments must show that PHI provides a substantial improvement in speed, computational power, and interpretability of the parameters, compared with the usual NNs.
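As a sketch of what such a comparison could look like (an assumption on our part: we use SciPy's thin-plate-spline interpolator, a polyharmonic spline, as a stand-in PHI, and scikit-learn's MLP as the baseline NN; the project's experimental protocol may differ):

```python
# Illustrative PHI-vs-NN interpolation comparison on scattered 2-D data.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(-1.0, 1.0, size=(200, 2))
y_train = np.sin(3.0 * X_train[:, 0]) * np.cos(2.0 * X_train[:, 1])
X_test = rng.uniform(-1.0, 1.0, size=(500, 2))
y_test = np.sin(3.0 * X_test[:, 0]) * np.cos(2.0 * X_test[:, 1])

# Thin-plate splines are polyharmonic splines: they minimize a bending
# energy and solve a biharmonic-type equation away from the data nodes.
phi = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X_train, y_train)

for name, pred in [("PHI (thin-plate)", phi(X_test)),
                   ("MLP", mlp.predict(X_test))]:
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    print(f"{name}: RMSE = {rmse:.4f}")
```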
Milestone 3
Description: Integration of the PHI machine into the Transformer architecture, with a smooth mechanism for the complete architecture; a hypothetical sketch of such a block follows this milestone.
Deliverable: PHI-Transformer architecture, with results of experiments on synthetic and real data.
Budget: $14,000 USD
Success criterion: Delivery of a fully functioning PHI-Transformer architecture, showcasing an essential advantage of the PHI-Transformer over the usual Transformer in tests on synthetic and real data.
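A hypothetical sketch of such an integration, assuming a simple polyharmonic-RBF parametrization of the PHI layer (kernel r^2 log r with learnable centers); the class names are ours, and the project's actual PHI construction may differ:

```python
# Sketch: a Transformer block whose FFN is replaced by a PHI-style layer.
import torch
import torch.nn as nn

class PHILayer(nn.Module):
    def __init__(self, d_model: int, num_centers: int = 64):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_centers, d_model))
        self.readout = nn.Linear(num_centers, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); r: distances to each learned center.
        c = self.centers.unsqueeze(0).expand(x.shape[0], -1, -1)
        r = torch.cdist(x, c)
        # Thin-plate (polyharmonic) kernel r^2 log r; eps guards r = 0.
        k = r.pow(2) * torch.log(r + 1e-6)
        return self.readout(k)

class PHITransformerBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.phi = PHILayer(d_model)  # replaces the standard FFN
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        a, _ = self.attn(x, x, x)
        x = self.ln1(x + a)
        return self.ln2(x + self.phi(x))

x = torch.randn(2, 16, 256)
print(PHITransformerBlock()(x).shape)  # torch.Size([2, 16, 256])
```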
Milestone 4
Description: We use the PHI-Transformer architecture for learning/training on real data to show its advantage over the usual Transformer architecture, and we showcase some inference results; a toy training loop is sketched after this milestone.
Deliverable: Results of learning/training experiments on real data, and inference experiments on real data.
Budget: $8,000 USD
Success criterion: Essential advantage of the PHI-Transformer over the usual Transformer architecture, in terms of speed and computational cost.
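For completeness, a toy training-loop sketch using the hypothetical PHITransformerBlock from the Milestone 3 sketch (synthetic tensors; this is not the project's real-data protocol):

```python
# Toy training loop for the hypothetical PHI-Transformer block.
import torch
from torch.nn.functional import mse_loss

model = PHITransformerBlock()  # defined in the Milestone 3 sketch above
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
x = torch.randn(8, 16, 256)        # synthetic inputs
target = torch.randn(8, 16, 256)   # synthetic targets
for step in range(100):
    opt.zero_grad()
    loss = mse_loss(model(x), target)
    loss.backward()
    opt.step()
```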