2024 | Viacheslav Kovtun, Torki Altameem, Mohammed Al-Maitah, Wojciech Kempa
This research article presents a method for estimating the probability density functions of parameters in small data models with stochastic parameters, based on entropy maximization. The study formalizes the procedure for calculating optimal estimates of the probability density functions for linear and nonlinear, static and dynamic small data models, taking into account specific hypotheses about the properties of the studied object. The methodology combines probability theory, mathematical statistics, information theory, and stochastic programming. The key contribution is the use of entropy maximization on sets derived from a small number of censored measurements to formalize these models. The approach determines optimal probability density functions for various small data models by reducing the problem to the canonical form of stochastic linear programming with probabilistic constraints. The results demonstrate the effectiveness of this method in estimating probability density functions for both static and dynamic small data models, with applications in machine learning and statistical inference. The study highlights the importance of accounting for stochastic parameters and the impact of noise in small data scenarios, providing a framework for handling uncertainty in data modeling. The findings contribute to the development of more robust and accurate models in machine learning, particularly in scenarios with limited data.
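To make the core idea concrete, the following is a minimal, generic sketch of maximum-entropy density estimation from a small sample: a discretized density is chosen to maximize Shannon entropy subject to matching the sample's first two moments. This is an illustrative formulation only, not the authors' exact constraint set (which is built from censored measurements and reduced to stochastic linear programming); the sample values, grid, and moment constraints here are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical small sample (n = 5) of a stochastic model parameter.
sample = np.array([0.9, 1.1, 1.0, 1.3, 0.7])

# Discretize the support and estimate a density p over the grid by
# maximizing Shannon entropy subject to normalization and matching
# the sample's first and second moments (a generic maximum-entropy
# formulation, not the paper's censored-measurement constraint set).
x = np.linspace(-1.0, 3.0, 81)
dx = x[1] - x[0]
m1, m2 = sample.mean(), (sample**2).mean()

def neg_entropy(p):
    # Negative Shannon entropy of the discretized density (to minimize).
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p)) * dx

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() * dx - 1.0},         # normalization
    {"type": "eq", "fun": lambda p: (p * x).sum() * dx - m1},    # first moment
    {"type": "eq", "fun": lambda p: (p * x**2).sum() * dx - m2}, # second moment
]

p0 = np.full_like(x, 1.0 / (x[-1] - x[0]))  # uniform initial guess
res = minimize(neg_entropy, p0, bounds=[(0, None)] * len(x),
               constraints=constraints, method="SLSQP")
p_hat = res.x  # maximum-entropy density estimate on the grid
```

With only mean and variance constraints, the maximum-entropy solution approaches a Gaussian truncated to the grid; richer constraint sets, as in the article, yield different optimal densities.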