This paper addresses the challenge of handling continuous variables in Bayesian classifiers, in particular the common assumption that these variables follow a Gaussian distribution. The authors propose nonparametric kernel density estimation as an alternative to the traditional Gaussian assumption. They present experimental results on both natural and artificial datasets, comparing the performance of the naive Bayesian classifier with and without the Gaussian assumption. The results show significant accuracy improvements on several datasets when kernel density estimation is used, suggesting that the method is effective for learning Bayesian models. The paper also discusses the theoretical properties of kernel density estimation and provides a detailed analysis of the experimental findings, highlighting the advantages of the flexible Bayesian classifier in various scenarios.
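The idea summarized above can be sketched in code. The following is a minimal illustration, not the paper's exact implementation: a naive Bayes classifier whose class-conditional densities for continuous features are estimated with a Gaussian-kernel density estimate over the training values, rather than a single fitted Gaussian per class. The fixed bandwidth `h=0.5` is an assumption for illustration; the paper's actual bandwidth rule may differ.

```python
import math
from collections import defaultdict

def kde(x, samples, h=0.5):
    """Gaussian-kernel density estimate at point x (fixed bandwidth h)."""
    n = len(samples)
    return sum(
        math.exp(-((x - s) / h) ** 2 / 2) / (h * math.sqrt(2 * math.pi))
        for s in samples
    ) / n

def fit(X, y):
    """Store each feature's training values grouped by class label."""
    model = defaultdict(lambda: defaultdict(list))
    for xi, yi in zip(X, y):
        for j, v in enumerate(xi):
            model[yi][j].append(v)
    return model

def predict(model, x):
    """Pick the class maximizing log prior + sum of per-feature log KDEs."""
    total = sum(len(feats[0]) for feats in model.values())
    best, best_score = None, float("-inf")
    for c, feats in model.items():
        prior = len(feats[0]) / total
        score = math.log(prior) + sum(
            math.log(kde(v, feats[j]) + 1e-12)  # small floor avoids log(0)
            for j, v in enumerate(x)
        )
        if score > best_score:
            best, best_score = c, score
    return best
```

Replacing `kde` with a single Gaussian fitted to each class's values recovers the standard Gaussian naive Bayes the paper uses as its baseline, which makes the comparison between the two estimators direct.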