We propose a new approach that adapts the idea of the Bayesian method, with one difference. In the traditional Bayesian method one specifies a prior distribution, possibly a conjugate one, centered at hyperparameters. Our data-augmentation method is intended to help with small samples, where only a few observations are available. We use prior distributions centered at the given observations (calling them super parameters) to generate a larger artificial dataset, which may be termed the second-generation dataset. This larger second-generation dataset is then used to draw inferences. Powerful computers, coupled with suitable numerical algorithms, have increased interest in nonlinear models (such as neural networks) and enabled new model types, such as generalized linear models and multilevel models. Increased computing power has also led to the growing popularity of computationally intensive resampling methods, such as permutation tests and the bootstrap, while techniques such as Gibbs sampling have made Bayesian models more feasible to use.
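A minimal sketch of the augmentation step, under assumptions not fixed by the text: the prior family (here taken to be normal), its spread `scale`, and the number of draws per observation are all hypothetical choices for illustration. Each original observation serves as the center of its own prior, and draws from all priors are pooled into the second-generation dataset:

```python
import random
import statistics

def second_generation_dataset(observations, draws_per_obs=500, scale=0.5, seed=0):
    """Augment a small sample by drawing from a prior (assumed normal here)
    centered at each observation, which plays the role of a super parameter."""
    rng = random.Random(seed)
    augmented = []
    for x in observations:
        # Each observation is the center of its own prior distribution.
        augmented.extend(rng.gauss(x, scale) for _ in range(draws_per_obs))
    return augmented

# A very small original sample (hypothetical data).
sample = [4.2, 5.1, 3.8, 4.9]
aug = second_generation_dataset(sample)

print(len(aug))                        # 2000 second-generation points
print(round(statistics.mean(aug), 2))  # close to the sample mean of 4.5
```

Inference then proceeds on `aug` rather than `sample`; for example, a mean or quantile estimate is computed from the 2000 artificial points instead of the 4 original ones. The choice of `scale` governs how far the artificial data may wander from the observed values.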