Wednesday, March 4, 2020

Parametric and Nonparametric Methods in Statistics

There are a few ways to divide up the topics in statistics. One division that quickly comes to mind is the distinction between descriptive and inferential statistics. Another way to separate out the discipline is to classify statistical methods as either parametric or nonparametric. We will see what the difference is between parametric and nonparametric methods by comparing examples of each type.

Parametric Methods

Methods are classified by what we know about the population we are studying. Parametric methods are typically the first methods studied in an introductory statistics course. The basic idea is that a fixed set of parameters determines a probability model. Parametric methods are often those for which we know that the population is approximately normal, or for which we can approximate the sampling distribution with a normal distribution by invoking the central limit theorem. A normal distribution has two parameters: the mean and the standard deviation. Ultimately, the classification of a method as parametric depends upon the assumptions that are made about the population.

A few parametric methods include:

- Confidence interval for a population mean, with known standard deviation
- Confidence interval for a population mean, with unknown standard deviation
- Confidence interval for a population variance
- Confidence interval for the difference of two means, with unknown standard deviations

Nonparametric Methods

To contrast with parametric methods, we now define nonparametric methods. These are statistical techniques that do not require us to assume a particular parametric form for the population we are studying. The methods do not depend on the shape of the population's distribution: the set of parameters is no longer fixed, and neither is the distribution that we use. It is for this reason that nonparametric methods are also referred to as distribution-free methods.

Nonparametric methods are growing in popularity and influence for a number of reasons. The main reason is that we are not constrained as much as we are when using a parametric method. We do not need to make as many assumptions about the population that we are working with. Many of these nonparametric methods are also easy to apply and to understand.

A few nonparametric methods include:

- Sign test for a population median
- Bootstrapping techniques
- Mann-Whitney U test for two independent samples
- Spearman rank correlation test

Comparison

There are multiple ways to use statistics to find a confidence interval about a mean. A parametric method would involve calculating a margin of error with a formula and estimating the population mean with the sample mean. A nonparametric method for calculating a confidence interval for the mean would involve the use of bootstrapping. Both approaches are sketched in the short code examples at the end of this post.

Why do we need both parametric and nonparametric methods for this type of problem? Many times parametric methods are more efficient than the corresponding nonparametric methods. Although this difference in efficiency is typically not much of an issue, there are instances where we do need to consider which method is more efficient.
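To make the parametric approach concrete, here is a minimal Python sketch of a 95% confidence interval for a population mean with unknown standard deviation (the second item in the parametric list above). The sample values are made up purely for illustration, and the sketch assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy import stats

# A small, made-up sample used purely for illustration.
sample = np.array([4.8, 5.1, 5.3, 4.9, 5.6, 5.0, 5.2, 4.7, 5.4, 5.1])

n = sample.size
mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean (n - 1 in the denominator)

# 95% t-interval: sample mean plus or minus t* times the standard error.
t_star = stats.t.ppf(0.975, df=n - 1)
margin_of_error = t_star * sem

print(f"sample mean     = {mean:.3f}")
print(f"margin of error = {margin_of_error:.3f}")
print(f"95% CI          = ({mean - margin_of_error:.3f}, {mean + margin_of_error:.3f})")
```

The t-interval leans on the assumption that the sample comes from an approximately normal population, which is exactly the kind of assumption that makes the method parametric.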
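For the nonparametric side, here is a matching sketch of a percentile bootstrap confidence interval for the same made-up sample. This is one common way to bootstrap, not the only one, and no particular distribution is assumed for the population.

```python
import numpy as np

rng = np.random.default_rng(0)

# The same made-up sample as in the parametric sketch above.
sample = np.array([4.8, 5.1, 5.3, 4.9, 5.6, 5.0, 5.2, 4.7, 5.4, 5.1])

# Resample with replacement many times and record each resample's mean.
n_boot = 10_000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

# Percentile bootstrap: the 2.5th and 97.5th percentiles of the bootstrap
# distribution give an approximate 95% confidence interval for the mean.
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI = ({lower:.3f}, {upper:.3f})")
```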
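Finally, a short sketch of two of the other nonparametric methods listed above, the Mann-Whitney U test and the Spearman rank correlation, using SciPy's implementations. The data here are again invented only to show how the calls work.

```python
import numpy as np
from scipy import stats

# Two small, independent, made-up samples for the Mann-Whitney U test.
group_a = np.array([12.1, 14.3, 11.8, 13.5, 15.0, 12.7])
group_b = np.array([10.2, 11.5, 9.8, 12.0, 10.9, 11.1])

# The U test compares two independent samples without assuming that
# either one comes from a normal distribution.
u_stat, u_pvalue = stats.mannwhitneyu(group_a, group_b)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_pvalue:.4f}")

# Spearman correlation works on ranks, so it also makes no assumption
# about the shape of the underlying distributions.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.2, 4.8, 5.1, 7.0])
rho, s_pvalue = stats.spearmanr(x, y)
print(f"Spearman rho = {rho:.3f}, p = {s_pvalue:.4f}")
```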
