Channel: Large sample asymptotic/theory - Why to care about? - Cross Validated

Large sample asymptotic/theory - Why to care about?


I hope this question does not get marked as "too general," and I hope it starts a discussion that benefits everyone.

In statistics, we spend a lot of time learning large-sample theory. We are deeply interested in the asymptotic properties of our estimators: whether they are asymptotically unbiased, asymptotically efficient, what their asymptotic distribution is, and so on. The word asymptotic is strongly tied to the assumption that $n \rightarrow \infty$.

In reality, however, we always deal with finite $n$. My questions are:

1) What do we mean by a large sample? How can we distinguish between small and large samples?

2) When we say $n \rightarrow \infty$, do we literally mean that $n$ should go to $\infty$?

E.g., for the binomial distribution, $\bar{X}$ needs roughly $n = 30$ for the normal approximation under the CLT to be adequate. Should we have $n \rightarrow \infty$, or does $\infty$ in this case mean 30 or more?!
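To make the "how large is large?" question concrete, here is a minimal simulation sketch (function names and the choice $p = 0.3$ are mine, purely for illustration). It checks how often the standardized sample mean of Bernoulli draws lands inside $\pm 1.96$, which should approach the nominal 95% as the CLT approximation improves with $n$:

```python
import math
import random

random.seed(0)

def sample_mean_bernoulli(n, p):
    """Mean of n Bernoulli(p) draws, i.e. a binomial proportion."""
    return sum(random.random() < p for _ in range(n)) / n

def coverage(n, p, reps=20000):
    """Fraction of standardized sample means falling within +/-1.96.
    Under an exact normal distribution this would be 0.95."""
    sd = math.sqrt(p * (1 - p) / n)  # standard error of the mean
    hits = 0
    for _ in range(reps):
        z = (sample_mean_bernoulli(n, p) - p) / sd
        if abs(z) <= 1.96:
            hits += 1
    return hits / reps

for n in (5, 30, 200):
    print(n, round(coverage(n, p=0.3), 3))
```

For small $n$ the discreteness of the binomial makes the coverage jump around the nominal level; as $n$ grows it settles near 0.95, which is one pragmatic way to judge when a sample is "large enough" for a given distribution.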

3) Suppose we have a finite sample and we know everything about the asymptotic behavior of our estimators. So what? Suppose our estimators are asymptotically unbiased; do we then have an unbiased estimate of our parameter of interest in our finite sample, or does it only mean that if we had $n \rightarrow \infty$, we would have an unbiased one?
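A standard example of the gap between the two: the MLE of the variance of a normal sample divides by $n$, so $E[\hat{\sigma}^2] = \frac{n-1}{n}\sigma^2$. It is biased for every finite $n$ but asymptotically unbiased, since the bias $-\sigma^2/n \to 0$. A small Monte Carlo sketch (names are mine, for illustration):

```python
import random

random.seed(1)

def mle_variance(xs):
    """MLE of the variance: divides by n, not n - 1.
    E[estimate] = (n-1)/n * sigma^2, so it is biased for finite n."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def average_estimate(n, reps=20000):
    """Monte Carlo average of the MLE variance over N(0, 1) samples;
    the true variance is 1, so the average exposes the bias."""
    total = 0.0
    for _ in range(reps):
        xs = [random.gauss(0, 1) for _ in range(n)]
        total += mle_variance(xs)
    return total / reps

for n in (5, 30, 500):
    print(n, round(average_estimate(n), 3))  # true value is 1
```

At $n = 5$ the average estimate sits near $0.8$ (i.e. $\frac{4}{5}\sigma^2$), while at $n = 500$ it is essentially 1: asymptotic unbiasedness tells you the bias vanishes in the limit, not that it is absent in your sample.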

As you can see from the questions above, I'm trying to understand the philosophy behind "large-sample asymptotics" and to learn why we care. I need to build some intuition for the theorems I'm learning.

