<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="generator" content="pandoc" />
<meta http-equiv="X-UA-Compatible" content="IE=EDGE" />
<title>Making Inference</title>
<script src="site_libs/header-attrs-2.14/header-attrs.js"></script>
<script src="site_libs/jquery-3.6.0/jquery-3.6.0.min.js"></script>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link href="site_libs/bootstrap-3.3.5/css/cerulean.min.css" rel="stylesheet" />
<script src="site_libs/bootstrap-3.3.5/js/bootstrap.min.js"></script>
<script src="site_libs/bootstrap-3.3.5/shim/html5shiv.min.js"></script>
<script src="site_libs/bootstrap-3.3.5/shim/respond.min.js"></script>
<style>h1 {font-size: 34px;}
h1.title {font-size: 38px;}
h2 {font-size: 30px;}
h3 {font-size: 24px;}
h4 {font-size: 18px;}
h5 {font-size: 16px;}
h6 {font-size: 12px;}
code {color: inherit; background-color: rgba(0, 0, 0, 0.04);}
pre:not([class]) { background-color: white }</style>
<script src="site_libs/navigation-1.1/tabsets.js"></script>
<style type="text/css">
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
span.underline{text-decoration: underline;}
div.column{display: inline-block; vertical-align: top; width: 50%;}
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
ul.task-list{list-style: none;}
</style>
<link rel="stylesheet" href="styles.css" type="text/css" />
<style type = "text/css">
.main-container {
max-width: 940px;
margin-left: auto;
margin-right: auto;
}
img {
max-width:100%;
}
.tabbed-pane {
padding-top: 12px;
}
.html-widget {
margin-bottom: 20px;
}
button.code-folding-btn:focus {
outline: none;
}
summary {
display: list-item;
}
details > summary > p:only-child {
display: inline;
}
pre code {
padding: 0;
}
</style>
<style type="text/css">
.dropdown-submenu {
position: relative;
}
.dropdown-submenu>.dropdown-menu {
top: 0;
left: 100%;
margin-top: -6px;
margin-left: -1px;
border-radius: 0 6px 6px 6px;
}
.dropdown-submenu:hover>.dropdown-menu {
display: block;
}
.dropdown-submenu>a:after {
display: block;
content: " ";
float: right;
width: 0;
height: 0;
border-color: transparent;
border-style: solid;
border-width: 5px 0 5px 5px;
border-left-color: #cccccc;
margin-top: 5px;
margin-right: -10px;
}
.dropdown-submenu:hover>a:after {
border-left-color: #adb5bd;
}
.dropdown-submenu.pull-left {
float: none;
}
.dropdown-submenu.pull-left>.dropdown-menu {
left: -100%;
margin-left: 10px;
border-radius: 6px 0 6px 6px;
}
</style>
<script type="text/javascript">
// manage active state of menu based on current page
$(document).ready(function () {
// active menu anchor
href = window.location.pathname
href = href.substr(href.lastIndexOf('/') + 1)
if (href === "")
href = "index.html";
var menuAnchor = $('a[href="' + href + '"]');
// mark it active
menuAnchor.tab('show');
// if it's got a parent navbar menu mark it active as well
menuAnchor.closest('li.dropdown').addClass('active');
// Navbar adjustments
var navHeight = $(".navbar").first().height() + 15;
var style = document.createElement('style');
var pt = "padding-top: " + navHeight + "px; ";
var mt = "margin-top: -" + navHeight + "px; ";
var css = "";
// offset scroll position for anchor links (for fixed navbar)
for (var i = 1; i <= 6; i++) {
css += ".section h" + i + "{ " + pt + mt + "}\n";
}
style.innerHTML = "body {" + pt + "padding-bottom: 40px; }\n" + css;
document.head.appendChild(style);
});
</script>
<!-- tabsets -->
<style type="text/css">
.tabset-dropdown > .nav-tabs {
display: inline-table;
max-height: 500px;
min-height: 44px;
overflow-y: auto;
border: 1px solid #ddd;
border-radius: 4px;
}
.tabset-dropdown > .nav-tabs > li.active:before {
content: "";
font-family: 'Glyphicons Halflings';
display: inline-block;
padding: 10px;
border-right: 1px solid #ddd;
}
.tabset-dropdown > .nav-tabs.nav-tabs-open > li.active:before {
content: "";
border: none;
}
.tabset-dropdown > .nav-tabs.nav-tabs-open:before {
content: "";
font-family: 'Glyphicons Halflings';
display: inline-block;
padding: 10px;
border-right: 1px solid #ddd;
}
.tabset-dropdown > .nav-tabs > li.active {
display: block;
}
.tabset-dropdown > .nav-tabs > li > a,
.tabset-dropdown > .nav-tabs > li > a:focus,
.tabset-dropdown > .nav-tabs > li > a:hover {
border: none;
display: inline-block;
border-radius: 4px;
background-color: transparent;
}
.tabset-dropdown > .nav-tabs.nav-tabs-open > li {
display: block;
float: none;
}
.tabset-dropdown > .nav-tabs > li {
display: none;
}
</style>
<!-- code folding -->
</head>
<body>
<div class="container-fluid main-container">
<div class="navbar navbar-default navbar-fixed-top" role="navigation">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-bs-toggle="collapse" data-target="#navbar" data-bs-target="#navbar">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="index.html">Statistics Notebook</a>
</div>
<div id="navbar" class="navbar-collapse collapse">
<ul class="nav navbar-nav">
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" data-bs-toggle="dropdown" aria-expanded="false">
R Help
<span class="caret"></span>
</a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="RCommands.html">R Commands</a>
</li>
<li>
<a href="RMarkdownHints.html">R Markdown Hints</a>
</li>
<li>
<a href="RCheatSheetsAndNotes.html">R Cheatsheets & Notes</a>
</li>
<li>
<a href="DataSources.html">Data Sources</a>
</li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" data-bs-toggle="dropdown" aria-expanded="false">
Describing Data
<span class="caret"></span>
</a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="GraphicalSummaries.html">Graphical Summaries</a>
</li>
<li>
<a href="NumericalSummaries.html">Numerical Summaries</a>
</li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" data-bs-toggle="dropdown" aria-expanded="false">
Making Inference
<span class="caret"></span>
</a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="MakingInference.html">Making Inference</a>
</li>
<li>
<a href="tTests.html">t Tests</a>
</li>
<li>
<a href="WilcoxonTests.html">Wilcoxon Tests</a>
</li>
<li>
<a href="Kruskal.html">Kruskal-Wallis Test</a>
</li>
<li>
<a href="ANOVA.html">ANOVA</a>
</li>
<li>
<a href="LinearRegression.html">Linear Regression</a>
</li>
<li>
<a href="LogisticRegression.html">Logistic Regression</a>
</li>
<li>
<a href="ChiSquaredTests.html">Chi Squared Tests</a>
</li>
<li>
<a href="PermutationTests.html">Randomization</a>
</li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" data-bs-toggle="dropdown" aria-expanded="false">
Analyses
<span class="caret"></span>
</a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="./Analyses/StudentHousing.html">Good Example Analysis</a>
</li>
<li>
<a href="./Analyses/StudentHousingPOOR.html">Poor Example Analysis</a>
</li>
<li>
<a href="./Analyses/Rent.html">Rent</a>
</li>
<li>
<a href="./Analyses/Stephanie.html">Stephanie</a>
</li>
<li>
<a href="./Analyses/t Tests/HighSchoolSeniors.html">High School Seniors</a>
</li>
<li>
<a href="./Analyses/Wilcoxon Tests/RecallingWords.html">Recalling Words</a>
</li>
<li>
<a href="./Analyses/ANOVA/MyTwoWayANOVA.html">My Two-way ANOVA</a>
</li>
<li>
<a href="./Analyses/Kruskal-Wallis Test/Food.html">Food</a>
</li>
<li>
<a href="./Analyses/Linear Regression/MySimpleLinearRegression.html">My Simple Linear Regression</a>
</li>
<li>
<a href="./Analyses/Linear Regression/CarPrices.html">Car Prices</a>
</li>
<li>
<a href="./Analyses/Logistic Regression/MyLogisticRegression.html">My Logistic Regression</a>
</li>
<li>
<a href="./Analyses/Chi Squared Tests/MyChiSquaredTest.html">My Chi-sqaured Test</a>
</li>
</ul>
</li>
</ul>
<ul class="nav navbar-nav navbar-right">
</ul>
</div><!--/.nav-collapse -->
</div><!--/.container -->
</div><!--/.navbar -->
<div id="header">
<h1 class="title toc-ignore">Making Inference</h1>
</div>
<script type="text/javascript">
function showhide(id) {
var e = document.getElementById(id);
e.style.display = (e.style.display == 'block') ? 'none' : 'block';
}
</script>
<hr />
<p>It is common to only have a <strong>sample</strong> of data from some
population of interest. Using the information from the sample to reach
conclusions about the population is called <em>making inference</em>.
When statistical inference is performed properly, the probability that
the conclusions about the population are correct can be quantified and
controlled.</p>
<div id="hypothesis-testing" class="section level2">
<h2>Hypothesis Testing</h2>
<div style="padding-left:15px;">
<p>One of the great focal points of statistics concerns hypothesis
testing. Science generally agrees upon the principle that truth must be
uncovered by the process of elimination. The process begins by
establishing a starting assumption, or <em>null hypothesis</em> (<span
class="math inline">\(H_0\)</span>). Data is then collected and the
evidence against the null hypothesis is measured, typically with the
<span class="math inline">\(p\)</span>-value. The <span
class="math inline">\(p\)</span>-value becomes small (gets close to
zero) when the evidence is <em>extremely</em> different from what would
be expected if the null hypothesis were true. When the <span
class="math inline">\(p\)</span>-value is below the <em>significance
level</em> <span class="math inline">\(\alpha\)</span> (typically <span
class="math inline">\(\alpha=0.05\)</span>) the null hypothesis is
abandoned (rejected) in favor of a competing <em>alternative
hypothesis</em> (<span class="math inline">\(H_a\)</span>).</p>
<a href="javascript:showhide('progressionOfHypotheses')">Click for an
Example </a>
<div id="progressionOfHypotheses" style="display:none;">
<p>The current hypothesis may be that the world is flat. Then someone
who thinks otherwise sets sail in a boat, gathers some evidence, and
when there is sufficient evidence in the data to disbelieve the current
hypothesis, we conclude the world is not flat. In light of this new
knowledge, we shift our belief to the next working hypothesis, that the
world is round. After a while, someone gathers more evidence and shows
that the world is not round, and we move to the next working hypothesis,
that it is an oblate spheroid, i.e., a sphere that is squashed at its poles
and swollen at the equator.</p>
<p><img src="Images/progressionOfHypotheses.png" /></p>
<p>This process of elimination is called hypothesis testing. The process
begins by establishing a <em>null hypothesis</em> (denoted symbolically
by <span class="math inline">\(H_0\)</span>) which represents the
current opinion, status quo, or what we will believe if the evidence is
not sufficient to suggest otherwise. The alternative hypothesis (denoted
symbolically by <span class="math inline">\(H_a\)</span>) designates
what we will believe if there is sufficient evidence in the data to
discredit, or “reject,” the null hypothesis.</p>
<p>See the <a
href="http://statistics.byuimath.com/index.php?title=Lesson_2:_The_Statistical_Process_%26_Design_of_Studies#Making_Inferences:_Hypothesis_Testing">BYU-I
Math 221 Stats Wiki</a> for another example.</p>
</div>
<p><br /></p>
<div id="managing-decision-errors" class="section level3">
<h3>Managing Decision Errors</h3>
<p>When the <span class="math inline">\(p\)</span>-value approaches
zero, one of two things must be occurring. Either an extremely rare
event has happened or the null hypothesis is incorrect. Since the second
option, that the null hypothesis is incorrect, is the more plausible
option, we reject the null hypothesis in favor of the alternative
whenever the <span class="math inline">\(p\)</span>-value is close to
zero. It is important to remember, however, that rejecting the null
hypothesis could be a mistake.</p>
<div style="padding-left:30px; padding-right:10%;">
<table>
<thead>
<tr class="header">
<th> </th>
<th><span class="math inline">\(H_0\)</span> True</th>
<th><span class="math inline">\(H_0\)</span> False</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><strong>Reject</strong> <span
class="math inline">\(H_0\)</span></td>
<td>Type I Error</td>
<td>Correct Decision</td>
</tr>
<tr class="even">
<td><strong>Accept</strong> <span
class="math inline">\(H_0\)</span></td>
<td>Correct Decision</td>
<td>Type II Error</td>
</tr>
</tbody>
</table>
</div>
<p><br /></p>
</div>
<div id="type-i-error-significance-level-confidence-and-alpha"
class="section level3">
<h3>Type I Error, Significance Level, Confidence and <span
class="math inline">\(\alpha\)</span></h3>
<p>A <strong>Type I Error</strong> is defined as rejecting the null
hypothesis when it is actually true. (Throwing away truth.) The
<strong>significance level</strong>, <span
class="math inline">\(\alpha\)</span>, of a hypothesis test controls the
probability of a Type I Error. The typical value of <span
class="math inline">\(\alpha = 0.05\)</span> came from tradition and is
a somewhat arbitrary value. Any value from 0 to 1 could be used for
<span class="math inline">\(\alpha\)</span>. When deciding on the level
of <span class="math inline">\(\alpha\)</span> for a particular study it
is important to remember that as <span
class="math inline">\(\alpha\)</span> increases, the probability of a
Type I Error increases, and the probability of a Type II Error
decreases. When <span class="math inline">\(\alpha\)</span> gets
smaller, the probability of a Type I Error gets smaller, while the
probability of a Type II Error increases. <strong>Confidence</strong> is
defined as <span class="math inline">\(1-\alpha\)</span>, the complement
of the Type I Error rate; it is the probability of correctly accepting
the null hypothesis when it is in fact true.</p>
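<p>As a quick illustration, the following R sketch (a simulation added
here for illustration, with an arbitrary seed and sample size) verifies
that when the null hypothesis is true, a test performed with <span
class="math inline">\(\alpha=0.05\)</span> commits a Type I Error about
5% of the time.</p>
<pre class="r"><code># Simulate the Type I Error rate of a one sample t test when H0 is true.
# The null hypothesis mu = 0 really is true here, so every rejection
# (p-value below alpha) is a Type I Error.
set.seed(121)
pvals &lt;- replicate(10000, t.test(rnorm(25, mean = 0), mu = 0)$p.value)
mean(pvals &lt; 0.05)  # proportion of rejections; should be close to 0.05</code></pre>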
<p><br /></p>
</div>
<div id="type-ii-errors-beta-and-power" class="section level3">
<h3>Type II Errors, <span class="math inline">\(\beta\)</span>, and
Power</h3>
<p>It is also possible to make a <strong>Type II Error</strong>, which
is defined as failing to reject the null hypothesis when it is actually
false. (Failing to move to truth.) The probability of a Type II Error,
<span class="math inline">\(\beta\)</span>, is often unknown. However,
practitioners often make an assumption about a detectable difference
that is desired which then allows <span
class="math inline">\(\beta\)</span> to be prescribed much like <span
class="math inline">\(\alpha\)</span>. In essence, the detectable
difference prescribes a fixed value for <span
class="math inline">\(H_a\)</span>. We can then talk about the
<strong>power</strong> of a hypothesis test, which is 1 minus the
probability of a Type II Error, <span
class="math inline">\(1-\beta\)</span>. See <a
href="https://en.wikipedia.org/wiki/Statistical_power">Statistical
Power</a> in Wikipedia for a starting source if you are interested. <a
href="http://rpsychologist.com/d3/NHST/" target="_blank">This website</a>
provides a novel interactive visualization to help you understand power.
It does require a little background on <a
href="http://rpsychologist.com/d3/cohend/">Cohen’s D</a>.</p>
<p><br /></p>
</div>
<div id="sufficient-evidence" class="section level3">
<h3>Sufficient Evidence</h3>
<p>Statistics comes in to play with hypothesis testing by defining the
phrase “sufficient evidence.” When there is “sufficient evidence” in the
data, the null hypothesis is rejected and the alternative hypothesis
becomes the working hypothesis.</p>
<p>There are many statistical approaches to this problem of measuring
the significance of evidence, but in almost all cases, the final
measurement of evidence is given by the <span
class="math inline">\(p\)</span>-value of the hypothesis test. The <span
class="math inline">\(p\)</span>-value of a test is defined as the
probability of the evidence being as extreme or more extreme than what
was observed assuming the null hypothesis is true. This is an
interesting phrase that is at first difficult to understand.</p>
<p>The “as extreme or more extreme” part of the definition of the <span
class="math inline">\(p\)</span>-value comes from the idea that the null
hypothesis will be rejected when the evidence in the data is extremely
inconsistent with the null hypothesis. If the data is not extremely
different from what we would expect under the null hypothesis, then we
will continue to believe the null hypothesis. However, it is worth
emphasizing that this does not prove the null hypothesis to be true.</p>
<p><br /></p>
</div>
<div id="evidence-not-proof" class="section level3">
<h3>Evidence not Proof</h3>
<p>Hypothesis testing allows us a formal way to decide if we should
“conclude the alternative” or “<em>continue</em> to accept the null.” It
is important to remember that statistics (and science) cannot
<em>prove</em> anything; they can only provide evidence. Thus we never
really <em>prove</em> a hypothesis is true; we simply show evidence
for or against it.</p>
</div>
</div>
<p><br /></p>
</div>
<div id="pvalue" class="section level2">
<h2>Calculating the <span class="math inline">\(p\)</span>-Value</h2>
<div style="padding-left:15px;">
<p>Recall that the <span class="math inline">\(p\)</span>-value measures
how extreme the difference is between the data (the evidence) and what
is expected under the null hypothesis. Small <span
class="math inline">\(p\)</span>-values lead us to discard (reject) the
null hypothesis.</p>
<p>A <span class="math inline">\(p\)</span>-value can be calculated
whenever we have two things.</p>
<ol style="list-style-type: decimal">
<li><p>A <em>test statistic</em>, which is a way of measuring how “far”
the observed data is from what is expected under the null
hypothesis.</p></li>
<li><p>The <em>sampling distribution</em> of the test statistic, which
is the theoretical distribution of the test statistic over all possible
samples, assuming the null hypothesis was true. Visit <a
href="http://statistics.byuimath.com/index.php?title=Lesson_6:_Distribution_of_Sample_Means_%26_The_Central_Limit_Theorem">the
Math 221 textbook</a> for an explanation.</p></li>
</ol>
<p>A <em>distribution</em> describes how data is spread out. When we
know the shape of a distribution, we know which values are possible, but
more importantly which values are most plausible (likely) and which are
the least plausible (unlikely). The <span
class="math inline">\(p\)</span>-value uses the <em>sampling
distribution</em> of the test statistic to measure the probability of
the observed test statistic being as extreme or more extreme than the
one observed.</p>
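<p>To make this concrete, here is a minimal R sketch (the sample values
are hypothetical) that computes a two-sided <span
class="math inline">\(p\)</span>-value from a one sample t statistic
and its sampling distribution.</p>
<pre class="r"><code># Hypothetical sample; test H0: mu = 10. The test statistic is the one
# sample t statistic, whose sampling distribution under H0 is a t
# distribution with n - 1 degrees of freedom.
x &lt;- c(11.2, 9.8, 12.4, 10.9, 11.7, 10.3)
t.stat &lt;- (mean(x) - 10) / (sd(x) / sqrt(length(x)))
2 * pt(-abs(t.stat), df = length(x) - 1)  # both tails: "as extreme or more extreme"
# Matches the built-in test: t.test(x, mu = 10)$p.value</code></pre>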
<p>All <span class="math inline">\(p\)</span>-value computation methods
can be classified into two broad categories, <em>parametric</em> methods
and <em>nonparametric</em> methods.</p>
<div id="parametric-methods" class="section level3">
<h3>Parametric Methods</h3>
<div style="padding-left:15px;">
<p>Parametric methods assume that, under the null hypothesis, the test
statistic follows a specific theoretical parametric distribution.
Parametric methods are typically more statistically powerful than
nonparametric methods, but necessarily place more assumptions on the
data.</p>
<p><em>Parametric distributions</em> are theoretical distributions that
can be described by a mathematical function. There are many theoretical
distributions. (See the <a
href="https://en.wikipedia.org/wiki/List_of_probability_distributions">List
of Probability Distributions</a> in Wikipedia for details.)</p>
<p>Four of the most widely used parametric distributions are:</p>
<div style="padding-left:15px;">
<hr />
<div id="normal" class="section level4">
<h4>The Normal Distribution</h4>
<a href="javascript:showhide('thenormaldistribution')">Click for Details
</a>
<div id="thenormaldistribution" style="display:none;">
<p>One of the most important distributions in statistics is the normal
distribution. It is a theoretical distribution that approximates many
real life data distributions, like heights of
people, heights of corn plants, baseball batting averages, lengths of
gestational periods for many species including humans, and so on.</p>
<p>More importantly, the <em>sampling distribution</em> of the sample
mean <span class="math inline">\(\bar{x}\)</span> is normally
distributed in two important scenarios.</p>
<ol style="list-style-type: decimal">
<li>The parent population is normally distributed.</li>
<li>The sample size is sufficiently large. (Often <span
class="math inline">\(n\geq 30\)</span> is sufficient, but this is a
general rule of thumb that is sometimes insufficient.)</li>
</ol>
<div id="mathematical-formula" class="section level5">
<h5>Mathematical Formula</h5>
<p><span class="math display">\[
f(x | \mu,\sigma) =
\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}
\]</span> The symbols <span class="math inline">\(\mu\)</span> and <span
class="math inline">\(\sigma\)</span> are the two <em>parameters</em> of
this distribution. The parameter <span
class="math inline">\(\mu\)</span> controls the center, or mean of the
distribution. The parameter <span class="math inline">\(\sigma\)</span>
controls the spread, or standard deviation of the distribution.</p>
</div>
<div id="graphical-form" class="section level5">
<h5>Graphical Form</h5>
<p><img src="MakingInference_files/figure-html/unnamed-chunk-1-1.png" width="672" /></p>
</div>
<div id="comments" class="section level5">
<h5>Comments</h5>
<p>The usefulness of the normal distribution is that we know which
values of data are likely and which are unlikely by just knowing three
things:</p>
<ol style="list-style-type: decimal">
<li><p>that the data is normally distributed,</p></li>
<li><p><span class="math inline">\(\mu\)</span>, the mean of the
distribution, and</p></li>
<li><p><span class="math inline">\(\sigma\)</span>, the standard
deviation of the distribution.</p></li>
</ol>
<p>For example, as shown in the plot above, a value of <span
class="math inline">\(x=-8\)</span> would be very probable for the
normal distribution with <span class="math inline">\(\mu=-5\)</span> and
<span class="math inline">\(\sigma=2\)</span> (light blue curve).
However, the value of <span class="math inline">\(x=-8\)</span> would be
very unlikely to occur in the normal distribution with <span
class="math inline">\(\mu=3\)</span> and <span
class="math inline">\(\sigma=3\)</span> (gray curve). In fact, <span
class="math inline">\(x=-8\)</span> would be even more unlikely an
occurance for the <span class="math inline">\(\mu=0\)</span> and <span
class="math inline">\(\sigma=1\)</span> distribution (dark blue
curve).</p>
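<p>These plausibilities can be checked directly in R with
<code>dnorm()</code>; a short sketch:</p>
<pre class="r"><code># Density (plausibility) of x = -8 under each normal curve shown above.
dnorm(-8, mean = -5, sd = 2)  # fairly plausible
dnorm(-8, mean = 3, sd = 3)   # very unlikely
dnorm(-8, mean = 0, sd = 1)   # even more unlikely</code></pre>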
<p><br /> <br /></p>
</div>
</div>
<hr />
</div>
<div id="chisquared" class="section level4">
<h4>The Chi Squared Distribution</h4>
<a href="javascript:showhide('thechidistribution')">Click for Details
</a>
<div id="thechidistribution" style="display:none;">
<p>The <em>chi squared</em> distribution only allows for values that are
greater than or equal to zero. While it has a few real life
applications, by far its greatest use is theoretical.</p>
<p>The test statistic of the chi squared test is distributed according
to a chi squared distribution.</p>
<div id="mathematical-formula-1" class="section level5">
<h5>Mathematical Formula</h5>
<p><span class="math display">\[
f(x|p) = \frac{1}{\Gamma(p/2)2^{p/2}}x^{(p/2)-1}e^{-x/2}
\]</span> The only parameter of the chi squared distribution is <span
class="math inline">\(p\)</span>, which is known as the degrees of
freedom. Larger values of the parameter <span
class="math inline">\(p\)</span> move the center of the chi squared
distribution farther to the right. As <span
class="math inline">\(p\)</span> goes to infinity, the chi squared
distribution begins to look more and more normal in shape.</p>
<p>Note that the symbol in the denominator of the chi squared
distribution, <span class="math inline">\(\Gamma(p/2)\)</span>, is the
Gamma function of <span class="math inline">\(p/2\)</span>. (See <a
href="https://en.wikipedia.org/wiki/Gamma_function">Gamma Function</a>
in Wikipedia for details.)</p>
</div>
<div id="graphical-form-1" class="section level5">
<h5>Graphical Form</h5>
<p><img src="MakingInference_files/figure-html/unnamed-chunk-2-1.png" width="672" /></p>
</div>
<div id="comments-1" class="section level5">
<h5>Comments</h5>
<p>It is important to remember that the chi squared distribution is only
defined for <span class="math inline">\(x\geq 0\)</span> and for
positive values of the parameter <span class="math inline">\(p\)</span>.
This is unlike the normal distribution which is defined for all numbers
<span class="math inline">\(x\)</span> from negative infinity to
positive infinity as well as for all values of <span
class="math inline">\(\mu\)</span> from negative infinity to positive
infinity.</p>
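<p>In R, <code>pchisq()</code> evaluates this distribution. For
example (the observed statistic below is illustrative), a right-tail
<span class="math inline">\(p\)</span>-value is computed as:</p>
<pre class="r"><code># Probability that a chi squared random variable with p = 3 degrees of
# freedom exceeds an observed test statistic of 7.81 (right-tail p-value).
pchisq(7.81, df = 3, lower.tail = FALSE)  # approximately 0.05</code></pre>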
<p><br /> <br /></p>
</div>
</div>
<hr />
</div>
<div id="the-t-distribution" class="section level4">
<h4>The t Distribution</h4>
<a href="javascript:showhide('thetdistribution')">Click for Details </a>
<div id="thetdistribution" style="display:none;">
<p>A close friend of the normal distribution is the t distribution.
Although the t distribution is seldom used to model real life data, the
distribution is used extensively in hypothesis testing. For example, it
is the sampling distribution of the one sample t statistic. It also
shows up in many other places, like in regression, in the independent
samples t test, and in the paired samples t test.</p>
<div id="mathematical-formula-2" class="section level5">
<h5>Mathematical Formula</h5>
<p><span class="math display">\[
f(x|p) =
\frac{\Gamma\left(\frac{p+1}{2}\right)}{\Gamma\left(\frac{p}{2}\right)}\frac{1}{\sqrt{p\pi}}\frac{1}{\left(1
+ \left(\frac{x^2}{p}\right)\right)^{(p+1)/2}}
\]</span></p>
<p>Notice that, similar to the chi squared distribution, the t
distribution has only one parameter, the degrees of freedom <span
class="math inline">\(p\)</span>. As the single parameter <span
class="math inline">\(p\)</span> is varied from <span
class="math inline">\(p=1\)</span>, to <span
class="math inline">\(p=2\)</span>, …, <span
class="math inline">\(p=5\)</span>, and larger and larger numbers, the
resulting distribution becomes more and more normal in shape.</p>
<p>Note that the expressions <span
class="math inline">\(\Gamma\left(\frac{p+1}{2}\right)\)</span> and
<span class="math inline">\(\Gamma(p/2)\)</span>, refer to the Gamma
function. (See <a
href="https://en.wikipedia.org/wiki/Gamma_function">Gamma Function</a>
in Wikipedia for details.)</p>
</div>
<div id="graphical-form-2" class="section level5">
<h5>Graphical Form</h5>
<p><img src="MakingInference_files/figure-html/unnamed-chunk-3-1.png" width="672" /></p>
</div>
<div id="comments-2" class="section level5">
<h5>Comments</h5>
<p>When the degrees of freedom <span
class="math inline">\(p=30\)</span>, the resulting t distribution is
almost indistinguishable visually from the normal distribution. This is
one of the reasons that a sample size of 30 is often used as a rule of
thumb for the sample size being “large enough” to assume the sampling
distribution of the sample mean is approximately normal.</p>
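<p>This closeness is easy to check in R by comparing quantiles of the
two distributions:</p>
<pre class="r"><code># Compare the t distribution with p = 30 degrees of freedom to the
# standard normal at the usual 95% quantiles; the values are quite close.
qt(c(0.025, 0.975), df = 30)  # -2.042  2.042
qnorm(c(0.025, 0.975))        # -1.960  1.960</code></pre>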
<p><br /> <br /></p>
</div>
</div>
<hr />
</div>
<div id="fdist" class="section level4">
<h4>The F Distribution</h4>
<a href="javascript:showhide('thefdistribution')">Click for Details </a>
<div id="thefdistribution" style="display:none;">
<p>Another commonly used distribution for test statistics, like in ANOVA
and regression, is the F distribution. Technically speaking, the F
distribution is the ratio of two chi squared random variables that are
each divided by their respective degrees of freedom.</p>
<div id="mathematical-formula-3" class="section level5">
<h5>Mathematical Formula</h5>
<p><span class="math display">\[
f(x|p_1,p_2) =
\frac{\Gamma\left(\frac{p_1+p_2}{2}\right)}{\Gamma\left(\frac{p_1}{2}\right)\Gamma\left(\frac{p_2}{2}\right)}\frac{\left(\frac{p_1}{p_2}\right)^{p_1/2}x^{(p_1-2)/2}}{\left(1+\left(\frac{p_1}{p_2}\right)x\right)^{(p_1+p_2)/2}}
\]</span> where <span class="math inline">\(x\geq 0\)</span> and the
parameters <span class="math inline">\(p_1\)</span> and <span
class="math inline">\(p_2\)</span> are the “numerator” and “denominator”
degrees of freedom, respectively.</p>
</div>
<div id="graphical-form-3" class="section level5">
<h5>Graphical Form</h5>
<p><img src="MakingInference_files/figure-html/unnamed-chunk-4-1.png" width="672" /></p>
</div>
<div id="comments-3" class="section level5">
<h5>Comments</h5>
<p>The effects of the parameters <span
class="math inline">\(p_1\)</span> and <span
class="math inline">\(p_2\)</span> on the F distribution are
complicated, but generally speaking, as they both increase, the
distribution becomes more and more normal in shape.</p>
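<p>In R, <code>pf()</code> gives right-tail probabilities from the F
distribution; the observed statistic and degrees of freedom below are
illustrative.</p>
<pre class="r"><code># Right-tail p-value from an F distribution with 3 and 36 degrees of
# freedom, as might arise in an ANOVA F test.
pf(2.9, df1 = 3, df2 = 36, lower.tail = FALSE)</code></pre>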
<p><br /></p>
</div>
</div>
<hr />
</div>
</div>
</div>
</div>
<div id="nonparametric-methods" class="section level3">
<h3>Nonparametric Methods</h3>
<div style="padding-left:15px;">
<p>Nonparametric methods place minimal assumptions on the distribution
of data. They allow the data to “speak for itself.” They are typically
less powerful than the parametric alternatives, but are more broadly
applicable because fewer assumptions need to be satisfied. Nonparametric
methods include <a href="WilcoxonTests.html">Rank Sum Tests</a> and <a
href="PermutationTests.html">Permutation Tests</a>.</p>
</div>
</div>
</div>
<footer>
</footer>
</div>
</div>
<script>
// add bootstrap table styles to pandoc tables
function bootstrapStylePandocTables() {
$('tr.odd').parent('tbody').parent('table').addClass('table table-condensed');
}
$(document).ready(function () {
bootstrapStylePandocTables();
});
</script>
<!-- tabsets -->
<script>
$(document).ready(function () {
window.buildTabsets("TOC");
});
$(document).ready(function () {
$('.tabset-dropdown > .nav-tabs > li').click(function () {
$(this).parent().toggleClass('nav-tabs-open');
});
});
</script>
<!-- code folding -->
<!-- dynamically load mathjax for compatibility with self-contained -->
<script>
(function () {
var script = document.createElement("script");
script.type = "text/javascript";
script.src = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML";
document.getElementsByTagName("head")[0].appendChild(script);
})();
</script>
</body>
</html>