I/O Psych 541
Chapters 7 and 8
Term | Definition |
---|---|
Confidence Intervals | a range of possible values for the parameter or function we are estimating; this constitutes an interval estimate. When we calculate an interval estimate rather than a single point estimate, we can be more confident that the parameter we are estimating falls within the interval |
95% intervals | the confidence interval has a lower limit and an upper limit; values between these limits are “in” the confidence interval |
how to calculate a confidence interval | the sample mean (or mean difference) plus/minus the critical value times the standard error (see the confidence-interval sketch after the table) |
null hypothesis | no difference. H0 |
alternative hypothesis | difference. H1 |
"p" value | the probability of observing the effect or difference we obtained from our sample. |
reject the null hypothesis if: | p value is lower than alpha (0.05 usually) |
retain the null hypothesis if: | p value is higher than alpha (0.05 usually) |
effect size indexes | tell us “how big” a difference or effect we have detected is, regardless of the level of statistical significance |
when to use a one-sample t test | when comparing a single sample mean to a known or hypothesized population mean; works well with both large and small samples and when we don’t know the population standard deviation (see the t-test sketch after the table) |
reject null hypothesis when: | the absolute value of t is larger than the critical value |
when to use independent t tests | used when we have two separate samples (example – a control group and an experimental group) |
null hypothesis (two-sample t test) | the population means are equal |
alternative hypothesis (two-sample t test) | the population means are not equal |
paired samples t test | observations from one sample are paired with or linked to observations from a second sample (ex – pretest and posttest scores from the same students) |
Cohen's D | Effect size index |
Cohen's D for two samples | the difference between the means divided by the pooled SD (see the Cohen’s d sketch after the table) |
Cohen's D for one sample | the difference between the sample mean and the hypothesized population mean divided by the sample SD |
Cohen's D benchmarks | d < 0.20 = very small; 0.20 – 0.50 = small; 0.50 – 0.80 = medium; d > 0.80 = large |
eta squared | useful effect-size index for the t test |
Calculate eta squared | t squared divided by (t squared plus degrees of freedom); see the eta-squared sketch after the table |
ANOVA | Analysis of Variance - extension of the independent samples t test to three or more groups. |
between-groups variation or treatment effect | ANOVA partitions or “analyzes” the variation in the dependent variable into two sources; this source reflects differences among the group means (the treatment effect) |
within-groups variation or error variance | the other source in the ANOVA partition: variation due to differences among scores within each separate group (error variance) |
In ANOVA, MSs are: | actual “variances being analyzed” |
calculate a mean square (MS) for each source of variation by: | dividing the sum of squares for the particular source by appropriate degrees of freedom |
F ratio | used to test the hypothesis of equality among the group means; it is a ratio of two variance estimates, in this case the MS between groups (MSB) divided by the MS within groups (MSW) (see the ANOVA sketch after the table) |
the estimate of the population variance treating all scores as a single dataset | the total degrees of freedom are N - 1, and the total sum of squares is found with the DEVSQ function; dividing the total sum of squares by the total degrees of freedom gives this estimate |
within-groups sum of squares is | the sum of the squared deviations of each score from its own group mean, summed across all of the groups |
between-groups sum of squares is: | based on the deviations of the group means from the overall (grand) mean, weighted by the number of scores in each group |
null hypothesis for ANOVA | the k population means are equivalent |
alternative hypothesis for ANOVA | specifies that the difference between at least one pair of means is not zero |
The one way ANOVA compares: | three or more means simultaneously |
The one-way ANOVA assumes: | interval or ratio data, independence of observations, normality of distributions, and equality of variance |
effect-size index for the one-way ANOVA (eta squared) is: | the between-groups sum of squares divided by the total sum of squares (computed at the end of the ANOVA sketch after the table) |
The effect size index for the one-way ANOVA shows: | how much of the total variation in the dependent variable is explained or “accounted for” by “Treatment” effects/differences among the means. If the value is 0.20, about 20% of the variation in the DV is accounted for by knowledge of the treatment condition |
Fisher LSD test | Least Significant Difference; a way to control the experimentwise error rate when making pairwise comparisons after an ANOVA |
Protected t test | the overall alpha level for all comparisons is held at the criterion alpha; the alpha level is protected by using the within-groups MS and the degrees of freedom from all groups rather than the pooled standard deviation from only the two groups being compared |
Protected t test continued | compare this “least significant difference” (LSD) to the actual differences among the pairs of means; any difference larger in absolute value than the LSD is statistically significant (see the LSD sketch after the table) |
Tukey HSD test | “Honestly Significant Difference” – uses the “studentized range statistic.” The test is somewhat more conservative than the LSD test: a difference between two means has to be larger to be significant with the HSD test than with the LSD test |
Bonferroni Corrections | an adjustment to the alpha level to control the experimentwise error rate. In general, if you perform k tests and want overall protection at the level alpha, then you should use alpha/k as the criterion for significance for each comparison (see the Bonferroni sketch after the table) |
Partial eta squared | the appropriate effect-size index for the two-way ANOVA |
factors | in ANOVA terminology, the name for the independent variables |
independent variables in ANOVA | each independent variable (factor) must have at least two levels. We are interested in examining the effects of these factors on the dependent variable separately and in combination; we call the separate effects main effects and the combined effect an interaction |
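
The sketches below illustrate several formulas from the cards above. They are minimal Python examples, not part of the course materials; the data, variable names, and use of scipy are assumptions made for illustration only. First, the confidence-interval sketch: mean plus/minus the critical t value times the standard error, at a 95% confidence level.

```python
# Minimal confidence-interval sketch: mean +/- critical value * standard error.
# The scores below are invented illustration data.
import math
import statistics as st
from scipy import stats

scores = [72, 75, 78, 80, 81, 84, 86, 90]      # hypothetical sample
n = len(scores)
mean = st.mean(scores)
se = st.stdev(scores) / math.sqrt(n)           # standard error of the mean

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # two-tailed critical t value

lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```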
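
The t-test sketch referenced above: one-sample, independent-samples, and paired-samples t tests run with scipy on invented data, with the reject/retain decision made by comparing p to alpha = 0.05.

```python
# Hypothetical data for the three t tests on the cards.
from scipy import stats

pre  = [60, 65, 70, 72, 68, 75]           # pretest scores (invented)
post = [66, 70, 74, 75, 73, 80]           # posttest scores for the same students
control      = [55, 60, 58, 62, 59, 61]   # control group
experimental = [63, 66, 65, 70, 68, 71]   # experimental group

t1, p1 = stats.ttest_1samp(pre, popmean=65)        # one sample vs. hypothesized mean
t2, p2 = stats.ttest_ind(experimental, control)    # two separate samples
t3, p3 = stats.ttest_rel(post, pre)                # paired (linked) observations

for label, t, p in [("one-sample", t1, p1), ("independent", t2, p2), ("paired", t3, p3)]:
    decision = "reject H0" if p < 0.05 else "retain H0"
    print(f"{label:>11}: t = {t:.2f}, p = {p:.4f} -> {decision}")
```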
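
The Cohen's d sketch: the two-sample formula from the cards, the difference between the means divided by the pooled standard deviation, reusing the invented control and experimental groups.

```python
# Two-sample Cohen's d = (mean difference) / pooled SD, on invented data.
import statistics as st

control      = [55, 60, 58, 62, 59, 61]
experimental = [63, 66, 65, 70, 68, 71]

n1, n2 = len(control), len(experimental)
s1, s2 = st.stdev(control), st.stdev(experimental)

# Pooled SD weights each group's variance by its degrees of freedom.
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
d = (st.mean(experimental) - st.mean(control)) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # compare to the benchmarks on the Cohen's D card
```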
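
The eta-squared sketch for a t test: the card's formula, t squared over (t squared plus degrees of freedom), wrapped in a small helper function; the function name and example values are made up.

```python
# Eta squared for a t test: t^2 / (t^2 + df).
def eta_squared_from_t(t: float, df: int) -> float:
    return t**2 / (t**2 + df)

# e.g., a hypothetical t = 2.5 with 10 degrees of freedom
print(f"eta squared = {eta_squared_from_t(2.5, 10):.3f}")
```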
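
The ANOVA sketch: partition the total variation into between-groups and within-groups sums of squares, divide each by its degrees of freedom to get a mean square, form F = MSB / MSW, and compute eta squared = SS between / SS total. The three groups are invented, devsq() mimics Excel's DEVSQ, and scipy's f_oneway is used only as a cross-check.

```python
# One-way ANOVA by hand on invented data, following the cards above.
from scipy import stats

groups = [
    [4, 5, 6, 5, 4],     # group 1
    [7, 8, 6, 7, 8],     # group 2
    [9, 10, 9, 11, 10],  # group 3
]

def devsq(xs):
    """Sum of squared deviations from the mean (like Excel's DEVSQ)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

all_scores = [x for g in groups for x in g]
N, k = len(all_scores), len(groups)
grand_mean = sum(all_scores) / N

ss_total   = devsq(all_scores)              # total SS, df = N - 1
ss_within  = sum(devsq(g) for g in groups)  # within-groups SS
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups)           # between-groups SS

df_between, df_within = k - 1, N - k
ms_between, ms_within = ss_between / df_between, ss_within / df_within
F = ms_between / ms_within                  # MSB / MSW
p = stats.f.sf(F, df_between, df_within)    # upper-tail p value for F
eta_squared = ss_between / ss_total         # effect size for the one-way ANOVA

print(f"F({df_between}, {df_within}) = {F:.2f}, p = {p:.4f}, eta^2 = {eta_squared:.2f}")
print(stats.f_oneway(*groups))              # cross-check with scipy
```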
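
The LSD sketch for the protected t test: a least significant difference is computed from the within-groups MS and its degrees of freedom (here with alpha = 0.05), and each pair of group means is flagged when its absolute difference exceeds the LSD. It reuses the invented groups from the ANOVA sketch.

```python
# Fisher LSD / protected t sketch on the invented ANOVA groups.
from itertools import combinations
from scipy import stats

groups = {
    "g1": [4, 5, 6, 5, 4],
    "g2": [7, 8, 6, 7, 8],
    "g3": [9, 10, 9, 11, 10],
}

N = sum(len(g) for g in groups.values())
k = len(groups)
df_within = N - k
ms_within = sum(
    sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups.values()
) / df_within

t_crit = stats.t.ppf(1 - 0.05 / 2, df=df_within)   # two-tailed critical t

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    lsd = t_crit * (ms_within * (1 / len(a) + 1 / len(b))) ** 0.5
    diff = abs(sum(a) / len(a) - sum(b) / len(b))
    verdict = "significant" if diff > lsd else "not significant"
    print(f"{name_a} vs {name_b}: |diff| = {diff:.2f}, LSD = {lsd:.2f} -> {verdict}")
```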
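
Finally, the Bonferroni sketch: with k comparisons and an overall alpha of 0.05, each comparison is tested against alpha / k. The p values are made up for illustration.

```python
# Bonferroni correction: test each comparison at alpha / k.
alpha = 0.05
p_values = [0.001, 0.020, 0.049]        # hypothetical p values from k comparisons
adjusted_alpha = alpha / len(p_values)

for i, p in enumerate(p_values, start=1):
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"comparison {i}: p = {p:.3f} vs {adjusted_alpha:.4f} -> {verdict}")
```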