# Statistics

### Statistics slides

Question | Answer |
---|---|

What are the assumptions for a dependent t-test? | Level of measurement is interval or ratio, the data is normally distributed, there was random sampling and/or assignment, there was equality of variance. |

What are the null and alternative hypotheses of a dependent t-test? | Null = there is no significant difference between the means of the sample group at time 1 and time 2. Alternative = there is a significant difference between the means of the sample group at time 1 and time 2. |

What is the research question behind dependent t-tests? | Is there a difference between group means at time 1 and time 2? |

What does the confidence interval for a dependent t-test tell us? | Under repeated sampling, 95% of such CIs would contain the true population statistic. |

What are the assumptions for an independent t-test? | Random sampling and/or assignment, independent observations, normally distributed, equality of variance, mutually exclusive IVs. |

What are the null and alternative hypotheses of an independent t-test? | Null = there was no difference between group means. Alternative = there was a difference between group means. |

What is the research Q behind independent t-tests? | Is there a difference in the means of two groups undergoing the same treatment? |

What does the confidence interval for an independent t-test tell us? | Under repeated sampling, 95% of such CIs would contain the true population statistic. |

What are the assumptions for an ANOVA? | Equality of variances, random sampling and assignment, normally distributed, independent observations. |

What is an effect size? (N & S p. 59; Pallant, 207-207 & 247) | The relative magnitude of the differences between means: if there's a difference, is it an important one? It is calculated by dividing the between-groups sum of squares by the total sum of squares. |

What is the line equation (p. 134)? | Y = a+bx, where a is the intercept (the value of Y when x is equal to zero) and b is the slope. |

What is a definition of a regression line (p. 134)? | The straight line that passes through the data and minimizes the sum of the squared differences between the fitted and actual data points. |

Why is it called least squares regression (p. 134)? | Because it tries to find a line that has the least squared sum of differences between the actual and fitted data. |
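The deck itself contains no code, but the line-equation and least-squares cards above translate directly into arithmetic. A minimal Python sketch, with invented data points, computing the slope and intercept from the standard formulas (b = covariance/variance, a = mean(Y) − b·mean(x)):

```python
# Least-squares fit of Y = a + b*x, from the textbook formulas.
# The data points below are invented purely for illustration.

def least_squares(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Sums of cross-products and squared deviations about the means
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx       # slope
    a = my - b * mx     # intercept: the value of Y when x = 0
    return a, b

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]    # these points lie exactly on Y = 0 + 2x
a, b = least_squares(x, y)
```

Any other line through these points would have a larger sum of squared differences between fitted and actual values, which is what "least squares" means.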

What is a correlation coefficient (p. 137)? | A numerical, descriptive measure of the strength of the linear relationship between two variables. |

What is R2, the coefficient of determination? | The proportion of variance in the DV that is explained by the IV. |

How to interpret correlation coefficient? | A value between –1 and 1 whose sign matches the direction of the correlation (the slope of the line) and whose magnitude expresses the degree of linear association between the two variables. |

What is Pearson’s correlation? (p. 137) | Obtained by dividing the covariance of the two variables by the product of their standard deviations. It’s called r. |
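Pearson's r and R² as defined on the cards above can be computed by hand. A short sketch with invented data (an illustration, not the textbook's example):

```python
# Pearson's r as defined on the card: the covariance of the two
# variables divided by the product of their standard deviations.
# The sample data are invented for illustration.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / (n - 1))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / (n - 1))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
r = pearson_r(x, y)
r_squared = r ** 2   # coefficient of determination: share of DV variance explained by the IV
```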

Read a scatter plot and generate one as done on p. 128 | Graphs, Legacy Dialogues, Simple Scatter, Define. DV to the Y-axis, IV to the X-axis. Shows us outliers, checks for homoscedasticity, and shows us what direction it's correlated, if it is. |

How do you find the ID number of an outlier? | Data Label Mode icon on the Chart Editor. Double click, click on the bullseye, and click on the outlier. |

What did Cohen say about guidelines for interpreting correlation coefficients (p. 132)? | Small=.10 to .29 Medium=.3 to .49 Large=.5 to 1. |

How to interpret a Pearson vs. Spearman correlation? | If there’s a negative sign in front of the correlation coefficient value, there’s a negative correlation. The further the correlation coefficient is from zero, the stronger the correlation – .1 to .29 is small, .3 to .49 is medium, and .5 to 1 is large. |

What assumptions does a Spearman correlation make? | It is non-parametric, so it does not assume normality; it assumes the variables are at least ordinal and that the relationship is monotonic. |

What is a partial correlation? What research question does it address, p. 142-143 | Allows us to control for an additional variable, usually because you suspect it’s influencing other variables. Research Q=after controlling this variable, is there still a relationship between two others? |
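The partial-correlation card can be made concrete with the standard first-order formula, which removes the control variable's influence from both of the other variables. The three input correlations below are invented for illustration:

```python
import math

# First-order partial correlation: the correlation between x and y
# after controlling for z. All three input correlations are invented
# example values, not data from the textbook.
def partial_corr(r_xy, r_xz, r_yz):
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# An x-y correlation that looks strong (.60) shrinks noticeably once a
# shared driver z (correlated .50 with each) is controlled for.
r = partial_corr(r_xy=0.60, r_xz=0.50, r_yz=0.50)
```

This is the same logic as comparing the two sections of the SPSS partial-correlation printout: if controlling for the third variable changes the coefficient, that variable has an effect.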

How to read the printouts and run result for partial correlation (p. 144) | Analyze, correlate, partial. If the correlation coefficient varies between the two sections, the third variable does have an effect. |

What are the three types of regression? (p. 147-148) | 1) Standard multiple regression: all IVs entered into the equation simultaneously. 2) Hierarchical or sequential: IVs entered in blocks. 3) Stepwise: SPSS selects which IVs to enter based on statistical criteria. |

What are the assumptions of regression? | Sufficient sample size, no multicollinearity or singularity, outliers dealt with, normally distributed, linearity, homoscedasticity. |

Correlation assumptions? | Interval or continuous level of measurement. Both pieces of information, X and Y, are from the same person (related pairs). Independence of observations – observation 1 doesn't influence observation 2. ND. Linear relationship. Homoscedasticity. |

Correlation research Q? | Is there a relationship between X and Y? |

Correlation null hypothesis? | There is no relationship. |

Correlation alternative hypothesis? | There is a relationship. |

What does Levene's test tell us? | Whether or not the variances in the groups are the same – whether or not the assumption of homogeneity has been violated. |

What does a post-hoc test tell us and why is this needed for an ANOVA and not for a t-test? | Used if the null hypothesis is rejected, to see which groups vary significantly. A t-test compares only two groups, so a significant result already tells us which means differ; an ANOVA compares three or more groups, so a follow-up test is needed to locate the difference. |

What is the definition of degrees of freedom? (p. 72) | dfb (between groups) = number of groups minus 1. dfw (within groups) = sample size minus number of groups. dft (total) = sample size minus 1. |

What are two differences between a z-test and t-test? (p.71) | The t-test is used for two groups while the z-test is used for one. We cannot know the standard deviation for a t-test, whereas we know the standard deviation for a z-test. |

What are the rules about confidence intervals noted by Cumming and Finch on p. 74 of N&S? | 1) If the error bars don't overlap, the groups differ significantly at p ≤ .01. 2) If the amount of overlap is less than half of the CI, the significance level is p ≤ .05. 3) If the overlap is more than half of the CI, the difference is not statistically significant. |

What are the rules about Cohen's effect size? (p. 74) | An effect size of .2 is considered small, .5 moderate, and .8 large. |
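The card gives only the cutoffs, not the statistic itself. A sketch that computes Cohen's d by the standard formula (mean difference divided by the pooled standard deviation) and applies the card's guidelines; the group scores are invented, and the "negligible" label for values below .2 is an assumption, not from the card:

```python
import math

# Cohen's d: standardized difference between two group means,
# using the pooled standard deviation. Group scores are invented.
def cohens_d(g1, g2):
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def interpret(d):
    # The card's cutoffs: .2 small, .5 moderate, .8 large.
    # "negligible" below .2 is this sketch's own label.
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "moderate"
    if d >= 0.2:
        return "small"
    return "negligible"

d = cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
label = interpret(d)
```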

What is a grand mean? | Mean of the means of several samples. |

What is the Sum of squares between groups? | Sum of the squared differences between the group means and the grand mean. |

What is the Sum of squares within? | Sum of squared differences between individual data and the group mean within each group. |

What is the Mean square between? | Sum of squares between divided by df. |

What is the Mean Square within? | Sum of squares within divided by df. |
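The five cards above (grand mean, sums of squares, and mean squares) combine into the one-way ANOVA F-ratio. A sketch computing each quantity directly from its definition, using three invented groups; eta squared falls out of the same quantities:

```python
# One-way ANOVA components, computed from the definitions on the
# cards above. The three groups of scores are invented.
groups = [[2, 3, 4], [4, 5, 6], [6, 7, 8]]

all_scores = [x for g in groups for x in g]
n_total = len(all_scores)
k = len(groups)

grand_mean = sum(all_scores) / n_total
group_means = [sum(g) / len(g) for g in groups]

# SS between: squared distance of each group mean from the grand mean,
# weighted by group size.
ss_between = sum(len(g) * (m - grand_mean) ** 2
                 for g, m in zip(groups, group_means))
# SS within: squared distance of each score from its own group mean.
ss_within = sum((x - m) ** 2
                for g, m in zip(groups, group_means) for x in g)

df_between = k - 1          # number of groups minus 1
df_within = n_total - k     # sample size minus number of groups
ms_between = ss_between / df_between
ms_within = ss_within / df_within
f_ratio = ms_between / ms_within

# Eta squared: proportion of DV variance attributable to the IV.
eta_squared = ss_between / (ss_between + ss_within)
```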

What is a type 1 error? | Rejecting the null hypothesis when it is true. |

What is a type II error? | Accepting the null hypothesis when it is false. |

What is the difference between a planned vs. post-hoc test? | Post-hoc tests run all the pairwise comparisons, so the alpha level must be divided by the number of tests (e.g., .05/6 = .0083), which weakens power. Planned comparisons test only a small set of comparisons specified in advance, so each can use the full .05 level, which has more power than .0083. |

What is a Bonferroni correction used for? | To avoid making an alpha error – count up total number of comparisons you’ll make (k), then divide .05 by k. Don’t use if there are more than 5 groups. It overcompensates. |
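The Bonferroni arithmetic on the card (and the .05/6 = .0083 figure on the planned vs. post-hoc card) is a one-liner once k is counted; the group count below is an invented example:

```python
from math import comb

# Bonferroni correction as described on the card: count the pairwise
# comparisons (k) and divide alpha by k. Four groups is an invented
# example; the card advises against using this with more than 5 groups.
alpha = 0.05
n_groups = 4
k = comb(n_groups, 2)            # pairwise comparisons among 4 groups: 6
corrected_alpha = alpha / k      # each comparison is tested at this level
```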

What do correlations tell us? | Describe the relationship between two continuous variables, in terms of strength and direction. |

Why are outliers a problem? | They can have a large effect on means and other parts of the test. |

What is multicollinearity and singularity? | Multicollinearity exists when the IVs are highly correlated. Singularity occurs when one IV is actually a combination of other IVs. |

What are interactions over time? (see N & S p. 94-96) | An interaction exists when the groups change differently over time; on a plot, lines that are not parallel indicate that there is an interaction. |

What is the difference between Levene’s test of homogeneity of variance and Levene’s test of equality of variance? | There is no difference |

Why would one use a robust ANOVA test like Welch or Brown-Forsythe (start p. 246 Pallant)? | When the Levene's test result is lower than .05, telling us that the assumption of homogeneity has been violated. |

What is the difference between clinically significant and statistically significant? | A statistically significant difference makes no claim about the magnitude of the effect; clinical significance concerns whether the effect is large enough to matter in practice. |

Why don't we run a lot of t-tests instead of doing an ANOVA? (p. 77-78) | ANOVA is more efficient and introduces less noise. Running many t-tests increases the chance of an alpha error, so the overall error rate must be protected. ANOVA also lets us use the Mean Square within term as a better estimate of within-group variance. |

What is eta squared? (N & S p. 87, Pallant p. 247) | An effect-size statistic that shows the strength of the relationship. It always yields a number between 0 and 1 and is interpreted as the proportion of the variance in the DV that can be attributed to the IV. |

What did Cronbach recommend here for a reliability level? | The Cronbach alpha coefficient of a scale should be above .7. |

What is homoscedasticity of errors? | If the errors have constant variance, the errors are called homoscedastic. |
