Further stats
| Question | Answer |
|---|---|
| E(X) | Σ x*P(X=x) (numerical check after the table) |
| Var(X) | E(X^2) - E(X)^2, i.e. the average squared deviation from the mean; it is the standard deviation squared |
| E(a+bX) | a+bE(X) |
| Var(a+bX) | b^2Var(X) |
| E(X+Y) | E(X) + E(Y) |
| E(a+bX + c + dY) | (a+c) + bE(X) + dE(Y) |
| Var(X+Y) | Var(X) + Var(Y), provided X and Y are independent |
| Signs of E and Var when combining variables | For E, keep the sign: E(X-Y) = E(X) - E(Y). For Var, always add the variances, even for Var(X-Y) = Var(X) + Var(Y) (independence assumed). |
| How to derive mean of uniform | All probabilities are 1/n, so 1/n can be taken outside the Σ. The result Σr = n(n+1)/2 is then multiplied by 1/n to give (n+1)/2 (worked derivation after the table) |
| How to derive variance of uniform | For E(X^2), again take 1/n outside the Σ and use the standard result for Σr^2, giving (n+1)(2n+1)/6. Subtract E(X)^2 = ((n+1)/2)^2 from the previous card |
| Situation for binomial | Fixed probability of success, independent trials, a fixed number of trials, and a binary outcome (success or failure) |
| Derivation for mean of binomial | The expectation of a single trial is 1*p + 0*q = p. The outcome X is the sum x1 + x2 + ... + xn with E(xi) = p, so E(X) = np by the rule E(X+Y) = E(X) + E(Y) (simulation check after the table) |
| Derivation for the variance of binomial | The variance of a single trial is E(X^2) - E(X)^2 = p - p^2 = p(1-p) = pq. X is the sum of n independent trials each with variance pq, so Var(X) = npq |
| Difference between Var(nX) and Var(X + X + X + ... n times) | Var(nX) scales one sample by n, so the variance is n^2 Var(X). The sum takes n independent samples, each varying separately, so the total is less varied: its variance is n Var(X) (simulation comparison after the table) |
| Situation for Poisson | Events occur at a constant average rate (over time or distance) in a fixed interval, independently of each other, e.g. phone calls or defectives. Only suitable if the mean is approximately equal to the variance, as the model assumes they are equal |
| When are both binomial and Poisson appropriate | When the binomial conditions hold AND n is large AND p is small; np must be less than 5 |
| Sum of two Poisson | Also Poisson, with lambda equal to the sum of the two individual lambdas, provided the two variables are independent (simulation check after the table) |
| Geometric distribution | Repeated trials until the first success. Trials must be independent, with a fixed probability of success and a binary outcome |
| Calc geo probabilities | The calculator handles cumulative probabilities, but in general P(X=x) is x-1 failures followed by 1 success, so P(X=x) = p*q^(x-1) (sketch after the table) |
| Meaning of bivariate data | Two sets of data in which each datum from one set is paired with one from the other |
| Types of bivariate | Random on non-random: one variable is independently varied and the other is measured. Random on random: two random variables with a proposed relationship are compared |
| Significance of the type | For random on non-random, the independent (controlled) variable is considered to have negligible error, so no variance; on a scatter diagram its data will lie on fixed vertical lines |
| When is PMCC valid for hypothesis testing? | Only for random-on-random data, as the p-values assume the data come from a bivariate normal distribution |
| How to calculate PMCC on calculator | Enter the data in 2 lists, then CALC → REG → X → ax+b (an equivalent Python check appears after the table) |
| What is PMCC for | Linear association/ Correlation. |
| Effect size for pmcc | The strength of the association between the two variables: the higher abs(r) is, the greater the effect size |
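
A minimal numerical check of the E(X) and Var(X) cards. The four values and their probabilities are made up for illustration, not taken from the cards.

```python
import numpy as np

# Hypothetical discrete distribution: values of X and their probabilities.
x = np.array([1, 2, 3, 4])
p = np.array([0.1, 0.2, 0.3, 0.4])

e_x = np.sum(x * p)        # E(X) = sum of x * P(X = x)
e_x2 = np.sum(x**2 * p)    # E(X^2)
var_x = e_x2 - e_x**2      # Var(X) = E(X^2) - E(X)^2
sd_x = np.sqrt(var_x)      # standard deviation = sqrt(variance)

print(e_x, var_x, sd_x)    # 3.0 1.0 1.0
```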
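
A worked version of the two uniform-distribution cards, for a discrete uniform distribution on 1, 2, ..., n:

$$E(X) = \frac{1}{n}\sum_{r=1}^{n} r = \frac{1}{n}\cdot\frac{n(n+1)}{2} = \frac{n+1}{2}$$

$$E(X^2) = \frac{1}{n}\sum_{r=1}^{n} r^2 = \frac{1}{n}\cdot\frac{n(n+1)(2n+1)}{6} = \frac{(n+1)(2n+1)}{6}$$

$$\operatorname{Var}(X) = \frac{(n+1)(2n+1)}{6} - \left(\frac{n+1}{2}\right)^2 = \frac{(n+1)(n-1)}{12} = \frac{n^2-1}{12}$$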
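
A simulation sketch of the binomial mean and variance derivations, treating X as the sum of n Bernoulli trials. The trial count, probability and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 20, 0.3                            # hypothetical trial count and success probability
trials = rng.random((100_000, n)) < p     # each row: n independent Bernoulli(p) trials
x = trials.sum(axis=1)                    # X = number of successes = x1 + x2 + ... + xn

print(x.mean(), n * p)                    # sample mean ~ np  = 6
print(x.var(), n * p * (1 - p))           # sample var  ~ npq = 4.2
```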
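
A simulation sketch of the Var(nX) versus Var(X + X + ... + X) card, using standard normal samples purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
x = rng.normal(loc=0, scale=1, size=(100_000, n))  # independent samples with Var(X) = 1

var_nx = np.var(n * x[:, 0])       # Var(nX): one sample scaled by n -> n^2 * Var(X) = 16
var_sum = np.var(x.sum(axis=1))    # Var(X1 + ... + Xn): n independent samples -> n * Var(X) = 4

print(var_nx, var_sum)
```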
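
A simulation sketch of the "sum of two Poissons" card; the two rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2 = 2.5, 4.0                  # hypothetical rates
x = rng.poisson(lam1, 1_000_000)
y = rng.poisson(lam2, 1_000_000)
s = x + y                              # sum of two independent Poisson variables

# A Poisson(lam1 + lam2) variable has mean and variance both equal to 6.5.
print(s.mean(), s.var())
```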
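
A small sketch of the geometric probability card, with a hypothetical helper `geometric_pmf` (not a library function).

```python
from math import isclose

def geometric_pmf(x: int, p: float) -> float:
    """P(X = x): x - 1 failures followed by one success."""
    q = 1 - p
    return p * q ** (x - 1)

# Hypothetical example: p = 0.2, probability the first success is on trial 3.
print(geometric_pmf(3, 0.2))          # 0.2 * 0.8^2 = 0.128

# Cumulative P(X <= x) = 1 - q^x (the complement of x failures in a row).
p, x = 0.2, 3
assert isclose(sum(geometric_pmf(k, p) for k in range(1, x + 1)), 1 - (1 - p) ** x)
```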
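
Calculator key sequences vary by model; as a cross-check, the PMCC can be computed with numpy's `corrcoef` on the same two lists. The data here are made up.

```python
import numpy as np

# Hypothetical paired (bivariate) data entered as two lists, as on the calculator.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

r = np.corrcoef(x, y)[0, 1]   # product-moment correlation coefficient
print(r)                      # close to +1: strong positive linear association
```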