SPED 621 LEC 7/8
| Term | Definition |
|---|---|
| Fidelity | Completing an assessment as designed, following intended methods and quality standards. |
| Technical Adequacy | Documented evidence that an assessment tool is unbiased, reliable, and valid to support sound decisions. |
| Reliability and Validity | Two key properties of a test that determine its consistency and accuracy. |
| Reliability | The consistency of test scores across time, forms, and raters. |
| Reliability Factors | Scores should not be affected by chance, timing, test version, or rater differences. |
| Interrater Reliability | The consistency of scores given by different raters observing the same behavior. |
| Factors Affecting Reliability | Test length, item difficulty, examinee error, environmental error, and evaluator drift. |
| Fidelity and Reliability | Following test administration rules and scoring instructions increases interrater reliability. |
| Test-Retest Reliability | Administering the same test twice under similar conditions; strong correlation (r ≥ .70) shows reliability. |
| Validity | The extent to which a test measures what it is intended to measure. |
| Face Validity | When a test appears to measure what it claims to measure. |
| Content Validity | The degree to which test items represent the intended domain without overlap or confusion. |
| Construct Validity | The degree to which a test reflects the theoretical concept it aims to measure. |
| Predictive Validity | The degree to which test results predict future performance or outcomes. |
| Reliability vs Validity | Both are essential; reliability is necessary but not sufficient for validity, so a test cannot be valid unless it is also reliable. |
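The test-retest and interrater criteria above can be sketched numerically. The snippet below is a minimal illustration, not part of the lecture: the score lists are hypothetical, and `pearson_r` and `percent_agreement` are assumed helper names.

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

def percent_agreement(rater_a, rater_b):
    """Share of observations two raters scored identically."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Test-retest: same (hypothetical) students, same test, two occasions.
first_administration = [85, 78, 92, 64, 70, 88]
second_administration = [82, 80, 95, 61, 72, 90]
r = pearson_r(first_administration, second_administration)
print(f"test-retest r = {r:.2f}", "-> reliable" if r >= 0.70 else "-> not reliable")

# Interrater: two raters scoring the same six behavior samples.
rater_1 = [1, 0, 1, 1, 0, 1]
rater_2 = [1, 0, 1, 0, 0, 1]
print(f"interrater agreement = {percent_agreement(rater_1, rater_2):.0%}")
```

A correlation at or above the .70 benchmark from the table would suggest score stability; a weaker correlation, or low rater agreement, would point to chance, drift, or administration errors as sources of inconsistency.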