PSYB70
Midterm 2
| Term | Definition |
|---|---|
| Construct validity | The extent to which the measured or manipulated variables of a study represent the theoretical constructs that they are meant to represent. |
| Construct | A key concept, phenomenon, or attribute. |
| Conceptual definition | Articulates the defining features of a construct and how it is related to other constructs |
| Theories | Offer a coherent explanation or interpretation of one or more phenomena |
| Hypothetico-deductive method | A cyclical process of theory development, starting with an observed phenomenon, then developing or using a theory to make a specific prediction of what should happen if that theory is correct. |
| Grounded theory | Researchers start with a set of observations (often generated through qualitative research findings) and develop a theory or an interpretation that is grounded in these data. |
| Qualitative research | Research that uses interviews, focus groups, ethnographies, case studies, content analysis, and observation to develop a deeper understanding of various phenomena and to propose new theories. |
| Hypothesis testing | Using a theory to make a specific prediction of what should happen if that theory is correct, then using data (usually from quantitative research) to test that prediction |
| Quantitative research | Measures or manipulates variables in categorical or numerical form in order to test and revise existing theories |
| Empirically-driven measurement | Measurement development should be guided by feedback from theories, research, experts, and stakeholders |
| Variable | An attribute that varies. |
| Operational definition | Articulates exactly how a construct is being measured, observed, or manipulated in a study |
| Self report (Advantage) | Easy and relatively fast; often the only way to measure private thoughts, opinions, and attitudes |
| Observation (Advantage) | Provides rich behavioural data; it is often high in ecological validity |
| Physiological measures (Advantage) | Allow researchers to "peek into" the fascinating workings of the body and mind. |
| Validation study | An empirical study that focuses on testing the reliability and validity of a measurement tool for a specific utility or purpose |
| Reliability | Consistency |
| Validity | "accuracy" |
| Psychometric properties | Indicators of the reliability and validity of a particular measurement tool. |
| Face validity | The extent to which a measure is subjectively considered a plausible operationalization of the conceptual variable in question |
| Interrater reliability | Consistency across multiple observers (ICC, r) |
| Test-retest reliability | Consistency across multiple time points (ICC, r) |
| Internal reliability | Consistency across multiple items (Cronbach's alpha; see the reliability sketch after the table) |
| Content validity | The measure contains all the parts that your theory says it should contain |
| Conceptual definitions | Are key for assessing content validity |
| Multidimensional constructs | Some constructs are defined by multiple dimensions (subscales) |
| Qualitative approach | Was content development theoretically grounded, empirically informed, and/or guided by feedback from experts or other stakeholders? |
| Quantitative approach | Has the researcher used statistical approaches (such as a factor analysis) to assess if the items 'cluster' together into predictable dimensions? |
| Factor analysis | A statistical technique used to examine if items can be reliably clustered into separate dimensions (called components or factors) |
| Factor loadings | Indicate which items "load" together (often indicated by a factor loading of > .40; see the factor analysis sketch after the table) |
| Criterion validity | The extent to which people’s scores on a measure exhibit predictable correlations with other variables (known as criteria). |
| Concurrent validity | When a new measure is correlated with established tests of the same construct. |
| Converging operations | When different operations of the same construct are correlated (i.e., they converge). |
| Known-groups paradigm | When a measure can reliably differentiate between people whose characteristics on that construct are already known. |
| Predictive validity | When a measure can reliably predict important and relevant future outcomes. |
| Convergent validity | When a measure is predictably correlated with conceptually similar constructs. |
| Discriminant validity | When a measure is predictably unrelated to conceptually distinct constructs, as well as to indicators of measurement bias (e.g., demographic biases, socially desirable responding, reactivity, demand characteristics). |
| Covariance | As the IV changes, so does the DV. |
| Temporal precedence | Changes in the IV occur before changes in DV |
| No alternative explanations | No extraneous variables can explain the link |
| Threats to internal validity | A term used to describe aspects of a study that leave open the possibility of an alternative explanation |
| Selection effect/threat | Comparing groups based on pre-existing or non-random criterion may be biased because the groups may differ systematically from one another in ways other than the key independent variable |
| Without experimental manipulation | One cannot be certain that it was the engagement activities and not one of these other pre-existing factors that caused the improvements |
| One-group post-test only design | A treatment or intervention is implemented and then the dependent variable is measured after the treatment, but there is no control group or comparison group. |
| One-group pretest-posttest only design | The dependent variable is measured before and after the implementation of a treatment, but there is no control group or comparison group. |
| History threat | Other events may have happened between the pretest and the posttest that explain the outcome |
| Maturation threat | The participant themselves may have changed between the pretest and posttest. |
| Testing/practice threat | People can improve just based on prior exposure to a particular testing procedure/style. |
| Instrumentation threat | The test itself might be different between the pretest and posttest. |
| Regression to the mean | Extremely high or low scores at pretest may be a statistical blip that stabilizes at posttest. |
| Experimental manipulation | The researcher creates two or more (2+) experimental conditions to serve as comparison groups |
| Random assignment to conditions | The researcher randomly assigns participants to the IV conditions. |
| Experimental control | The researcher ensures that the only thing that differs between the experimental groups is the independent variable (no extraneous variables). |
| Design confound | Any extraneous variable that systematically varies with the key variables under consideration (IVs and DVs) and therefore provides an alternative explanation for the results |
| Unsystematic variability | When the differences in the participants vary haphazardly or randomly across experimental conditions, or are unlikely to impact the DV, they are unlikely to confound the results |
| Systematic variability | When the differences in the participants vary systematically across the experimental conditions, and could impact the DV, they could bias the results. |
| Pilot study | A small-scale study conducted specifically for the purpose of testing the effectiveness of the experimental procedures |
| Manipulation check | Verifying the manipulation by including a measure of the construct the researcher is trying to manipulate |
| Attention/compliance checks | Ensure that participants paid attention to the instructions and actually followed them. |
| Construct validity checks | Examine if the IV creates the causal mechanism it is supposed to create (convergent validity) and not other factors (discriminant validity) |
| Ecological validity | The extent to which the experimental situation or setting is rooted in the real world (field experiment) and/or in a laboratory environment (lab experiment) meant to mimic real-world settings. |
| Experimental realism | The extent to which the study captures the same mental and psychological processes (e.g. affective, cognitive, behavioural) as would occur in the real world (even if the setting itself does not resemble real-world context) |
| Conceptual replication | Re-testing the same theoretical idea or hypothesis, but using different populations, manipulations/variable operationalizations, and/or study designs. Achieves external validity by assessing the generalizability of the results. |
| Margin of error | The likely range of error around the differences between experimental conditions |
| Statistical significance | An effect is considered "statistically significant" if it can be reliably detected (i.e., the effect is unlikely to equal 0); see the t-test sketch after the table |
| statistical power | The probability a study will detect a statistically significant result when an effect actually exists |
| Effect size | Considers the strength of the relationship/effect between two or more variables. |
| Direct replication | Directly replicating a study with the same population and the same methods. Helps to achieve statistical validity (by reducing the risk of Type I error). |
| Between-groups design | Each participant receives only one level of the independent variable. |
| Within-groups design | Each participant receives multiple levels of the independent variable. |
| Randomized posttest-only design | Participants are randomly assigned to independent variable groups and the dependent variable is measured once |
| Randomized pretest/posttest design | The DV is measured before (pretest) and after (posttest) the IV is manipulated |
| Matched-groups design | Participants are matched on important traits and then randomly assigned to conditions |
| Concurrent measures design | Each participant is exposed to multiple levels of the IV concurrently with a single attitudinal/behavioural choice serving as the DV. |
| Repeated measures design | Each participant is exposed to multiple IV conditions with the same dependent variable being measured after exposure to each condition |
| Counterbalancing | The levels of the IV are presented to participants in varied orders (systematically or randomly) to control for order effects; see the counterbalancing sketch after the table |
| Full counterbalancing | All possible order conditions are represented |
| Latin square | Creates balanced order combinations equal to the number of levels (4 levels = 4 orders; 5 levels = 5 orders) |
| Random counterbalancing | When there is a very large number of IV levels, the researcher may present the levels in a completely random (unbalanced) order or may choose to present only a random selection of the IV levels. |
| Order effect | An internal validity threat that occurs when participants' responses are affected by the order of the conditions |
| Carryover effect | A threat that arises when exposure to one level of the IV carries over to other levels of the IV. |
| One-way designs | Examine the impact of one independent variable (with 2 or more levels) on a dependent variable |
| Factorial designs | Cross 2 or more independent variables/factors to examine if they interact to influence the DV. |
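
The sketches below are optional worked examples in Python for the more computational terms in the table; all data, variable names, and helpers (e.g., `cronbach_alpha`) are hypothetical and included only to illustrate the ideas.

For internal and test-retest reliability, a minimal sketch of Cronbach's alpha and a test-retest correlation, assuming a small made-up score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents answering a 4-item scale (1-5 ratings)
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")

# Test-retest reliability as a simple Pearson r between two time points
# (the time-2 scores are simulated here purely for the example)
time1 = scores.sum(axis=1)
time2 = time1 + np.random.default_rng(0).integers(-1, 2, size=time1.size)
print(f"test-retest r = {np.corrcoef(time1, time2)[0, 1]:.2f}")
```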
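
For factor analysis and factor loadings, a rough NumPy-only sketch using simulated two-dimensional data and principal-components-style loadings, flagging items above the conventional .40 cutoff; a real validation study would typically use dedicated factor-analysis routines:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Simulate two hypothetical latent dimensions (e.g., two subscales)
anxiety = rng.normal(size=n)
avoidance = rng.normal(size=n)
# Six items: items 1-3 driven by the first factor, items 4-6 by the second
items = np.column_stack([
    anxiety   + rng.normal(scale=0.5, size=n),
    anxiety   + rng.normal(scale=0.5, size=n),
    anxiety   + rng.normal(scale=0.5, size=n),
    avoidance + rng.normal(scale=0.5, size=n),
    avoidance + rng.normal(scale=0.5, size=n),
    avoidance + rng.normal(scale=0.5, size=n),
])

# Principal-components-style loadings: eigenvectors of the correlation
# matrix scaled by the square root of their eigenvalues
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1][:2]           # keep the two largest factors
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

# Items that "load" together cluster on the same factor (|loading| > .40)
for i, row in enumerate(loadings, start=1):
    flags = ["*" if abs(l) > 0.40 else " " for l in row]
    print(f"item {i}: " + "  ".join(f"{l:+.2f}{f}" for l, f in zip(row, flags)))
```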
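
For statistical significance, effect size, and statistical power, a small simulation assuming a hypothetical two-group experiment with a true effect of d = 0.5 and n = 40 per group (arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical experiment: treatment vs. control, n = 40 per group,
# true effect of 0.5 SD ("medium" by Cohen's convention)
control = rng.normal(0.0, 1.0, size=40)
treatment = rng.normal(0.5, 1.0, size=40)

# Statistical significance: is the group difference reliably non-zero?
t, p = stats.ttest_ind(treatment, control)

# Effect size (Cohen's d): mean difference in pooled-SD units
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")

# Statistical power by simulation: how often does a study of this size
# detect a true d = 0.5 at alpha = .05?
hits = 0
for _ in range(2000):
    c = rng.normal(0.0, 1.0, size=40)
    tr = rng.normal(0.5, 1.0, size=40)
    if stats.ttest_ind(tr, c).pvalue < 0.05:
        hits += 1
print(f"estimated power = {hits / 2000:.2f}")
```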
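
For counterbalancing, a sketch contrasting full counterbalancing, a simple cyclic Latin square (not the carryover-balanced variant), and random counterbalancing for a hypothetical four-level IV:

```python
import itertools
import numpy as np

conditions = ["A", "B", "C", "D"]   # four levels of a within-groups IV
rng = np.random.default_rng(7)

# Full counterbalancing: every possible order is represented (4 levels -> 24 orders)
full = list(itertools.permutations(conditions))
print(f"full counterbalancing: {len(full)} orders")

# Latin square: as many orders as levels, with each condition appearing once
# in each ordinal position (built here by cyclic rotation)
latin = [conditions[i:] + conditions[:i] for i in range(len(conditions))]
for row in latin:
    print("latin square order:", row)

# Random counterbalancing: each participant simply gets a random order
participant_order = list(rng.permutation(conditions))
print("random order for one participant:", participant_order)
```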