RIB
| Question | Answer |
|---|---|
| What is ontology in research philosophy? | The study of the nature of reality. → It asks whether social phenomena exist independently or are created through social interaction. |
| What is epistemology? | The theory of knowledge — how we know what we know. → It concerns what counts as valid evidence. |
| What is positivism? | A philosophy assuming reality is objective and measurable. → Knowledge comes from observation and empirical data. |
| What is interpretivism? | A philosophy emphasizing understanding human meaning and context. → Focuses on subjective interpretation. |
| What is objectivism? | The belief that social phenomena exist independently of social actors. → Reality is external to our perceptions. |
| What is constructionism? | The view that social reality is produced through social interaction. → Reality is continually created and revised. |
| What is deductive reasoning? | Starting from theory to test hypotheses with data. → Typical of quantitative research. |
| What is inductive reasoning? | Developing theory from observed data. → Typical of qualitative research. |
| What is a concept in research? | A label for a phenomenon studied by researchers. → Example: motivation, satisfaction. |
| What is an indicator? | A measure used to capture a concept. → Example: hours worked = indicator of workload. |
| What is operationalization? | Turning abstract concepts into measurable variables. → Links theory to data. |
| What is measurement error? | The difference between true value and observed value. → Can be systematic or random. |
| What is systematic error? | A consistent bias in measurement. → Example: miscalibrated scale always off by +2kg. |
| What is random error? | Unpredictable fluctuations in measurement. → Reduces reliability, not validity. |
| What is internal reliability? | Consistency within a scale or test. → Assessed using Cronbach’s alpha. |
| What is inter-rater reliability? | Agreement between different observers or coders. → Ensures objectivity. |
| What is test-retest reliability? | Stability of scores over time. → Same results across repeated testing. |
| What does Cronbach’s alpha measure? | Internal consistency of a scale. → Higher α indicates more reliable items (see the alpha sketch after this table). |
| What is transparency in research? | Clear documentation of all research steps. → Allows replication and evaluation. |
| What is a sampling frame? | The list or source from which a sample is drawn. → Must represent the population. |
| What is sampling bias? | Systematic error due to non-representative sample. → Threatens external validity. |
| What is response bias? | When participants answer untruthfully or inaccurately. → Common in surveys. |
| What is non-response bias? | When non-participants differ systematically from participants. → Affects generalizability. |
| What is the population in research? | The entire group a researcher wants to draw conclusions about. → The sample represents this group. |
| What is the difference between a parameter and a statistic? | Parameter = population value; statistic = sample estimate. → Statistics are used to infer parameters. |
| What is descriptive statistics used for? | Summarizing and organizing data. → Example: mean, median, SD. |
| What is inferential statistics used for? | Making generalizations about a population based on a sample. → Example: t-tests, ANOVA, regression. |
| What does a confidence interval (CI) represent? | A range likely to contain the true population parameter. → Typically a 95% confidence level (see the CI sketch after this table). |
| What does a narrow confidence interval mean? | Greater precision of the estimate. → Smaller sampling error. |
| What is an effect size? | The magnitude of an observed relationship or difference. → Indicates practical significance. |
| What is normal distribution? | Symmetrical, bell-shaped data distribution. → Many statistical tests assume this. |
| What is the central limit theorem? | The sampling distribution of the mean approaches normality as sample size increases. → Justifies parametric tests. |
| What is a p-value? | Probability of observing the result if H₀ were true. → p < .05 means statistically significant. |
| What is the null hypothesis (H₀)? | States no effect or difference exists. → Tested against the alternative (H₁). |
| What is the alternative hypothesis (H₁)? | States that there is an effect or relationship. → Supported if p < α. |
| What is an alpha level (α)? | The threshold for significance, usually .05. → Chance of making a Type I error. |
| What is a confidence level? | The probability that a confidence interval contains the true value. → 1 – α. |
| What does variance measure? | The average squared deviation from the mean. → Indicates data spread. |
| What does standard deviation measure? | The average deviation from the mean in original units. → √variance. |
| What is a z-score? | Number of standard deviations a value is from the mean. → Standardized measure for comparison. |
| What is heteroskedasticity? | Non-constant variance of residuals. → Violates regression assumptions. |
| What is multivariate analysis? | Statistical analysis involving multiple variables simultaneously. → Example: multiple regression. |
| What are degrees of freedom (df)? | The number of independent pieces of information in a calculation. → Used in statistical tests. |
| What is the purpose of randomization in experiments? | To evenly distribute confounders across groups. → Increases internal validity. |
| What is a manipulated variable? | An IV deliberately varied by the researcher. → Example: treatment vs control. |
| What is an observed variable? | Measured, not manipulated. → Example: age, gender. |
| What is the Hawthorne effect? | Participants change behavior because they know they’re being observed. → Threatens validity. |
| What is social desirability bias? | Participants give answers that seem socially acceptable. → Affects self-report accuracy. |
| What does triangulation achieve? | Confirms findings through multiple data sources or methods. → Increases credibility. |
| What is data saturation? | The point when new data no longer yield new insights. → Signals adequate sample size in qualitative research. |
| What is member checking? | Asking participants to verify interpretations. → Enhances credibility. |
| What is prolonged engagement? | Spending extended time in the research setting. → Builds trust and deeper understanding. |
| What is reflexive journaling? | Recording researcher reflections during fieldwork. → Tracks potential biases. |
| What is thematic saturation? | When no new themes emerge during analysis. → Marks completeness. |
| What is a theme in qualitative analysis? | A recurring pattern representing an important idea in the data. → Derived from coded material. |
| What is open coding? | Initial stage of coding where concepts are identified. → Part of grounded theory. |
| What is axial coding? | Connecting categories and subcategories after open coding. → Builds conceptual structure. |
| What is selective coding? | Integrating core categories into a theory. → Final step of grounded theory. |
| What is ethical reflexivity? | Continuously considering ethical implications during research. → Goes beyond formal consent. |
| What is confidentiality in research ethics? | Protecting participant information from disclosure. → Fundamental ethical duty. |
| What is anonymity? | Participants cannot be identified from their data. → Often used in sensitive research. |
| What is data integrity? | Ensuring accuracy and consistency of collected data. → Central to research ethics. |
| What is plagiarism in research? | Using others’ ideas or words without proper credit. → Violates academic honesty. |
| What is data falsification? | Manipulating or fabricating results. → Severe ethical violation. |
| What is the difference between credibility and transferability? | Credibility = accuracy within context; Transferability = applicability across contexts. |
| What is the difference between dependability and confirmability? | Dependability = consistency of findings; Confirmability = objectivity of analysis. |
| What is a mixed-methods approach? | Combining quantitative and qualitative methods. → Balances breadth and depth. |
| What is data triangulation? | Using different data sources for validation. → Strengthens findings. |
| What is methodological triangulation? | Using multiple methods (e.g., interviews + surveys). → Enhances robustness. |
| What is theoretical triangulation? | Interpreting data through multiple theoretical lenses. → Adds analytical depth. |
| What does “breadth vs depth” mean in research? | Quantitative offers breadth (many cases), qualitative offers depth (rich detail). |
| What does static vs process orientation mean? | Quantitative research captures static snapshots; qualitative research follows ongoing processes. |
| What is contextualization in qualitative research? | Understanding phenomena within their social and cultural settings. |
| What does thick description enable? | It lets readers judge transferability by providing detailed contextual information. |
| What is data reduction? | Simplifying and organizing data during analysis. → Core to qualitative coding. |
| Why is reflexivity important? | It acknowledges the researcher’s influence on findings. → Ensures transparency. |
| What does reliability refer to in research? | The consistency and stability of measurements over time. → Reliable measures yield similar results under consistent conditions. |
| What does validity assess? | Whether a measure or study accurately captures what it intends to measure. → It’s about the truthfulness of conclusions. |
| What is internal validity? | The degree to which causal conclusions are justified. → Ensures the IV truly causes the DV. |
| What is external validity? | The extent to which findings can be generalized beyond the study. → Linked to sample representativeness. |
| What is construct validity? | Whether measures and manipulations accurately reflect the theoretical construct. → Checks alignment between concept and measurement. |
| What is face validity? | Whether a test appears to measure what it should. → Based on surface-level judgment. |
| What is predictive validity? | How well a measure predicts future outcomes. → Example: job test scores predicting performance. |
| What is concurrent validity? | Whether a measure correlates with other measures taken at the same time. → Evidence for consistency across tools. |
| What does replication ensure? | That findings can be reproduced under similar conditions. → Increases trust and scientific rigor. |
| What’s the difference between a research design and a method? | Design = overall plan; Method = data collection technique. → Example: survey = method, cross-sectional = design. |
| What is an experimental design? | A design with controlled manipulation and random assignment. → High internal validity, lower external validity. |
| What is a cross-sectional design? | A study measuring variables at one point in time. → Good for correlation, weak for causation. |
| What is a longitudinal design? | Repeated observations of the same variables over time. → Useful for studying change and causality. |
| What is attrition? | Participant dropout in longitudinal studies. → Threatens validity. |
| What is a case study design? | In-depth examination of one or few cases. → Provides rich data, but low generalizability. |
| What is a comparative design? | A study comparing two or more cases. → Highlights contextual differences and similarities. |
| What is a nominal variable? | A categorical variable with no order. → Example: gender, nationality. |
| What is an ordinal variable? | A variable with ordered categories but unequal intervals. → Example: satisfaction level. |
| What is an interval variable? | Numeric variable with equal intervals but no true zero. → Example: temperature in Celsius. |
| What is a ratio variable? | Numeric variable with equal intervals and a true zero. → Example: age, income. |
| What are central tendency measures? | Mean, median, mode. → Describe the center of a data distribution. |
| What are univariate statistics? | Statistics describing a single variable. → Example: mean, SD, range. |
| What are bivariate statistics? | Statistics describing relationships between two variables. → Example: correlation. |
| What does “correlation ≠ causation” mean? | Variables may relate without one causing the other. → A third variable may explain the relationship. |
| What is the correlation coefficient range? | Between –1 and +1. → Indicates strength and direction of the linear relationship. |
| What is regression used for? | To predict and explain relationships between variables. → Quantifies how IVs affect the DV. |
| What is the basic regression formula? | DV = b₀ + b₁*IV + ε. → Predicts the DV from the IV plus error (see the regression sketch after this table). |
| What does “b₁” represent in regression? | The slope coefficient. → Shows change in DV for one-unit change in IV. |
| What does “ε” represent in regression? | The error term. → Captures unexplained variation. |
| What does the adjusted R² tell us? | The proportion of variance in DV explained by IVs. → Adjusted for number of predictors. |
| What is multicollinearity? | When IVs are too highly correlated. → Makes coefficients unstable. |
| What is homoskedasticity? | Equal variance of residuals across IV levels. → Violations bias standard errors and significance tests. |
| What is the normality assumption in regression? | Residuals (not data) should follow a normal distribution. → Needed for valid inference. |
| What is independence of observations? | Each data point must be independent of others. → Violations bias significance tests. |
| What is a Type I error? | False positive. → Rejecting a true null hypothesis. |
| What is a Type II error? | False negative. → Failing to reject a false null hypothesis. |
| What is statistical power? | Probability of detecting a true effect. → 1 – β; increases with larger samples (see the power simulation after this table). |
| What three factors affect statistical power? | Alpha, sample size, effect size. → Larger sample and effect = higher power. |
| What is moderation? | When the IV–DV relationship depends on a third variable. → Tests “when” or “for whom” the effect occurs. |
| What is mediation? | When an IV influences a DV through another variable. → Explains “how” or “why” an effect occurs. |
| What is the total effect in mediation analysis? | c = total effect of X on Y. → Sum of direct and indirect effects. |
| What is the indirect effect in mediation? | a*b = mediation path. → Product of IV→Mediator and Mediator→DV. |
| What are control variables? | Variables held constant to isolate the IV’s effect. → Help reduce confounding. |
| What are confounders? | Variables related to both IV and DV that distort true effects. → Must be controlled for. |
| What is ANOVA used for? | To compare means across 3+ groups. → Tests whether group differences are statistically significant. |
| What is the F-test in ANOVA? | A ratio of between-group to within-group variance. → Large F = greater likelihood of real group differences (see the ANOVA sketch after this table). |
| What is between-group variance? | Variation due to experimental manipulation. → Captures group mean differences. |
| What is within-group variance? | Variation within each group due to individual differences. → Represents random error. |
| What is a one-way ANOVA? | ANOVA with one factor (IV) having multiple levels. → Example: 3 training programs. |
| What is a two-way ANOVA? | ANOVA with two factors. → Can test interaction effects. |
| What is a mixed ANOVA? | Includes both between-subjects and within-subjects factors. → Tests group and time effects simultaneously. |
| What assumptions underlie ANOVA? | Normality, homogeneity of variance, independent observations, continuous DV. → Violations affect reliability. |
| What is a “factor” in ANOVA? | A categorical independent variable. → Each factor has levels. |
| What is a “level” in ANOVA? | A single category or group within a factor. → Example: factor “department” with levels HR, IT, Sales. |
| What is sampling error? | The difference between sample estimate and true population value. → Reduced by larger sample size. |
| What is a probability sample? | A sample where each member has a known, non-zero chance of selection. → Increases generalizability. |
| What is a simple random sample? | Each population member has equal selection chance. → Gold standard for representativeness. |
| What is systematic sampling? | Selecting every nth element from a list. → Slightly structured randomness. |
| What is stratified sampling? | Dividing the population into subgroups (strata) and sampling randomly from each. → Ensures representation of each subgroup. |
| What is cluster sampling? | Randomly selecting groups (clusters) instead of individuals. → Efficient for large populations. |
| What is non-probability sampling? | Sampling not based on random selection. → Limits generalizability. |
| What is convenience sampling? | Using readily available participants. → Easy but biased. |
| What is quota sampling? | Non-random sampling that fills quotas matching population proportions. → Often used in market research. |
| Why is representativeness important? | Ensures findings generalize to the population. → Increases external validity. |
| What is qualitative research focused on? | Understanding meaning, context, and experiences. → Uses words, not numbers. |
| What is the theoretical approach of qualitative research? | Inductive. → Builds theory from data, not tests existing theory. |
| What is a definitive concept? | Clearly defined and standardized term. → Example: GDP, age. |
| What is a sensitizing concept? | Broad, flexible guide to explore a phenomenon. → Keeps analysis open-ended. |
| What is credibility in qualitative research? | Accuracy of representation of participants’ views. → Parallels internal validity. |
| What is dependability? | Stability of findings over time and contexts. → Parallels reliability. |
| What is confirmability? | Objectivity of findings. → Parallels researcher neutrality. |
| What is transferability? | Applicability of findings to other contexts. → Parallels external validity. |
| What is triangulation? | Using multiple data sources or methods. → Strengthens validity through cross-verification. |
| What is reflexivity? | Researcher’s reflection on their own bias and role. → Enhances transparency. |
| What is thick description? | Providing detailed contextual information. → Allows readers to assess transferability. |
| What is purposive sampling? | Selecting participants with relevant experience or knowledge. → Common in qualitative research. |
| What is snowball sampling? | Asking participants to recruit others. → Useful for hidden or hard-to-reach groups. |
| What are the main steps in qualitative data analysis? | Familiarization, coding, theme development, iteration. → Repeated data engagement. |
| What is thematic analysis? | Identifying recurring patterns and meanings in data. → Most common qualitative method. |
| What are common cues for finding themes? | Repetitions, metaphors, similarities/differences, missing data. → Signal underlying meanings. |
| What is grounded theory? | Developing theory directly from data using iterative analysis. → Inductive and cyclical process. |
| What is a “category” in grounded theory? | A higher-level concept grouping similar ideas. → More abstract than a single concept. |
| What is a criticism of grounded theory? | Risk of losing context when fragmenting data. → Also vague on what “counts” as theory. |
| What is qualitative coding? | Assigning labels to segments of text to categorize meaning. → Foundation for theme building. |
| Why record and transcribe interviews? | To preserve raw data for accurate analysis. → Enables coding and verification. |
| What are the four trustworthiness criteria in qualitative research? | Credibility, dependability, confirmability, transferability. → Replace validity/reliability. |
| What is research ethics concerned with? | Protecting participants and ensuring integrity. → Avoids harm, deception, and privacy violations. |
| What is informed consent? | Participants’ voluntary agreement to participate after full information. → Central ethical principle. |
| When might informed consent be impractical? | In covert or archival research. → 🔴 Research specific cases for examples. |
| Why should deception be avoided? | It undermines trust and violates autonomy. → Only justified if minimal risk and necessary. |
| What is utilitarianism in ethics? | Evaluating actions by their consequences. → Maximizes overall good. |
| What is universalism in ethics? | Judging actions by adherence to moral rules. → Focuses on duty, not outcomes. |
| What is an unobtrusive measure? | A method that doesn’t involve researcher interference. → Example: content analysis, archives. |
| What is inter-coder reliability? | Agreement between multiple coders on data interpretation. → Ensures consistency. |
| What distinguishes quantitative from qualitative research? | Quantitative = numbers, testing, breadth; Qualitative = words, context, depth. |
| What is a Likert scale? | A rating scale measuring attitudes. → Treated as interval, though ordinal by nature. |
| What is descriptive statistics used for? | Summarizing and describing data features. → Example: mean, SD, range. |
| What is inferential statistics used for? | Making predictions or inferences about populations. → Based on sample data. |
| What does a p-value indicate? | Probability of observing results if null hypothesis were true. → Low p (<.05) = significant result. |
| What does “statistical significance ≠ practical significance” mean? | A result may be statistically significant yet have trivial real-world impact. |
| What does the “standard” in standard deviation mean? | It expresses variance in original measurement units. → Allows intuitive interpretation. |
| When should a one-tailed hypothesis be used? | When a specific direction of effect is expected. → E.g., “greater than” or “less than.” |
| What is the general linear model (GLM)? | Framework expressing data = model + error. → Basis for regression and ANOVA. |
| What is the main limitation of qualitative research? | Limited generalizability due to small, context-specific samples. |
| What is the main limitation of quantitative research? | Overreliance on measurement and loss of human context. |
| What are key similarities between qual and quant research? | Both collect/analyze data, aim for validity, and follow systematic methods. |
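
The worked sketches below illustrate a few of the formulas from the cards above; all data values are hypothetical and Python is used throughout. First, Cronbach’s alpha from the internal-reliability cards: a minimal sketch assuming `items` is a respondents-by-items NumPy array of scale scores.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5 respondents x 3 items; higher alpha = more internally consistent items.
scores = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 3, 2], [4, 4, 5]])
print(round(cronbach_alpha(scores), 2))
```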
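Next, a 95% confidence interval for a sample mean and a z-score, using only the standard library. This assumes the normal approximation is reasonable (central limit theorem) and uses the conventional critical value 1.96 for a 95% confidence level.

```python
import math

data = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7, 12.4, 12.0]  # hypothetical values
n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample standard deviation
se = sd / math.sqrt(n)                                        # standard error of the mean

z_crit = 1.96                                   # critical z for a 95% confidence level
low, high = mean - z_crit * se, mean + z_crit * se
print(f"95% CI: {low:.2f} to {high:.2f}")       # narrower interval = more precise estimate

z_score = (12.5 - mean) / sd                    # SDs that the value 12.5 lies from the mean
print(f"z-score of 12.5: {z_score:.2f}")
```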
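The regression cards can be made concrete with a small ordinary-least-squares fit of DV = b₀ + b₁*IV + ε, including R² and adjusted R²; the data are invented for illustration.

```python
import numpy as np

iv = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])        # independent variable
dv = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1])        # dependent variable

b1 = np.cov(iv, dv, ddof=1)[0, 1] / iv.var(ddof=1)   # slope: change in DV per one-unit IV change
b0 = dv.mean() - b1 * iv.mean()                       # intercept
residuals = dv - (b0 + b1 * iv)                       # error term: unexplained variation

ss_res = (residuals ** 2).sum()
ss_tot = ((dv - dv.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot                              # proportion of DV variance explained
n, k = len(dv), 1                                     # sample size, number of predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)         # penalized for the number of predictors
print(f"b0 = {b0:.2f}, b1 = {b1:.2f}, adjusted R² = {adj_r2:.3f}")
```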
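Statistical power (1 – β) can be estimated by simulation: draw many samples with a known true effect and count how often the test detects it at α = .05. The sketch below assumes two equal-size groups with SD = 1 and a two-sided z-test.

```python
import math
import random

def simulate_power(effect_size=0.5, n=50, reps=2000):
    crit = 1.96                                  # two-sided critical z for alpha = .05
    hits = 0
    for _ in range(reps):
        g1 = [random.gauss(0.0, 1.0) for _ in range(n)]           # control group
        g2 = [random.gauss(effect_size, 1.0) for _ in range(n)]   # group with a true effect
        diff = sum(g2) / n - sum(g1) / n
        se = math.sqrt(2.0 / n)                  # SE of the mean difference when SD = 1
        if abs(diff / se) > crit:
            hits += 1                            # effect detected: no Type II error this run
    return hits / reps                           # estimated power = 1 - beta

# Larger samples and larger effects raise power; a stricter alpha lowers it.
print(simulate_power(effect_size=0.5, n=50))
```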
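Finally, the one-way ANOVA F-ratio: between-group variance divided by within-group variance, shown for three hypothetical groups (e.g., three training programs).

```python
import numpy as np

groups = [np.array([5.0, 6.0, 7.0, 6.5]),
          np.array([7.5, 8.0, 8.5, 7.0]),
          np.array([4.0, 5.0, 4.5, 5.5])]

grand_mean = np.concatenate(groups).mean()
k = len(groups)                                    # number of levels of the factor
n_total = sum(len(g) for g in groups)

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)                  # variance due to group differences
ms_within = ss_within / (n_total - k)              # variance due to individual differences
f_ratio = ms_between / ms_within                   # large F = group means likely differ
print(f"F({k - 1}, {n_total - k}) = {f_ratio:.2f}")
```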