COMM 561 Final
| Term | Definition | Example |
|---|---|---|
| Plato's Allegory of the Cave | Prisoners chained in a cave, seeing only shadows on a wall and believing them to be reality. The physical world we perceive is like shadows on a cave wall, an imperfect reflection of true reality. | |
| science | a way of learning and knowing about the world | |
| two realities | experiential reality and agreement reality | |
| experiential reality | the things you know as a function of your direct experience | |
| agreement reality | things you consider real because you’ve been told they’re real, and everyone else seems to agree they are real | |
| human reasoning | the process of thinking, forming judgments, and drawing conclusions, often using both logical and probabilistic approaches. | |
| native human inquiry | the ordinary, everyday way people come to know the world through direct experience, tradition, and authority; prone to errors such as inaccurate observation, overgeneralization, and selective observation | |
| causal reasoning | the process of determining a relationship between a cause and its effect, essentially asking "how" or "why" something happened | |
| probabilistic reasoning | a framework for making decisions and drawing conclusions under conditions of uncertainty, relying on probability theory rather than absolute certainty | |
| tradition | can both inform and constrain human reasoning by providing a shared conceptual framework and a basis for establishing standards of justification | |
| authority | reasoning from authority, a logical fallacy and a practical method of acquiring knowledge where an argument is supported by the opinion of a credible source | |
| errors in personal human inquiry | inaccurate observation, overgeneralization, selective observation, made-up information, ex post facto hypothesizing, illogical reasoning, ego involvement in understanding, premature closure of inquiry, mystification | |
| inaccurate observation | observing carelessly or mistakenly, so that what is recorded does not match what actually occurred | |
| overgeneralization | drawing general, negative conclusions from a single experience or limited evidence, often using words like "always" or "never" | |
| selective observation | when we see only those patterns that we want to see or when we assume that only the patterns we have experienced directly exist | |
| made-up information | broadly defined as false or inaccurate information | |
| ex post facto hypothesizing | creating a hypothesis that explains results after they have already been observed | |
| illogical reasoning | thinking or arguing with flawed logic, invalid premises, or insufficient evidence, which leads to conclusions that are unsound, false, or incoherent | |
| ego involvement in understanding | state where understanding and learning are influenced by personal stakes, often involving a concern for how one's self-image, status, or competence is perceived by others | |
| premature closure of inquiry | a bias or error where an investigation or assessment is halted prematurely because a plausible explanation or solution is found, leading to incomplete data analysis or a misdiagnosis | |
| mystification | how reality is distorted, masked, or obscured through symbolic representations or cognitive frameworks | |
| research | the systematic effort to secure answers to questions | |
| basic research | research completed to learn about relationships among variables, regardless of any immediate commercial product or service | |
| applied research | research completed to develop a product or solve an immediate practical problem | |
| types of social research | descriptive, exploratory, explanatory | |
| descriptive | observe and describe the characteristics of a population, phenomenon, or situation without manipulating variables or establishing cause-and-effect relationships | |
| exploratory | a flexible research approach for thoroughly investigating poorly defined problems, understanding scope, and generating initial hypotheses to guide future, more structured studies | |
| explanatory | a research method that seeks to identify and understand the causes and effects between variables, answering "why" and "how" questions about a phenomenon | |
| overall scientific enterprise | the entire system of human activities aimed at generating and applying scientific knowledge | |
| problem or inquiry | an inquiry is a systematic process of investigating questions through evidence collection to reach informed conclusions, while a problem is a situation or obstacle that needs to be overcome or a goal to be achieved | |
| theory | a formal explanation for observed phenomena, based on extensive evidence, facts, and tested hypotheses | |
| research methods | research plan, sampling, observation, and data processing | |
| results/statistics | findings reported in a study, while "statistics" is the field and methodology for collecting, analyzing, and interpreting data to understand patterns and make decisions | |
| discussion | author interprets, analyzes, and explains the significance of the findings in relation to the research questions and existing literature | |
| communication | the imparting or exchanging of information or news | |
| message | a set of verbal and nonverbal cues that communicators exchange | |
| verbal cues | the words people use in communication | |
| nonverbal cues | communication elements beyond the words themselves | |
| qualitative methods | methods that describe observations in predominantly non-numerical terms | |
| quantitative methods | inquiries in which observations are expressed predominantly in numerical terms | |
| variable | a characteristic or property that can vary (assigned either values or attributes) | |
| attribute | a characteristic or quality that describes an object | |
| constant | a characteristic to which only one number may be assigned, does not change | |
| hypothesis | An expectation about events based on generalizations of the assumed relationship between variables | |
| nondirectional hypothesis | H₁: μ₁ ≠ μ₂ | |
| directional hypothesis | H₁: μ₁ > μ₂ | |
| null hypothesis | H₀: μ₁ = μ₂ | |
| independent variable | the variable that is hypothesized to cause, or lead to, variation in another variable | |
| dependent variable | the variable whose variation is hypothesized to depend on or be influenced by the other variable | |
| meta-analysis | combines quantitative results from many studies to reveal overall sizes of effect that exist among variables | |
| abstract calculus | the logical structure of relationships; here, calculus refers to any deductive system | |
| conditional syllogism | a logical argument with two premises and a conclusion, where the major premise is an "if-then" statement | |
| constructs | a complex, synthesized idea, theory, or concept formed from simpler elements | |
| rules of correspondence | define the connection between a theoretical model and the observable world | |
| description | detailed account, either in words or pictures, that presents the characteristics and features of a person, place, thing, or event | |
| explanation | a statement of the facts or reasons that make something clear and understandable | |
| prediction | a statement about what is expected to happen in the future, often based on existing knowledge, data, or experience | |
| control | an element that remains unchanged or unaffected by other variables | |
| conceptual definitions | focus on the broader understanding of a concept | |
| operational definitions | specifies how a concept, variable, or construct will be measured in a scientific study | |
| manipulated independent variables | "stimulus variables" that are introduced and controlled by the researcher | |
| measured/assigned variables | Not introduced/controlled, but are carefully observed and measured | |
| direct classification | operationally defining variables or concepts by simple identification or classification of observable characteristics | |
| intersubjectivity | the degree to which different researchers with essentially different beliefs draw essentially the same interpretations of the meaning of observations | |
| attenuation | when relationships in research get lost in the background noise | |
| levels of measurement | nominal, ordinal, interval, and ratio (see below) | |
| nominal | categories with no inherent order and no inherent quantitative properties (e.g., eye color) | |
| ordinal | categories with a meaningful order but inconsistent spacing (e.g., satisfaction surveys) | |
| interval | have order and equal spacing but no true zero point (e.g., IQ scores) | |
| ratio | possess all these qualities (order, equal intervals) plus a true zero (e.g., height or weight) | |
| categorical data | represents characteristics or qualities of a subject that can be grouped into distinct categories, like "male" or "female," rather than quantities | |
| continuous variables | variable that can take on any value within a given range, rather than being restricted to a fixed number of distinct values (e.g. height of a plant) | |
| "mediator | it explains the relationship between the independent variable (IV) and the the mechanism for why the IV brings about the DV | |
| moderator | a qualitative (e.g. sex, race, class) or quantitative (e.g., level of reward) variable that affects the direction and/or strength of the relation between an independent or predictor variable and a dependent or criterion variable | |
| crucial experiment | test designed to decide between two or more competing theories or hypotheses by producing a result that strongly supports one theory and makes the others improbable | |
| exogenous variables | a variable in a causal model or causal system whose value is independent from the states of other variables in the system | |
| endogenous variables | a variable in a causal model or causal system whose value is determined by the states of other variables in the system. | |
| deductive | a form of argument in which a valid conclusion necessarily follows from premises, uses syllogisms | |
| inductive | the process by which we conclude what is true of certain individuals is true of a class, what is true of part is true of the whole class, or what is true at certain times will be true in similar circumstances at all times | |
| advantages/ disadvantages of the approaches | Deductive reasoning is efficient but passive and flawed if premises are wrong. Inductive reasoning builds deeper, active learning but is slower and risks incorrect generalizations. | |
| falsification of a theory | theories must be capable of being falsified if they are incorrect | |
| tentativeness of a theory | theories must be tentative - demands scholars recognize a theory's answers are provisional | |
| reliability | the consistency and dependability of a measurement or instrument | |
| false precision | when a researcher claims more precision in analyzing data than was present in the original data | |
| fallacy of misplaced precision | when numerical data is presented with a level of exactness that is not justified by the actual data or situation, giving the false impression of accuracy and certainty | |
| reliability coefficient | a statistical index, usually ranging from 0 to 1, that indicates the consistency of a measurement or test by estimating the proportion of true score variance to total variance | |
| test-retest reliability | giving the measurement twice and reporting consistency between scores | |
| alternate forms (parallel forms) reliability | constructing different forms of the same test from a common pool of measurement items, giving each of the different forms to the SAME group of people and determining the degree of consistency between them | |
| split-half reliability | divides a test in 2 parts, scores them separately, and checks consistency between the 2 scores (sometimes called the odd-even reliability) | |
| item to total reliability | computes the correlation of items with the total test (e.g. answering first question right on the exam might mean you do better than someone who answers first question wrong) | |
| intercoder reliability | measures the extent to which two or more independent coders agree on the codes applied to the same data, ensuring the consistency and trustworthiness of qualitative analysis | |
| validity | the degree to which a test, measurement, or study accurately measures what it is intended to measure | |
| face validity | involves researchers looking at the content of the measurement items and advancing an argument that on its face the measure seems to identify what it claims | |
| expert jury validity | involves having a group of experts in the subject matter examine the measurement and judge its merits | |
| criterion validity | consists of methods that assess a measure's worth by examining its relation to some outside criterion | |
| concurrent validity | correlates a new measure with a previously validated measure of the same thing. | |
| predictive validity | the degree to which a measure predicts known groups in which the constructs must exist | |
| construct validity | requires that a new measure be administered to subjects along with at least 2 other measures (one should be a valid measure of a construct that is known conceptually to be related to the new measure) | |
| Cronbach's coefficient alpha | used when you want to see the consistency with which people react to items on a scale AND when there are no "correct" or "incorrect" answers | |
| Thurstone equal appearing interval scales | composed of statements related to some topic, sorted by judges into equal-appearing intervals; issues: time-consuming, tedious, and expensive | |
| Likert scales | presents statements that reflect clear positions on an issue then asks subjects to indicate their response on a 5-point scale | |
| Guttman scalogram | involves a series of statements dealing with one topic and arranged according to level of intensity | |
| semantic differential-type scales | involves pairs of "bipolar" adjectives that respondents use to rate a concept | |
| premise | a statement in a logical argument that is the foundation for others drawn from it | |
| subject tracings | keywords that list other subject headings under which the book is also listed. | |
| keywords | terms under which information about the topic may be found | |
| argument | a process of advancing conclusions based on reasons and evidence | |
| Einstein syndrome | when inexperienced researchers believe they can ignore past work and use entirely new ideas and methods | |
| annotations | brief comments of your own about the work: additional explanations, comments, evaluations, or criticisms to help understand the material | |
| plagiarism | the wrongful act of taking the product of another person's mind and presenting it as one's own (a false assumption of authorship) | |
| search engines | tools for finding sources on the internet; they can rummage through internet resources for keywords and content areas | |
| exhaustive literature review | research surveys that include all material related to the subject | |
| exemplary literature review | surveys the most important contributions | |
| explication | requires you to complete a lit review that makes an issue clear and comprehensible | |
| internal organizers | making active use of phrases that preview, summarize, and provide transitions between main points; these can underscore your content | |
| heuristic | a shortcut or rule of thumb for problem-solving | |
| research prospectus | a complete proposal for a research activity to be completed at future date | |
| ecological fallacy | when a researcher uses data from groups of people to draw conclusions about individuals | |
| the Delphi fallacy | use of vague predictions as research claims | |
| the Jeanne Dixon fallacy | making multiple predictions and claiming partial support | |
| the Patchwork quilt fallacy | making no predictions but offering explanations after the fact | |
| evidence | the information that scholars use to support claims | |
| factual evidence | consists of descriptions and characterizations of things (events of past, new samples, etc.) | |
| opinions | interpretations of the meaning of collections of facts | |
| reports | accounts of what took place whether by participants or by outside observers | |
| primary sources | provide information from individuals who have first hand experience with the events reported | |
| secondary sources | provide information obtained from individuals who do not have firsthand experience with the events | |
| statistics | quantitative reports based on observation in a sample | |
| parameters | numbers that describe the population | |
| expert opinions | opinions from people who are experts in their field | |
| argument from definition | reasoning that submits that things do or do not belong in a certain class of things | |
| argument from example and generalization | taking some particular cases and arguing that what is true of the instances is generally true in the population of events | |
| argument from analogy | compares two things known to be alike in one or more features and suggests that they will be alike in other features as well | |
| literal analogy | compares something to an event or object that really exists | |
| figurative analogy | compares something to a hypothetical situation | |
| causal argument | reasoning that a given factor is responsible for producing certain other results | |
| syllogism | set of 2 premises that result in a conclusion | |
| categorical syllogism | a syllogism that starts with a categorical statement | (ex- Major premise: all men are mortal.... Minor Premise: Socrates is a man.... Conclusion: Socrates is mortal...) |
| disjunctive syllogism | makes an "either-or" statement in the major premise | |
| conditional syllogism | major premise makes an "if-then" statement | |
| nomothetic research | designed to find general law-like patterns that apply to many instances | |
| idiographic research | designed to develop a full understanding of a particular event or individual | |
| text | what we believe are the actual messages or words of a communicator (ex- TV shows, political debates, and influential movies) | |
| rhetorical criticism | the use of standards of excellence to interpret and evaluate communication | |
| rhetoric | Study of available means of persuasion | |
| criticism | Standards of excellence are announced for application | |
| impressionistic criticism | the statement of opinion (or personal impression) made by reviewers, such as those who "review" movies for newspapers and magazines | |
| invention | types and sources of ideas | |
| ethos | credibility of a speaker, "ethical appeal” | |
| pathos | the use of emotional or motivational appeals by a speaker, “pathetic appeal” | |
| logos | logical appeals, including evidence and reasoning used by speaker (Greek word with no translation) | |
| narrative fidelity | the consistency of new accounts with other stories people have heard | |
| content analysis | a systematic method to analyze the content and treatment of communication, which usually results in the development of objective and quantitative information | |
| coding units | categories used to count the communication forms in the examples chosen | |
| utterance | what a person actually says in conversation | |
| interactional analysis | interactional analysis is a research method for systematically coding and interpreting communication behaviors in a specific interaction | |
| relational analysis | relational analysis is a broader term that examines how people's feelings and perceptions influence and are influenced by their relationships | |
| interaction analysis | focuses on ways of tracking individuals' acts of communication | |
| relational control analysis | tracks message sequences to determine the relative patterns of position and control in the relationship (statements categorized as "one-up", "one-down", and "one-across") | |
| fieldwork | collecting data in natural settings | |
| purposes of fieldwork | study of people in their natural daily lives; used when self-reports aren’t feasible, settings are unexplored, or researchers aim to develop grounded theory | |
| participant observation | inquiries in which the researcher takes the role of an active agent in the situation under study | |
| naturalistic studies | nonexperimental inquiries completed as subjects are involved in the natural course of their lives | |
| full participant observation | research is characterized by the investigator's gathering data while taking part in the activities of a group - while concealing his or her research identity. | |
| participant as observer research | the group to be studied is made aware of the researcher role | |
| accretions | deposits of material left by some action | |
| urban archaeology | area of study that looks at deposits | |
| erosions | the wear or use of objects | |
| life-history studies | biography of a person which has been obtained through interview and guided conversations | |
| grounded theory | a set of explanations that has immediate relevance to a specific field or setting under investigation | |
| analytic induction | when a researcher begins with some very tentative hypotheses that they may apply in fieldwork | |
| ethnography | research in which the investigator participates, overtly or covertly, in people's lives for an extended period of time, collecting whatever data are available to describe behavior. | |
| ethnomethodology | study of the mundane and ordinary activities of everyday life, concentrating on the methods used by people to report their commonsense practical action to others in acceptable, rational terms; sometimes called "the new ethnography" | |
| time-budgeting studies | inquiries in which the researcher asks the subjects to keep a detailed diary over a given period of time | |
| community studies | involves a researcher or team in studying a whole community of people, usually a small town or village, or possibly a part of a larger town | |
| case study | intense inquiries about single events, people, or social units | |
| negative case study | researcher attempts to obtain a case that has a potential to negate a generally accepted view | |
| content analysis | systematic method to analyze the content and treatment of communication, which usually results in the development of objectives and quantitative information. | |
| myths of content analysis | 1. Content analysis is limited to simple analyses. 2. Anyone can do content analysis; it does not take any special preparation. 3. The term content analysis applies to ALL examinations of messages. 4. Content analysis is for academic use only. | |
| rhetorical analysis | examines how a text uses communication strategies to persuade an audience to achieve a specific purpose | |
| narrative analysis | goal is to understand relationships between a text and social reality | |
| discourse analysis | examines the characteristics of manifest language and word use: description of topics in media texts through the consistency and connection of words, theme analysis of content, and the establishment of central terms | |
| structuralist or semiotic analysis | focuses on deep meanings of messages | |
| interpretive analysis | focuses on formation of theory from the observation of messages and the coding of those messages | |
| conversation analysis | technique for analyzing naturally occurring conversations, used by social scientists in psych, comm, and sociology | |
| critical analysis | conducted in the tradition of cultural studies; has been a widely used method for the analysis of media messages | |
| normative analysis | a prescriptive approach to issues dealing with what "should be" rather than "what is" | |
| computers and qualitative message analysis | computer adjuncts have been developed to support the tasks of these various qualitative methods | |
| content analysis features | objectivity-intersubjectivity, a priori design, reliability, validity, generalizability, replicability, hypothesis testing based on theory | |
| objectivity-intersubjectivity | provide a description or explanation of a phenomenon in a way that avoids biases of the investigator(s) | |
| a priori design | a plan that is conceptualized and defined using reason and theory before any data has been collected or empirical testing has occurred | |
| reliability | the internal consistency of the methods of a content analysis | |
| validity | the content analysis actually measures what it claims to measure | |
| generalizability | extent to which findings may be applied to other cases, usually to a larger set that is the defined population from which a study's sample has been drawn | |
| replicability | involves repeating a study with different cases or in a different context, checking to see if similar results are obtained each time. | |
| hypothesis testing based on theory | uses sample data to assess a theory-based claim about a population by setting up a null hypothesis (the default or opposite of the theory) and an alternative hypothesis (the actual theory to be tested) | |
| unit of analysis | the element on which data are analyzed and for which findings are reported | |
| manifest content | the elements that are physically present and countable | |
| latent content | unobserved concepts that can not be measured directly but can be represented or measured by one or more indicators | |
| substance characteristics | characteristics relating to what a message says (its substance or subject matter), as distinct from how it is presented | |
| formal features | characteristics of how a message is structured or presented (e.g., length, format, production features), as distinct from its substance | |
| human coding vs. computer coding | human coding relies on trained coders applying a codebook to messages; computer coding (CATA) uses software and dictionaries to identify and count message features automatically | |
| steps of the typical process of content-analytic research | developing clear research questions, selecting relevant content, defining the units and categories for analysis, developing a coding scheme, systematically coding the content according to the scheme, and finally, analyzing the results to draw conclusions | |
| types of content analysis | descriptive, inferential, psychometric, clinical content analysis, thematic content analysis, predictive | |
| descriptive | describe a particular message pool in almost archival fashion | |
| inferential | where researchers wish to make conclusions about sources or receivers solely from content analysis studies | |
| psychometric | process of evaluating psychological tests and measurements by analyzing their quantitative properties, such as item difficulty and reliability, to ensure the test is fair, accurate, and valid for measuring the intended psychological attributes | |
| clinical content analysis | seeks to provide clinical diagnosis for an individual through analysis of messages generated by that individual | |
| thematic content analysis | to measure a psychosocial trait or state through message analysis | |
| predictive | the prediction of some outcome or effect of the messages under examination | |
| "unit | "1. Serves as the basis for identifying the population and drawing a sample 2. The component on which variables are measured 3. Serves as the basis for reporting analyses" | |
| etic | available in advance, rather than determined during the analysis. Must be discovered, not predicted | |
| emic | denoting subjective knowledge or experience | |
| defining the population | deciding the entire group of individuals, items, or events from which a researcher wants to draw conclusions or make generalizations about | |
| unitizing reliability | the consistency of observers in identifying boundaries and segments within a data continuum (like a transcript or recording) for analysis | |
| cross-sectional vs. longitudinal populations | a cross-sectional population is studied at a single point in time to provide a snapshot of the population's characteristics at that moment, while a longitudinal population involves tracking the same individuals or group over an extended period to observe change | |
| availability-based vs. exposure-based populations | the key distinction is in the criteria used for inclusion: availability-based populations are who is accessible, while exposure-based populations are who has experienced a specific risk factor | |
| archives | a collection of messages, usually well indexed | |
| sampling | process of selecting a subset of cases for study from the larger population | |
| probability sampling | used when findings are to be generalizable to some population of messages | |
| simple random sampling | 1. Pulling cases "out of a hat." 2. Numbering the sampling frame and using a random number table or a random number generator. | |
| systematic sampling | selecting every "xth" case, either from a sampling frame list or in some flow of occurrence over time | |
| cluster sampling | includes any probability sampling in which a group or set of messages is sampled together, usually for logistic reasons | |
| stratified sampling | this type of probability sampling is where the sampling frame is segmented according to categories on some variables of prime interest to the researcher | |
| multistage sampling | any probability sampling technique in which 2 or more sampling steps are used | |
| nonprobability sampling | samples generated by nonrandom techniques; they are undesirable and should be used only when no other options exist | |
| convenience sampling | relies on selection of readily available cases | |
| purposive or judgment sampling | researcher making a decision as to what cases he or she deems appropriate | |
| quota sampling | like a nonprobability stratified sample, but key variable categories are identified and then a certain number of cases from each category are included in the study. | |
| sample size | the number of participants or data points selected from a larger population to represent it statistically in a study | |
| universal variables | characteristics that differentiate all stimuli | |
| defining the medium | trying to itemize the characteristics of something that makes it that thing | |
| null hypothesis | logical opposite of a predicted research hypothesis, it is a hypothesis that includes a prediction of no relationship between or among variables | |
| research hypothesis | formal testing of a research hypothesis is a deductive process progressing from theory to derived hypothesis to the testing of that hypothesis, the results of which reflect on the original theory | |
| conceptual definition | relating to or based on mental concepts | |
| directional hypothesis | specifying the directionality of the relationships among variables | |
| nondirectional hypothesis | posits differences but not the direction of the differences | |
| measurement theory (or classical test theory) | an observed test score is a combination of a "true score" and "error score" (X = T + E), where the true score represents what the person actually knows or possesses, and the error score represents random fluctuations or inconsistencies in the measurement (see the worked formulas after the table) | |
| random error vs. nonrandom error | Random errors are unpredictable, chance variations in measurements that occur for different reasons in each trial, affecting precision. Non-random (or systematic) errors are consistent, predictable biases that affect all measurements in the same way or in a predictable way, affecting accuracy. | |
| bias | a disproportionate weight in favor of or against an idea or thing, usually in a way that is inaccurate, closed-minded, prejudicial, or unfair | |
| validity | the consistency of a measure with a criterion | |
| reliability | the internal consistency of a measure | |
| accuracy | extent to which a measuring procedure is free of bias (nonrandom error) | |
| precision | the fineness of distinction made between categories or levels of a measure | |
| replicability | involves repeating a study with different cases or in a different context, checking to see if similar results are obtained each time | |
| face validity | involves researchers looking at the content of the measurement items and advancing an argument that on its face the measure seems to identify what it claims | |
| criterion validity | consists of methods that assess a measure's worth by examining its relation to some outside criterion. | |
| content validity | the degree to which an assessment, like a test, completely and accurately measures all aspects of the specific concept or domain it's intended to assess | |
| construct validity | requires that a new measure be administered to subjects along with at least 2 other measures | |
| convergent vs. discriminant validity | convergent validity demonstrates that a measure strongly correlates with other measures of the same or similar constructs; discriminant validity shows that it does not correlate strongly with measures of different, theoretically unrelated constructs | |
| coder debriefing | a structured discussion with coders after coding is complete, reviewing their experience to identify problems with the coding scheme and insights that inform interpretation of the results | |
| coding scheme | the defined set of variables, categories, and decision rules used to classify message content, typically documented in the codebook and coding form | |
| nominal, ordinal, interval, ratio variables | Nominal data are categories with no inherent order (e.g., eye color). Ordinal data have a natural order but inconsistent intervals (e.g., education levels). Interval data have order and equal intervals but no true zero (e.g., temperature). Ratio data possess order, equal intervals, and a true zero (e.g., height). | |
| unit of the data collection | the individual entity (person, household, organization, etc.) from which data is gathered in a study | |
| CATA | computer-aided text analysis; the use of computer coding to analyze message content | |
| dictionaries | a set of words, phrases, parts of speech, or any other word-based indicators that is used as the basis for a search | |
| human coding | the use of people as coders, with each using a standard codebook and coding form, to read, view, or otherwise decode the target content and record the objective and careful observations on preestablished variables | |
| intercoder reliability | amount of agreement or correspondence on a measured variable among 2 or more coders or raters | |
| intracoder reliability | the consistency of a single coder with himself or herself, assessed by having the coder recode the same content at a later time | |
| unitizing reliability | a measure of how consistently researchers can identify the boundaries and segments of content to be analyzed, known as "units" | |
| agreement vs. covariation | agreement assumes the measurements are meant to be the same, while covariation does not | |
| chance-corrected agreement | makes assumption that some portion of coders' agreement is due to chance and adjusts for that | |
| "threats to reliability | 1. Poorly executed coding scheme 2. Inadequate coder training 3. Coder fatigue and Coder drift 4. Presence of a rouge coder" | |
| codebook | A written set of coding categories, definitions, and decision rules that guides coders so they classify content consistently. | |
| coder training | Structured practice where coders are taught how to apply the codebook, practice on sample texts, and receive feedback until reliable coding is achieved. | |
| blind coding | a process where researchers analyze data without knowing the identity of the data's source to reduce bias | |
| index | A composite measure created by combining multiple indicators into a single score representing a broader concept. | |
| reliability for manifest vs. latent content | Reliability is typically higher for manifest content (objective, surface-level features) and lower for latent content (interpretive, meaning-based features) because latent content requires more subjective judgment. | |
| manifest | Observable, surface-level content that is directly countable (e.g., word frequency, presence of a feature). | |
| latent | Underlying meanings, themes, or interpretations that require coder judgment (e.g., tone, sentiment). | |
| pilot reliabilities | done on a randomly selected subsample of the total sample message pool before the study begins in earnest. This pilot testing should not be analyzed as a part of the final dataset. | |
| final reliabilities | done on another randomly selected subsample during the full data collection, to fairly represent the coders’ performance throughout the study. This final reliability assessment is the one reported in your write-up | |
| percent agreement | A simple reliability measure showing the percentage of times coders made the same coding decision (see the reliability sketch after the table). | |
| Holsti's method | Measures agreement between coders who may not code the exact same data. When coders analyze the same sections, it works like percent agreement, but it still fails to adjust for chance agreement. | |
| Scott's pi | An intercoder reliability statistic that corrects percent agreement by accounting for agreement expected by chance. | |
| Cohen's kappa | A widely used intercoder reliability measure that corrects for chance agreement and is appropriate for two coders. | |
| Krippendorff's alpha | A flexible reliability statistic that handles multiple coders, missing data, and different levels of measurement; widely recommended in content analysis. | |
| Gwet's AC1 | An alternative chance-corrected reliability coefficient that avoids kappa’s paradox (low kappa despite high agreement). | |
| Spearman's rho | A rank-order correlation used to assess association between ordinal-coded data or the consistency of rankings between coders. | |
| Intraclass correlation coefficient (ICC) | A reliability measure used for interval or ratio data that evaluates consistency or absolute agreement across multiple coders | |
| Pearson correlation coefficient (r) | A measure of linear association between two interval/ratio variables; sometimes used to compare coder scores but not ideal for categorical coding | |
| Lin's concordance correlation coefficient (CCC) | A statistic assessing agreement by combining measures of precision (correlation) and accuracy (closeness to a 45-degree perfect agreement line). | |
| attenuation | Reduction in the observed relationship between variables caused by measurement error or low reliability. | |
| nominal level measurement | Categorizes data into discrete groups with no numerical order (e.g., gender, political party). | |
| ordinal level measurement | Ranks data in ordered categories without equal spacing between them (e.g., Likert rankings, class year). | |
| interval level measurement | Measures variables with equal spacing between values but no true zero (e.g., temperature in °F, Likert-type scales treated as interval). | |
| ratio level measurement | Contains equal intervals and a meaningful zero point, allowing all mathematical operations (e.g., height, response time). | |
| categorical data | Data placed into categories; includes nominal and ordinal levels. | |
| continuous variables | Variables with values on an interval or ratio scale that can take on any number within a range. | |
| falsification of a theory | The principle that theories must be testable and structured so evidence can show them to be false. | |
| tentativeness of a theory | The idea that theories are always provisional and may be revised with new evidence. | |
| reliability | Consistency or stability of a measurement instrument over time or across observers. | |
| false precision | Reporting overly detailed numerical values that imply more accuracy than the measurement supports. | |
| fallacy of misplaced precision | Assuming precise numerical information is meaningful when the underlying construct or data are not that precise. | |
| reliability coefficient | A numerical estimate (0–1) expressing the consistency of a measurement. | |
| test-retest reliability | Stability of a measure over time, assessed by correlating scores from two administrations. | |
| alternate forms (parallel forms) reliability | Consistency determined by administering equivalent versions of the same measure to the same people. | |
| split-half reliability | Internal consistency estimate obtained by correlating two halves of a test (e.g., odd vs. even items). | |
| Cronbach's coefficient alpha | A measure of internal consistency showing how well a set of items measures a single construct. | |
| validity | The degree to which an instrument measures what it claims to measure. | |
| face validity | The extent to which a measure appears to assess the intended construct based on surface judgment. | |
| expert jury validity | Validity established through evaluation by subject matter experts who judge whether items reflect the construct. | |
| criterion validity | Whether a measure correlates with an external criterion known to indicate the construct. | |
| concurrent validity | A type of criterion validity assessed by correlating a measure with a criterion at the same time. | |
| predictive validity | Criterion validity demonstrated when a measure predicts future behaviors or outcomes. | |
| Construct validity | The extent to which a measure represents the theoretical construct and relates to other variables as predicted. | |
| Thurstone equal appearing interval scales | Scaling method where judges rate how favorable items are, creating items spaced equally along an attitude continuum. | |
| Likert scales | Multi-item scales measuring attitudes using agreement-level responses (e.g., strongly disagree → strongly agree). | |
| item to total reliability | Correlation of each item with the total scale score to assess whether items fit well with the overall construct. | |
| intercoder reliability | Agreement among coders in content analysis, verifying consistency of content classification. | |
| Guttman scalogram | A cumulative scale in which agreeing with a strong item implies agreement with all weaker items. | |
| semantic differential-type scales | Scales using bipolar adjective pairs (e.g., good–bad) to measure attitudes on multiple evaluative dimensions. | |
| empirical | Based on observation or data rather than theory alone. | |
| experiment | Research design where the researcher manipulates variables to establish causal relationships. | |
| survey | A method of systematically collecting self-report data from respondents using questionnaires or interviews. | |
| questionnaires | Structured instruments used to gather responses in surveys. | |
| validity | (Repeated term) Accuracy of a measure in capturing the construct. | |
| reliability | (Repeated term) Consistency/stability of measurement. | |
| check questions | Items inserted to detect inconsistent or inattentive responding. | |
| polarity rotation | Reversing the direction of some scale items to disrupt response patterns or response sets. | |
| response set | A habitual pattern of answering (e.g., always agreeing) regardless of item content. | |
| anonymity | Condition in which the researcher cannot identify participants from their data. | |
| confidentiality | Condition in which participant identities are known but protected from disclosure. | |
| focus groups | Group interviews where participants discuss topics, allowing researchers to study shared meanings and interactions. | |
| funnel questions | Interview structure that starts with broad questions and narrows to more specific ones. | |
| structured interviews | Interviews using standardized, predetermined questions asked in a fixed order. | |
| unstructured interviews | Open-ended, flexible interviews where questions evolve during conversation. | |
| network analysis | Method for studying relationships among individuals or groups by mapping connections. | |
| nodes | The individuals, groups, or units represented in a network. | |
| semantic networks | Maps of how concepts or words are linked in meaning within communication content. | |
| survey nonresponse | When selected participants do not complete the survey; reduces data quality and representativeness. | Ignoring emailed survey link. |
| response rates | % of people contacted who actually complete the survey. | 100 contacted → 40 complete → 40% response rate. |
| refusal rates | % of people who explicitly decline participation. | “No thanks, I’m not interested.” |
| cooperation rates | % of people who engage after being reached (not including unreachable participants). | 60 reachable, 45 agree → 75% cooperation rate. |
| survey | A research method for collecting standardized info from individuals via questionnaires/interviews. | Polls, customer satisfaction surveys, academic questionnaires. |
| Survey research | Systematic collection of data from a sample to generalize to a population. | Using Qualtrics to study political attitudes. |
| respondent | The individual who provides survey data. | Anyone who completes the questionnaire. |
| Push poll | Biased “survey” designed to persuade rather than measure. | “Would you support Candidate X if you knew they were corrupt?” |
| questionnaire | Structured set of questions used to collect data. | Online or paper form. |
| Direct questions | Ask about the topic straightforwardly or explicitly. | “Do you support raising tuition?” |
| Indirect questions | Ask about sensitive topics more subtly to reduce bias. | “How do you think most students feel about raising tuition?” |
| Questions vs statements | Questions ask directly; statements ask participants to rate agreement. | “I feel stressed in class” (statement). |
| Open-ended questions | Allow respondents to answer in their own words. | “Describe your experience with online learning.” |
| Close-ended questions | Provide set response options. | Multiple choice, checkboxes, Likert scales. |
| Double-barreled questions | Ask two things at once. | “How satisfied are you with your advisor and classes?” |
| Leading questions | Suggest a “correct” or preferred answer. | “Don’t you agree the new policy is unfair?” |
| bias | Systematic error that skews results (wording, sampling, response). | Leading questions, unbalanced scales. |
| Response symmetry and balance | Response options should reflect equal numbers of positive/negative choices. | Poor: “Very satisfied / Satisfied / Neutral / Very dissatisfied” (missing “dissatisfied”). |
| Contingency questions | Questions answered only by those who meet certain criteria. | “Do you smoke?” If yes → “How many per week?” |
| Matrix questions | Sets of questions with same response options arranged in a grid. | Rating 10 statements on a 1–7 scale. |
| response-sets | Patterned responding unrelated to content. | Clicking “Strongly agree” for every item. |
| check questions | Items inserted to verify attention or consistency. | “Select ‘Disagree’ for this statement.” |
| polarity rotation | Switching scale direction to reduce response-sets. | Alternating between positive → negative phrasing. |
| univariate | Describes one variable at a time. | Mean stress score, distribution of age. |
| central tendency | Statistical measures identifying the “center” of a distribution (see the statistics sketch after the table). | Mean, median, mode. |
| mean | Arithmetic average. | (5 + 6 + 7) / 3 = 6 |
| median | Middle value when data is ordered. | Values: 2, 3, 5, 7, 8 → median = 5 |
| mode | Most frequent value. | Scores: 1, 1, 2, 3 → mode = 1 |
| range | Highest minus lowest value. | Max = 20, Min = 10 → Range = 10 |
| variance | Average squared deviation from the mean. | Larger variance = more spread. |
| Standard deviation | Square root of variance; average distance from the mean. | Higher SD = greater variability. |
| shape | Form of data distribution (normal, skewed, bimodal). | Skewed right = long tail on right. |
| deviation square | Squared difference between each score and mean. | (X – mean)² for each participant. |
| outlier | Value far outside the typical range. | Most scores: 10–15; one score = 90. |
| histogram | A bar graph showing the frequency distribution of a variable. | Plotting how many students rate online lectures from 1–7. |
| skew | Asymmetry in a distribution. | Most people rate a speaker highly → tail is on the low side. |
| Positive skew | Tail extends to the right; many low scores, few high. | Most respondents report low conflict, but a few report very high conflict. |
| Negative skew | Tail extends to the left; many high scores, few low. | Most people rate relational satisfaction as high, with a few very low scorers. |
| bimodal | Two peaks (two common score clusters). | Responses to political messaging might cluster in two ideological groups. |
| Shapes of distribution | Refers to whether data are normal, skewed, bimodal, etc. | Understanding shape impacts which statistics you choose. |
| mesokurtic | Normal “medium peaked” distribution. | Typical distribution of communication anxiety scores. |
| leptokurtic | Very peaked with heavy tails. | Most students score similarly on a listening exam, but a few extreme scores. |
| platykurtic | Flat distribution with light tails. | Very diverse social media attitudes with little consensus. |
| Normal distribution | Symmetrical, bell-shaped distribution with predictable probabilities. | Communication competence scores in a large, random sample often resemble this. |
| Standard normal distribution | Normal distribution transformed to mean = 0, SD = 1. | Z-scores of emotional regulation scores. |
| Z scores | Standardized value showing how far a score is from the mean (in SD units). | A student’s listening score is +1.5 SD above the class mean. |
| Pearson product-moment correlation | Measures linear relationship between two variables (−1 to +1). | Relationship between message harshness and perceived hostility (r = .62). |
| "Linear vs. nonlinear | Linear = consistent direction. | |
| Nonlinear = curved (“U-shaped,” “inverted U”). | Stress vs. communication performance may be nonlinear (too little or too much stress is bad)." | |
| sampling | Selecting a subset of a population for study. | |
| sample | People you actually study. | |
| population | Group you want to generalize findings to (e.g., all college students). | |
| statistic | Value computed from a sample (e.g., sample mean). | |
| parameter | Corresponding true value in the population. | |
| bias | Systematic error in sampling or measurement. | |
| Sampling error | Difference between a sample statistic and the population parameter due to chance. | |
| Forms of sampling | The probability and nonprobability techniques listed below (see the sampling sketch after the table). | Only surveying students who attend class is convenience, not probability, sampling. |
| random | Everyone has an equal chance. | |
| Simple random sampling | Full random selection (e.g., random number generator). | |
| Stratified random sampling | Divide population into groups → sample proportionally. | Proportionate sampling of communication majors and non-majors. |
| Cluster sampling | Randomly select groups/clusters first. | Select certain classes, then survey all students within them. |
| systematic | Select every nth person. | |
| Convenience sampling | Survey whoever is easily available. | |
| Quota sampling | Fill specific demographic quotas, but not randomly. | |
| Purposive sampling | Select respondents based on criteria (e.g., crisis communicators, influencers). | |
| Snowball sampling | Participants recruit others. | |
| Probability | Likelihood of an outcome occurring. | |
| Confidence interval | Range of values likely to contain the population parameter. | |
| Margin of error | Amount of sampling error tolerated. | |
| Standard error | Standard deviation of the sampling distribution. | |
| binomial | Distribution of two possible outcomes (e.g., “agree”/“disagree”). | |
| Standard error of a binomial | SE formula for dichotomous variables (p vs. q); see the worked sketch after the table. | |
| Hypothesis testing | Process of determining whether observed results differ from chance. | |
| Null hypothesis | States “no effect/no relationship.” | |
| Type I error | False positive, Rejecting H0 when it’s true. | |
| Alpha risk | Probability of Type I error (.05 is typical). | |
| Type II error | False negative, failing to reject H0 when an effect actually exists | |
| Beta risk | Probability of Type II error. | |
| "John Stuart Mill's criteria for making causal interferences | 1. Covariation (variables change together) 2. Temporal order (cause precedes effect) 3. Elimination of alternatives" | |
| Spurious relationship | Two variables appear related but are caused by a third variable. | |
| Necessary condition | Must be present for an effect to occur. | |
| Sufficient condition | If present, guarantees the effect. | |
| empirical | Based on observation and data. | |
| experiment | Manipulate an IV → measure effect on DV. | |
| confounding | An uncontrolled variable that explains results. | |
| Internal validity | Confidence that IV caused DV. | |
| control | Procedures used to reduce confounding. | |
| Control approaches | Techniques for ruling out confounding variables, listed below. | |
| Elimination and removal | remove confounds entirely. | |
| Holding constant | keep variables the same (e.g., same message length). | |
| matching | pair participants on characteristics. | |
| blocking | treat a confound as a factor in the design. | |
| randomization | random assignment; gold standard. | |
| Statistical control | control via regression, ANCOVA. | |
| Research designs | Ways of structuring a study to test hypotheses, listed below. | |
| True experiment or randomized experiment | Randomized; highest internal validity. | |
| quasi-experiment | No random assignment; moderate internal validity. | |
| non-experiment | Correlational or observational. | |
| Signal enhancing designs | Increase the strength of the treatment/manipulation. | |
| Noise reducing designs | Reduce variability/error in DV. | |
| Factorial design | An experiment that manipulates two or more independent variables (factors) at once, allowing tests of main effects and interactions (see the factorial sketch after the table). | |
| factors | independent variables | |
| levels | conditions of each factor | |
| Main effects | effect of each IV independently | |
| Interactive effects | IVs combine to affect DV | |
| Description of experiments with two or more independent variables | Designs are labeled by the number of levels of each factor, as in the examples below. | |
| 2X2 | 2-by-2 design or 2 X 2 design = two IVs with two levels each | |
| 3X3 | 3-by-3 design or 3 X 3 design = two IVs with three levels each | |
| 2X2X2 | 2-by-2-by-2 design or 2 X 2 X 2 design = three IVs with two levels each | |
| 2X3 | 2-by-3 design or 2 X 3 design = two IVs, one IV with two levels and the other with three levels | |
| Spector | Associated with measurement theory used in communication. | |
| Measurement error | Difference between observed score and true score. | |
| reliability | Consistency of measurement. | |
| test-retest | Same measure over time → consistent results. | |
| Equivalent forms | Two different versions yield similar scores. | |
| validity | Accuracy of measurement (measures what it claims). | |
| generalizability | Extent findings apply to larger populations. | |
| Internal validity | Confidence the IV caused the DV (repeated from above). | |
| External validity | Generalizable beyond the study. | |
| Sources of internal invalidity | Listed below: reactivity of measurement, unreliable instruments, history, differential subject loss, bias in assignment of subjects. | |
| Reactivity of measurement | participants change due to the measurement itself | |
| history | outside events | |
| Unreliable instruments | inconsistent instruments introduce error into scores | |
| Differential subject loss | attrition not equal | |
| Bias in assignment of subjects | nonrandom groups | |
| External invalidity | Threats to generalizing results beyond the study sample or setting, listed below. | |
| Reactivity of instrumentation | observer effects | |
| Hawthorne effects | participants change because they know they’re being studied | |
| Invalidity of instruments | poor measurements | |
| Confounding characteristics of particular samples | unique groups | |
| "Design notation | R = random assignment | |
| X = treatment | ||
| O = observation" | ||
| One-group designs | O X O | |
| "Pretest-posttest design | R O X O | |
| R O O" | ||
| Interrupted time series | Repeated observations before and after a treatment (O O O X O O O). | |
| Correlational designs | Variables are measured as they naturally occur and their associations examined; nothing is manipulated. | |
| Cross-sectional design | Data collected from a sample at a single point in time. | |
| Two-group design | Compares a treatment group with a control or comparison group. | |
| Multiple-group posttest design | Two or more groups measured only after the treatment (R X O / R O). | |
| Multiple-group pretest posttest design | Two or more groups measured both before and after the treatment. | |
| Ex post facto design | Groups formed by preexisting differences are compared after the fact, without manipulation or random assignment. | |
| Multiple-group time series design | Repeated observations over time for two or more groups, at least one of which receives the treatment. | |
| Factorial design | Two or more independent variables manipulated simultaneously to test main effects and interactions. | |
| Solomon four-group design | Combines pretest-posttest and posttest-only control groups (R O X O / R O O / R X O / R O) to detect pretest sensitization. | |
| Experiment | A research design where the researcher manipulates an independent variable and observes its effect on a dependent variable while controlling extraneous factors. | Testing whether message framing (gain vs loss) changes compliance in health communication. |
| Confounding | When an outside variable varies along with the IV and influences the DV, making it unclear what caused the effect. | Participants in one message-condition happen to be older than in the other. |
| Control | Procedures used to eliminate or reduce alternative explanations for observed effects. | Holding message length constant across conditions. |
| Counterbalancing | Alternating the order of conditions to reduce order or fatigue effects. | Half the participants see message A then B; the other half see B then A. |
| Matching | Pairing participants with similar characteristics and assigning them to different groups. | Matching dyads based on age or communication skill. |
| Blocking | Grouping participants by a variable and analyzing that variable as a factor. | Blocking participants by gender when testing ad effectiveness. |
| Randomization | Assigning participants to conditions by chance. | Randomly assigning people to “supportive message” or “critical message” groups. |
| Experimental Invalidity | Any threat that weakens confidence in causal conclusions from an experiment. | History, maturation, demand characteristics, etc. |
| Internal Invalidity | When observed effects may be due to factors other than the IV. | Participants in one condition have more prior knowledge. |
| External Invalidity | When results fail to generalize beyond the study sample or context. | Lab experiment may not generalize to real online conversations. |
| Factors | Independent variables in an experiment. | Message type, channel, emotional tone. |
| Levels | The different categories or values of a factor. | “High fear,” “moderate fear,” “low fear.” |
| Main Effects | The isolated effect of one factor regardless of other factors. | Emotional tone affects message recall regardless of channel. |
| Interaction Effects | When the effect of one factor depends on the level of another factor. | Fear appeals increase compliance only when the source is credible. |
| Ordinal Interactions | The direction of the effect stays the same, but its strength changes across levels. | High fear works better than low fear across all source types, but especially with high credibility. |
| Disordinal Interactions | The direction of the effect reverses depending on the level of another factor. | High fear works best with experts but low fear works best with peers. |
| Manipulation Check | A test within the study to confirm that the IV was perceived as intended. | Asking participants if they thought the message was “high fear.” |
| Research | A systematic process of gathering evidence to answer communication questions. | Studying how texting affects relational closeness. |
| Reasons for Conducting Research | To describe, explain, predict, or control communication phenomena. | Describing media use patterns; predicting effects of messages. |
| Overgeneralization | Drawing conclusions from too little evidence. | Assuming all students dislike Zoom after talking to two classmates. |
| Selective Observations | Seeing only what confirms preexisting beliefs. | Noticing only rude comments online when studying civility. |
| Inaccurate Observations | Misinterpreting or incorrectly recalling communication events. | Misremembering who initiated conflict in a conversation. |
| Illogical Reasoning | Reaching conclusions based on faulty logic. | “I got ghosted once, so everyone ghosting means the medium is toxic.” |
| Basic Research | Conducted to build or test theory without practical goals. | Studying how interpersonal conflict unfolds turn-by-turn. |
| Applied Research | Aimed at solving real-world communication problems. | Testing which message increases vaccination intentions. |
| Communication | The process of creating and interpreting messages within social contexts. | Interpersonal talk, media messages, organizational emails. |
| Message | Any symbolic representation used to convey meaning. | A text, speech, emoji, or ad. |
| Verbal Cues | Spoken or written language elements. | Word choice, phrasing, grammar. |
| Nonverbal Cues | Meaning conveyed without words. | Facial expression, emojis, tone. |
| Qualitative Methods | Methods that examine communication in depth, focusing on meaning. | Interviewing people about conflict experiences. |
| Quantitative Methods | Methods that measure variables numerically and analyze patterns. | Surveys measuring communication competence. |
| Survey Methods | Collecting standardized self-report data from respondents. | Measuring attitudes toward political ads. |
| Descriptive or Observational Surveys | Measure what exists without manipulating variables. | Tracking social media usage. |
| Content Analysis | Systematic coding of communication messages. | Analyzing news coverage tone. |
| Opinion Surveys | Assess individuals’ attitudes or beliefs. | Polling viewers about a TV debate. |
| Experimental Methods | Manipulating variables to test causal effects. | Changing message style to examine persuasion. |
| Serendipity | Unexpected findings that open new research paths. | Discovering an unpredicted relational effect while testing something else. |
| Arguments | Logical claims supported with evidence. | Building a case for a hypothesis. |
| Research Standards of Excellence | Criteria such as clarity, rigor, ethics, and validity. | Designing unbiased surveys. |
| Ethical Challenges | Risks of harm or deception in communication studies. | Experiments involving emotional messages. |
| IRB | Institutional Review Board; evaluates ethical safety of studies. | Approving a study involving sensitive interviews. |
| Consent | Participants agree after understanding study procedures. | Signing a disclosure form before a survey. |
| Confidentiality | Researcher knows identity but protects it from disclosure. | Storing names separately from responses. |
| Anonymity | Researcher cannot link responses to identities. | Anonymous online survey. |
| Problems (Research Problems) | Questions or issues a study seeks to address. | “Do supportive messages reduce stress after conflict?” |
| Variable | A characteristic that can vary across individuals or messages. | Message length, emotional tone, credibility. |
| Constant | A characteristic that does not vary in a study. | Using only 30-second messages. |
| Hypothesis | A specific, testable prediction about relationships between variables. | “Higher fear appeals increase intentions.” |
| Working Hypothesis | Initial assumption guiding early study planning. | Expecting positive association before precise testing. |
| Nondirectional Hypothesis | Predicts a relationship but not its direction. | “Message tone will affect trust.” |
| Directional hypothesis | A prediction that specifies the expected direction of the relationship between variables. | “More social media use will lead to higher anxiety.” |
| Null hypothesis | A statement that there is no relationship or difference between variables. | “Social media use has no effect on anxiety.” |
| Independent variable | The variable that is manipulated or categorized to examine its effect. | Amount of time spent on Instagram. |
| Dependent variable | The outcome that is measured in response to the independent variable. | Participants’ anxiety scores. |
| Crucial experiments | Experiments designed to clearly support one theory or explanation over competing ones. | A test that determines whether attention or memory primarily drives message recall. |
| Criteria for sound problem statements | Clear, specific, researchable, significant, and grounded in existing literature. | A precise statement explaining what phenomenon needs to be studied and why. |
| Requirements of hypotheses | Must be testable, falsifiable, specific, and grounded in theory. | A hypothesis linking texting frequency to relationship satisfaction. |
| Science | A systematic method of gathering knowledge through observation, testing, and evidence. | Conducting controlled studies of communication behavior. |
| Theory | An organized set of concepts and explanations that describe, predict, or explain phenomena. | Social Information Processing Theory explaining online impression formation. |
| Pseudoscience | Claims that appear scientific but lack empirical testing, falsifiability, or rigor. | Horoscopes predicting communication compatibility. |
| Scientific method | A structured process: problem identification, hypothesis formation, testing, analysis, and conclusion. | Designing a study to test how tone affects message interpretation. |
| Abstract calculus | Reasoning with abstract concepts rather than concrete events. | Using constructs like “identity” or “self-esteem” to build communication models. |
| Constructs | Abstract concepts studied in research but not directly observable. | “Self-disclosure” or “media trust.” |
| Rules of correspondence | Rules that link abstract theoretical concepts to observable, empirical indicators. | Ensuring your “communication apprehension” scale reflects the concept accurately. |
| Definitions | Statements that clarify what a term means. | Defining “online harassment” before measuring it. |
| Conceptual definitions | Theoretical explanations of what a concept means. | “Anxiety is a negative emotional state marked by worry.” |
| Daily definitions | Informal, everyday meaning of a concept. | Defining “stress” as “feeling overwhelmed.” |
| Poetic definitions | Figurative, expressive explanations. | “Anxiety is a storm inside the mind.” |
| Scholarly definitions | Academic, precise, literature-based definitions. | “Stress is a psychophysiological response to perceived demands exceeding coping resources.” |
| Circularity | A faulty definition that uses the term itself in the definition. | “Influence is when someone influences another person.” |
| Operational definitions | How a concept is measured or manipulated in a study. | Measuring stress via a 7-item stress scale. |
| Forms of operational definitions | Measurements (scales), observations, manipulations, categorization. | Manipulating message tone (friendly vs. hostile). |
| Manipulated independent variables | Variables the researcher actively changes. | Sending participants either a supportive or critical message. |
| Measured/assigned variables | Variables observed or classified rather than manipulated. | Gender, personality scores, or political ideology. |
| Direct classification variables | Variables sorted into categories without manipulation. | Categorizing participants as “frequent” vs. “infrequent” texters. |
| Intersubjectivity | Agreement among multiple observers that a measurement or observation is consistent. | Coders rating a message as sarcastic with high agreement. |
| Repeatability | The ability to replicate findings using the same methods. | A study on message tone yielding similar results in another lab. |
| Standards for operational definitions | Must be reliable, valid, precise, and replicable. | A depression scale consistently measuring depression across samples. |
| Criticism of conceptual definitions | They can be vague, abstract, or difficult to test empirically. | “Motivation” can be defined so broadly that it’s hard to measure. |
| Inductive reasoning | Drawing general conclusions from specific observations. | Observing multiple hostile comments and proposing a theory of online aggression. |
| Deductive reasoning | Starting with theory and using it to generate predictions. | Using attachment theory to predict texting behaviors. |
| Requirements of theory | Should be testable, falsifiable, parsimonious, logically consistent, and useful. | A communication theory that offers clear, testable predictions. |
| Falsification | The ability for a theory to be proven wrong through testing. | A theory predicting “all” messages lead to the same interpretation can be disproven. |
| Tentativeness | Scientific knowledge is provisional and may change with new evidence. | Modifying a theory when new data contradicts older findings. |
| Functions of theory | Describe, explain, predict, and control phenomena. | A messaging theory helping identify what communication strategies reduce conflict. |
| Description | Identifying and detailing communication behaviors. | Documenting how often partners text daily. |
| Explanation | Clarifying why a behavior occurs. | Explaining why people use emojis to soften tone. |
| Prediction | Forecasting outcomes based on theory. | Predicting that supportive messages increase relationship satisfaction. |
| Control | Using theories to shape or guide outcomes. | Designing interventions to reduce misinformation sharing. |
| Argument | A set of statements used to support a claim through logic and evidence. | Writing a research paper defending a communication theory. |
| Abstract | A brief summary of a research article’s purpose, methods, results, and conclusions. | A 150-word paragraph summarizing your study on social media stress. |
| Plagiarism | Using another’s work without proper credit. | Copying definitions from a textbook without citation. |
| Exhaustive literature reviews | Reviews that cover all relevant research on a topic comprehensively. | A dissertation chapter summarizing every major study on media anxiety. |
| Exemplary literature reviews | Reviews that are thorough, analytical, well-structured, and evaluative. | A review that connects themes, critiques methods, and synthesizes theories. |
| Explication | Clarifying and refining concepts by breaking them into precise components. | Explaining the subdimensions of “self-disclosure” in online communication. |
| Research prospectus | A brief plan that explains what you want to study, why it matters, and how you will study it. | A student outlines a project examining how relational uncertainty affects texting conflicts. |
| Evidence | Information that supports a claim or argument in communication research. | Using survey data to show that supportive messages reduce stress. |
| Ecological fallacy | Incorrectly assuming that group-level patterns apply to every individual. | Seeing that a class does well on average and assuming every student understands the material. |
| Delphi fallacy | Believing an idea is true because experts agree, even when data is weak. | A panel of professionals predicts a media trend, but no empirical support exists. |
| Jeanne Dixon fallacy | Remembering correct predictions but ignoring incorrect ones. | A commentator remembers when they “predicted” a political debate outcome but ignores the times they were wrong. |
| Patchwork quilt fallacy | Treating unrelated cases as if they form a meaningful pattern. | Linking a few isolated viral TikTok posts to claim a communication trend exists. |
| Factual evidence | Verifiable information based on observation or measurement. | Reporting the percentage of viewers who watched a news broadcast. |
| Primary sources | Original research or first-hand data. | A published experiment on message framing. |
| Secondary sources | Summaries or reviews of primary research. | A textbook chapter explaining uncertainty reduction theory. |
| Reports | Descriptive summaries of events or conditions without interpretation. | A report showing the number of complaints sent to a customer service center. |
| Statistics | Numerical summaries used to describe communication patterns. | Reporting the average length of emails in a workplace study. |
| Parameters | Numbers that describe an entire population. | The true average time all employees spend in meetings. |
| Opinions | Personal judgments or beliefs not based on empirical evidence. | Saying “people prefer short texts” without data. |
| Expert opinions | Judgments from qualified professionals based on their knowledge. | A PR specialist describing best practices for crisis messaging. |
| Inductive reasoning | Moving from specific observations to general conclusions. | Coding several conflict conversations and forming a theory about escalation patterns. |
| Deductive reasoning | Testing a general idea with specific observations. | Starting with politeness theory and checking how often speakers soften criticism. |
| Causal argument | Claims that one communication factor produces change in another. | Arguing that supportive messages lower stress after exposure to hostile comments. |
| Syllogism | A structured argument with a major premise, minor premise, and conclusion. | Major: Supportive messages reduce stress. Minor: This message is supportive. Conclusion: It reduces stress. |
| Categorical syllogism | A syllogism using categories or groups. | “All deceptive messages omit information; this message omits information; therefore it is deceptive.” |
| Disjunctive syllogism | Uses “either/or” statements. | “Either the message is sarcastic or sincere; it is not sarcastic; so it is sincere.” |
| Conditional syllogism | Uses “if/then” reasoning. | “If a message offers empathy, the receiver feels supported; this message offers empathy; the receiver feels supported.” |
| Necessary condition | A condition that must be present for something to occur. | For persuasion, attention is necessary. No attention = no persuasion. |
| Sufficient condition | A condition that guarantees an outcome. | A credible source may be sufficient to produce trust in some settings. |
| Measurement | Assigning numbers to communication behaviors or attitudes. | Rating message clarity on a 1–7 scale. |
| Measurement error | Any difference between the true value and the measured value. | A participant misreads a survey item and picks the wrong response. |
| Attenuation | Weakening of relationships due to measurement error. | Poorly worded items reduce the correlation between media use and anxiety. |
| Signal-to-noise ratio | Amount of meaningful information compared to error. | Clear, well-designed items increase signal; unclear ones add noise. |
| Nominal level measurement | Categories with no numerical meaning. | Coding messages as supportive, neutral, or hostile. |
| Ordinal level measurement | Ranking without equal spacing. | Ordering messages from least to most polite. |
| Interval level measurement | Equal spacing but no true zero. | A 1–7 Likert scale on topic involvement. |
| Ratio level measurement | Equal spacing with a true zero. | Counting seconds of silence in a conversation. |
| Continuous data | Data with many possible values. | Measuring reaction time after seeing a message. |
| Categorical data | Data in distinct categories. | Coding a post as “informational” or “emotional.” |
| Reliability | Consistency of a measure. | A politeness scale produces similar results when retesting. |
| Reliability coefficient | Numerical estimate of measurement consistency. | Cronbach’s alpha = .88 shows good internal consistency. |
| Test-retest reliability | Stability of scores over time. | Participants complete a media literacy scale two weeks apart. |
| Alternate forms reliability | Consistency between two versions of a measure. | Two versions of a social presence scale produce similar scores. |
| Split-half reliability | Consistency between halves of a measure. | Odd items correlate with even items on a deception scale. |
| Intercoder reliability | Agreement among coders analyzing communication content. | Two coders classify political ads and match 90% of the time. |
| Item-to-total reliability | Correlation between each item and the full scale. | One weak item on an empathy scale shows a low item-total correlation. |
| Validity | How well a measure reflects what it claims to measure. | A credibility scale actually measures source credibility, not liking. |
| Face validity | Whether items appear to measure the construct. | “This scale looks like it measures conflict avoidance.” |
| Expert jury validity | Experts judge whether items fit the construct. | Scholars review a new parasocial interaction scale. |
| Criterion validity | Relationship between the measure and an external standard. | A loneliness measure correlates with time spent alone. |
| Concurrent validity | Correlates with another measure at the same time. | Two immediacy scales correlate strongly. |
| Predictive validity | Predicts future outcomes. | Conflict style predicts later relational satisfaction. |
| Construct validity | Overall evidence that a measure represents the concept. | Politeness, face, and respect items work together coherently. |
| Semantic differential | Bipolar adjective scale. | Rating a message as warm–cold or friendly–unfriendly. |
| Likert scale | Agreement-based scale. | “I feel supported by this message” (1–7). |
| Guttman scalogram | Items arranged in increasing intensity. | Agreeing to stronger items implies agreement with weaker ones. |
| Thurstone scale | Items selected based on expert-judged intensity. | Experts rate hostility levels to build a hostility scale. |
| Negative case analysis | Looking for data that contradicts a pattern. | A study on supportive messages includes examples that do not help recipients. |
| Population | The full group a researcher cares about. | All college students who use Instagram for news. |
| Sampling | Selecting a smaller group from the population. | Recruiting 300 students from several campuses. |
| Data | Information collected during research. | Survey answers about online conflict. |
| Statistic | A number describing a sample. | The sample mean level of message clarity. |
| Parameter | A number describing a population. | The true average clarity rating among all students. |
| Bias | Systematic error in sampling or measurement. | Only surveying students who volunteer results in selection bias. |
| Representative sample | Sample that reflects the population’s characteristics. | A sample matched to the population on age, gender, and media use. |
| Sampling error | Random, chance-based differences between sample results and the true population values. | A sample of students overestimates average texting frequency compared to the whole campus. |
| Standard deviation | A measure of how spread out scores are around the mean. | Variation in how many TikToks participants watch daily. |
| Probability sampling | Sampling methods where each member of the population has a known chance of being selected. | Using random selection to choose participants for a media effects study. |
| Random sampling | Every member of a population has an equal chance of selection. | Randomly selecting phone numbers to study call habits. |
| Stratified random sampling | Population is divided into strata, and participants are randomly selected from each group. | Sampling equal numbers of first-year and senior students to compare online disclosure. |
| Cluster sampling | Groups (clusters) are randomly selected instead of individuals. | Randomly choosing entire classrooms to study social media habits. |
| Systematic or periodic sampling | Selecting every nth unit from a list. | Surveying every 10th student entering the student union about their preferred messaging apps. |
| Accidental or convenience sampling | Selecting whoever is easiest to reach. | Surveying students in the library about texting anxiety. |
| Quota sampling | Nonrandom selection but with quotas for key categories. | Ensuring half the sample uses Instagram and half uses TikTok. |
| Purposive or known group sampling | Selecting participants based on specific characteristics. | Recruiting people who frequently engage in online debates. |
| Snowball sampling | Participants recruit additional participants, useful for hard-to-reach groups. | Asking cyberbullying victims to refer others for a study. |
| Descriptive statistics | Statistics that summarize data. | Reporting the average number of daily texts sent. |
| Inferential statistics | Methods used to make conclusions about populations based on samples. | Testing whether message tone affects perceived credibility. |
| Central tendency | Measures describing the typical score in a distribution. | Reporting the typical number of emojis used per message. |
| Arithmetic mean | The average score of all values. | Average number of YouTube videos watched per day. |
| Unbiased estimator | A statistic whose expected value equals the population parameter. | Using sample variance to estimate true population variance. |
| Median | The middle score in an ordered list. | The median number of hours spent on social media per day. |
| Mode | The most frequently occurring value. | The most common messaging app used by participants. |
| Bimodal | A distribution with two modes. | A study finds participants mostly prefer either texting or Snapchat, with few choosing other platforms. |
| Measures of variability or dispersion | Statistics that show how spread out data are. | Examining variation in conflict frequency in online relationships. |
| Range | Difference between highest and lowest scores. | Range of “hours on TikTok” in the sample: 0–8 hours. |
| Variance | Average squared deviation from the mean. | Variance in participants’ communication apprehension scores. |
| Standard deviation | The square root of variance; shows typical spread. | Daily screen time varies with an SD of 1.8 hours. |
| Skewness | A measure of asymmetry in a distribution. | Social media use positively skewed because a few people use it excessively. |
| Kurtosis | Describes how peaked or flat a distribution is. | Message length distributions that are highly peaked or flat. |
| Platykurtic | Flat distribution; light tails. | Wide variation in how often people check notifications. |
| Mesokurtic | Normal distribution; moderate tails. | Typical distribution of daily texts sent. |
| Leptokurtic | Peaked distribution; heavy tails. | Most participants send about the same number of messages, but a few send extremely many. |
| Standard normal curve | A normal distribution with mean 0 and SD 1. | Z-score distributions of media anxiety scores. |
| Probability distribution | Shows the likelihood of each possible value of a variable. | Probability of different levels of self-disclosure in messages. |
| Data distributions | Patterns showing how data values are arranged. | Histogram of how often people post on Instagram weekly. |
| z scores | Standardized scores representing how many SDs a value is from the mean. | A participant’s TikTok usage is +2 SDs above the mean. |
| Correlation | Statistical measure of the relationship between two variables. | More texting is correlated with closer friendships. |
| Direct relationship | Variables increase or decrease together. | As supportive messages increase, satisfaction increases. |
| Inverse relationship | One variable increases as the other decreases. | More hostile comments lead to less willingness to respond. |
| Positive relationship | Both variables move in the same direction. | More memes shared → higher relational closeness. |
| Negative relationship | Variables move in opposite directions. | More screen time → lower sleep quality. |
| Curvilinear relationship | Relationship changes direction at different levels. | Moderate self-disclosure increases liking, but excessive disclosure decreases it. |
| Guidelines to interpret correlations | r ≈ .10 small, .30 medium, .50 large (context-dependent). | r = .45 between texting frequency and closeness = strong in communication research. |
| Coefficient of determination (r²) | Proportion of variance in one variable explained by its correlation with another (see the worked Python sketch after this table). | r = .50 → r² = .25: 25% of the variance in closeness is explained by texting frequency. |
| Pearson product moment correlation | Correlation for interval/ratio variables. | Relationship between hours online and stress level. |
| Spearman rank order correlation | Correlation for ranked/ordinal data. | Rank-ordering participants by communication apprehension. |
| Outlier | A value far from others in the dataset. | One respondent reports sending 10,000 texts per day. |
| Probability | The likelihood an event occurs. | Probability a random person prefers TikTok over Instagram. |
| Probability distribution | How probabilities are assigned to all possible outcomes. | Distribution of likelihood of different message lengths. |
| Research hypothesis | A prediction about relationships or differences between variables. | “Supportive messages will increase perceived trust.” |
| Null hypothesis | States that no effect or relationship exists. | “Supportive messages have no effect on trust.” |
| Critical region | Area of the distribution where the null is rejected. | The most extreme 5% of message sentiment scores. |
| Significant difference or relationship | A result unlikely due to chance (p < .05). | Friendly tone significantly boosts message credibility. |
| Critical value | Cutoff value determining whether results fall in the critical region. | t must exceed ±2.00 to be significant. |
| Standard error of the mean | The SD of sample means; measure of sampling precision. | SEM shows how precisely we estimate average social media use. |
| Type I error | Rejecting the null hypothesis when it is actually true. | Concluding that emoji use changes trust when it actually does not. |
| Type II error | Failing to detect an effect that actually exists. | A study concludes a supportive message doesn’t reduce stress when it actually does. |
| Alpha risk | The probability of making a Type I error. | Setting α = .05 means accepting a 5% chance of falsely claiming a message strategy works. |
| Beta risk | The probability of making a Type II error. | High β means a study might miss the effect of media exposure on fear. |
| Power | The ability to detect an effect when it exists (1 – β). | A large sample gives enough power to detect small differences in message clarity. |
| Test statistic | A number used to decide whether to reject the null hypothesis. | A t-test value comparing message warmth ratings between groups. |
| One-tailed tests | Tests predicting the direction of an effect. | Hypothesis: a polite message will produce more trust than a blunt message. |
| Two-tailed tests | Tests predicting any difference, not a direction. | Hypothesis: message style influences trust, but direction is not predicted. |
| Parametric tests | Statistical tests that assume normal distribution and equal variances. | Using ANOVA to compare persuasion across message conditions. |
| Assumptions underlying parametric tests | Normal distribution, equal variances, interval/ratio data. | Checking if message valence scores are normally distributed before a t-test. |
| Ceiling or floor effects | Scores cluster at the high or low end, limiting variability. | A politeness scale is too easy, and everyone scores near the maximum. |
| Subjects by treatment interactions | When participants respond differently to the same treatment. | Some people benefit from humor in messages; others find it annoying. |
| Correlation | A relationship between two variables. | Time spent online correlates with perceived social support. |
| Effect size | The strength of a relationship or difference. | Supportive messages have a medium effect size on reducing stress. |
| Analysis of variance (ANOVA) | Tests whether means differ across three or more groups. | Comparing persuasion from emotional, logical, and neutral messages. |
| Main effects | The effect of one independent variable, ignoring the other. | Emotional messages increase persuasion across all audience types. |
| Interaction effects | When the effect of one variable depends on another. | Humor increases persuasion only for low-involvement audiences. |
| Nonparametric tests | Tests that do not assume normality. | Using chi-square to analyze categories of conflict strategies. |
| Multicollinearity | When predictors are strongly correlated, making analysis unstable. | Source credibility and message quality overlap too much in a regression. |
| Chi-square test | A test comparing expected vs. observed frequencies. | Examining whether men and women differ in apology types. |
| Factor analysis | Identifies clusters of related items forming underlying dimensions. | A scale reveals two factors: emotional support and informational support. |
| Nomothetic research | Seeks general patterns across many cases. | Studying media effects using large-sample surveys. |
| Idiographic research | Seeks detailed understanding of one case. | An in-depth interview study about one online community. |
| Text | Any communication artifact that can be analyzed. | A speech, meme, ad, or political tweet. |
| Rhetorical criticism | Analyzing messages to understand how they persuade or shape meaning. | Interpreting a campaign speech using rhetorical theory. |
| Rhetoric | Strategic use of symbols to influence an audience. | A climate activist’s speech aimed at motivating action. |
| Impressionistic criticism | Personal, subjective evaluations of a text. | Saying a speech “felt sincere and inspiring” without systematic analysis. |
| Straw man fallacy | Misrepresenting an argument to attack it more easily. | Saying “opponents want no regulation at all” when they only want moderate change. |
| Neo-Aristotelian criticism | Evaluating rhetoric through the five classical canons. | Analyzing a TED talk’s invention, arrangement, and delivery. |
| Invention | The ideas and arguments a speaker uses. | Choosing evidence to support a claim about media bias. |
| Ethos | Credibility and trustworthiness. | A doctor discussing vaccine information. |
| Pathos | Emotional appeals. | A charity ad showing families impacted by disaster. |
| Logos | Logical reasoning. | Presenting statistics on texting-and-driving accidents. |
| Arrangement | How ideas are organized. | Opening with a story, then evidence, then a call to action. |
| Style | Language choices and rhetorical devices. | Using metaphors to frame climate change as a “ticking clock.” |
| Delivery | The physical presentation of the message. | A speaker’s pacing, eye contact, and vocal tone. |
| Memory | Mastery of the material and ability to present fluidly. | A speaker recalls their speech without notes. |
| Identification | Building a sense of shared identity with an audience. | A speaker highlights common values: “We all want safer communities.” |
| Burke’s dramatistic pentad | A tool analyzing motives using five elements. | Analyzing a crisis response using act, scene, agent, agency, purpose. |
| Act | What happened. | “The company delayed reporting the data breach.” |
| Scene | The context in which it happened. | “During a period of rising cybersecurity threats…” |
| Agent | Who performed the act. | “Company leadership concealed the breach.” |
| Agency | The means used. | “They used internal memos instead of public statements.” |
| Purpose | Why the act occurred. | “To prevent a loss of public trust.” |
| Fisher’s narrative paradigm | Communication is storytelling; good stories persuade. | A politician’s story resonates because it feels authentic. |
| Narrative fidelity | Whether a story matches lived experience and values. | A testimony about hardship feels true to audience experience. |
| Utterance | A single spoken or written unit of communication. | One sentence in a political debate. |
| Discourse analysis | Studying language use in context. | Examining how people soften criticism in email. |
| Fieldwork | Collecting data in natural settings. | Observing communication in a community center. |
| Participant observation | Researcher interacts while observing. | Joining a fan community to study their communication norms. |
| Naturalistic studies | Studying communication in real-life settings. | Observing small group meetings instead of using a lab. |
| Full participant observation | Researcher fully participates and conceals their role. | Joining a workplace chat channel without disclosing researcher status. |
| Participant-as-observer research | Researcher participates and reveals their role. | Facilitating workshops while noting interaction patterns. |
| Etic approach | Studying culture from an outsider’s viewpoint. | Analyzing communication norms in a community you are not part of. |
| Emic approach | Studying culture from an insider’s viewpoint. | Using community members’ perspectives to explain group conflict. |
| Accretions | Physical traces left behind. | Old flyers showing how a student group communicated over time. |
| Erosions | Wear-and-tear traces showing use patterns. | Worn-out seats in areas where people gather to talk. |
| Grounded theory | Building theory from data instead of starting with hypotheses. | Coding interviews to develop a model of online support exchange. |
| Ethnography | Detailed study of communication in a culture or group. | A year-long study of communication in activist organizations. |
| Time-budgeting studies | Tracking how people spend their time. | Logging how often teens check social media each hour. |
| Community studies | Examining communication practices in a community. | Studying how a neighborhood communicates during crises. |
| Case studies | In-depth analyses of a single case. | Analyzing one viral hashtag movement. |
| Negative case study | Examining cases that contradict the main pattern. | A support group that thrives without typical leadership structure. |
| Coding | Categorizing qualitative or quantitative data. | Coding text messages for emotional tone. |
| Open coding | First step of coding; identifying initial concepts. | Labeling segments as “support,” “conflict,” or “advice.” |
| Focus group | Guided group discussion used to gather attitudes. | A group discusses reactions to a health campaign message. |
| Content analysis | Systematic coding of texts to identify patterns. | Analyzing news articles for frames used in immigration coverage. |
| Coding units | The elements being coded. | Each tweet is one coding unit. |
| Latent content | The underlying meaning. | Coding sarcasm in online posts. |
| Manifest content | The surface-level content. | Counting mentions of specific keywords. |
| Reliability | Consistency of measurement. | Coders consistently label messages as supportive. |
| Validity | Accuracy of the measure. | A scale truly measures communication apprehension. |
| Empirical | Based on observation and evidence. | Collecting data on how students use discussion boards. |
| Experiment | Manipulating variables to test cause and effect. | Testing whether supportive messages reduce stress after conflict. |
| Survey | Asking participants questions to measure attitudes or behaviors. | A survey on media use and loneliness. |
| Questionnaires | Structured sets of survey questions. | Asking respondents to rate message clarity on a scale. |
| Check questions | Items used to detect inattention or inconsistency. | “Select ‘strongly agree’ for this item.” |
| Reliability | The consistency or stability of a measurement across time, items, or observers. | A communication apprehension scale gives similar results when taken two weeks apart. |
| Check questions | Items inserted to verify respondent attention or honesty. | “Select ‘strongly agree’ for this item” to detect careless responding in a media survey. |
| Polarity rotation | Reversing the direction of response options to prevent patterned responding. | Alternating between positively and negatively worded items in a trust scale. |
| Response set | A habitual way of answering regardless of content. | Participants repeatedly choosing “agree” on all items in a social media attitudes survey. |
| Anonymity | No identifying information is collected; responses cannot be linked to participants. | Online survey of cyberbullying experiences with no email or name recorded. |
| Confidentiality | Researcher knows identities but does not disclose them. | Interviewing influencers about harassment and keeping their identities protected. |
| Funnel questions | Questions that start broad and become increasingly specific. | Interviewing about “general online habits” before asking about “TikTok body image effects.” |
| Structured interviews | Interviews using fixed, standardized questions. | Asking each participant identical questions about message credibility. |
| Unstructured interviews | Open-ended, conversational interviews guided by participant responses. | Letting participants discuss their experiences with online conflict in their own way. |
| Experiment | A controlled study where variables are manipulated to establish causality. | Testing how different message tones (friendly vs. hostile) affect perceived warmth. |
| Confounding | When another variable unintentionally influences the results. | Message length differs between tone conditions, making results unclear. |
| Control | Holding variables constant to isolate the independent variable’s effect. | Keeping message length identical across tone conditions. |
| Counterbalancing | Varying the order of conditions to control order effects. | Seeing a supportive message first vs. hostile message first. |
| Matching | Equating participants on key characteristics across groups. | Matching groups based on prior social media use before testing message effects. |
| Blocking | Creating subgroups based on a characteristic and randomly assigning within each. | Blocking by gender before assigning participants to message tone conditions. |
| Randomization | Assigning participants to conditions by chance. | Randomly assigning participants to emoji vs. no-emoji messaging conditions. |
| Experimental invalidity | Any factor that threatens the accuracy of experimental conclusions. | Participants guessing the study purpose and altering responses. |
| Internal invalidity | The extent to which results are due to the IV and not confounds. | If message length differs by condition, internal validity is reduced. |
| External invalidity | The extent to which results generalize beyond the study. | Lab results on message tone may not generalize to real texting behavior. |
| One-shot case study | Treatment → measurement with no comparison group or pretest. | Show a persuasive ad, then measure attitude. |
| One-group pretest-posttest | Pretest → treatment → posttest but no control group. | Measure attitudes → show a political ad → measure attitudes again. |
| Static-group comparisons | Two groups: one receives treatment, the other does not; no randomization. | One class sees an anti-cyberbullying message; another class does not. |
| Pretest-posttest control group design | Random assignment with both pretest and posttest. | Test trust before and after seeing supportive messages, with a control group. |
| Solomon four-group design | Combines pretest/posttest and posttest-only groups to control testing effects. | Four groups receive combinations of pretests and supportive/neutral messages. |
| Posttest-only control group design | Random assignment, treatment vs. control, posttest only. | Participants get a supportive message or neutral message, then rate trust. |
| Factorial designs | Experiments with two or more independent variables. | Testing message tone (friendly/hostile) × emoji use (yes/no). |
| Factors | Independent variables in a factorial design. | Tone and emoji use. |
| Levels | The categories or conditions within each factor. | Tone has two levels: friendly and hostile. |
| Main effects | The independent effect of each factor on the DV. | Friendly messages increase trust regardless of emoji use. |
| Interaction effects | Effects that occur when one factor depends on another factor’s level. | Emojis increase trust only when tone is friendly. |
| Ordinal interactions | Interaction where effects have the same direction but different magnitude. | Friendly messages always increase trust, but especially when emojis are used. |
| Disordinal interactions | Interaction where effects reverse direction. | Emojis increase trust in friendly messages but decrease trust in hostile messages. |
| Manipulation check | A test to verify the independent variable worked as intended. | Asking participants whether the message seemed “friendly” or “hostile.” |
| Pilot studies | Small, preliminary studies used to refine procedures. | Testing whether participants correctly interpret the message stimuli. |
| Scott’s pi | A measure of intercoder reliability correcting for chance. | Two coders categorize comments as supportive/critical with Scott’s pi = .85. |
| Krippendorff’s alpha | A reliability statistic usable with any number of coders and data types. | Alpha = .90 for coding sarcasm in tweets. |
| Cohen’s kappa | Intercoder reliability for two coders correcting for chance. | Two coders rating message tone reach κ = .82. |
| Meta-analysis | A statistical technique that synthesizes results across many studies. | Combining 40 studies on cyberbullying and depression. |
| Primary analysis | Analyzing data that you directly collected. | Running statistics on your own experiment on emoji use. |
| Secondary analysis | Reanalyzing existing data collected by others. | Using Pew Research datasets to examine TikTok usage patterns. |
| Narrative reviews | Qualitative summaries of past research without statistical synthesis. | Writing a literature review on online self-disclosure. |
| Assumptions of meta-analysis | Requires independence of studies, comparable measures, and adequate reporting. | Each study on message tone must treat participants as separate samples. |
| Independence | Each dataset must be statistically separate from others. | Two studies using the same participant pool violate independence. |
| Effect size | A quantitative measure of the strength of a relationship. r and d | Reporting that message warmth has a medium effect on trust. |
| r | Correlation-based effect size. | r = .40 between texting frequency and closeness. |
| d | Difference-based effect size (mean differences). | d = .60 for supportive vs. neutral message effects. |
| Publication bias | Tendency for studies with significant results to be published more often. | Journals publish “message tone affects credibility,” but not null results. |
| File drawer effect | Non-significant studies remain unpublished and “in the drawer.” | Researchers do not publish studies finding no link between screen time and stress. |
| Fail-safe number | Number of hypothetical null studies needed to eliminate a significant meta-analysis result. | A meta-analysis reports that it would take 150 null studies to negate effects of cyberbullying on anxiety. |
| Advantages of meta-analysis | Increases statistical power, summarizes large bodies of research, identifies patterns. | Clarifying the overall effect of online support on well-being across dozens of studies. |
| Disadvantages of meta-analysis | Vulnerable to publication bias, varying study quality, and non-equivalent measures. | A meta-analysis of “social media use” suffers because studies define the construct differently. |
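
The formula-based entries above (z scores, Pearson r and r², the difference-based effect size d, Cohen's kappa, and Scott's pi) can be checked by hand. The sketch below is a minimal, self-contained Python illustration of those computations; all data, variable names, and numbers are hypothetical examples, not results from any study referenced in this deck.

```python
# Minimal sketch of several statistics defined in the deck, using made-up data.
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    # Standard deviation with the n - 1 (unbiased-variance) denominator.
    m = mean(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def z_scores(xs):
    # How many SDs each value sits from the mean.
    m, s = mean(xs), sample_sd(xs)
    return [(x - m) / s for x in xs]

def pearson_r(xs, ys):
    # Pearson product-moment correlation for interval/ratio data.
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

def cohens_d(group1, group2):
    # Difference-based effect size: mean difference divided by the pooled SD.
    n1, n2 = len(group1), len(group2)
    s1, s2 = sample_sd(group1), sample_sd(group2)
    pooled = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled

def cohens_kappa(coder_a, coder_b):
    # Chance-corrected agreement for two coders; expected agreement uses
    # each coder's own category proportions.
    n = len(coder_a)
    categories = set(coder_a) | set(coder_b)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    expected = sum((coder_a.count(c) / n) * (coder_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

def scotts_pi(coder_a, coder_b):
    # Like kappa, but expected agreement uses the pooled category proportions.
    n = len(coder_a)
    categories = set(coder_a) | set(coder_b)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    expected = sum(((coder_a.count(c) + coder_b.count(c)) / (2 * n)) ** 2
                   for c in categories)
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    # Hypothetical data: daily texts sent and relational closeness ratings.
    texts = [10, 25, 40, 55, 80, 15, 60, 35]
    closeness = [2.0, 3.0, 3.5, 4.0, 4.5, 2.5, 4.2, 3.8]
    r = pearson_r(texts, closeness)
    print("z scores for texting:", [round(z, 2) for z in z_scores(texts)])
    print(f"Pearson r = {r:.2f}, r^2 = {r ** 2:.2f}")

    # Hypothetical experiment: trust after supportive vs. neutral messages.
    supportive = [5.2, 6.1, 5.8, 6.4, 5.9, 6.0]
    neutral = [4.8, 5.0, 5.3, 4.9, 5.1, 5.4]
    print(f"Cohen's d = {cohens_d(supportive, neutral):.2f}")

    # Hypothetical coding task: two coders label the same eight messages.
    coder_a = ["sup", "sup", "hos", "neu", "sup", "hos", "neu", "sup"]
    coder_b = ["sup", "sup", "hos", "neu", "hos", "hos", "neu", "sup"]
    print(f"Cohen's kappa = {cohens_kappa(coder_a, coder_b):.2f}")
    print(f"Scott's pi = {scotts_pi(coder_a, coder_b):.2f}")
```

Running the script prints the standardized texting scores, the correlation and the variance it explains, the mean-difference effect size, and the two chance-corrected intercoder agreement indices, so each printed value can be traced back to the corresponding definition in the table.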