PSY 32 Exp Psy Exam1

Flashcards for the first exam of Experimental Psychology at Tufts University.

Question / Answer
The Four Canons of Science Determinism, Empiricism, Parsimony, Testability
"Four non-scientific ways of knowing about the world" observation, logic, intuition, authority
"Four non-scientific ways of knowing about the world," their order for science observation, logic, intuition, authority
Three goals of science Prediction, description, explanation
What is the ultimate goal of science? To UNDERSTAND phenomena.
Three methods of hypothesis testing Validation, falsification, qualification
Operational definitions "Definitions of theoretical constructs in terms of concrete, observable procedures." (If you view it in terms of "the weight of smoke," it can be "making the unobservable observable.")
Why do we need operational definitions? For one thing, so that our theories are testable. Or, to make the unobservable observable.
Difference between validity and reliability Validity refers to the accuracy/correctness of a statement (a proposition, research claim, investigation, research topic, etc.), while reliability refers to the "consistency or repeatability of a measure or observation."
Three main types of validity Internal validity, external validity, construct validity
Three main types of reliability Internal consistency reliability, test-retest reliability, interrater reliability.
Construct validity "The extent to which the independent and dependent variables in a study truly represent the abstract, hypothetical variables of interest to the researcher." A good link between the operational definitions and the abstract concept.
Three steps to follow to achieve construct validity. 1. Clearly define the construct of interest. Define prototypical features of the construct. (Even on multiple levels.) 2. Develop items that assess those features. 3. Assess the match between the construct and the items used to represent it.
Construct validity, step three elaboration Content, convergent, and discriminant validity.
Content validity "Extent to which the tasks adequately sample the target domain." "Does the measure include all relevant aspects of the concept? Are there irrelevant items?"
Convergent validity "extent to which the procedures or measures are related to some criterion." "the criterion is a variable that measures the same or related constructs"
Discriminant validity "The measure should be less related to measures of other constructs. Showing, for example, that your PTSD scale is less correlated with a measure of intelligence than with another measure of PTSD would demonstrate discriminant validity."
Internal consistency reliability "consistency of results across different items in the same test." "basic idea: calculate correlations between some or all of the items."
Test-retest reliability "consistency of results across different measurement occasions." Can be a retest of the same group of people, even.
Interrater reliability "consistency of results across different raters/observers"
Internal consistency reliability, methods of calculating split-half reliability, Cronbach's alpha
split-half reliability randomly select and average half of the items measuring a variable, and run a correlation test that compares that average to the average of the other half
Cronbach's alpha "Conceptually, it's the average of all possible split half reliabilities."
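
The two internal-consistency measures above are easiest to see with a small worked example. The Python sketch below uses simulated data; the 50 participants, 6 items, and 1-7 scale are assumptions chosen only for illustration, not details from the cards.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 50 participants answer 6 items on a 1-7 scale,
# each item reflecting the same underlying true score plus noise.
true_score = rng.normal(4, 1, size=(50, 1))
items = np.clip(np.round(true_score + rng.normal(0, 1, size=(50, 6))), 1, 7)

# Split-half reliability: correlate the average of a random half of the
# items with the average of the other half.
cols = rng.permutation(items.shape[1])
half_a = items[:, cols[:3]].mean(axis=1)
half_b = items[:, cols[3:]].mean(axis=1)
split_half_r = np.corrcoef(half_a, half_b)[0, 1]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores).
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

print(f"split-half r = {split_half_r:.2f}, Cronbach's alpha = {alpha:.2f}")
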
Content validity, who decides? The researcher, perhaps with consultation from experts in the domain. (paraphrase)
Convergent validity vs internal consistency reliability Convergent validity: between different multi-item measures. Internal consistency reliability: between different items in the same measure.
Three important rules to follow when wording questionnaire items 1. "Avoid ambiguity, confusion, and vagueness." 2. "Avoid double-barreled items." 3. "Avoid the social desirability bias."
Three important rules to follow when wording questionnaire items, rule 1 elaboration Be specific. Make sure that participants all have the same precise interpretation of an item. So, when measuring exercise, set time period and type of exercise, for example.
"Why might it sometimes be advantageous to have multiple items that assess the same construct?" Increase reliability. Reduces noise caused by certain features of a question causing irrelevant responses. Also, not every item may work. Further, having multiple measures that appear on the surface to measure different things strengthens face validity.
Measurement error Imprecision that exists when measuring a characteristic or behavior. Contributes to low reliability. Does not impact mean. Causes scores around mean to spread out.
"What are three important issues to consider when designing a numerical rating scale?" The number of scale points, the anchors, and equal appearing intervals.
What is a correlational study? Measures the association between two constructs of interest. Does not use an experiment.
What does the magnitude of an r-value tell you? The strength of the association.
Other wording rules 1 Keep it simple, use informal language, avoid negations, avoid binary forced-choice non-yes-no questions, avoid questions that do not yield variance
Other wording rules 2 Make sure your questions are relevant to everyone in your study, mix it up, establish a judgmental context, ease into socially sensitive questions, ask sensitive questions sensitively, guarantee participants' anonymity
Translating notions to numbers: two phases participants go through Judgment phase, response translation phase
Translating notions to numbers: two phases participants go through, first phase Judgment phase: Are participants thinking about the same question that the researcher was thinking about?
Translating notions to numbers: two phases participants go through, second phase Response translation phase: Can participants translate their internal states to some kind of value on a scale?
First important issue to consider when designing a numerical rating scale Need to decide on a number of scale points that won't overwhelm or restrict participants. Also need to choose if the scale is unipolar or bipolar.
Second important issue to consider when designing a numerical rating scale Need to set anchors so that numbers have a clearer meaning.
Third important issue to consider when designing a numerical rating scale Equal appearing intervals: the psychological difference implied by a single unit difference on a rating scale remains constant across the entire range of the scale, i.e. 2 and 3 have the same psychological difference as 5 and 6.
Third important issue to consider when designing a numerical rating scale, part 2 Set anchors at proper places and this will make intervals equal.
What does the sign of r tell you? The direction of the association: a positive r means the two variables increase and decrease together, while a negative r means one variable increases as the other decreases.
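
As a quick illustration of the two r cards above, the sketch below computes Pearson's r for made-up numbers; the study-hours and exam-score values are assumptions, not data from the course.

import numpy as np

hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8])
exam_score = np.array([55, 60, 58, 65, 70, 72, 75, 80])

# The sign of r gives the direction of the association; |r| gives its strength.
r = np.corrcoef(hours_studied, exam_score)[0, 1]
direction = "same direction" if r > 0 else "opposite directions"
print(f"r = {r:.2f}: the variables move in the {direction}, strength |r| = {abs(r):.2f}")
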
When you observe a significant correlation between two variables, what are the three accounts of causality? A. Variable 1 causes Variable 2, B. Variable 2 causes Variable 1, or C. Variable 3 (or any number of other variables) causes Variables 1 and 2.
When you observe a significant correlation between two variables, what are the three accounts of causality? (name the two issues) bidirectionality problem, third variable problem
Name each kind of third variable problem. Person confound, environmental confound, operational confound
Describe the first kind of third variable problem. (On the test, provide an example, too.) person confound: An individual difference variable (e.g. some personality variable or demographic characteristic) that serves as a third variable.
Describe the second kind of third variable problem. (On the test, provide an example, too.) environmental confound: Some situation that serves as a third variable.
Describe the third kind of third variable problem. (On the test, provide an example, too.) operational confound: When an operational definition is incorrect in that it also measures something other than the target concept.
Nominal scale Only value is whether or not something belongs to a group. Labels are arbitrary. No order and no defined distance.
Ordinal Scale Has a defined order, but labels are arbitrary. Distance between numbers not necessarily equal.
Interval Scale Distance between numbers equal. Ordered.
Ratio Scale Same as interval, but has a true zero point.
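
One way to see the difference between interval and ratio scales is that ratios are only meaningful when there is a true zero point. A minimal sketch, with temperatures made up purely for illustration:

# 20 degrees Celsius is not "twice as hot" as 10 degrees Celsius, because the
# Celsius zero point is arbitrary (interval scale). Kelvin has a true zero
# (ratio scale), so its ratios are meaningful.
c_low, c_high = 10.0, 20.0
k_low, k_high = c_low + 273.15, c_high + 273.15
print(c_high / c_low)  # 2.0, but not a meaningful ratio
print(k_high / k_low)  # about 1.04, the meaningful ratio
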
What are case studies and why are they science? Case studies are systematic, careful records of an individual or group. Case studies are scientific because they can help evaluate scientific theories, help to develop them, they explain what happened in terms of prior science, and can involve measures.
What are case studies and why are they science? Part 2 According to Popper, as long as they can be falsified, they are science.
What is a random sample? A random sample is a group of participants chosen by assigning every person in the population a number and then using a random number table or generator to randomly choose people from the population to be participants.
Why is a random sample useful? Random chosen groups of people tend to reflect the population in characteristics while being much smaller and easier to study than the entire population.
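
The random-sampling procedure described above can be sketched in a few lines of Python; the population size of 1,000 and sample size of 50 are arbitrary assumptions for the example.

import random

# Assign every person in the (hypothetical) population a number ...
population = [f"person_{i}" for i in range(1, 1001)]
random.seed(42)  # seeded only so the example is reproducible
# ... then use a random number generator to choose participants without replacement.
sample = random.sample(population, k=50)
print(sample[:5])
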
Contrast archival research and observational research Observational research involves recording the behavior of people in their natural environment. Archival research instead studies records that someone else has already made, but is otherwise the same.
Compare archival research and observational research Both types of research avoid ethical problems by not involving any manipulations of harmful or potentially harmful factors. They also increase external validity by observing people in natural environments.
What is an alternative hypothesis? The experimental hypothesis, that there is a relationship between the variables being tested. It can be either directional or not.
What is a null hypothesis? The hypothesis that there is no relationship between the variables being tested. For a directional alternative hypothesis, the null hypothesis also includes the possibility that the relationship is in the opposite direction.
What is a type I error? How does it differ from a type II error? A Type I error would be rejecting the null hypothesis when it is actually correct. The probability of this is the alpha level. A Type II error is the opposite: failing to reject the null hypothesis when it is wrong. Lowering the alpha level tends to increase the probability of a Type II error.
What is a "significance level," or alpha? When p is below alpha, you reject the null hypothesis. Alpha is the probability of a Type I error. p is the probability of finding a result as or more extreme than yours.
If a researcher sets the significance level to .05, what does that mean? That means that there is a 5% chance of a Type I error.
If the result of a statistical test indicates that p < .05, what does that mean? That means that for alpha = .05, the null hypothesis is rejected.
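
A small simulation makes the "alpha is the Type I error rate" idea concrete. The sketch below (simulated data; the group sizes and number of experiments are arbitrary assumptions, and it requires SciPy) runs many t tests on two groups drawn from the same distribution, so every rejection is a Type I error.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_experiments, false_rejections = 0.05, 2000, 0
for _ in range(n_experiments):
    # Both groups come from the same distribution, so the null hypothesis is true.
    group_a = rng.normal(0, 1, 30)
    group_b = rng.normal(0, 1, 30)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:  # rejecting a true null here is a Type I error
        false_rejections += 1
print(false_rejections / n_experiments)  # hovers around 0.05
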
What are the major sections of an APA paper? (And in what order are they presented?) Title page, abstract, introduction, method, results, discussion, references.
What is the general purpose of each section of an APA paper? Not easily put on a flashcard. Multiple will follow.
More purposes of science Prediction, explanation.
Introduction (purpose and function) Invitation to the reader: get them interested and up to speed; make it clear what the reader can expect. Three primary goals: 1. Establishes importance of the topic. 2. Reviews existing literature. 3. Explains your hypothesis
Introduction (shape) Like a funnel, from the general to the specific. You want to interest the reader and connect your study to important things in the world, then justify your study. The justification feeds into the brief description of your study and hypothesis, then the Method section.
Establishing the importance of your study (Introduction) Prevalence, relevance, precedence.
What are the three main parts of a Method section? Participants, stimuli, procedures.
What is the purpose of a method section? Tells the reader exactly how the study was conducted • Enables replication • Enables evaluation
What kind of information should be reported in the Method section? Enough detail about the participants, stimuli, and procedures for a reader to replicate and evaluate the study.
Method section headings Starts with centered heading; boldface • Each subsection has a left-justified heading; boldface • Subsections are at the discretion of the researcher although the three main ones are pretty standard.
What kind of information should be reported in the Results section? • Summarize the collected data • Remind the reader what your hypothesis was • Present the finding in words • Was it statistically significant? Report the statistic. • Descriptive statistics (M, SD) • Confidence intervals • Effect size
A good Discussion section does what? What "shape" should it take and why? Shape: "reverse funnel": specific to general (opposite of the Intro). It goes from restating the result and how it bears on the hypothesis, to connecting back to the justification given in the intro: new questions raised and old limitations addressed.
A good Discussion section does what? What "shape" should it take and why? part 2 Compare back to the general literature, then state which findings differ from the literature and why the method led to that, say which findings are most valid, acknowledge research limitations, and propose future studies.
A good Discussion section does what? What "shape" should it take and why? part 3 Implications: why the study is important and what new things have been learned (these can be theoretical and practical). End with a summary paragraph that summarizes it all; include the implications again and why you did the research in the first place (again, theoretical and practical).
Intro Lit Review Part 1 • your goal: make the reader feel that there is a need for your research • describe relevant background • What have we already learned about the topic based on the studies that have already been published on this topic?
Intro Lit Review Part 2 • set the stage for your study; point out… • gaps in knowledge (if your study addresses them) • problems (if your study addresses them) • either way, need to make it clear why it is important to fill the gap in knowledge or to address the problem
Intro Hypothesis Explanation • What did you predict and why? • Should be obvious based on the literature review • Before you tell the reader about your hypothesis, you need to briefly describe what you did so that the hypothesis makes sense
Intro Before Lit Review start generally: introduce the topic and explain why it is important and interesting to do research in this domain.
What is the proper way to format a reference for a journal article in APA style? Always put a comma before "&" in the References section, but not in a parenthetical citation; use "and" when the authors are named outside parentheses in the text. Italicize the journal name and volume number in the References section.
Title page and title purpose Tells people what your study is about, lets people look up your study, tells them who wrote it and how to contact them, and introduces the running head so that readers can tell from any page which study they are reading.
anchors Adjectives that lend meaning to numbers on a scale
census A body of data collected from every (or virtually every) member of a population of interest.
construct Abstract, hypothetical variables as opposed to an operational definition of those variables.
descriptive statistics Statistics used to summarize or describe a set of observations.
determinism The doctrine that the universe is orderly in the sense that all events have meaningful, systematic causes.
empiricism Making observations to figure out the orderly principles of the universe.
hypothesis Predictions about specific events derived from one or more theories.
inferential statistics Statistics used to make inferences, that is, to interpret or draw general conclusions about a set of observations.
in-text citation Either a parenthetical year following the authors' names in the sentence, or a full parenthetical citation (authors and year in parentheses).
population A specific group of people.
reference A citation of a source that allows you to find that source.
replication When the same study is done again in the same or in a similar way.
running head The heading that runs at the top of every page of an APA paper, and that is specified in the heading on the first page.
sample A group of people that is chosen to represent a population.
science Four types of knowledge, four canons of science, experiments, other types of studies.
testability Whether or not theories are able to be confirmed or disconfirmed
theory A general statement about the relation between two or more variables. "Good theories" follow the canons of science. The difference between laws and theories is that theories have boundary conditions, whereas laws apply more broadly.
Abstract • Arguably the most important part of a paper • A one-paragraph summary of the entire paper • brief statement of problem • participants • essential features of the method • basic results • conclusions/implications
Guidelines for abstracts accurate • representative of the article; no extras • nonevaluative • don’t comment on the body of the manuscript • coherent and readable • be clear • active voice • concise • include 4-5 most important points • embed key words
An observed score is made up of which two things? Actual score and measurement error.
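
The last card above (and the measurement-error card earlier) can be illustrated with a short simulation; the true-score mean of 100, SD of 10, and error SD of 5 are assumptions chosen only for the example.

import numpy as np

rng = np.random.default_rng(2)
true_scores = rng.normal(100, 10, 10_000)
observed = true_scores + rng.normal(0, 5, 10_000)  # observed = true score + error

print(true_scores.mean(), observed.mean())  # means are nearly identical
print(true_scores.std(), observed.std())    # observed scores are more spread out
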
Created by: ncommons