
I/O Personnel Psych

LA Tech, Psych. 516, Test 1, chapter 7

Question / Answer
methods of validation (1) what a test or other procedure measures (i.e., the hypothesized underlying trait or construct); (2) how well it measures it (i.e., the relationship between scores from the procedure and some external criterion measure)
Content-Related Evidence Inferences about validity based on content-related evidence are concerned with whether a measurement procedure contains a fair sample of the universe of situations it is supposed to represent.
Three assumptions underlie the use of content-related evidence: 1. the area of concern to the user can be conceived as a meaningful, definable universe of responses; 2. the sample is drawn from that universe in a purposeful, meaningful way; 3. the sample and sampling process can be defined with enough precision to let the user judge how well the sample typifies performance in the universe.
Content-Validity Index (CVI) equal numbers of incumbents and supervisors are presented with a set of test items and asked to indicate whether each item is 1. essential for the job, 2. useful but not essential, or 3. not relevant to the job. The average rating from each person comprises the CVI.
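The rating task on this card is often quantified with Lawshe's content validity ratio (CVR) per item, averaged across items to get a CVI. A minimal sketch in Python, with invented item names and rater counts:

```python
def content_validity_ratio(n_essential, n_raters):
    """Lawshe's CVR: (n_e - N/2) / (N/2); ranges from -1 to +1."""
    half = n_raters / 2
    return (n_essential - half) / half

# Hypothetical panel: 10 raters (5 incumbents, 5 supervisors) judge each item.
# Counts are how many of the 10 rated the item "essential for the job".
essential_counts = {"item_1": 9, "item_2": 7, "item_3": 4}
n_raters = 10

cvrs = {item: content_validity_ratio(n, n_raters)
        for item, n in essential_counts.items()}
cvi = sum(cvrs.values()) / len(cvrs)  # average CVR across items = CVI
print(cvrs)  # item_1: 0.8, item_2: 0.4, item_3: -0.2
print(round(cvi, 3))
```

A CVR near +1 means nearly all raters called the item essential; negative values flag items fewer than half the panel endorsed.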
Positive contributions of content-related evidence of validity 1. improved domain sampling and job-analysis procedures; 2. better behavior measurement; 3. recognition of the role of expert judgment in confirming the fairness of sampling and scoring procedures and in determining the degree of overlap between separately derived content domains.
Criterion validity an indication of how well scores predict behavior (i.e., predicting future behavior or performance)
Two types of criterion studies: predictive study, concurrent study
predictive study oriented toward the future and involves a time interval during which events take place (e.g., using an ACT score to predict GPA or “is it likely that Laura will be able to do the job?”)
concurrent study oriented toward the present and reflects only the status quo at a particular time (e.g., using existing managers' scores to see if you match them, or "can Laura do the job now?")
steps to a predictive study 1. measure candidates for the job; 2. select without using the results of the measurement (hire all, then see whether scores discriminate between high and low performers); 3. obtain measurements of criterion performance at a later date; 4. assess the strength of the relationship between the predictor and the criterion.
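Step 4 of a predictive study usually comes down to a correlation between predictor scores collected at hire and criterion scores collected later. A sketch with made-up data (the scores and ratings below are illustrative, not from any real study):

```python
def pearson_r(xs, ys):
    """Pearson correlation between predictor and criterion scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical predictive study: test scores at hire (step 1), everyone
# hired regardless of score (step 2), supervisor ratings six months
# later (step 3), then the predictor-criterion relationship (step 4).
test_scores = [55, 60, 65, 70, 80, 90]
ratings_6mo = [2.1, 2.5, 3.0, 2.8, 3.9, 4.2]
print(round(pearson_r(test_scores, ratings_6mo), 3))  # → 0.971
```

A strong positive r here would support using the test for future selection; near-zero would suggest the predictor does not forecast the criterion.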
Measures of criterion need to be relevant to the job (e.g., even with a great predictor such as general intelligence, if the criterion is something like how many cups of water an employee drinks, then the test is not good).
Range Restriction if measurements are taken only from employees who passed screening (or only from those who scored high on an assessment), and those employees are later measured against a criterion, the measure may look like a weak predictor of the criterion: the sample is biased because the low scorers were never hired.
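One standard adjustment for this problem (not mentioned on the card, but common in the selection literature) is Thorndike's Case II correction for direct range restriction on the predictor. It assumes you know the predictor's standard deviation in the full applicant pool; the numbers below are invented:

```python
import math

def correct_for_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction for direct range restriction on the predictor."""
    u = sd_unrestricted / sd_restricted  # ratio > 1 when range was restricted
    r = r_restricted
    return (r * u) / math.sqrt(1 - r**2 + (r**2) * (u**2))

# Hypothetical: observed r = .20 among hires, but the applicant pool's
# predictor SD (12) is twice the hires' SD (6) because of screening.
r_corrected = correct_for_range_restriction(0.20, 12, 6)
print(round(r_corrected, 3))  # → 0.378
```

The corrected estimate is larger than the observed one, illustrating the card's point: screening shrinks predictor variance and makes a valid test look weaker than it is.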
Construct-Related Evidence Do the test questions assess the construct in question (e.g., do intelligence questions on the ACT or SAT really measure intelligence or are they measuring academic achievement instead?)
Convergent validation Does this new test of intelligence that I just made correlate well with the SAT and ACT (e.g., Does someone who scores a 25 on the ACT also score the equivalent on the SAT and my new test?)
Discriminant validation Does this new test of depression NOT correlate with happiness measures (e.g., as people's scores increase on my new depression measure, do they also decrease on a happiness scale?)
Meta-analyses process of taking multiple studies of the same construct or measure and estimating the total effect. With a single study, results may be higher or lower than the actual value, but in an aggregate sample the highs and lows should cancel out and a more accurate effect should emerge.
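The pooling the card describes can be sketched as a sample-size-weighted mean correlation, the first step in a "bare-bones" meta-analysis in the Hunter-Schmidt tradition. The study values below are fabricated for illustration:

```python
def weighted_mean_r(studies):
    """Sample-size-weighted mean correlation across studies."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Hypothetical studies: (sample size, observed correlation). Individual
# r's scatter above and below the true effect; weighting by N lets the
# highs and lows cancel out toward a more accurate pooled estimate.
studies = [(50, 0.35), (200, 0.18), (120, 0.25)]
print(round(weighted_mean_r(studies), 3))  # → 0.226
```

Larger studies pull the estimate harder, which is why the pooled value sits closer to the N = 200 study's r than to the small study's.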
validation the investigative process of gathering and evaluating the necessary data.
criterion-related evidence testing the hypothesis that test scores are related to performance on some criterion measure. The criterion is a score or rating that is either available at the time of predictor measurement (concurrent) or will become available at a later time (predictive).
Created by: cjd021