PSYC*3070 Mid #2
Human Resource Psychology Midterm #2
Question | Answer |
---|---|
What is the employer's goal (in the selection process)? | SELECT an applicant who possesses the KSAOs required to successfully perform the job being filled. |
What does it mean, selection is about prediction? | Forecasting who is likely to succeed in jobs based on available data |
What are the two major elements to building a sound foundation with respect to the use of predictors? | 1- Selection system must operate within a legal context 2- System must be based on solid empirical support; the organization must be able to demonstrate the RELIABILITY and VALIDITY of its selection system |
What does the act of measuring produce? | An observed score |
What is an observed score made up of? | 1- The "true score", plus 2- Some measurement error (see the classical test theory note after the table) |
What is meant by 'the act of measuring'? | Putting a numerical value on something |
What is an example of 'the act of measuring'? | Giving a questionnaire, or using an interview |
Reliability is... | The degree to which observed scores are free from random measurement errors |
Measurement error is random and defined as... | The theoretical difference between an individual's 'true score' and his/her observed score |
Name 4 ways to estimate reliability | 1- Test-Retest Reliability 2- Equivalent/Parallel Forms Reliability 3- Internal Consistency (split-half and Cronbach's Alpha) 4- Inter-rater agreement |
What is Test-Retest Reliability? | Conduct the interview once and score the candidates; conduct the interview again at a later time with the same candidates and score them again; calculate the correlation between the two sets of scores (see the code sketch after the table) |
What are the issues with Test-Retest Reliability? | 1- Practice/Carryover effects can increase reliability 2- Learning in the interim (or attribute changes) can decrease reliability 3- What time interval is appropriate? |
What is an equivalent/parallel forms reliability interview? | Rather than ask the same questions, develop an alternative form of the same question (same construct measured; same level of difficulty) |
What does KSAO stand for? | Knowledge, Skills, Abilities, and Other attributes |
What are the issues with Equivalent/Parallel Form Reliability? | 1- Alternate forms may not be truly parallel 2- Carryover effects related to test-taking strategies 3- Can be difficult/expensive to develop parallel forms |
Internal consistency is... | The extent to which the test questions have homogeneous content (do they measure the same thing?) |
What is split-half reliability? | We conduct the interview once, and correlate the first 2 questions with the last 2 questions, to see if they are internally consistent |
Problems with split-half reliability? | 1- Manner in which the test is split is arbitrary -- each different split-half will yield a different reliability coefficient |
What is the solution to the issue of split-half reliability? | Cronbach's Alpha |
What is Cronbach's Alpha? | Rather than select just one way of splitting the test, Cronbach's alpha represents the average of all possible split halves (corrected for test length) -- see the code sketch after the table |
What does Cronbach's alpha's average of split halves do? | This average estimates internal consistency |
What is internal consistency? | The degree to which all of the questions in a set are measuring the same thing |
What is Inter-Rater Reliability? | The correlation between judgments made by two (or more) different judges |
High levels of reliability are most necessary when... | 1- Tests are used to make final decisions about people 2- Individuals are sorted into many different categories on the basis of relatively small individual differences |
Factors Affecting Reliability can be organized into what three broad categories? | 1- Temporary individual characteristics (feeling sick) 2- Lack of standardization (some interviewed in a quiet room whereas others have a lot of noise) 3- Chance (candidate just happens to have a perfect example for a particular interview question) |
Define Validity: | Refers to the correctness of the inferences that are drawn from a set of measurements or other specified procedures; the degree to which accumulated evidence and theory support SPECIFIC INTERPRETATIONS OF TEST SCORES in the context of the test's proposed use |
In selection, correlations are... | Validity Coefficients (correlations between predictor scores and criterion measures) |
How would I assess the test-retest reliability of a measure? | Give a test, and then a week later test again. Correlate and determine variability (true scores vs. error) |
What are some factors that might influence the test-retest reliability of a measure? | Practice: knowing what to expect changes results and decreases reliability; Carryover (giving the same answers as before) increases reliability |
How would I assess the parallel form reliability of a test? | Same procedure as test-retest, but using an alternate form matched for difficulty |
What is the main difference between CONCURRENT criterion-related validity and PREDICTIVE criterion-related validity? | Concurrent: predictor and criterion measured at the same time, with current employees; Predictive: predictor and criterion measured at separate times |
How do you determine if a predictor is valuable? | Use coefficient of determination (r squared) |
Another way to understand validity... | "Interpretations of test scores" |
What does the value in coefficient of determination represent? | The proportion of variability in one variable that is associated with variability in another (see the worked example after the table) |
What are the factors affecting validity coefficients? | 1- Range Restriction 2- Measurement Error 3- Sampling Error |
Explain Range Restriction: | If you only accept those with top scores, you don't observe the full range of scores, and therefore compute a smaller correlation (see the simulation sketch after the table) |
Reliability serves as a limit for validity... What does this mean? (Measurement Error) | Reliability puts an upper limit on the validity coefficient (a validity coefficient can be no larger than the square root of the reliability); if reliability is low, validity is low (see the formula after the table) |
Example of sampling error: | Estimates of validity within a population may vary considerably between samples; estimates from small samples are likely to be quite variable |
A selection procedure is justified when... | It can be shown that it representatively samples significant parts of the job (ex. typing for a typist) |
Content validity represents... | A judgement regarding the degree to which a test provides an adequate sample of a particular content domain |
What is the basic procedure for assessing content validity? | 1-Describe the content domain (from Job Analysis) 2- Determine the areas of the content domain that are measured by each test item 3- Compare the structure of the test with the structure of the content domain |
Most assessments of content validity focus almost exclusively on... | The content of test items |
It is also important to consider these factors in response processes and content validity: | 1- How stimuli are presented to the subject 2- How responses are recorded and evaluated 3- What is going through the respondent's mind |
What is Face Validity? | A judgment concerning how relevant the test items appear to be |
Judgments about face validity are considered from who's perspective? | Test taker, not test user. |
Is a test that lacks face validity useful or irrelevant? | May be very useful and relevant |
What are constructs? | Constructs are abstract summaries of some regularity in nature; they are related to or connected with concrete, observable entities or events |
Construct Validity is... | Continuous process of verifying the accuracy of inferences among concepts |
Convergent validity coefficients are... | Scores that purportedly measure some construct should be related to scores on other measures of that same construct |
Discriminant validity coefficients are... | Scores that measure some construct should be unrelated to scores that are not supposed to be measures of that construct |
Test Validity Strategies include... | 1- Criterion (does predictor correlate with job performance?) 2- Content (does the content of the predictor match the content of the job?) 3- Construct (Does the predictor actually measure the construct we are intending to measure?) |
What would inconsistencies between studies be due to? | Statistical artifacts such as: sampling error, range restriction, reliability of criterion measures |
Why do we account for statistical artifacts? | So that the validity between a predictor and a criterion remains relatively stable within similar occupations |
What is validity generalization? | The application of validity evidence, obtained through a meta-analysis of data obtained from many situations, to other situations that are similar |
What is Bias? | Bias refers to systematic errors in measurement or inferences made from measurements that are related to different identifiable group membership characteristics (age, sex or race) |
What is the concept of fairness in relation to measurement? | Fairness here refers to the value judgments people make about the decisions or outcomes that are based on measurements and cannot be determined statistically |
What does the CPA code of professional ethics cover? | 1- Confidentiality 2- Competence 3- Psychological Assessments 4- Conflicts between ethics and organizational demands 5- Misuse of psychologists' work 6- Avoiding harm |
What is the first phase of selection and what does it involve? | Screening, and involves identifying candidates who meet the MINIMUM qualifications established for a position |
What are application forms used for? | A screening method that is completed by candidates to provide information about their KSAOs |
What is the difference between an application form and a biodata form? | Biodata or a BIB is a questionnaire that asks for job-related info on the candidate's personal background and life experiences. Biodata forms cover less verifiable info (personal interests, attitudes, values, etc.) |
What 8 dimensions are common to many biodata inventories? | 1- School Achievement 2- Higher education achievement 3- Drive 4- Leadership and group participation 5- Financial responsibility 6- Early family responsibility 7- Parental family adjustment 8- Situational stability |
What are some concerns regarding biodata? | 1- Legality 2- Invasiveness 3- Fakability 4- Generalizability -- requires large applicant pool and technical knowledge |
When should you use Biodata? | Particularly appropriate for organizations hiring large numbers of employees for similar kinds of jobs and where large numbers of applicants are competing for few positions. |
What are pros/cons of Letters of Recommendation? | Another screening method; the most commonly used, but actually the least valid. Letters suffer from restricted range (nearly all are positive), so they are mainly useful when negative. Telephone calls to former employers can be used to verify information on a resume. |
When would an organization need to develop its own employment tests? | When the KSAOs are unique |
What steps are used in choosing a test? | 1- Determine KSAOs 2- Identify relevant test 3- Obtain info on several possible tests 4- Read technical documentation (reliable, valid) 5- Read independent evaluations 6- Examine a sample 7- Determine skill level for purchase, use, scoring 8- Select test |
What is the use of a cognitive ability test? | Measure intelligence or intellectual ability |
What abilities would be included in cognitive ability tests? | Verbal and numerical ability, reasoning, memory, problem solving, processing information |
Why should we assess "g" or GMA (general mental ability) instead of specific abilities? | 1- Thought to be primary ability among those that make up intellectual capacity 2- GMA promotes effective learning, efficient and accurate problem solving, and clear communications 3- Single best predictor of job performance |
Define abilities: | Attributes that an applicant brings to the employment situation -- the enduring, general traits or characteristics on which people differ ex: finger dexterity or sensory ability |
Define skill: | Refers to an individual's degree of proficiency or competency on a given task -- based on both ability and practice ex: typing proficiency |
Define aptitude: | A specific, narrow ability or skill that may be used to predict job performance |
What is personality? | A set of characteristics or properties that influence, or help to explain, an individual's behaviour |
What is the Five Factor Model? | The "basic dimensions" of personality: neuroticism, extroversion, openness to experience, agreeableness, conscientiousness |
What are the advantages of measuring the Big 5? | Accumulated evidence supports them, and they predict broad criteria (job performance) |
What are the advantages of measuring narrow traits? | May be more specific to what you are trying to measure |
How is personality measured? | Usually through self report inventories |
What are the strategies to control response inflation? | -Forced choice questionnaires -Faking warnings -Corrections -Use of interviews |
What is the use of integrity tests? | Designed to identify job applicants who will not engage in counterproductive behaviour |
What are the two types of integrity tests? | -Overt (transparent): attitudes toward theft (punitiveness, beliefs about frequency) and admissions -Personality-based: mainly conscientiousness and emotional stability, dependability, risk taking |
What are the three main constructs of physical abilities testing? | 1- Strength 2- Endurance 3- Movement quality |
Work Samples and Simulations | Require candidate to produce behaviour related to performance under controlled conditions -Involve standardized instructions, conditions, equipment and scoring |
Pros of Work Samples and Simulations | -Minimal adverse impact -Good criterion-related validity |
Cons of Work Samples and Simulations | -Effective at predicting 'can do', but not potential -Usually individually administered, which is costly |
SJT - Situational Judgement Tests | Situational exercise designed to measure applicants' judgement of workplace situations (paper and pencil or video) Multiple choice |
Assessment Centers Characteristics | 1- Managers (select, promote, train) 2- Assessed in groups against performance of other groups 3- Assessor teams as raters 4- Variety of group exercises and inventories (in-basket; leaderless group discussion; role-play) |
List most to least valid: | Work Samples; General Cognitive Ability; Integrity Tests; Assessment Centers; Biodata; Conscientiousness |
Sources of Information about Testing | Mental Measurements Yearbook (MMY) |
Unstructured Interviews | Traditional method of interviewing that involves no constraints on questions, no requirements for standardization, and a subjective assessment of the candidate |
Cons of unstructured interviews | Evaluations may be biased by personal views and preferences (similarity to self; first impression; visual cues; contrast effects; template of the 'ideal' candidate). These reduce the reliability and validity of unstructured interviews |
Two types of interviews: | Panel interviews (conducted by 2 or more interviewers together at one time) Serial interviews (series of interviews where applicant is interviewed separately by each of two or more interviewers) |
Types of structured interviews: | Situational (a highly structured interview in which hypothetical situations are described and applicants are asked what they would do) |
Behavioural Description Interview | Is a structured interview in which applicant is asked to describe what he or she did in a given situation in the past |
Compensation | Refers to all forms of financial returns and tangible services and benefits that employees receive as part of an employment relationship |
9 Criteria for Developing Compensation Plan | *1- Internal vs. external equity *2- Fixed vs. variable 3- Performance vs. membership 4- Job vs. individual 5- Egalitarianism vs. elitism *6- Below-market vs. above-market *7- Monetary vs. non-monetary 8- Open vs. secret 9- Centralization vs. decentralization |
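The notes below expand on a few of the quantitative cards above. They are illustrative sketches with made-up numbers, not course material.

Classical test theory (the "observed score" cards): the standard textbook model writes an observed score $X$ as a true score $T$ plus random error $E$, and defines reliability as the share of observed-score variance that is true-score variance:

$$X = T + E, \qquad r_{XX} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}$$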
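A minimal sketch of the test-retest procedure from the card above, assuming hypothetical interview scores for five candidates (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical scores for the same five candidates at two administrations.
time1 = np.array([72, 85, 64, 90, 78])  # first interview
time2 = np.array([70, 88, 60, 93, 75])  # same candidates, some weeks later

# The test-retest reliability estimate is the Pearson correlation
# between the two sets of scores.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability estimate: {r:.2f}")
```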
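A sketch of Cronbach's alpha computed from its standard formula, on a hypothetical matrix of five respondents answering four items (the ratings are made up):

```python
import numpy as np

# Hypothetical ratings: rows = respondents, columns = items (1-5 scale).
scores = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
])

k = scores.shape[1]                         # number of items
item_vars = scores.var(axis=0, ddof=1)      # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total scores

# Standard formula: alpha = k/(k-1) * (1 - sum(item variances) / total variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```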
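Coefficient of determination, worked: if a test correlates $r = .40$ with job performance, then

$$r^2 = (.40)^2 = .16$$

so 16% of the variability in job performance is associated with variability in test scores.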
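A simulation sketch of range restriction, assuming a true predictor-criterion correlation of about .50 in the full applicant pool; keeping only the top 20% of predictor scores (the "hired" group) shrinks the observed correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a predictor (test score) and criterion (job performance)
# that correlate about .50 across the full applicant pool.
n = 10_000
cov = [[1.0, 0.5], [0.5, 1.0]]
predictor, criterion = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

full_r = np.corrcoef(predictor, criterion)[0, 1]

# Now look only at the "hired" group: the top 20% of predictor scores.
cutoff = np.quantile(predictor, 0.80)
hired = predictor >= cutoff
restricted_r = np.corrcoef(predictor[hired], criterion[hired])[0, 1]

print(f"full-range correlation:       {full_r:.2f}")
print(f"restricted-range correlation: {restricted_r:.2f}")  # noticeably smaller
```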
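The reliability-as-ceiling card reflects the standard attenuation result: a validity coefficient $r_{xy}$ cannot exceed the square root of the product of the predictor and criterion reliabilities,

$$r_{xy} \le \sqrt{r_{xx}\, r_{yy}}$$

so, for example, a predictor with reliability $r_{xx} = .49$ can never show a validity coefficient above $\sqrt{.49} = .70$, even with a perfectly reliable criterion.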