
Cited: Cascio, W. F., & Aguinis, H. (2019). Applied psychology in talent management (8th ed.). Retrieved from https://www.vitalsource.com

Chapter 12 Selection Methods

Personal History Data

Selection and placement decisions often begin with an examination of personal history data (i.e., biodata) typically found in application forms, biographical inventories, and résumés. Undoubtedly one of the most widely used selection procedures is the application form. Like tests, application forms can be used to sample past or present behavior briefly but reliably. Studies of the application forms used by 200 organizations indicated that questions generally focused on information that was job related and necessary for the employment decision (Lowell & DeLoach, 1982; Miller, 1980). However, over 95% of the applications included one or more legally indefensible questions. To avoid potential problems, consider omitting any question that

Might lead to an adverse impact on members of protected groups,
Does not appear job related or related to a bona fide occupational qualification, or
Might constitute an invasion of privacy (Miller, 1980).

What can applicants do when confronted by a question that they believe is irrelevant or an invasion of privacy? Some may choose not to respond. However, research indicates that employers tend to view such a nonresponse as an attempt to conceal facts that would reflect poorly on an applicant. Hence, applicants (especially those who have nothing to hide) are ill advised not to respond (Stone & Stone, 1987).

Psychometric principles can be used to quantify responses or observations, and the resulting numbers can be subjected to reliability and validity analyses in the same manner as scores collected using other types of measures. Statistical analyses of such group data are extremely useful in specifying the personal characteristics indicative of later job success.

Opinions vary regarding exactly what items should be classified as biographical, since such items may vary along a number of dimensions—for example, verifiable–unverifiable; historical–futuristic; actual behavior–hypothetical behavior; firsthand–secondhand; external–internal; specific–general; and invasive–noninvasive (see Table 12.1). This is further complicated by the fact that “contemporary biodata questions are now often indistinguishable from personality items in content, response format, and scoring” (Schmitt & Kunce, 2002, p. 570). Nevertheless, the core attribute of biodata items is that they pertain to historical events that may have shaped a person’s behavior and identity (Mael, 1991).

Some observers have advocated that only historical and verifiable experiences, events, or situations be classified as biographical items. Using this approach, most items on an application form would be considered biographical (e.g., rank in high school graduating class, work history). By contrast, if only historical, verifiable items are included, then questions such as the following would not be asked: “Did you ever build a model airplane that flew?” Cureton (see Henry, 1965, p. 113) commented that this single item, although it cannot easily be verified for an individual, was almost as good a predictor of success in flight training during World War II as the entire Air Force Battery.
Weighted Application Blanks

A priori one might suspect that certain aspects of an individual’s total background (e.g., years of education, previous experience) should be related to later job success in a specific position. The weighted application blank (WAB) technique provides a means of identifying which of these aspects reliably distinguish groups of effective and ineffective employees. Weights are assigned in accordance with the predictive power of each item, so that a total score can be derived for each individual. A cutoff score then can be established, which, if used in selection, will eliminate the maximum number of potentially unsuccessful candidates. Hence, one use of the WAB technique is as a rapid screening device, but it may also be used in combination with other data to improve selection and placement decisions. The technique is appropriate in any organization having a relatively large number of employees doing similar kinds of work and for whom adequate records are available. It is particularly valuable for use with positions requiring long and costly training, with positions where turnover is abnormally high, or in employment situations where large numbers of applicants are seeking a few positions (England, 1971).

Weighting procedures are simple and straightforward (Owens, 1976), but, once weights have been developed in this manner, it is essential that they be cross-validated. Since WAB procedures represent raw empiricism in the extreme, many of the observed differences in weights may reflect not true differences, but only chance fluctuations.
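As a rough illustration of the WAB logic, the sketch below weights binary application-form items by the difference in endorsement rates between effective and ineffective employees. This is a deliberate simplification (unit weights and an arbitrary minimum group difference, both chosen here for illustration); operational weighting tables such as England's (1971) are more elaborate, and any key must be cross-validated on a holdout sample before use.

```python
import numpy as np

def derive_wab_weights(items, success, min_diff=0.10):
    """Weight each binary item by whether it separates effective from
    ineffective employees in the derivation sample (simplified unit weights)."""
    items = np.asarray(items, float)
    success = np.asarray(success, bool)
    p_effective = items[success].mean(axis=0)     # endorsement rate, effective group
    p_ineffective = items[~success].mean(axis=0)  # endorsement rate, ineffective group
    diff = p_effective - p_ineffective
    # An item earns a weight only if the group difference is non-trivial
    return np.where(np.abs(diff) >= min_diff, np.sign(diff), 0.0)

def wab_scores(items, weights):
    """Total WAB score: weighted sum of item responses."""
    return np.asarray(items, float) @ weights

# Toy derivation sample: 8 employees x 3 items; the third item is pure noise
items = np.array([[1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 1, 0],
                  [0, 0, 1], [0, 0, 0], [1, 0, 0], [0, 1, 1]])
success = np.array([1, 1, 1, 1, 0, 0, 0, 0], bool)

w = derive_wab_weights(items, success)   # the noise item receives zero weight
scores = wab_scores(items, w)
# A cutoff on these scores would then be chosen -- and the whole key
# re-checked on a separate holdout sample, since purely empirical
# weights capitalize on chance.
```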

Biographical Information Blanks

The biographical information blank (BIB) technique is closely related to the WAB technique. Like WABs, BIBs involve a self-report instrument; although items are exclusively in a multiple-choice format, typically a larger sample of items is included, and frequently items are included that are not normally covered in a WAB. Glennon, Albright, and Owens (1966) and Mitchell (1994) have published comprehensive catalogs of life history items covering various aspects of the applicant’s past (e.g., early life experiences, hobbies, health, social relations), as well as present values, attitudes, interests, opinions, and preferences. Although primary emphasis is on past behavior as a predictor of future behavior, BIBs frequently rely also on present behavior to predict future behavior. Usually BIBs are developed specifically to predict success in a particular type of work. One of the reasons they are so successful is that often they contain all the elements of consequence to the criterion (Asher, 1972). The mechanics of BIB development and item weighting are essentially the same as those used for WABs (Mumford & Owens, 1987; Mumford & Stokes, 1992).
Résumés

Résumés are a source of personal history data in most employee selection situations. Although résumés are now usually submitted electronically, as far back as 1975, the estimate was that about 1 billion paper résumés were screened each year (Brown & Campion, 1994). When examiners extract personal history data from a résumé, they are particularly prone to cognitive biases and heuristics because information is often limited to one or two pages. Specifically, applicants are likely to be placed into stereotype-based categories in a rather automatic fashion, and then attributes believed to be typical of the group are assigned to individual applicants—even if those beliefs are factually incorrect. Many so-called “paper people” or “vignette” studies (Aguinis & Bradley, 2014) have been conducted in which résumés of hypothetical applicants are presented to judges, who have to provide ratings regarding each applicant’s job suitability (Derous, Ryan, & Serlie, 2015).

Social categorization can take place on more than one category. For example, Derous et al. (2015) conducted an experiment in which 60 Dutch recruiters rated the job suitability of applicants whose résumés included information on ethnicity (Dutch, Arab) and gender (female, male). Results showed that ratings were influenced by applicants’ ethnicity (i.e., Arabs were rated more negatively) and gender (i.e., men were rated more negatively), raters’ prejudice (i.e., those with more negative attitudes toward a particular group rated members of those groups more negatively), and job characteristics (i.e., results were more pronounced when jobs included more client contact).

A recent innovation is the use of video résumés: recorded video and audio messages in which job applicants present themselves to potential employers. Video résumés allow applicants to express themselves in ways that are not possible in the traditional paper format. It is also possible to create multimedia résumés, in which job applicants also include animations and text (Hiemstra, Derous, Serlie, & Born, 2012). Not much research is yet available on video résumés; however, Hiemstra et al. (2012) conducted a study involving 445 unemployed job seekers in the Netherlands who had received a two-day job-application training and found that they perceived video résumés to be fairer than traditional paper résumés, regardless of applicant ethnicity (i.e., Dutch, Turkish, Moroccan, Surinamese/Antillean, other non-Western, and other Western applicants).

Overall, given the many factors that influence raters’ evaluation of personal history data based on résumé screening, it is important to (a) train raters to make sure they focus on job-related factors and (b) assess interrater reliability (Brown & Campion, 1994). Another important concern is the extent to which applicants may distort the information they provide, hoping to increase their chances of receiving a job offer. We discuss this topic later in the chapter.
Credit History

The big data movement has provided organizations with personal history data that were unthinkable just a few years ago. For example, a survey of members of the Society for Human Resource Management revealed that about 50% of employers conduct credit background checks on at least some applicants (Bernerth, 2012). One type of personal history data, the credit score, seems to be an objective indicator of a job applicant’s conscientiousness and even integrity—two clearly desirable KSAs for many jobs. If applicants fail to keep a promise to their financial institution, this may indicate that they will similarly fail to keep a promise at work. Also, individuals who are under financial duress may be more prone to engaging in counterproductive behaviors at work (e.g., theft) (Bernerth, 2012).

Using credit background checks for employment purposes is legally permissible in the United States under the Fair Credit Reporting Act if applicants provide written authorization (Bernerth, 2012). However, some states, including California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Maryland, Nevada, Oregon, Vermont, and Washington, as well as Washington, D.C., have restricted the use of credit histories of applicants and employees. For example, Colorado’s Employment Opportunity Act (SB13-018) prohibits an employer’s use of consumer credit information for employment purposes if the information is unrelated to the job. Moreover, it requires an employer to disclose to an employee or applicant if the employer uses consumer credit information to take adverse action against the employee or applicant and the particular credit information upon which the employer relied. It also authorizes an aggrieved employee to sue for an injunction, damages, or both.

The regulations in these jurisdictions seem justified, given evidence that credit scores are related to several demographic variables that in many cases are unrelated to job performance. For example, Bernerth (2012) collected Fair Isaac Corporation (FICO) scores for 112 university employees and alumni and conducted a regression analysis using credit scores as the criterion and the following demographic variables as predictors: minority status (nonminority, minority); gender (male, female); marital status (never been divorced, divorced); educational attainment (high school degree/GED, some college, 2-year college degree, 4-year college degree, some graduate or professional education, graduate degree); and age. The five predictors combined accounted for 34% of variance in credit scores and the predictors (a) minority status (minority status associated with lower scores), (b) educational attainment (less education associated with lower scores), and (c) age (younger applicants received lower scores) had the strongest effects. Although educational attainment may be a job-related factor for some occupations and positions, the strong relation between ethnicity and credit scores guarantees that the use of this particular type of personal history data will result in adverse impact. In addition to legal issues, the use of credit scores has ethical connotations. Specifically, “critics of credit scores contend that using such information to make hiring decisions unfairly disadvantages individuals with low scores and traps them in a ‘vicious downward spiral’ where unemployment damages personal credit which, in turn, can hurt their job prospects” (Bernerth, 2012, p. 245). As is the case for all types of predictors, validity information is required—and this is particularly important in the presence of adverse impact.
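The kind of analysis Bernerth (2012) reports can be sketched as an ordinary least-squares regression of credit scores on demographic predictors. All data below are simulated purely for illustration: the directions of effect mimic those reported, but the coefficients and the resulting R² do not reproduce the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 112  # sample size matching Bernerth (2012); all values are simulated
minority = rng.integers(0, 2, n).astype(float)
age = rng.uniform(22, 60, n)
education = rng.integers(1, 7, n).astype(float)  # 1 = HS/GED ... 6 = graduate degree
# Simulated credit scores loosely following the reported directions of effect
credit = 650 - 40*minority + 1.5*(age - 22) + 10*education + rng.normal(0, 30, n)

X = np.column_stack([np.ones(n), minority, age, education])
beta, *_ = np.linalg.lstsq(X, credit, rcond=None)
pred = X @ beta
r2 = 1 - ((credit - pred)**2).sum() / ((credit - credit.mean())**2).sum()
print(round(r2, 2))  # proportion of credit-score variance explained by demographics
```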
Response Distortion in Personal History Data

Can job applicants intentionally distort personal history data? The answer is yes. For example, the “sweetening” of résumés is not uncommon, and one study reported that 20–25% of all résumés and job applications include at least one major fabrication (LoPresto, Mitcham, & Ripley, 1986). The extent of self-reported distortion was found to be even higher when data were collected using the randomized-response technique, which absolutely guarantees response anonymity and thereby allows for more honest self-reports (Donovan, Dwight, & Hurtz, 2003).

A study in which participants were instructed to “answer questions in such a way as to make you look as good an applicant as possible” and to “answer questions as honestly as possible” resulted in scores almost two standard deviations higher for the “fake good” condition (McFarland & Ryan, 2000). In fact, the difference between the “fake good” and the “honest” experimental conditions was larger for a biodata inventory than for other measures including personality traits such as extraversion, openness to experience, and agreeableness. In addition, individuals differed in the extent to which they were able to fake (as measured by the difference between individuals’ scores in the “fake good” and “honest” conditions). So, if they want to, individuals can distort their responses, but some people are more able than others to do so.
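The "almost two standard deviations" figure is a standardized mean difference (Cohen's d). With hypothetical biodata scores for the two instruction conditions (the values below are invented to yield a difference of about two pooled standard deviations), it can be computed as:

```python
import numpy as np

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    pooled_sd = np.sqrt(((n1-1)*g1.var(ddof=1) + (n2-1)*g2.var(ddof=1)) / (n1+n2-2))
    return (g1.mean() - g2.mean()) / pooled_sd

# Hypothetical biodata scores under "fake good" vs. "honest" instructions
fake_good = [46, 55, 50, 58, 45, 51]
honest    = [38, 45, 40, 47, 36, 43]
print(round(cohens_d(fake_good, honest), 2))  # prints 2.01
```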

Fortunately, there are situational characteristics that an examiner can influence, which may make it less likely that job applicants will distort personal history information. The first such characteristic is the extent to which information can be verified: more objective and verifiable items are less amenable to distortion (Kluger & Colella, 1993). The concern with being caught seems to be an effective deterrent to faking. Second, option-keyed items are less amenable to distortion (Kluger, Reilly, & Russell, 1991). With this strategy, each item-response option (alternative) is analyzed separately and contributes to the score only if it correlates significantly with the criterion. Third, distortion is less likely if applicants are warned of the presence of a lie scale (Kluger & Colella, 1993) and if biodata are used in a non-evaluative, classification context (Fleishman, 1988). A fourth approach involves asking job applicants to elaborate on their answers. These elaborations require job applicants to describe more fully the manner in which their responses are true or to describe incidents that illustrate and support their answers (Schmitt & Kunce, 2002). For example, for the question “How many work groups have you led in the past 5 years?” the elaboration request can be “Briefly describe the work groups and projects you led” (Schmitt & Kunce, 2002, p. 586). The rationale for this approach is that requiring elaboration forces applicants to remember more accurately and makes it more difficult to manage a favorable impression. The use of the elaboration approach led to a reduction in scores of about .6 standard deviation units in a study of 311 examinees taking a pilot form of a selection instrument for a federal civil service job (Schmitt & Kunce, 2002). Similarly, a study of more than 600 undergraduate students showed that those in the elaboration condition provided much lower responses than those in the non-elaboration condition (Schmitt, Oswald, Kim, Gillespie, & Ramsay, 2003).
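The option-keying strategy described above can be sketched as follows: each response option is scored separately, and an option contributes to the total only if its endorsement correlates with the criterion. The correlation threshold below stands in for a proper significance test, and a real key would be derived on one sample and cross-validated on another.

```python
import numpy as np

def option_key(responses, criterion, r_min=0.30):
    """Build an option-level key: each item-response option earns a unit
    weight only if its endorsement correlates with the criterion.
    (The |r| threshold stands in for a significance test.)"""
    responses = np.asarray(responses)
    criterion = np.asarray(criterion, float)
    key = {}
    for item in range(responses.shape[1]):
        for option in np.unique(responses[:, item]):
            endorsed = (responses[:, item] == option).astype(float)
            r = np.corrcoef(endorsed, criterion)[0, 1]
            key[(item, option)] = float(np.sign(r)) if abs(r) >= r_min else 0.0
    return key

def option_scores(responses, key):
    """Sum the keyed weights of each respondent's chosen options."""
    responses = np.asarray(responses)
    return [sum(key[(i, row[i])] for i in range(len(row))) for row in responses]

# One 3-option item, 6 respondents; criterion = later success (1/0)
responses = [[0], [0], [1], [1], [2], [2]]
criterion = [1, 1, 1, 0, 0, 0]
key = option_key(responses, criterion)       # option 1 is uncorrelated -> weight 0
scores = option_scores(responses, key)
```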
Validity of Personal History Data

Properly cross-validated biodata have been developed for many occupations, including life insurance agents; law enforcement officers; service station managers; sales clerks; unskilled, clerical, office, production, and management employees; engineers; architects; research scientists; and Army officers. Criteria include turnover (by far the most common), absenteeism, rate of salary increase, performance ratings, number of publications, success in training, creativity ratings, sales volume, and employee theft.

Evidence indicates that the validity of personal history data as a predictor of future work behavior is quite good. For example, Reilly and Chao (1982) reviewed 58 studies that used biographical information as a predictor. Over all criteria and over all occupations, the average validity was .35. A subsequent meta-analysis of 44 such studies revealed an average validity of .37 (Hunter & Hunter, 1984). A later meta-analysis that included results from eight studies of salespeople’s performance that used supervisory ratings as the criterion found a mean validity coefficient (corrected for criterion unreliability) of .33 (Vinchur, Schippmann, Switzer, & Roth, 1998).
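The "corrected for criterion unreliability" coefficient reported by Vinchur et al. (1998) reflects the standard correction for attenuation on the criterion side: the observed validity is divided by the square root of the criterion reliability. The observed validity and reliability values below are hypothetical, chosen only to illustrate the formula.

```python
import math

def correct_for_criterion_unreliability(r_observed, r_yy):
    """Classic attenuation correction (criterion side only):
    rho = r_xy / sqrt(r_yy)."""
    return r_observed / math.sqrt(r_yy)

# E.g., an observed validity of .28 with a supervisory-rating
# reliability of .70 (both values hypothetical):
print(round(correct_for_criterion_unreliability(0.28, 0.70), 2))  # prints 0.33
```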

As a specific illustration of the predictive power of these types of data, consider a study that used a concurrent validity design including more than 300 employees in a clerical job. A rationally selected, empirically keyed, and cross-validated biodata inventory accounted for incremental variance in the criteria over that accounted for by measures of personality and general cognitive abilities (Mount, Witt, & Barrick, 2000). Specifically, biodata accounted for about 6% of incremental variance for quantity and quality of work, about 7% for interpersonal relationships, and about 9% for retention. As a result, we now have empirical support for the following statement by Owens (1976) from more than four decades ago:
Personal history data also broaden our understanding of what does and does not contribute to effective job performance. An examination of discriminating item responses can tell a great deal about what kinds of employees remain on a job and what kinds do not, what kinds sell much insurance and what kinds sell little, or what kinds are promoted slowly and what kinds are promoted rapidly. Insights obtained in this fashion may serve anyone from the initial interviewer to the manager who formulates employment policy. (p. 612)
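The incremental-variance figures reported by Mount et al. (2000) come from comparing R² across nested regression models: predictors already in the model (cognitive ability, personality) versus the same predictors plus biodata. A minimal sketch with simulated data follows; the variable structure and effect sizes are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300  # roughly the study's sample size; all data simulated
cognitive = rng.normal(size=n)
personality = rng.normal(size=n)
biodata = 0.5*cognitive + rng.normal(size=n)  # biodata overlaps with ability
performance = 0.4*cognitive + 0.3*personality + 0.3*biodata + rng.normal(size=n)

def r2(predictors, y):
    """R-squared from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r2([cognitive, personality], performance)
full = r2([cognitive, personality, biodata], performance)
print(round(full - base, 3))  # incremental variance explained by biodata
```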

A caution is in order, however. Commonly, biodata keys are developed on samples of job incumbents, and it is assumed that the results generalize to applicants. However, a large-scale field study that used more than 2,200 incumbents and 2,700 applicants found that 20% or fewer of the items that were valid in the incumbent sample were also valid in the applicant sample. Clearly motivation and job experience differ in the two samples. The implication: Match incumbent and applicant samples as closely as possible, and do not assume that predictive and concurrent validities are similar for the derivation and validation of BIB scoring keys (Stokes, Hogan, & Snell, 1993).
Bias and Adverse Impact

Since the passage of Title VII of the 1964 Civil Rights Act, personal history items have come under intense legal scrutiny. While not unfairly discriminatory per se, such items legitimately may be included in the selection process only if it can be shown that (a) they are job related and (b) they do not unfairly discriminate against either minority or nonminority subgroups.

In one study, Cascio (1976b) reported cross-validated validity coefficients of .58 (minorities) and .56 (nonminorities) for female clerical employees against a tenure criterion. When separate expectancy charts were constructed for the two groups, no significant differences in WAB scores for minorities and nonminorities on either predictor or criterion measures were found. Hence, the same scoring key could be used for both groups.

Results from several studies have concluded that biodata inventories are relatively free of adverse impact, particularly when items do not reflect cognitive abilities (Breaugh, 2009). However, a meta-analysis by Bobko and Roth (2013) emphasized that most results are based on concurrent validity designs using incumbent samples, which likely decrease observed ethnicity-based differences. They estimated the Black–White mean standardized difference at d = .31, based on biodata that included a large number of KSAs.

Unfortunately, other than the degree of cognitive abilities saturation, when differences exist, we often do not know why. This reinforces the idea of using a rational (as opposed to an entirely empirical) approach to developing biodata inventories, because it has the greatest potential for allowing us to understand the underlying constructs, how they relate to criteria of interest, and how to minimize between-group score differences. As noted by Stokes and Searcy (1999):

With increasing evidence that one does not necessarily sacrifice validity to use more rational procedures in development and scoring biodata forms, and with concerns for legal issues on the rise, the push for rational methods of developing and scoring biodata forms is likely to become more pronounced. (p. 84)

What Do Biodata Mean?

Criterion-related validity is not the only consideration in establishing job relatedness. Items that bear no rational relationship to the job in question (e.g., “applicant does not wear eyeglasses” as a predictor of theft) are unlikely to be acceptable to courts or regulatory agencies, especially if total scores produce adverse impact on a protected group. Nevertheless, external or empirical keying is the most popular scoring procedure and consists of focusing on the prediction of an external criterion using keying procedures at either the item or the item-option level (Stokes & Searcy, 1999). As defined by Mael (1991), “[T]he core attribute of biodata items is that the items pertain to historical events that may have shaped the person’s behavior and identity” (p. 763). Accordingly, as shown in Table 12.1, items measure behavioral intentions, self-descriptions of personality traits, and personal interests, among other constructs. Note, however, that biodata inventories resulting from a purely empirical approach do not help us understand what constructs are measured.

More prudent and reasonable is the rational approach, including job analysis information to deduce hypotheses concerning success on the job under study and to seek from existing, previously researched sources either items or factors that address these hypotheses (Stokes & Cooper, 2001). Essentially, we are asking the following questions: “What do biodata mean?” “Why do past behaviors and performance or life events predict non-identical future behaviors and performance?” (Breaugh, 2009; Dean & Russell, 2005). Thus, in a study of recruiters’ interpretations of biodata items from résumés and application forms, Brown and Campion (1994) found that recruiters deduced language and math abilities from education-related items, physical ability from sports-related items, and leadership and interpersonal attributes from items that reflected previous experience in positions of authority and participation in activities of a social nature. Nearly all items were thought to tell something about a candidate’s motivation. The next step is to identify hypotheses about the relationship of such abilities or attributes to success on the job in question. This rational approach has the advantage of enhancing both the utility of selection procedures and our understanding of how and why they work (cf. Mael & Ashforth, 1995). Moreover, it is probably the only legally defensible approach for the use of personal history data in employment selection.

The rational approach to developing biodata inventories has proven fruitful beyond employment testing contexts. For example, Douthitt, Eby, and Simon (1999) used this approach to develop a biodata inventory to assess people’s degree of receptiveness to dissimilar others (i.e., general openness to dissimilar others). As an illustration, for the item “How extensively have you traveled?” the rationale is that travel provides for direct exposure to dissimilar others and those who have traveled to more distant areas have been exposed to more differences than those who have not. Other items include “How racially (ethnically) integrated was your high school?” and “As a child, how often did your parent(s) (guardian(s)) encourage you to explore new situations or discover new experiences for yourself?” Results of a study including undergraduate students indicated that the rational approach paid off because there was strong preliminary evidence in support of the scale’s reliability and validity. However, even if the rational approach is used, the validity of biodata items can be affected by the life stage in which the item is anchored (Dean & Russell, 2005). In other words, framing an item around a specific, hypothesized developmental time (i.e., childhood versus past few years) is likely to help applicants provide more accurate responses by giving them a specific context to which to relate their response.
Recommendations and Reference Checks

Another source of personal history data is information provided by others in the form of recommendations and reference checks. Many prospective users ask a practical question: “Are recommendations and reference checks worth the amount of time and money it costs to process and consider them?” In general, four kinds of information are obtainable: (1) employment and educational history (including confirmation of degree and class standing or grade point average); (2) evaluation of the applicant’s character, personality, and interpersonal competence; (3) evaluation of the applicant’s job performance ability; and (4) willingness to rehire.

For a recommendation to make a meaningful contribution to the screening and selection process, however, certain preconditions must be satisfied. The recommender must have had an adequate opportunity to observe the applicant in job-relevant situations, he or she must be competent to make such evaluations, he or she must be willing to be open and candid, and the evaluations must be expressed so that the potential employer can interpret them in the manner intended (McCormick & Ilgen, 1985). Although the value of recommendations can be impaired by deficiencies in any one or more of the four preconditions, unwillingness to be candid is probably the most serious. However, to the extent that the truth of any unfavorable information cannot be demonstrated and it harms the reputation of the individual in question, providers of references may be guilty of defamation in their written (libel) or oral (slander) communications (Ryan & Lasek, 1991).

Written recommendations are considered by some to be of little value. For example, consider the opinions based on a survey of about 600 HR professionals with titles such as recruiting manager, employment lawyer, personnel consultant, and human resources specialist (Nicklin & Roch, 2009). About 80% of respondents agreed with the statement that “letter inflation is a problem that will never be entirely alleviated.” To a large extent, this opinion is justified, since the available evidence indicates that the average validity of recommendations is .14 (Reilly & Chao, 1982). A meta-analysis focused exclusively on academic performance found similar results: the average observed correlation with GPA in medical school was .13 (N = 916) and the correlation with clinical and internship performance was .12 (N = 1,120). The average correlation with GPA in college seems higher, r = .28 (N = 5,155) (Kuncel, Kochevar, & Ones, 2014). But, meta-regression analysis (Gonzalez-Mulé & Aguinis, in press) showed that letters of recommendation contributed only .003 additional proportion of variance to the prediction of grade point average in graduate school and only .011 to the prediction of faculty ratings of performance above and beyond undergraduate GPA and verbal and quantitative GRE exam scores. Results were slightly more encouraging regarding the proportion of additional variance explained in the prediction of degree attainment: .024.

One of the biggest problems, and possibly the main reason for their overall lack of value-added predictive power, is that such recommendations rarely include unfavorable information and, therefore, do not discriminate among candidates. In addition, the affective disposition of letter writers has an impact on letter length, which, in turn, has an impact on the favorability of the letter (Judge & Higgins, 1998). In many cases, therefore, the letter may be providing more information about the person who …
