An Alternative Presentation of Incremental Validity: Discrepant SAT and HSGPA Performance

Educational and Psychological Measurement 71(4) 638-662
© The Author(s) 2011
DOI: 10.1177/0013164410383563
http://epm.sagepub.com

Krista D. Mattern1, Emily J. Shaw1, and Jennifer L. Kobrin1

Abstract

This study examined discrepant high school grade point average (HSGPA) and SAT performance as measured by the difference between a student's standardized SAT composite score and standardized HSGPA. The SAT–HSGPA discrepancy measure was used to examine whether certain students are more likely to exhibit discrepant performance and in what direction. Additionally, the relationship between the SAT–HSGPA discrepancy measure and other academic indicators was examined. Finally, the relationship between the SAT–HSGPA discrepancy measure and the error term of three admission models (HSGPA only, SAT scores only, and HSGPA and SAT scores) was examined. Results indicated that females, minority students, students from low socioeconomic status backgrounds, and nonnative English speakers were more likely to have higher HSGPAs relative to their SAT scores. Furthermore, using only HSGPA for admission overpredicted college performance for students whose HSGPAs were high relative to their SAT scores and underpredicted college performance for students whose SAT scores were high relative to their HSGPAs. The results underscore the utility of using both HSGPA and test scores in admission decisions.

Keywords

SAT, HSGPA, college admissions, differential prediction, test-optional admission policies

The College Board, Newtown, PA, USA

Corresponding Author: Krista Mattern, The College Board, 661 Penn Street, Suite B, Newtown, PA 18940, USA Email: [email protected]

College admission decisions are based on multiple pieces of information. Students are typically asked to submit high school transcript information, letters of recommendation, test scores, documentation of extracurricular activities, an essay, and other items for consideration (Rigol, 2004). Admission professionals require all these components, not to make the process cumbersome for students but to be armed with the most comprehensive information to make the best decisions as to whether or not students will succeed and/or fit in at a particular institution. Academic criteria are usually given the largest weight of all student characteristics. A recent survey by the National Association for College Admission Counseling (NACAC) showed that, of the various factors considered in admission, the top four by far were (a) grades in college preparatory courses, (b) strength of the high school curriculum, (c) admission test scores, and (d) grades in all high school courses (NACAC, 2008). This is particularly interesting in light of the current debate and nuances associated with the use of admission test scores in admission decisions. In recent years, a number of institutions—primarily small, selective, private institutions—have announced that they will no longer consider test scores in admission decisions (Zwick, 2007), and many other institutions are likely debating implementing a test-optional admission policy. Included in the debate over such test-optional undergraduate admission policies are discussions of test bias and fairness, college access and diversity, test preparation, testing costs, high school grade inflation, variability in high school quality and grading standards, and the predictive validity of admission tests.

Benefits of a Test-Optional Policy

Fairness and Test Bias

Institutions that have implemented a test-optional admission policy typically cite several reasons for this decision. One common reason cited for eliminating test scores from the admission process is the belief that these tests are biased against underrepresented groups. The commonly held belief that standardized tests are biased against underrepresented groups, particularly ethnic/racial minorities and students from lower socioeconomic families, persists despite the fact that the Commission on the Use of Standardized Tests in Admission recently concluded, "A substantial body of literature indicates that test bias has been largely mitigated in today's admission tests due to extensive research and development of question items on both the SAT and ACT" (NACAC, 2008, p. 10). Such beliefs are likely rooted in the fact that a few culturally loaded items appeared on the SAT decades ago that provided an unfair advantage to subgroups with more familiarity with the topic than the general population, including the often criticized regatta analogy item.1 However, as the NACAC report alludes to, all SAT items are pretested for differential item functioning (DIF). Any items exhibiting moderate DIF are excluded from operational forms to ensure measurement equivalence, and the few items discovered to exhibit DIF after a full administration are excluded from scoring and/or equating, where appropriate.

Despite rigorous item pretesting, many still believe that standardized tests are biased because performance on the test varies systematically by ethnic/racial and socioeconomic subgroups, resulting in adverse impact for certain groups during the admission process. As stated in the Standards for Educational and Psychological Testing,
the existence of subgroup differences does not, in itself, indicate that a test is biased (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999). Furthermore, it has been shown that essentially all cognitive measures, including state and national assessments as well as high school grades, result in subgroup differences by race/ethnicity or socioeconomic status (SES), suggesting that real subgroup differences exist and are not the result of SAT bias (Kobrin, Sathy, & Shaw, 2006; Sackett, Kuncel, Arneson, Cooper, & Waters, 2009; Zwick, 2004).

In response to these findings, some researchers have suggested that subgroup differences on the SAT and other cognitive tests may not be a function, or solely a function, of true ability differences but may instead reflect other psychological factors such as stereotype threat, which would constitute test bias. The effect of stereotype threat has been consistently documented in laboratory studies (e.g., Aronson, Lustina, Good, & Keough, 1999; Quinn & Spencer, 2001; Steele & Aronson, 1995), though its generalizability to operational settings is less clear (Cullen, Hardison, & Sackett, 2004; Danaher & Crandall, 2008; Stricker & Ward, 2004, 2008). Although the debate continues over whether stereotype threat influences minority group performance in operational settings, research has consistently shown that the SAT overpredicts minority performance in college (i.e., Black students earn lower grades in college than their SAT scores would predict), which is the opposite of what stereotype threat theory would predict (Cullen et al., 2004; Mattern, Patterson, Shaw, Kobrin, & Barbuti, 2008; Young, 2001). Regardless of whether stereotype threat is a driving factor in the observed subgroup differences, the mere existence of subgroup differences on a cognitive test is a serious educational problem, and more research is needed to understand and eliminate these gaps. Nevertheless, the use of SAT scores in the admission process has come under much scrutiny and criticism, which has led to another reason why institutions have adopted a test-optional policy: to increase the diversity of their student body.

Access and Diversity

Many recognize that standardized tests are not biased yet still support eliminating test scores from the admission process with the goal of increasing the diversity of their student body (Espenshade & Chung, 2009). In fact, nearly every institution that has instituted a test-optional policy has reported that the policy change was accompanied by a substantial increase in the number of applicants and, in particular, in the number of minority applicants (Epstein, 2009; Syverson, 2007). On the other hand, there is not much empirical evidence on the effect that a test-optional policy change can have on the demographic makeup of the admitted class. To help answer that question, Espenshade and Chung (2009) simulated the diversity implications of a test-optional policy. Under the assumptions that a test-optional policy would (a) increase by 30% the percentage of applicants with an SAT score of less than 1,200 or an ACT score of less than 25 and (b) result in applicants with below-average test scores not submitting their scores, the percentage of Black students admitted increased by 1.6% at private institutions and by 1.0% at public institutions. For Hispanic students, the increase was 1.3% at public institutions and 0.0% at private institutions.
Similar increases were found for students from working- and lower-class families. The cost of adopting a test-optional policy in terms of the academic performance (e.g., first-year grade point average [FYGPA]) of the admitted class was not examined, though the results suggest that the diversity benefits may be coupled with weaker academic credentials in the admitted class, which would likely have implications for student retention to the second year. Additionally, these findings are based on which applicants would be admitted under a test-optional policy; more research is needed on the diversity impact of test-optional policies on the enrolled class, which is a more complex phenomenon shaped both by the institution's decision-making process (i.e., whether the student was accepted or rejected) and by the applicant's behavior (i.e., whether the student decided to enroll at that institution).

Test Preparation

Another concern surrounding standardized admission testing is the effect of coaching on test performance and, ultimately, college access. Because many professional test preparation companies charge hundreds of dollars for their services, it is believed that the privileged unduly benefit from coaching, which presents another access issue for students from low-SES families. There have been numerous studies examining the effect of test preparation, and the results across these studies are remarkably consistent. Powers and Rock (1999), using a nationally representative sample of students, found an average coaching effect of 6 to 12 points on the SAT Verbal section and 13 to 18 points on the SAT Math section. Briggs (2001) found similar gains and concluded that the increases in test performance due to test preparation are nowhere near those claimed by test preparation companies.

Despite the minimal performance gains from test preparation (Briggs, 2001; Powers & Rock, 1999), a recent national survey found that one third of institutions believed that a student's likelihood of admittance would greatly increase with a small increase in test score (10-20 points), holding all else constant, suggesting that test preparation may give students an advantage in the admission process because admission staff are not properly interpreting the test scores (NACAC, 2009). To avoid this misuse of test scores, Camara's (2009) chapter on college admissions testing stresses that a student's SAT score should be considered along with its confidence interval to avoid treating the score as more precise and accurate than is warranted. Additionally, many institutions have adopted a holistic file review process that alleviates concerns associated with relying solely on a single test score.

Costs of Testing

In addition to the costs associated with test preparation, there are also costs associated with registering for the exam and sending scores to prospective colleges and universities. These costs have been raised as deterrents to higher education for students from low-SES families. Although the College Board does provide fee waivers to high school students who cannot afford the test fees, testing costs are a concern that some institutions may consider when deciding their admission policies.

Costs of a Test-Optional Policy

On the other side of the debate, critics of test-optional policies have raised concerns about high school grade inflation, differences in the rigor of coursework taken by students, differences in high school quality, and loss of predictive validity. Additionally, some have questioned the true motives of institutions that have recently gone test-optional (e.g., Diver, 2006; Epstein, 2009).

Grade Inflation

The widespread prevalence of grade inflation, which can reduce the usefulness of high school grade point average (HSGPA) in discriminating among college applicants, has been well documented (Camara, Kimmel, Scheuneman, & Sawtell, 2003; Woodruff & Ziomek, 2004; Ziomek & Svec, 1995). In a recent study of college freshmen from 32 institutions across the United States, Shaw and Mattern (2009) found that 63% of students had HSGPAs at or above an A minus, with a mean of 3.58. Camara et al. (2003) documented that students' HSGPAs have steadily increased over the past 20 years, with no corresponding increase in SAT Verbal scores and only a slight increase in SAT Math scores. Numerous reasons have been offered to explain this grade inflation, including pressure on teachers by students and parents to give high grades for average work and internal pressure on teachers who fear that giving students bad grades may harm their self-images as learners (e.g., Brookhart, 1998).

Regardless of the reasons, grade inflation presents a serious problem for admission staff when HSGPA is considered as a predictor of college success: when most students' averages are in the A range, there is a real loss of information at the top of the HSGPA scale, making it difficult to discriminate among top-performing students. Perhaps this is the primary reason that the NACAC (2008) annual survey on the state of college admission in the United States found that admission test scores are consistently rated as more important to admission decisions than HSGPA. Standardized admission tests often serve as a yardstick in the interpretation of HSGPA (Rigol, 2003), providing greater utility and meaning to high school grades. One dean of admission and financial aid described seeing upward of 30 valedictorians from one high school, noting the reluctance of high schools to create distinctions between students, while another dean of admission at an Ivy League university stated, "It's [admission tests] the only thing we have to evaluate students that will help us tell how they compare to each other" (Pope, 2006).

Differences in Rigor of Coursework and High School Quality

High schools can vary greatly in the academic performance of their students, the standards held by their teachers, the percentage of students going on to college, teacher qualifications, class size, and the availability of rigorous courses (Tam & Sukhatme, 2004). Therefore, an HSGPA of A at one high school can translate into very different performance from an A at another high school, diminishing the validity and fairness of HSGPA as a predictor of college performance (Willingham, 2005). Grades in college preparatory
courses and strength of curricula are two of the most highly weighted factors in college admission decisions (NACAC, 2008). When comparing the rigor of one student's curriculum at High School X with the rigor of another student's curriculum at High School Y, it is important to know the course offerings available at each school: a student can only take advantage of rigorous courses if they are offered in the first place. Admission staff often create a profile of high school quality, which includes information on the rigor of courses offered, average SAT or ACT scores, or even the known college performance of students from the high school at that university (Rigol, 2003). These profiles are used as context when considering applicants from particular high schools. What is clear from examining the role of high school grades and rigor of coursework in admission is the great deal of manipulation (e.g., recalculation, comparative analysis) and background information (e.g., high school profile, average test scores at the high school) required to make the information meaningful and useful.

Questionable Motives: Artificially Increasing a College's Ranking

While many institutions that have implemented test-optional policies claim that the new policy will increase diversity at their institution or that they believe the SAT is not a useful predictor of college success, others speculate that the decision to go test-optional is actually linked to gaining publicity and gaming the U.S. News & World Report ranking system (Diver, 2006; Epstein, 2009). Such decisions by colleges and universities subsequently receive publicity in major newspapers (e.g., Lewin, 2006, 2008). Additionally, under a test-optional policy, primarily the students with high admission test scores will submit their scores, thereby artificially inflating the average SAT scores at the institution, which positively affects the institution's rank in U.S. News & World Report. The practice of computing an institution's average SAT score from only a selective subsample of the enrolled class is problematic beyond the inflated numbers themselves: providing inaccurate information misleads prospective students about the quality and rigor of an institution and may lead students to apply to and attend schools that are not a good academic fit for them. Questions arise as to whether such practices set some students up to fail or provide false expectations.

Loss of Predictive Validity

There is evidence that eliminating standardized test scores from the admission process would discard information that is useful for making informed decisions about students. Tests such as the SAT or ACT are not only useful for the purpose of standardization but are also useful for measuring cognitive skills that are linked to educational outcomes. Willingham (2005) noted that combining test scores with high school grades provides the most useful information about a student. This is because high school grades yield information on students' scholastic engagement, or "conative skills," related to self-regulation, discipline, or habits of inquiry, in addition to providing some information on cognitive skills (Willingham, 2005, p. 133). Tests generally supplement the HSGPA measure with more in-depth information on students' cognitive skills. Furthermore, both
HSGPA and admission tests have been shown to have strong correlations with FYGPA in college. Kobrin, Patterson, Shaw, Mattern, and Barbuti (2008) showed that the strongest combination of predictors of FYGPA was HSGPA and SAT scores, with a corrected correlation of 0.62.2 The incremental validity of the SAT over HSGPA was 0.08, indicating that there is additional information, unique to the SAT and not captured by HSGPA, that can aid in the prediction of FYGPA. Also, research on the differential prediction of FYGPA by relevant subgroups has found the SAT to be as or more predictive than HSGPA and to result in less prediction error than HSGPA for all racial/ethnic minority groups (Mattern et al., 2008; Young, 2001). Taken together, these results provide evidence that the SAT contributes unique information to the prediction of college success, both overall and for relevant subgroups.

Those who regard the SAT's 0.08 increment in the prediction of FYGPA as trivial, and therefore as a benefit rather than a cost of going test-optional, may be misinterpreting certain statistics (i.e., the change in r²) used to present incremental validity results. To combat this misinterpretation, Bridgeman, Pollack, and Burton (2004) examined the percentage of students who were successful (defined as achieving at least a 3.5 FYGPA) at five different SAT score ranges, holding constant the academic intensity of high school courses taken by the student, HSGPA, and the selectivity of the college attended. Among students with high HSGPAs (>3.70) and average course load rigor, approximately 14% of those with SAT scores between 800 and 1,000 were successful, compared with 77% of those with SAT scores greater than 1,400. The difference between a 14% and a 77% success rate for students with similar high school records but different SAT scores is likely to be perceived as far less trivial than the additional variance explained. In a follow-up study, Bridgeman, Pollack, and Burton (2008) found similar results for grades in specific content areas (e.g., English, mathematics), thus controlling for differences in college course-taking patterns. This research underscores the fact that without knowing a student's test score, an institution may not know which students need additional resources. As such, students with high HSGPAs but low test scores may inadvertently put themselves at an academic disadvantage if they decide not to submit their test scores.

Wainer (2009) provided clear evidence of the consequences of test-optional admission policies on the accuracy of admission decisions by examining the performance of students attending a test-optional college who had chosen not to submit SAT scores with their application. Wainer's data showed that students who did not send their SAT scores to this institution performed about a standard deviation lower in their first-year courses than those who did submit their SAT scores (2.90 vs. 3.10). Furthermore, Wainer was able to obtain the SAT scores of those students who chose not to submit them and found that these students scored about 120 points lower than those choosing to submit scores. Finally, the SAT scores of students who did not submit them would have accurately predicted their lower performance in college: the correlation between SAT scores and college grades was 12% higher for those who did not submit their SAT scores than for those who did. Because these findings are based on a single college, the results should be interpreted with caution.
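Returning to the incremental validity figures above, the distinction between the correlation and variance metrics can be made concrete with a minimal illustration. The value of 0.54 for HSGPA alone is not stated in this section; it is inferred here from the reported 0.62 and 0.08 figures:

```latex
% Increment in the corrected multiple correlation when SAT scores are added to HSGPA.
% R_HSGPA = 0.54 is an inferred value, implied by the reported 0.62 and 0.08.
\Delta R   = R_{\mathrm{HSGPA+SAT}} - R_{\mathrm{HSGPA}} = 0.62 - 0.54 = 0.08
\Delta R^{2} = 0.62^{2} - 0.54^{2} \approx 0.384 - 0.292 = 0.092
```

Expressed as a change in variance explained, the same increment corresponds to roughly 9 additional percentage points of FYGPA variance; whichever metric is used, the success-rate framing of Bridgeman et al. (2004) above shows why such figures can understate the practical value of the scores.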


Students with discrepant SAT–HSGPA performance. As another novel way of presenting the unique contribution of SAT scores, Kobrin, Camara, and Milewski (2002) examined students with discrepant SAT–HSGPA performance and how HSGPA and SAT scores relate to FYGPA for these students. Based on a sample of nearly 50,000 students who entered college in 1994 or 1995, students were divided into three groups based on the difference between their standardized HSGPA and standardized SAT scores. Students with consistent HSGPA and SAT scores were placed in the nondiscrepant student group (NDS; N = 32,920); students whose standardized HSGPA was more than one standard deviation above their standardized SAT score were placed in the high school discrepant group (HSD; N = 7,837); and students whose standardized SAT score was more than one standard deviation above their standardized HSGPA were placed in the SAT discrepant group (SATD; N = 7,653). Similar to previous research (Baydar, 1990), Kobrin et al. (2002) found that all chi-square values associated with demographic differences between the three groups were significant, with females and African American, Asian American, and Hispanic students more heavily represented in the HSD group than in the other two groups. Compared with the other two groups, the HSD group also had a higher percentage of students who were not U.S. citizens and who spoke a language other than English.

With regard to performance differences on academic indicators, the HSD group had the lowest SAT scores but the highest mean HSGPA of the three groups (Kobrin et al., 2002). The mean FYGPA was similar across the three groups, suggesting that students with a much higher HSGPA combined with lower SAT scores will not, on average, perform better in college than students with high SAT scores and a lower HSGPA. Analyses of the validity of HSGPA and SAT scores for predicting FYGPA among the three groups showed that HSGPA accounted for a smaller amount of variance in FYGPA for the HSD group than for the other groups. Furthermore, SAT scores were a stronger correlate of FYGPA than was HSGPA for the HSD group. These findings reiterate the notion that in college admission, context is crucial and more information on a student is better than less, particularly when it is related to determining whether or not a student will be successful at a particular institution.
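The grouping logic described above lends itself to a brief sketch. The following Python snippet is illustrative only, not the authors' code; it assumes a data set with one row per student, hypothetical column names, and standardization within the analysis sample:

```python
import pandas as pd

def classify_discrepancy(df, sat_col="sat_composite", gpa_col="hsgpa"):
    """Standardize SAT composite and HSGPA within the sample, compute the
    SAT-HSGPA discrepancy, and assign the three groups described above.
    Column names are hypothetical placeholders."""
    z_sat = (df[sat_col] - df[sat_col].mean()) / df[sat_col].std()
    z_gpa = (df[gpa_col] - df[gpa_col].mean()) / df[gpa_col].std()
    discrepancy = z_sat - z_gpa  # positive values: SAT relatively higher

    group = pd.Series("NDS", index=df.index)  # nondiscrepant by default
    group[discrepancy > 1] = "SATD"           # SAT > 1 SD above HSGPA
    group[discrepancy < -1] = "HSD"           # HSGPA > 1 SD above SAT
    return discrepancy, group
```

With this convention, a positive discrepancy value indicates that a student's SAT composite is high relative to his or her HSGPA, which matches the direction of the discrepancy measure used in the analyses that follow.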

Current Study

Extending previous research (Baydar, 1990; Kobrin et al., 2002), the current study examines discrepant HSGPA and SAT performance among students taking the revised SAT3 introduced in March 2005. The authors investigated whether certain students are disproportionately more likely to exhibit discrepant performance and in what direction. Unlike previous research on discrepant SAT and HSGPA performance, this study examined the differential prediction of HSGPA and SAT scores and its relationship with the SAT–HSGPA discrepancy measure. Specifically, the residual term from each of several admission models was correlated with students' SAT–HSGPA discrepancy values to determine whether there is a decrement in prediction accuracy associated with instituting a test-optional policy for students who perform more discrepantly. That is, if SAT scores were removed from consideration, would the performance of students who
have higher HSGPAs (in relation to their SAT scores) be overpredicted, and would the performance of students with higher SAT scores (in relation to their HSGPA) be underpredicted? This study also extends the literature by offering course rigor, both at the high school and the college level, as one reason why students may perform discrepantly.
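As a sketch of how the residual analysis described above could be carried out, the following is a minimal illustration rather than the authors' code. It assumes NumPy arrays of FYGPA, HSGPA, SAT composite, and the discrepancy measure (names are illustrative) and uses ordinary least squares from statsmodels:

```python
import numpy as np
import statsmodels.api as sm

def residual_discrepancy_correlations(fygpa, hsgpa, sat, discrepancy):
    """Fit the three admission models (HSGPA only, SAT only, HSGPA and SAT)
    and correlate each model's residuals with the SAT-HSGPA discrepancy.
    Inputs are 1-D arrays of equal length; names are illustrative."""
    predictor_sets = {
        "HSGPA only": np.column_stack([hsgpa]),
        "SAT only": np.column_stack([sat]),
        "HSGPA + SAT": np.column_stack([hsgpa, sat]),
    }
    correlations = {}
    for label, X in predictor_sets.items():
        residuals = sm.OLS(fygpa, sm.add_constant(X)).fit().resid
        # Positive residuals indicate underprediction; negative, overprediction.
        correlations[label] = np.corrcoef(residuals, discrepancy)[0, 1]
    return correlations
```

Under this setup, a positive correlation for the HSGPA-only model would mean that students whose SAT scores are high relative to their HSGPA (positive discrepancy) tend to be underpredicted and students with relatively higher HSGPAs tend to be overpredicted, which is the pattern the study anticipates.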

Method

Sample

As part of a larger research endeavor, colleges and universities across the United States were contacted and asked to provide first-year data on their 2006 entering cohort. The 726 institutions that received at least 200 SAT score reports in 2005 served as the target population. Information on these schools from the College Board's Annual Survey of Colleges, including control (public/private), region of the country, selectivity, and full-time undergraduate enrollment, was used to form stratified target proportions on those characteristics for the institutions to be recruited. To achieve a desired sample size of 75 to 100 institutions, participating institutions were offered a stipend.

The data obtained from each institution included students' first-year coursework and grades, FYGPA, and whether or not they returned for the second year of college. These data were matched to College Board databases that included SAT scores. Self-reported HSGPA and demographic information were obtained from students' responses to the SAT Questionnaire, a survey completed during registration for the SAT that covers topics such as coursework completed in high school, HSGPA, demographic information, and college plans.

The original sample consisted of individual-level data on 196,364 students from 110 colleges and universities across the United States. Students who did not have scores on the revised SAT (including students who took the prior version of the SAT, the ACT, or no standardized test; N = 32,297), an HSGPA (N = 7,984), or a FYGPA (N = 4,477) were excluded from the analyses. Additionally, a small number (< 1%) of cases were identified as having been improperly matched to the College Board records and were removed from the sample. The final sample included 150,377 students from 110 institutions. The reduced sample did not differ substantially from the total sample in terms of FYGPA (2.97 vs. 2.96, respectively) or retention (87.7% vs. 85.2%). The distribution of participating institutions by region, selectivity, size, and control is provided in Table 1. The sample is diverse with regard to these characteristics and is largely representative of the target population.
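The exclusion steps above amount to a straightforward filter on the merged file. The following is a hypothetical sketch, not the study's actual processing code, and the column names are placeholders:

```python
import pandas as pd

def build_analysis_sample(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only records with a revised-SAT composite, a self-reported HSGPA,
    a FYGPA, and a clean match to College Board records (hypothetical columns)."""
    keep = (
        df["sat_composite"].notna()   # revised-SAT takers only
        & df["hsgpa"].notna()         # self-reported HSGPA present
        & df["fygpa"].notna()         # first-year GPA reported by institution
        & df["properly_matched"]      # drop improperly matched records
    )
    return df.loc[keep].copy()
```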

Measures

SAT scores. Official SAT scores obtained from College Board records were used in the analyses; the most recent score was used for each student. The SAT is composed of three sections, Critical Reading (SAT-CR), Math (SAT-M), and Writing (SAT-W), and the score scale for each section ranges from 200 to 800.

Table 1. Percentage of Institutions by Key Variables: Comparison of Population to Sample

Variable                         Population (%)    Sample (%)
Region of the United States
  Midwest                              16              15
  Mid-Atlantic                         18              24
  New England                          13              22
  South                                25              11
  Southwest                            10              11
  West                                 18              17
Selectivity
  Admits under 50%                     20              24
  Admits 50% to 75%                    44              54
  Admits over 75%                      36              23
Size
  Small                                18              20
  Medium to large                      43              39
  Large                                20              21
  Very large                           19              20
Control
  Public                               57              43
  Private                              43              57

Note. Percentages may not sum to 100 due to rounding. With regard to institution size, small = 750 to 1,999 undergraduates; medium to large = 2,000 to 7,499 undergraduates; large = 7,500 to 14,999 undergraduates; and very large = 15,000 or more undergraduates.

To identify discrepant performance, the SAT composite score was used, which is the combination of all three sections, with a score scale ranging from 660 to 2400 (M = 1693, SD = 255).

HSGPA. Students reported their HSGPA on the SAT Questionnaire. HSGPA is on a 12-point scale with the following response options: A+ (97-100), A (93-96), A- (90-92), B+ (87-89), B (83-86), B- (80-82), C+ (77-79), C (73-76), C- (70-72), D+ (67-69), D (65-66), and E or F (