Education Working Paper Archive   

Keyword: Achievement

Addressing Gaps in Research on First-Year Success: Gauging the Influence of High School Environment, Part-Time Instructors, and Diversity on Preparation and Persistence of First-Year University Students

Serge Herzog
University of Nevada, Reno

November 4, 2008


Effects of the high school environment, part-time university instructors, and classroom ethnic/racial diversity on first-year student preparation and enrollment persistence are estimated via hierarchical linear and logistic regression. After controlling for student socio-demographic characteristics and motivation to enter college, high school attributes bear little relevance to level of academic preparation at the start of the first year of study. In contrast, academic performance of low-income students at the end of the first year is negatively associated with several features of the high school environment. There is little evidence that student persistence is negatively affected by exposure to part-time instructors during the first year in college. Ethnic/racial diversity in the classroom appears to slightly enhance persistence of non-Asian minority students, but shows no positive relationship with cognitive growth. Unmet financial need marginally increases the dropout risk of students taking greater course loads net of socio-demographic background, academic preparation, first-year grades, on-campus residency, and type of aid received. Results are based on institutional matriculation records of 2,800 first-year students at a moderately selective public university and official high school accountability reports collected by the state’s department of education.


Although research on factors that promote or hinder the academic success of college students abounds, much of it focusing on learning gains and institutional retention of first-year students (St. John, 2006; Reason, Terenzini, and Domingo, 2006; Seidman, 2005; Kuh et al., 2005; Pascarella and Terenzini, 2005; Braxton, 2000; Astin, 1993; Tinto, 1987), there is little empirical evidence on how characteristics of high schools influence the preparation and success of students that go on to college. Similarly, there is a paucity of insight on how the growing use of part-time (adjunct) university instructors affects the learning and academic growth of students (Pascarella and Terenzini, 2005, pp. 110-119; AAUP, 2008; Jacoby, 2006). A third area of inquiry where findings to date remain inconclusive is how changes in the ethnic/racial composition of students relate to academic success and enrollment persistence. Over the past twenty years a substantial body of research has accumulated that suggests ethnic/racial diversity among college students yields significant educational benefits, including “steeper learning curves,” enhanced cognitive skills, and improved persistence (Brown, 2006, p. 334; Shaw, 2005, pp. 3-6; Milem, Chang, and Antonio, 2005, pp. 6, 13, 18; ACE and AAUP, 2000, pp. 4, 8; Chang, 1999). However, reflecting on three decades of studies on the connection between diversity and student learning, Pascarella and Terenzini (2005, p. 130) point out that “all the findings are based solely on student self-reports.” Moreover, the accumulated research fails to specifically measure educational benefits associated with ethnic/racial diversity in the classroom (Terenzini, Cabrera, Colbeck, Bjorklund, and Parente, 2001).

To improve our understanding of how high school features, part-time instructors, and ethnic/racial diversity of students may influence student success, this study estimates the level of academic preparation at college entry, first-year college grades, and enrollment persistence of students at a moderately selective research university. A review of the literature is followed by a description of the data, the analytical approach, and a discussion of results from the statistical models used to gauge the influence of the variables of interest.

It is axiomatic that knowledge-based economies in the 21st century can scarcely do without a highly educated workforce. It is estimated that 80 percent of the fastest-growing jobs will demand some post-secondary education (The Education Trust, 2005), and 60 percent of all jobs in the United States require advanced skills that necessitate training at the college level (Ramsey, 2008). However, many of today’s university entrants are insufficiently prepared to successfully master academic requirements at the college level. Many require remedial courses or, worse, drop out during the first year of study. The problem of marginally prepared students has been highlighted in recent European studies (Hasler, 2008; Lisbon Council, 2006; Woodhead, 2002) and is particularly acute in the United States, where a slew of reports and surveys have focused on this issue (Soares and Mazzeo, 2008; Hess, 2008; Achieve, Inc., 2008; Bottoms and Young, 2008; Murray, 2008; Biswas, 2007; Walters, 2006).

The American College Testing service (ACT) reported that in 2004 only 22% of high school graduates met university readiness benchmarks in English, math, and science, and only 56% of tested students completed the recommended core curriculum for college entry in 2005 (ACT, 2004; Lewin, 2005). Greene and Winters (2005) estimated that no more than 34% of all high school graduates in 2002 were academically ready to go on to college. Not surprisingly, a staggering 65% of surveyed high school graduates reported having spent a mere five hours or less per week studying during their senior year (Young, 2002), 96% of sampled seniors finished high school without advanced mathematics skills (Bozick, Ingels, and Owings, 2008), and only 18% of surveyed university professors felt that students enter college well prepared (Peter D. Hart Research Associates, 2005). The mounting challenge of ensuring college preparation is reflected also in the changing reading habits of young adults, with the proportion of 17-year olds who read nothing at all for pleasure having doubled between 1992 and 2002 (National Endowment for the Arts, 2007). Academic deficiencies often develop early during formal schooling, as 45% of high school freshmen reported poor preparation in their first year (Bridgeland, Dilulio, and Burke-Morison, 2006).

Although insufficient academic preparation for college has become a salient education policy issue, research on the impact of the high school experience on college academic success is typically limited to admission test scores and average high school grades received (e.g., Ishitani and Snider, 2006; Luo and Jamieson-Drake, 2005; Eno and Sheldon, 1999). The cumulative scholarship on what affects students’ academic well-being imparts few answers to the question of how institutional features of high schools shape the prospect for subsequent success at the college level (Pascarella and Terenzini, 2005; Kirst and Venezia, 2004). Specifically, school conditions that are routinely debated in education reform—such as funding, class size, teacher quality, and learning environment—are absent in studies that examine determinants of academic potential at the college level. Inferring school-level influences from individual-level student data further limits an assessment of the impact of school characteristics. For example, in an attempt to estimate first-year university grades from school-level metrics, Pike and Saupe (2002) employed mean test scores computed only from each school’s students who enrolled at the university from which the sample was drawn, rather than from all tested students at a given feeder school. The difficulty of accessing both student-level and school-level data, combined with the need for mixed-level statistical modeling—a relatively novel approach in higher education research—may also explain the paucity of research in this area.

In contrast, a review of studies on school effectiveness—independent of whether or not students continue on to college—yields a substantial body of findings pertinent to this study. Similarly, results from educational production-function analyses that use some form of input-output econometric models include school-environment variables that are believed to influence student learning and, hence, are used to inform schooling policy (Hanushek, 2003b; Montmarquette and Mahseredjian, 1989).

Being among the most contentious issues in education policy, the impact of resources on student achievement has been studied extensively. Release of the 1966 government-commissioned study on Equality of Educational Opportunity (US Department of Health, Education, and Welfare, 1966) to assess the impact of large-scale federal programs designed to promote quality public education for all students, regardless of socio-economic or racial background, became a catalyst for vigorous analyses on school environmental factors. The Coleman Report, named after its principal investigator, concluded that factors associated with a student’s family environment are far more powerful predictors of academic achievement than school resources. A re-examination of the Coleman Report data largely confirmed its initial finding, but also raised new questions on how to best analyze the link between resources and student achievement (Mosteller and Moynihan, 1972). Data limitations and statistical control are usually key obstacles in establishing solid inferences; for example, unobservable family factors may lead to a spurious connection between resources and student achievement.

While studies vary in their control over education-related factors, meta-analysis shows no strong or consistent impact of resources on student performance (Hanushek, 1998, 1997, 1996a). Though some question the cited meta-analytical studies, suggesting that counting multiple variations of models in the same study distorts the overall finding, these critics base their results on works that draw on district-wide or regional data that do not control for the heterogeneity found between individual schools (Greenwald, Hedges, and Laine, 1996a, 1996b; Hanushek, 1996b). More importantly, when their results are re-examined, meaningful gains in student achievement are associated only with unrealistic increases in instructional funding.[1] Unfortunately, most of the frequently cited studies employ district-level information, rather than individual school-based data (Deke, 2003; Card and Payne, 2002; Jones and Zimmer, 2001; Wenglinsky, 1998, 1997; Hiller, 1996; Kazal-Thresher, 1993). Others draw conclusions based strictly on the statistical significance of one resource-related variable, even though the effect size shows little meaningful impact (e.g., Elliott, 1998).[2]

A recent study of 313 schools appears to corroborate the position that increasing resources may lead to little positive change in student achievement at the high school level (Greene and Winters, 2006). Controlling for the structure of the public school system vis-à-vis non-publicly financed schools in the state of California, Marlow (2000) found that the level of school funding was positively associated with the degree of school monopoly power, but not student achievement; instead, greater market power exercised by public schools correlated with lower achievement in the early years of schooling. Employing panel data from 444 secondary schools in Finland, Häkkinen, Kirjavainen, and Uusitalo (2003) concluded that, after controlling for the initial level of academic mastery, there was no link between instructional funding and student achievement on test scores. Echoing the Coleman Report, the Finnish study highlighted the importance of family background, such as parents’ education, in estimating student achievement.

Further insight into the role of resources can be gleaned from studies on class size. Since most resources are typically allocated for instruction in the form of teacher salaries, small class size translates into greater funding per pupil. Though the majority of studies on class size are based on pre-high school data, the debate between Krueger (2003) and Hanushek (2002) is perhaps most instructive in understanding the cumulative knowledge on the role of class size in student achievement. Responding to Hanushek’s conclusion from a meta-analysis of 59 studies that showed little consistent, positive relationship, Krueger argued that a different weighting of the studies—with greater attention to those considered seminal and more robust in design—confirmed a positive influence of smaller classes (Krueger, 2003). In his conclusion, however, Krueger listed several caveats that cast doubt on the purported benefit of class size reduction: his results were taken from one quasi-experimental study that tied benefits in the early years of schooling to expenditures per student for all years of schooling, and the study relied primarily on data from inner-city schools. More disturbingly, a statewide class-size reduction program in California led to a dramatic increase in inexperienced and uncertified teachers (at an annual cost of $1.5 billion) and no discernible rise in student achievement (Jepsen and Rivkin, 2002).

The risk of spending more money on class size reduction with no commensurate benefit had been identified by Ritterband (1973) decades ago. His finding is echoed in recent international comparisons on class size and student achievement. West and Woessman (2003) examined data from the Third International Mathematics and Science Study (TIMSS) and concluded that class size effects were a function of the quality of the teaching force. Greece and Iceland, both at the low end of TIMSS scores on student achievement, on average conduct classes half the size compared to Korea and Japan, countries that score at the top end of the TIMSS scale. Yet, teachers from Greece and Iceland reported considerably greater constraints on their teaching associated with large classes, suggesting that teachers in the Asian countries manage to generate high student achievement in spite of larger classes.

Evidence from a recent multivariate study that employed matched panel data, whereby student records were tied to individual teachers, showed that teaching quality has a powerful effect on academic achievement (Rivkin, Hanushek, and Kain, 2005). But the study found no significant positive effect associated with graduate-degree credentials of teachers and little correlation with having taught for more than three years. Though based on elementary-level schools, these findings were echoed in reviews of what makes for good teaching in public schools in general, independent of a teacher’s salary (Goldhaber, 2002; Podgursky, 2006). What mattered most in teachers’ professional development was the level of subject matter expertise (Clotfelter, Ladd, and Vigdor, 2006; The Urban Institute, 2005) and the verbal ability to articulate concepts clearly to students (Ehrenberg and Brewer, 1995).

Curriculum structure, academic rigor, and teacher expectations of their students are other sources that show significant influence on scholastic achievement. Schools that aligned their course content with that required for university admission, that abstained from promoting deficient students to the next grade level, that encouraged deeper learning of a narrower curriculum, and that administered high-stakes graduation exams demonstrated greater student success (Dounay, 2006; Greene and Winters, 2006; Bishop, 2004; Raymond and Hanushek, 2003; Nolen, 2002). These attributes are more typical of non-publicly supported schools or public schools that are exempt from many regulations (i.e., charter schools). Enrolling mostly students from disadvantaged backgrounds and operating with more limited resources, these schools nevertheless register greater academic gains in their students, net of other influencing factors, compared to regular public schools (Hoxby and Rockoff, 2005; Education Policy Institute, 2005; Greene, Forster, and Winters, 2003; Hoffer, Greeley, and Coleman, 1985). Thus, aspects of focus, scholarship, discipline, and expectation may significantly define the climate for learning in a school.

Of course, the influence of any school-level factor is circumscribed by variables directly associated with the individual student. Beyond those mentioned above that are typically included in higher education studies, several other factors show significant correlation with the academic achievement of students. Meta-analysis on parental involvement revealed a positive impact, particularly a parent’s high expectation for academic success (Xitao and Chen, 2001). In other studies, the influence of high expectation persisted net of parental education, family income, and other risk factors (Barnard, 2004; Okagaki and French, 1998). Others suggest that insufficient social capital, as expressed in familial networks that undervalue education, lowers some students’ chances to excel academically and move on to higher education (Perna and Titus, 2005; Zhou and Bankston, 1998). The importance of parental support was echoed by teachers, who ranked the role of parents as the second most critical factor in student achievement, behind student motivation (Rose, Sonstelie, and Reinhard, 2006). The latter factor, in turn, may determine how much time students invest in learning outside school.

Expectedly, less time devoted to studying, particularly core subjects such as math, had a deleterious effect on achievement (Aksoy and Link, 2000). Useful contrary indicators of student effort are time spent watching television or pursuing employment, which have a significant negative impact at a certain level of engagement (Reinking and Wu, 1990; Hancox, Milne, and Poulton, 2005; Lillydahl, 1990; D’Amico, 1984).[3] Behavioral difficulties during childhood and adolescence (Hinshaw, 1992) and hard-to-measure home environmental factors also appeared to impact academic achievement (Fryer and Levitt, 2005).

Notwithstanding the above cautionary note by Pascarella and Terenzini (2005) that findings on the effect of ethnic/racial diversity in higher education rest entirely on student self-reported data from survey instruments, the preponderance of evidence from such studies suggests that diversity promotes a richer academic experience and greater cognitive growth in college students. Chang, Denson, Saenz, and Misa (2006) found that student interactions across race correlate with greater self-reported gains in critical thinking and problem-solving skills. Reason, Terenzini, and Domingo (2006) confirmed that exposure to ‘diverse’ individuals and ideas correlated positively with first-year student academic competence. Terenzini et al. (2001) reported that classroom racial diversity has some educational benefits associated with student learning; and Antonio (2001) uncovered a positive link between interracial interactions and student leadership skills. After examining over 50,000 undergraduate records from 124 four-year institutions, Hu and Kuh (2003) noticed a significant positive correlation between cross-racial interaction and perceived educational gains.

More recently, Pike and Kuh (2006) reported that ethnic/racial diversity among students leads to greater informal interaction between students from different ethnic/racial groups, which in turn fosters more diversity in “viewpoints.”

But none of these studies, or those referenced in the introduction, examined specifically the link between ethnic/racial composition in the classroom and cognitive growth and enrollment persistence on the basis of objective indicators that do not depend on the impressionistic reflections of students or faculty.[4] Surveys on the benefits of diversity rely exclusively on students’ responses to attitudinal questions about perceptions of their own analytical and problem-solving skills, ability to engage in critical thinking, and other general academic skills (Shaw, 2005). These concepts of academic ability invoke multiple meanings, based on context, and are scarcely well defined (Gonyea, 2005; Banta, 1991).

Lastly, the influence of part-time instructors on student success has received scant attention to date. The cumulative research is replete with studies on the effect of instructional techniques, teacher engagement of students in and outside the classroom, organizational influences on teaching (Pascarella and Terenzini, 2005), or faculty equity issues and human resources policy (Gappa and Leslie, 1993; Schuetz, 2002). But few studies focus on the effect of full-time versus part-time teaching faculty. A review by Schuster (2003) suggests that part-time faculty members are less accessible to students, offer more limited expertise on the subject they teach, and are socially less integrated on a campus.

Together, these factors may render part-timers less effective compared to regular full-time instructors. But little is known about the relationship between faculty part-time status and student success and persistence. Ehrenberg and Zhang (2004) found a negative correlation between the use of part-time and non-tenure-track faculty and student graduation rates. However, their study is based on institution-level data from many universities and omits individual student records that would capture actual classroom exposure to different faculty types. In a study of 935 community colleges, Jacoby (2006) also noticed a negative link between the share of part-time faculty and graduation rates.

However, this finding is difficult to interpret, as many students enroll in community colleges without the intent to complete a degree program. Using more granular data on first-year students at a single university, Schibik and Harrington (2004) discovered that exposure to mostly part-time faculty in the first semester was associated with a lower rate of student persistence into the second semester, net of academic preparation, gender, and credit hours attempted.

To weigh the influence of high school factors on academic preparation and success after the first year in college, this study selected two first-year student cohorts that entered a moderately selective, medium-size public university in the fall semester of 2004 and 2005. Due to the considerable attention paid to funding, class size, teacher quality, and student failure issues in the research on high school effects, the high school background of each student in the study includes the average expenditure level per pupil at the school attended, the average class size at that school, the percentage of teachers not rated highly qualified, the student dropout rate, and the percentage of students with limited English proficiency. In addition, the size of the high school, the number of safety-threatening incidents per year, the ethnic/racial composition of students, and school location are included as separate variables. All variables measure conditions at the individual high school the student attended, and they are averages of all students at the school, not just those included in this study that continued to enroll at the selected university.

Following Astin’s (1993, pp. 7-31) well recognized input-environment-outcome (I-E-O) model, the study estimates the influence of these school-level input variables both before and after taking into account the “environment” that may affect the individual student—namely demographic background and motivation to enter college. The demographic makeup includes the student’s gender, ethnicity or race, and parent annual income level; the date of having taken the university admission test serves as a proxy measure for level of motivation, with early test takers considered more determined to acquire university education. Outcome variables comprise the student’s score on the academic preparation index—a 100-point scale that is derived from a student’s final grade-point average (GPA) in high school, the university admission test (ACT/SAT), and the number of advanced placement (AP) credits earned—prior to entering the university, and the 4-point scale GPA at the end of the first year as a university student. Adelman (1991) and Shireman (2004) confirm that a measure of curricular rigor (in this case AP credits) as part of multiple indicators in an index gauges the academic potential to succeed in college more accurately than a single criterion. Paralleling Astin’s analytical framework (1993, pp. 7-31), the outcome variables are statistically regressed on the high-school variables and individual student-level variables. However, Astin (1993, pp. 80-81) cautions that environmental control variables may be endogenous to the inputs of primary interest—e.g., income background may determine the school of attendance—and thus they may exert “causal effects” on one another. To disentangle such mediating influences, Astin (1993, p. 103) recommends checking changes in regression coefficients before and after use of student background control variables that may suggest direct versus indirect relationships with the examined outcomes.

The analysis proceeds with an estimation of the influence of part-time instructors on first-year student persistence, namely the likelihood of returning the subsequent fall semester after having attended the institution the previous fall and spring semesters. The effect of exposure to part-time instructors is measured on the basis of the number of courses they taught as a proportion of all courses a student took in the first year, while taking into account the student’s demographic background, academic preparation at college entry, and other important variables identified in the research on freshmen persistence (St. John, 2006; Reason, Terenzini and Domingo, 2006; Seidman, 2005; Kuh et al. 2005; Pascarella and Terenzini, 2005; Herzog, 2005; Braxton, 2000; Astin, 1993; Tinto, 1987). Specifically, the impact of part-time instructors on persistence is gauged before and after statistical controls for first-year grades, credit load, the type of courses taken, the amount and type of financial aid received, residential and employment status on campus, and other variables moderately significant with persistence in preliminary bivariate regression.
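The exposure measure described above, the share of a student's first-year courses taught by part-time instructors, can be derived from enrollment records roughly as follows. This is a minimal sketch, not the study's actual code; the column names and data are hypothetical:

```python
# Sketch (hypothetical data): each row is one course enrollment, flagged by
# the employment status of the instructor who taught it.
import pandas as pd

enrollments = pd.DataFrame({
    "student_id":        [1, 1, 1, 1, 2, 2],
    "instructor_status": ["part_time", "full_time", "full_time", "part_time",
                          "full_time", "full_time"],
})

# Proportion of each student's courses taught by part-time instructors.
exposure = (enrollments["instructor_status"].eq("part_time")
            .groupby(enrollments["student_id"]).mean())
print(exposure)  # student 1 -> 0.5, student 2 -> 0.0
```

The resulting per-student proportion would then enter the persistence model as a continuous covariate alongside the controls listed above.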

Finally, the analysis gauges the influence of ethnic/racial diversity on first-year grades and student persistence. Instead of inferring that influence from student responses on questionnaires, the study uses official matriculation records for each course to measure a student’s actual exposure to ethnic/racial diversity in the classroom—namely, the average proportion of minority students (i.e., Blacks, Hispanics, and Native Americans), Asian students, and non-resident foreign students in classes taken by a student during the first year. Asian American students are separately identified to account for the typically different academic profile and scholastic achievement compared to other non-white students (Adelman 2004a, 2004b), while exposure to foreign students may offer an added value to the learning experience of fellow classmates. Since completion of at least one diversity course is a graduation requirement for all students, the data indicate whether or not a student took such a course during the first year. Diversity courses are designed to expose students to foreign culture and history, or they focus on popular ‘diversity’ themes such as identity politics, race, and gender issues.
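The classroom diversity measure, a student's average share of minority classmates across all first-year courses, could be built from course rosters along these lines. The sketch below uses hypothetical columns and data and excludes the focal student from each course's share, one plausible coding choice among several:

```python
# Sketch (hypothetical data): one row per (course, student), with a 0/1 flag
# for non-Asian minority status.
import pandas as pd

roster = pd.DataFrame({
    "course_id":  ["C1", "C1", "C1", "C2", "C2"],
    "student_id": [1, 2, 3, 1, 3],
    "minority":   [0, 1, 1, 0, 1],
})

# Share of minority classmates in each course, excluding the student herself.
n = roster.groupby("course_id")["minority"].transform("size")
k = roster.groupby("course_id")["minority"].transform("sum")
roster["peer_share"] = (k - roster["minority"]) / (n - 1)

# Average that share over each student's courses.
exposure = roster.groupby("student_id")["peer_share"].mean()
```

Parallel columns for Asian and non-resident foreign classmates would yield the other two exposure variables described above.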

Course grades at the end of the first year and university admission test scores from actuarial sources are the most readily available objective measures to gauge student cognitive growth. But how valid are they in reflecting student academic progress? Course grades in conjunction with standardized test scores at college entry are typically used to gauge cognitive growth and to help isolate the influence of certain college experiences on such gains (Carini, Kuh, and Klein, 2006; Klein et al., 2005; Astin, 1993). To control for the grading level of the instructor, the average grade awarded in all courses taken during the first year by the student is included as a covariate. Also, estimation of the first-year GPA takes into account the highest level math and English courses taken and the number of science courses completed. Inclusion of these variables helps calibrate a student’s first-year GPA on the basis of the academic rigor experienced. Standardized tests have been found to be free of content and prediction bias (Hunter and Schmidt, 2000; Jencks, 1998; Klitgaard, 1985), and test outcomes are only marginally affected by test-preparation courses (Briggs, 2001). Though the predictive validity of standardized tests has room for improvement (Sternberg, 2006), the predictive power of such tests far exceeds that of socioeconomic status, student motivation, academic goals, and self-efficacy, based on a recent meta-analysis of 109 studies (Robbins et al., 2004; see also Colom and Flores-Mendoza, 2007). The two standardized admission tests used by the institution in this study also strongly correlate with general cognitive ability (Koenig, Frey, and Detterman, 2008). Thus, coupled with high school grades and AP credits as measured in the preparation index, performance on admission tests provides a suitable benchmark to assess cognitive growth during the first year.

Student-level data originate with the university’s student information system, while high school-level data were extracted from online state department of education accountability reports that are furnished annually by each school (Nevada Department of Education, 2008). After excluding statistical outliers and students from high schools with fewer than five enrollees at the university, a total sample of 2,801 students from 55 high schools remained. They represent 93% of all first-year, full-time students from in-state high schools that entered the institution in the fall of 2004 and 2005—excluding non-resident foreign students, student athletes on varsity scholarships, and those that did not persist for the full academic year (i.e., failed to re-enroll in the spring semester). Continuous numerical variables at the school level are grand mean centered to ease interpretation of predictor effect sizes as percentage changes from the “average” school on that parameter. Data reliability was confirmed via acceptable collinearity in the variance decomposition matrix and regression diagnostics (centered leverage values, studentized residuals, Mahalanobis and Cook’s distance measures) based on proposed cutoff values (Cohen et al., 2003; Pedhazur, 1997).
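Grand mean centering, as applied to the continuous school-level variables above, simply subtracts each variable's overall mean so that a coefficient reads as the effect of deviating from the "average" school. A minimal sketch with hypothetical variables:

```python
# Sketch (hypothetical data): grand-mean centering two continuous
# school-level predictors.
import pandas as pd

schools = pd.DataFrame({
    "spend_per_pupil": [7000.0, 9000.0, 8000.0],
    "avg_class_size":  [24.0, 30.0, 27.0],
})

# Subtract each column's grand mean; centered values sum to zero by column.
centered = schools - schools.mean()
print(centered)
```

After centering, a school with a value of zero sits exactly at the sample average on that predictor, which is what makes the intercept and effect sizes interpretable as stated.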

Due to the nested data structure of students originating from 55 high schools, the study uses hierarchical linear random-intercept regression models (HLMs) to simultaneously estimate school- and student-level effects on academic preparation and first-year GPA. An HLM corrects for the smaller degrees of freedom to estimate statistical significance at the school level and furnishes more accurate standard error estimates compared to ordinary least-squares (OLS) regression (Porter, 2005; Raudenbush and Bryk, 2002). Subsequent estimates of effects on second-year persistence are carried out with binary logistic regression—a technique widely used with dichotomous outcomes (Peng, So, Stage, and St. John, 2002)—and without a hierarchical model in the event that high school-level variables fail to emerge as significant predictors in the presence of other first-year experience covariates. Statistically significant effects on persistence are expressed as percentage change in outcome probability, using a linear transformation of the log odds (p*[1-p]*β) per Morgan and Teachman (1988). The study will probe for significant interaction effects associated with all key variables of interest. The model to estimate high school influences with cross-level interactions takes the following form:

Y_ij = γ_00 + γ_p0 X_pij + γ_0q Z_qj + γ_pq Z_qj X_pij + u_0j + e_ij    for (1) and (2)

where Y_ij is the estimated preparation index score in (1) and first-year GPA in (2); γ_p0 X_pij denotes a vector of p student-level variables X (p = 1…P) for i observations (i = 1…N_j) in j schools (j = 1…N); γ_0q Z_qj denotes a vector of q school-level variables Z (q = 1…Q) in j schools (j = 1…N); and γ_pq Z_qj X_pij is the cross-level interaction term that estimates the regression slope β_1j of student-level variable X_ij with school-level variable Z_j as a moderator of the correlation between the outcome (Y) and student characteristics (X). The segment u_0j + e_ij is the residual error at the school (u) and student (e) level, and the two components are assumed to be independent of each other.
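A random-intercept model of this form can be fit, for example, with statsmodels' MixedLM; the paper does not state which package was used, so the snippet below is only an illustration. It simulates data matching the equation (one student-level predictor x, one centered school-level predictor z, and their cross-level interaction) and recovers the fixed effects:

```python
# Sketch: random-intercept HLM with a cross-level interaction on simulated
# data; all variable names and true parameter values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_per = 20, 30
school = np.repeat(np.arange(n_schools), n_per)
x = rng.normal(size=n_schools * n_per)                 # student-level predictor
z = np.repeat(rng.normal(size=n_schools), n_per)       # centered school-level predictor
u = np.repeat(rng.normal(scale=0.5, size=n_schools), n_per)  # school random intercept u_0j
y = 2.5 + 0.3 * x + 0.2 * z + 0.1 * x * z + u + rng.normal(scale=0.4, size=x.size)

df = pd.DataFrame({"gpa": y, "x": x, "z": z, "school": school})
model = smf.mixedlm("gpa ~ x + z + x:z", df, groups=df["school"])
result = model.fit()
print(result.params[["x", "z", "x:z"]])  # estimates near 0.3, 0.2, 0.1
```

The `x:z` term plays the role of the γ_pq cross-level interaction, and the grouping by school supplies the u_0j random intercept.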

The logistic regression model to estimate the probability of persistence with interaction terms is expressed as:

ln(pi / [1 - pi]) = y0 + y1Xi + y2Zj + y3XZij + ei

where pi is the probability of persistence; Xi is a vector of student characteristics and first-year academic and campus experiences; Zj is a vector representing exposure to part-time instructors and classroom diversity, including the share of classmates from ethnic/racial groups and enrollment in a diversity course; and XZij is the interaction term whose slope y3 allows the effect of student background and first-year experience on persistence to vary as a linear function of exposure to part-time instructors and classroom diversity (Jaccard, 2001).  Variables other than those measuring exposure to part-time instructors and classroom diversity are entered as moderators to test their level of significance.

Though broader in scope than previous studies on school effectiveness at the precollege level, the ten school-feature variables selected here do not directly control for instructional quality in a given school.  Data on teachers do not measure pedagogical effectiveness for student learning, nor are teacher data tied to individual students (as in a matched panel design); and teachers’ subject matter expertise is established strictly on the basis of holding a minimum of a bachelor's degree, a state-issued teaching license, and demonstrated competencies in their area of teaching as furnished in the school’s annual accountability report.  Also, the study does not control for parental involvement and other factors that may influence a student’s academic preparation for university enrollment, such as level of subject mastery at the start of high school.

Findings on the effect of ethnic/racial diversity reflect the situation at one predominantly white, medium-size public university located in an urban area.  Non-Asian minority students constitute on average over 11% of the classmates of first-year students in this study (Table 1), which meets the “critical mass” standard by Coleman and Palmer (2006, p. 35) to facilitate their classroom participation and promote learning in others.  For some institutions, however, educational benefits may only set in at much higher levels of minority representation (e.g., Hagedorn et al. 2007).  Second, measures of the ethnic/racial makeup of students in the classroom may not be indicative of environmental influences (either academic or social) outside the formal learning setting on a university campus.  Available institutional data on out-of-classroom student-faculty engagement have been excluded due to the paucity of surveyed students.  At the same time, empirical research shows that data from administrative records at an institution, as used here, are on average better predictors of student success than subjective feedback from student surveys (Caison 2006).  Also, the metric of compositional diversity in the classroom does not capture the level of classroom interaction among students, which some view as critical in promoting cognitive growth (Milem et al., 2005).  Finally, with a non-experimental design, statistical correlations must not be interpreted as causal in nature—notwithstanding the use of terms like “effect” and “influence” here and elsewhere.  They merely indicate that some relationship between observed experiences did not happen by chance.

Table 1 provides descriptive information on the variables used in the analysis.  On average, the 55 high schools that students came from had an enrollment of around 1,900, with the largest counting over 3,500 students and the smallest 101.  Students at these schools took classes with an average of about 26 students, higher than the national average of 22 students for similar-size schools.[5]  Compared to the national average of $7,972 (constant 2005-$; Johnson, 2005), these schools expended about $2,500 less per pupil, a difference due in part to the exclusion of small schools, which typically graduate fewer than five students in a given year who enroll at the university selected here.  The expenditure level per pupil at small schools usually dwarfs that of larger schools, where costs are more easily spread.  Given the observed range, some students attended schools with ten times the expenditure level of their peers.  Similarly, some were three times more likely to attend a school with highly qualified teachers compared to peers from schools with an average proportion of such teachers.  Some first-year students attended high schools where non-Asian minorities made up three-fourths of students, or triple the typical proportion.  The rate of incidents involving violence, weapons, or drugs also varied considerably by school; the most afflicted had three times the rate of such incidents compared to the average school.  Once in college, some students experienced only part-time instructors during the first year, but most took courses taught by regular full-time faculty.  On average, 40% of courses attended in the first year were offered by part-time instructors.  Eighteen percent of students took at least one diversity course, and typically at least one out of ten classmates belonged to a non-Asian minority group.  For those at the top end, the ratio was one out of four.

Table 2 presents estimates for academic preparation and associated correlations with the high school-level variables from the random intercept model that excludes socio-demographic features of individual students.  Of the ten high school attributes, only the share of Asian student enrollment and the proportion of students with limited English proficiency weigh in significantly in estimating the number of AP credits accepted at college entry.  The effect sizes suggest that a 5.5 percentage point rise in the share of Asian students at a high school is associated with a one-unit increase in accepted AP credits.  A rise of at least 23 percentage points in the share of students with limited English proficiency correlates with a one-unit increase in AP credits.  Significance of both of these school variables disappears when estimating preparation beyond AP credits, including high school course grades and performance on university admission tests, as captured in the preparation index.

Table 3 extends the previous model by including individual student attributes in estimates of academic preparation and average grades at the end of the first year in college.  For either outcome, high school features are largely inconsequential, except for average class size, where a 5-student rise is associated with nearly a one-tenth of one letter grade drop in first-year GPA.  A student's income background appears to matter for level of preparation, but fails to show any significance in estimating first-year grades in college once academic preparation at the start of college is factored in.  Non-Asian minority status is negatively correlated with first-year grades, the estimated GPA being lower by 0.21.  Conversely, female students' GPA is estimated to exceed that of males by 0.22.  Taking the university admission test in the final year of high school is negatively correlated with both level of preparation and first-year college performance, which may indicate that intent or motivation to enter college has an enduring effect on the first-year experience.  The model estimates that delaying the admission test date to the final year in high school lowers first-year grades by 0.16.  Expectedly, level of preparation at the start of the first year correlates strongly with overall grades at the end of the first year.  A one standard deviation rise in preparation translates into an increase of 0.43 in the first-year GPA (7.986 * 0.5382 / 10).
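The effect-size arithmetic in this paragraph multiplies the coefficient by one standard deviation of the preparation index and divides by 10 (the index is scaled in tens of points on a 100-point scale):

```python
# Effect size as reported in the text: coefficient times one standard
# deviation of the preparation index, divided by 10.
sd_prep, coef_prep = 7.986, 0.5382
print(round(coef_prep * sd_prep / 10, 2))  # -> 0.43
```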

To ascertain whether the estimated fixed effects associated with high school attributes hold for students from different socio-demographic backgrounds, the analysis probed for significant cross-level interactions between school and student attributes.  Several emerged and are listed in Table 4.  The first shows that the share of non-Asian minority enrollment at the high school exerts a negative impact on students from low-income backgrounds (less than $30,000 per year) compared to those from high-income backgrounds (over $80,000 per year).  A one standard deviation rise in the proportion of non-Asian minority enrollment at the high school level would lower the first-year college GPA for low-income students by 0.10 (-0.0791 * 13.138 / 10).  Low-income students also seem to be negatively affected by two other high school attributes: the number of peers with limited English skills and the number of incidents that threaten student safety.  Unlike for high-income students, a one standard deviation rise in the share of limited English proficiency students at the high school level is associated with a 0.14 (-0.158 * 9.038 / 10) drop in the first-year GPA of low-income students.  The rate of violent, weapons, and drug-related incidents in high schools may further depress the first-year GPA of low-income students.  The model estimates that a one standard deviation rise in the rate of such incidents is associated with a drop of 0.11 (-0.4988 * 2.147 / 10) in the first-year GPA of low-income students.  In turn, the expenditure level per pupil at the high school level is positively related to the first-year GPA of Asian students.  On average, their GPA rises by 0.14 (0.1199 * 11.614 / 10), vis-à-vis white students, given a one standard deviation increase in per pupil expenditure.
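The same coefficient-times-standard-deviation arithmetic reproduces all four cross-level effect sizes cited above (coefficients and standard deviations as given in the text):

```python
# Reproducing the four cross-level effect sizes: coefficient times one
# standard deviation of the school-level variable, divided by 10.
effects = {
    "minority share x low income":   (-0.0791, 13.138),
    "limited English x low income":  (-0.1580, 9.038),
    "safety incidents x low income": (-0.4988, 2.147),
    "per-pupil spending x Asian":    (0.1199, 11.614),
}
for name, (coef, sd) in effects.items():
    print(name, round(coef * sd / 10, 2))
```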

Table 5 estimates the influence of classroom diversity on first-year grades, net of student socio-demographic background, academic preparation, on-campus experiences, and measures of grading and curricular rigor.  Inclusion of the grading and curricular rigor variables rendered measures of the high school environment statistically insignificant; hence, a single-level regression model sufficed to estimate first-year GPA.  A one-percentage point increase in non-Asian minority classmates is associated with an average drop of 0.016 in first-year GPA; the proportion of Asian classmates shows no significance, and the presence of foreign classmates exhibits only borderline significance.  Thus, classroom diversity exerts a negligible influence on first-year academic performance.  Conversely, enrollment in a diversity course is associated with a 0.06 rise in the GPA.  The standardized coefficients (Beta) confirm that academic preparation and level of success in English and math strongly influence overall first-year grades.  The low variance inflation factor (VIF) of these variables indicates that they all contribute uniquely in explaining first-year GPA.  The instructor's grading level, as measured by the average grade awarded in courses taken, as well as taking at least three science courses, both suggest that academic rigor strongly determines first-year GPA.  For example, a one-letter grade difference in average grades awarded to classmates is associated with a 0.26 change in a student's first-year GPA, while taking three or more science courses (versus none at all) lowers the GPA by 0.17 on average (Table 5).  In combination, the variables included in the model explain over 54% of the variation in first-year GPA (adjusted R2 = 0.544), which is nearly double the amount typically attained without measures of grading and curricular rigor (Agronow and Studley, 2007).
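A collinearity check of the kind reported for Table 5 can be sketched with statsmodels' variance inflation factor; the simulated predictors (prep, math, div_course) below are stand-ins for the study's variables, not its data.

```python
# Hedged sketch of a VIF check on simulated, mildly correlated predictors.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 1000
prep = rng.normal(0, 1, n)                           # preparation index
math = 0.5 * prep + rng.normal(0, 1, n)              # math success, correlated with prep
div_course = rng.binomial(1, 0.18, n).astype(float)  # diversity-course flag
X = np.column_stack([np.ones(n), prep, math, div_course])
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
print([round(v, 2) for v in vifs])  # values near 1 indicate low collinearity
```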

Table 6 lists results for estimates of factors that influence persistence of first-year students.  High school-level variables failed to exhibit statistical significance in the presence of variables that measure the first-year experience.  Thus, only student-level variables are included as parameters to estimate persistence.  Exposure to part-time instructors reduces the likelihood of return marginally.  A one standard deviation change in the level of exposure is associated with 1.8% change in the probability of return.  This effect disappears after taking into account other first-year experiences. 

Grades weigh in significantly, with a one standard deviation rise in the GPA corresponding to an 11.4% increase in persistence.  Taking at least 15 credits in the first semester raises persistence by 5.8%, and taking at least three science-based courses raises it by 6.5%.  Conversely, failing to complete a course or receiving an unsatisfactory grade in a course elevates the dropout risk by 6%.  Negative influences on persistence are also associated with residency outside the local commuting range (-10.3%) and with females (-5.3%).  Curiously, academic preparation is negatively related to persistence.  A one standard deviation rise in the preparation index score increases the dropout risk by 5%.  Grouping students by preparation quartile confirms that the dropout risk is fairly linearly related to preparation level, with each successive quartile adding about 4 percentage points to the estimated dropout risk.  Asian students have a lower dropout risk, on average 7.5% lower than whites.  Neither income background nor financial aid received is significantly correlated with persistence, and living or working on campus does not affect persistence of students in general.  However, removal of academic performance variables from the model—including score on the preparation index, first-year GPA, and courses with unsatisfactory grades—renders both the amount of unmet financial need and employment on campus statistically significant, the former showing a negative influence on persistence and the latter a positive influence (results available from the author).

To gauge the influence of ethnic/racial diversity in the classroom and diversity courses, these variables were added first to the individual student attributes in the precollege model and then to all variables listed in the first-year experience model (Table 6).  Results in Table 7 suggest that classmate diversity has no bearing on persistence of first-year students, with or without taking into account first-year experience variables.  Likewise, taking a diversity course does not change the persistence odds for first-year students.  The addition of interaction terms to the estimation model shows that effects vary across student background, however (Table 8).  First-year students from outside the local area where the campus is located (approximately a 70-mile radius) may be negatively impacted by exposure to non-Asian minority classmates.  Students from outside the local area are twice as likely to drop out (1 / 0.45 = 2.2) at an average exposure level to non-Asian minorities (i.e., an 11.3% share of all classmates).  The dropout odds for non-local students vis-à-vis local students are estimated to rise by 0.29 given a one percentage point increase in the share of non-Asian minority classmates (1 / [0.45 * 0.891] – 2.2; Table 8, Model A).[6]  Applying a linear transformation to the logit coefficients, this difference translates into a 1.5 percentage point increase in the dropout risk of non-local students for every one percentage point rise in the share of non-Asian minority classmates.  Conversely, non-Asian minority students, regardless of where they come from, are less likely to drop out if exposed to other non-Asian minority classmates.  Increasing the share of non-Asian minority exposure by one percentage point reduces the dropout odds of a non-Asian minority student by 0.3 compared to a white student (1 / [0.675 * 1.267] – 1 / 0.675; Table 8, Model B).
Though non-Asian minority students do not differ significantly in their dropout risk from white students at an average exposure level to non-Asian minority classmates, the significance of the interaction term suggests that increasing the share of non-Asian minority classmates beyond the typical level would lower the dropout risk for non-Asian minority students by 3% for every one percentage point rise in the share of fellow minority classmates.
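The odds arithmetic for Table 8, Models A and B, can be reproduced from the reported exp(b) values; note that the 0.29 figure follows the text in subtracting the rounded 2.2 odds ratio.

```python
# Reproducing the odds arithmetic from the reported exp(b) values.
or_nonlocal, or_inter_a = 0.45, 0.891   # Model A: main effect and interaction
print(round(1 / or_nonlocal, 1))                                  # -> 2.2
print(round(1 / (or_nonlocal * or_inter_a) - 2.2, 2))             # -> 0.29

or_minority, or_inter_b = 0.675, 1.267  # Model B: main effect and interaction
print(round(1 / (or_minority * or_inter_b) - 1 / or_minority, 2)) # -> -0.31
```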

Exposure to classmates from foreign countries has a statistically significant impact on the most academically prepared students.  On average, the better prepared are more likely to not return for the second year compared to the least prepared, but that difference is conditioned by the level of exposure to foreign classmates (Table 8, Model C).  The best prepared, those in the top quartile on the preparation index, experience a six percentage point decrease in the dropout risk for every one percentage point rise in the share of foreign classmates.  Since foreign students make up less than 2% of classmates on average, presence of just one or two foreign students in a class may have an important influence on the re-enrollment decision of the best prepared first-year students.  The better prepared also benefit from living on campus, which lowers their dropout risk by over 10% compared to living off campus (Table 8, Model F).  However, addition of the preparation-foreign student interaction term has a marginal effect on overall accuracy of predicting student persistence.  Likewise, adding an interaction term to gauge the impact of exposure to part-time faculty by student income background fails to improve overall persistence prediction.  But the result suggests that exposure to part-time faculty may slightly lower persistence of low-income students (Table 8, Model D).

At an average GPA, males are 7% more likely to persist into the second year than females, and for every one letter grade rise in the GPA that difference widens by an additional four percentage points (Table 8, Model E).  Thus, grades have a slightly more positive influence on males than on females.  Males also accrue greater benefits from working on campus, which increases the probability of persisting by 17% over males who do not pursue work on campus (Table 8, Model G).  Females enjoy no such benefit.  Finally, low-income students seem to be negatively affected by taking a diversity course in the first year, an experience that shows no significant correlation for students from other income backgrounds.  The probability of persisting for low-income students is estimated to drop by 16 percentage points after taking a diversity course (Table 8, Model H).

Results on the influence of high school attributes build on cumulative findings in the school effectiveness and production-function literature by looking at students that continued on to college.  Given the paucity of higher education research on the effects of high school-level factors, results from this study help inform university policy on recruitment and academic assessment of new applicants by sorting out the importance of high school background versus individual student characteristics.  According to data from 55 high schools, exposure to Asian students at these schools may increase a student’s likelihood of taking AP courses on the way to college.  In the absence of other control variables that reflect the high school experience (e.g., curricular rigor, preparation at the end of middle school), one cannot be certain that the peer effect associated with Asian schoolmates accounts for the difference in AP credits.  Still, other studies have identified a positive peer influence with Asian classmates in high schools that translates into greater focus on academic achievement (Steinberg, 1996; Thernstrom and Thernstrom, 2003; Kao, 2001).  Betts, Rueben, and Danenberg (2000) document that this advantage is not due to variation in offerings of superior courses across schools, but the greater demand for such courses by academically motivated students.  Klopfstein (2004) corroborates this finding, showing that enrollment patterns in AP courses correlate with parental support at home, not the number of courses offered in a particular school.  However, the observed influence of Asian peers in high schools does not extend to the measure of academic preparation that includes cumulative course grades and scores on the university admission tests.  The significance associated with the share of limited English learners in high schools is hardly meaningful in operational terms (i.e., a 23 percentage point rise equates to a one-unit increase in AP credits).  
Although high schools vary considerably in the ten institutional features measured here, as described above, these differences fail to exert a significant impact on preparation at the start of college (see Table 2).

High school features also do not significantly correlate with course grades at the end of the first year in college, except for class size (Table 3).  That variable exerts a minimal influence on first-year grades, considering that a 60% drop in the average class size (i.e., from 26 to 11 students)—scarcely a plausible scenario—is associated with a mere 0.27 drop in first-year GPA.  The first-year GPA is more likely affected by a student’s determination to go to college, as captured on the basis of when students take the university admission test. Those taking the test early in high school are expected to earn a GPA that is on average 0.16 higher compared to late test-takers.  First-year grades are expectedly influenced by academic preparation.  A ten-point rise on the 100-point scale preparation index correlates with a 0.54 increase in the first-year GPA (Table 3).  Income background appears to have some effect on preparation, but only in the absence of statistical controls over curricular experience at the high school level.  Income fails to exert an influence on first-year grades after including level of preparation at the start of college.

Although the high school environment shows little impact on average, the school influence varies significantly depending on individual student characteristics (Table 4).  The findings suggest that low-income students are negatively impacted by three distinct features in high schools: the share of non-Asian minority student enrollment, the share of limited English speakers, and the prevalence of incidents that jeopardize personal safety and a climate conducive to learning.  Together, these features may depress the first-year GPA of a low-income student by 0.35 for a school that is one standard deviation higher on the exposure scale, a finding that corroborates previous research (Barton, Coley, and Wenglinsky, 1998).  Though the estimation of a four-way interaction of the three school features with income background is beyond the scope of this study—such interaction effects are difficult to interpret—additional tests confirmed that exposure to non-Asian minority students combined with compromised school safety results in lower first-year grades for low-income students (t = -1.98).  Sorting out the marginal effect of each school-level variable is difficult in an observational study of this type, since one may reasonably assume that students self-select into certain aspects of the school environment (e.g., low-income students are disproportionately of non-Asian minority background and thus likely more exposed to other non-Asian minorities).  The role of high school environmental influences in the success of first-year college students has not been examined in detail (Pascarella and Terenzini, 2005).  Research results from high school studies show that socioeconomic and ethnic/racial composition of the peer group correlates significantly with student achievement at that level (Rumberger and Palardy, 2005; Hoxby 2002; Caldas and Bankston, 2005; Steinberg, 1996).  
The observed result for low-income students here suggests that the high school peer environment has a lingering effect on student success at the college level.

Estimates on the impact of classroom diversity fail to corroborate studies that suggest the ethnic/racial makeup of students is positively associated with student learning and cognitive growth (Brown, 2006, p. 334; Shaw 2005, p. 3-6; Milem, Chang, and Antonio 2005, pp. 6, 13, 18; ACE and AAUP 2000, pp. 4, 8; Chang, 1999).  The very small negative effect observed in this study indicates that students’ academic performance in the first year is scarcely a function of classroom diversity (Table 5).   In contrast, taking a course focused on diversity is associated with a slight increase in the first-year GPA.  Since completion of a diversity course is a graduation requirement for all students, but not mandated to be taken in the first year (unlike math and English), the presence of self-selection into the course complicates an assessment of its impact on first-year cognitive growth.  But the cumulative research shows (Pascarella and Terenzini, 2005) that the potential for learning and academic success in college is strongly influenced by the level of initial preparation, with the academic index score being the best GPA predictor among all variables in this study.   The high standardized coefficients (Betas) for the average grade awarded in classes taken and the highest math and English course experience in the first year confirm that the influence of a particular factor on GPA is more accurately gauged in the presence of control variables that measure the grading and curricular rigor students go through.  This approach renders the GPA, though by no means perfect, a more meaningful indicator of academic achievement and learning gains.  The comparatively high explanatory power of the specified model (adjusted R2 = 0.54) and low collinearity (VIF) values offer greater confidence in the findings for each measured first-year experience vis-à-vis models without indicators of grading rigor and course experience.

Neither high school features nor level of exposure to part-time instructors in the first year of college mattered in estimating persistence into the second year (Table 6).  Taking courses from part-time instructors may have a small indirect effect on persistence that disappears when factoring in a student's financial aid, course load, grades, and other first-year experiences.  Expectedly, overall academic performance (GPA) and whether or not a student failed to complete a course strongly correlate with persistence.  Taking at least 15 credits in the first semester, which is typically one course more than required to maintain full-time enrollment status, helps improve persistence into the second year.  Enrollment intensity reduces a student's dropout risk (Pascarella and Terenzini, 2005, pp. 425-427), as it captures commitment to stay in college and complete a degree.  The positive association of enrollment in science courses with persistence underlines the importance of using measures of curricular rigor when gauging a student's persistence odds.  The number of science courses taken turned out to be a slightly better choice than the first-year math experience on the basis of overall model fit (Nagelkerke R2).  Next to math courses, science courses usually pose the greatest academic challenge for students (Adelman, 2004a, 2004b).

The negative relationship between level of pre-college preparation and first-year persistence has been observed in past analyses (Herzog, 2007a, 2005) and at first seems counterintuitive.  However, the interaction effects observed here suggest that academically better prepared students may lack the social integration that some research shows improves persistence (Pascarella and Terenzini, 2005; Astin, 1984; Tinto, 1987).  The difference in dropout risk associated with academic preparation is minuscule for students living on campus in contrast to those living away from campus (Table 8, Model  F).  Exposure to classmates from foreign countries may be another source to enhance persistence of well prepared students (Table 8, Model C).  Though of only borderline statistical significance, the nexus between foreign student exposure and persistence of the well prepared may signal a benefit associated with cultural diversity in the classroom.

The importance of probing for significant relationships among variables beyond the “average” student is illustrated in the findings on classroom diversity.  Results from models that included interaction terms with diversity-related variables show that exposure to non-Asian ethnic/racial minority students seemingly affects students differently depending on their background.  Persistence of students from outside the local area is negatively affected by exposure to non-Asian ethnic/racial minority classmates.  But the estimated impact on persistence is small considering the actual exposure level to minority classmates (Table 8, Model A).  To reduce the persistence of non-local versus local students by 5%, exposure to minority classmates would have to increase by 1.45 standard deviations (5 / 1.5 / 2.291); i.e., a student with an average number of minority classmates, located at the 50th percentile on the exposure scale, would have to move past the 80th percentile mark to experience a 5% persistence deficit vis-à-vis a local student.  Why non-local students would be less likely to return due to minority student classmates is not readily apparent.  Eighty percent of those students reside in the state's largest metropolitan area, which is considerably more populated, over 400 miles away, and racially more diverse than the community surrounding the campus.  The remaining 20% are from small, mostly white rural towns with few or no alternative higher education options.  Given the lack of research on the influence of ethnic/racial diversity across students from varied residential settings, the findings here should provide a starting point for further inquiry.
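The exposure-shift arithmetic behind the 1.45 standard deviation figure:

```python
# A 5-point persistence gap at 1.5 points per percentage point of exposure,
# with a 2.291 percentage point standard deviation of exposure.
gap, points_per_pp, sd_exposure = 5, 1.5, 2.291
print(round(gap / points_per_pp / sd_exposure, 2))  # -> 1.45
```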

The finding that the share of non-Asian minority classmates correlates positively with persistence of fellow non-Asian minority students partly corroborates results from previous studies on the educational benefits of student diversity (Shaw 2005; Milem et al., 2005; ACE and AAUP 2000; Chang, 1999).  If estimates in this study are indicative of potential benefits at other institutions, increasing the number of non-Asian minority students may reduce their dropout risk (Table 8, Model B).  A statistically significant reduction in the dropout risk of those students versus whites requires that non-Asian minorities make up at least 11% of all classmates, according to model estimates, which parallels the definition of “critical mass” put forth by Coleman and Palmer (2006, p. 35).  A notable rise in non-Asian minority persistence would not easily materialize, however.  To lower their dropout risk by ten percentage points vis-à-vis white students, the share of fellow minority classmates would have to increase by 3.3 percentage points, which corresponds to approximately a 1.5 standard deviation change from the average share of such classmates.  Thus, enrollment of first-year non-Asian minority students would have to rise considerably to affect persistence of most fellow minority classmates.  Also, the statistical evidence here indicates that the potential for improved persistence is limited to non-Asian minority students and does not extend to students from other ethnic/racial backgrounds.
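The arithmetic behind the 3.3 percentage point and 1.5 standard deviation figures, using the roughly 3-point-per-percentage-point reduction in dropout risk from Model B:

```python
# A 10-point drop in dropout risk at ~3 points per percentage point of
# minority classmates, with a 2.291 percentage point standard deviation.
gain, points_per_pp, sd_share = 10, 3.0, 2.291
pp_needed = gain / points_per_pp
print(round(pp_needed, 1), round(pp_needed / sd_share, 1))  # -> 3.3 1.5
```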

The observed negative impact on persistence of low-income students that is associated with taking a diversity course during the first year does not furnish sufficient evidence on the role of diversity courses in student persistence.  Neither is there enough evidence that exposure to part-time faculty plays a role in students’ choice to leave the institution after the first year.  In both cases, the added variables to test each proposition failed to improve the overall estimation model (Table 8, Models D, H).  The most important correlates of persistence center on a student’s academic experience.  The findings here suggest that taking a full load of courses, including those in the physical and natural sciences, and finishing all courses with good grades (i.e., avoiding incomplete grades) maximizes a student’s chance to persist.  This corroborates the summative assessment of three decades of research by Pascarella and Terenzini (2005, p. 396-397): “[A]cademic achievement during a student’s first year of college may be a particularly powerful influence on subsequent persistence. [Though] grades are hardly a perfect measure of learning, [they] may well be the single best predictors of student persistence, degree completion, and graduate school enrollment.”

Although not a central focus of this study, the observed variation in persistence estimates between male and female students should be explored in greater detail. On average, females attain higher grades in the first year than males (Table 3), yet they are less likely to be retained (Table 6). Part of the explanation may lie in the slightly greater persistence benefit males accrue from good grades. More importantly, being employed on campus significantly improves persistence of males but not of females (Table 8, Model G). The estimated 17 percentage point gain in persistence of employed males (versus males without a job) may result from enhanced social or academic integration in ways that females fail to enjoy. Perhaps males are more likely than females to be employed in academic settings on campus that nurture connection to, and identity with, a program of study. Evidence to support this proposition cannot be established with the student employment data used in this study, which do not identify the nature of work performed. The plausibility of an accrued positive integration effect for males, but not females, is strengthened by the additional finding that the dropout risk for female students taking at least five courses in the first semester is elevated by 5 percentage points over females taking fewer courses. If presence on campus can be gauged on the basis of course load, arguably a reasonable assumption, time spent on campus appears to benefit the persistence of males but not of females.
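The gender-by-employment pattern described above is a classic interaction effect in logistic regression (cf. Jaccard, 2001). A minimal sketch, with entirely hypothetical coefficients, shows how a positive interaction term produces a larger employment benefit for one group:

```python
from math import exp

def persist_prob(male, employed, b=(-0.2, 0.1, 0.15, 0.6)):
    """Persistence probability from a logistic model with a
    gender-by-employment interaction (hypothetical coefficients):
    z = b0 + b1*male + b2*employed + b3*male*employed."""
    b0, b1, b2, b3 = b
    z = b0 + b1 * male + b2 * employed + b3 * male * employed
    return 1.0 / (1.0 + exp(-z))

# Employment benefit by gender: difference in predicted persistence
# between employed and non-employed students of the same gender.
male_gain = persist_prob(1, 1) - persist_prob(1, 0)
female_gain = persist_prob(0, 1) - persist_prob(0, 0)
print(round(male_gain, 3), round(female_gain, 3))
```

With a sizeable interaction coefficient (b3), the employment effect for males is the sum of the main and interaction terms, while for females only the main employment term applies, reproducing the asymmetry the study reports.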

Beyond the variables of primary interest in this study—high school features, part-time faculty, and student ethnic/racial diversity—it is worth adding that financial aid, both amount and type, did not emerge as a significant predictor of first-year student persistence. The influence of aid on persistence is mediated by a complex web of interrelated factors, including the timing, type, and amount of aid and how each correlates with persistence in the presence of other student and institutional attributes. This may well explain the absence of consistent findings in empirical studies on the impact of aid (Pascarella & Terenzini, 2005). One critical deficit in the research corpus is the omission of a measure of the financial burden the student faces in order to stay enrolled in college. Studies have included the amount and type of aid received, but they fail to statistically control for the actual financial hardship a student incurs, namely the net cost of attendance after taking into account all aid awarded and the personal or parental financial contribution (e.g., St. John and Wilkerson, 2006; Rhee, 2008; Lohfink and Paulsen, 2005). This study addresses the problem with a separate variable in the estimation model that captures the amount of remaining financial need, that is, the total cost of attendance minus the total amount of aid received and the expected family contribution (EFC) by the student or parents. There is a negative correlation between remaining financial need and persistence of students with greater course loads. Conceivably, the greater cost associated with taking more courses per semester outpaces the amount of financial aid awarded or requested by the student, which may elevate the dropout risk. However, it is unlikely that the typical student’s dropout risk in this study has been affected substantially.
The model estimates the dropout risk to rise by one percentage point for each $1,000 in remaining need for those taking at least five courses per semester (as opposed to those with fewer courses). To elevate the dropout risk by 5 percentage points, the typical amount of remaining need would have to more than triple. No other significant interaction effects emerged that would indicate that the financial challenge of attending college varies across first-year students.
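The construction of the remaining-need variable and the quoted one-point-per-$1,000 slope can be expressed as a simple calculation; the dollar amounts below are hypothetical, while the slope and course-load condition follow the estimate quoted above.

```python
def remaining_need(cost_of_attendance, total_aid, efc):
    """Remaining financial need: cost of attendance minus all aid
    received minus the expected family contribution (floored at zero)."""
    return max(cost_of_attendance - total_aid - efc, 0)

def added_dropout_risk(need, heavy_load, slope_per_1000=0.01):
    """Estimated rise in dropout risk: one percentage point (0.01)
    per $1,000 of remaining need, applying only to students taking
    at least five courses per semester (heavy_load=True)."""
    return slope_per_1000 * (need / 1000.0) if heavy_load else 0.0

# Hypothetical first-year budget for illustration.
need = remaining_need(18000, 9000, 4000)
print(need, added_dropout_risk(need, heavy_load=True))
```

Under these illustrative figures, a student with $5,000 of unmet need and a heavy course load carries a 5-percentage-point elevation in dropout risk, while the same need adds nothing for a lighter load, mirroring the interaction reported in the model.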

As higher education institutions grapple with enrolling mounting numbers of academically under-prepared students, the question arises as to the impact of the high school environment on student success at the university level. To help answer this question, the present inquiry examined ten features that define the environment at individual schools; in contrast, findings from previous studies are typically based on district-wide data that fail to statistically connect conditions at an individual school with students who graduated from it and proceeded to enroll at a university. Results from this study suggest that frequently used metrics of school quality—such as funding, teacher quality, and class size—bear little relevance to the level of preparation and persistence of the average first-year university student. However, there is some evidence that students from low-income backgrounds are less likely to succeed academically in college if they attended a high school with a sizeable non-Asian minority enrollment, with many limited-English speakers, or where personal safety is compromised by violence or the presence of guns and drugs. These environmental features are more likely found at schools serving low-income residential areas, where prevailing social networks and cultural capital work against the aspiration and planning needed to acquire a college education. But high school origin does not appear to influence a first-year student’s chance to persist in college, which is determined largely by academic performance, success in individual courses, and the curricular rigor experienced during the first year. Moreover, there is no indication that taking courses from part-time instructors has a significant impact on persistence of college freshmen.

Classroom ethnic/racial diversity appears to slightly enhance persistence of non-Asian minority students, a finding that supports the view that stepped-up recruitment and matriculation of such students is important to improving their success in college. Yet, there is no evidence on the basis of actual enrollment records that classroom diversity improves persistence of all students, and there is no indication that it promotes cognitive growth in first-year students. These mixed findings on the purported benefits of diversity signal a need to complement, if not substitute, subjective responses from student questionnaires with objective data from enrollment records in future research. To date, only one study could be identified that empirically assessed the contribution of diversity to educational outcomes with objective data (Herzog, 2007b), but its findings are not pertinent to first-year students. The potential value of data triangulation in observational studies that may enter the body of evidence in high-stakes policy decisions (e.g., preferential admission to university) has been explained recently by Adelman (2006) and Gonyea (2005). Greater use of data warehouses and more sophisticated data management algorithms should facilitate studies that take advantage of multiple data sources.

Lastly, this study confirms that findings gleaned from multivariate estimation models may vary considerably with parameter specification, statistical controls, and data selection. Academic performance and engagement are, as expected, reliable predictors of student persistence, but they may exert different effects depending on student background. Course grades, course load, and on-campus employment all affect female students differently than males. Furthermore, the financial burden a student faces in the first year is estimated to have some influence on enrollment persistence, though likely limited to students taking greater course loads. The presence of these marginal effects underlines the importance of exploring interactions among environmental and student characteristics.

Tables and Figures



[1] A translation of the tabulated standardized coefficients in Greenwald, Hedges, and Laine (1996) shows that a 10% rise in per-pupil expenditure may lead to a 1.7% increase in student achievement, using post-1970 studies that are deemed more appropriate by the authors; similarly, an 18% rise in teacher salary may effect a mere 0.15% rise in achievement, using post-1980 studies (calculations by the author).

[2] Results listed in Table 4 of Elliott’s 1998 study show that an approximately 30% increase in per-pupil core expenditure (including teachers’ salaries), which equates to $1,000, may yield a 0.38% rise in math achievement. The effect is only modestly significant (p = 0.26). At that rate, a 10% rise in math achievement would be associated with a 794% increase in per-pupil funding, well beyond a conceivable change in expenditures (calculations by the author).
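The extrapolation in this footnote can be reproduced arithmetically; the sketch below simply rescales the reported effect under an assumption of linearity.

```python
# Reported: a ~30% rise in per-pupil core expenditure (~$1,000)
# yields a 0.38% rise in math achievement (Elliott, 1998, Table 4).
gain_per_step = 0.38   # percent achievement gain per spending step
spend_per_step = 30.0  # percent spending increase per step

# Spending increase implied for a 10% achievement gain, assuming the
# effect scales linearly (a strong simplification).
implied_spending_rise = 10.0 / gain_per_step * spend_per_step
print(round(implied_spending_rise))  # roughly 790%, near the ~794% cited
```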

[3] Heavy TV viewing (over 3 hours per day) in adolescents correlated strongly with a decline in reading ability (Reinking and Wu, 1990), while the average amount of TV viewing during childhood and adolescence was associated with school dropout and failure to complete a university education (Hancox, Milne, and Poulton, 2005). Gentzkow and Shapiro (2006) found little negative impact of TV viewing on educational achievement. But, in contrast to the aforementioned studies, they based their analysis on students growing up during the 1950s and ’60s, when programming content was on average decidedly more educational than it is today. The harmful effect of TV on cognitive development has been argued by a noted German neurologist in Plüss and Scheytt (2006). Lillydahl (1990) and D’Amico (1984) showed that working more than part-time interferes with a student’s academic progress.

[4] On limitations of self-reported data, see Clayson and Sheffet, 2006; Gonyea, 2005; Feeley, 2002; Pike, 1999; Coren, 1998; Pohlmann and Beggs, 1974.

[5] The national average is adjusted up to account for the student-teacher ratio metric used in Rooney, Hussar, and Planty (2006), as their number includes teachers with non-instructional assignments.

[6] Calculation and interpretation of interaction terms follows Jaccard (2001, pp. 18-37).


ACE [American Council on Education] and AAUP [American Association of University Professors]. (2000). Does Diversity Make a Difference? Three Research Studies on Diversity in College Classrooms. Washington, DC: Authors.

AAUP [American Association of University Professors] (2005). Looking the Other Way? Accreditation Standards and Part-Time Faculty. Washington, DC: Author.

Achieve, Inc. (2008, 2006). Closing the Expectations Gap. Washington, DC: Author.

ACT. (2004). Crisis at the Core: Preparing All Students for College and Work. Iowa City, IA: Author.

Adelman, C. (2006, October 13). The propaganda of numbers. The Chronicle Review. Retrieved October 13, 2006, at

Adelman, C. (2004a). Principal Indicators of Student Academic Histories in Postsecondary Education, 1972-2000. Washington, DC: U.S. Department of Education.

Adelman, C. (2004b). The Empirical Curriculum: Changes in Postsecondary Course-Taking, 1972-2000. Washington, DC: U.S. Department of Education.

Adelman, C. (1999). Answers in the Tool Box. Washington, DC: US Department of Education.

Agronow, S., and Studley, R. (2007). Prediction of college GPA from new SAT test scores: a first look. Paper presented at the annual meeting of the California Association for Institutional Research, Monterey, CA, November 16.

Aksoy, T., and Link, C. R. (2000). A panel analysis of student mathematics achievement in the US in the 1990s: does increasing the amount of time in learning activities affect math achievement? Economics of Education Review 19(3): 261-277.

Antonio, L. A. (2001). The role of interracial interaction in the development of leadership skills and cultural knowledge and understanding. Research in Higher Education, 42(5): 593-617.

Astin, A. (1993). What Matters in College? Four Critical Years Revisited. San Francisco: Jossey-Bass.

Astin, A. (1984). Student involvement: A development theory for higher education. Journal of College Student Personnel 25(4): 297-308.

Banta, T. W. (1991). Toward a plan for using national assessment to ensure continuous improvement of higher education. Commissioned paper prepared for a workshop on assessing higher-order thinking and communication skills in college graduates. Washington, DC. (ERIC Document Reproduction Service No. ED 340753).

Barnard, W. M. (2004). Parent involvement in elementary school and educational attainment. Children and Youth Services Review 26(1): 39-62.

Barton, P. E., Coley, R. J., and Wenglinsky, H. (1998). Order in the Classroom: Violence, Discipline, and Student Achievement. Princeton, NJ: Policy Information Center, Educational Testing Service.

Betts, J. R., Rueben, K. S., & Danenberg, A. (2000). Equal Resources, Equal Outcomes? The Distribution of School Resources and Student Achievement in California. San Francisco: Public Policy Institute of California.

Bishop, J. H. (2004). Money and motivation. Education Next (Winter): 63-67.

Biswas, R. R. (2007, September). Accelerating Remedial Math Education: How Institutional Innovation and State Policy Interact. Boston, MA: Jobs for the Future.

Bottoms, G., and Young, M. (2008). Lost in Transition: Building a Better Path from School to College and Careers. Atlanta, GA: Southern Regional Education Board.

Bozick, R., Ingels, S. J., and Owings, J. A. (2008, January). Mathematics Coursetaking and Achievement at the End of High School: Evidence from the Education Longitudinal Study of 2002 (ELS:2002). (NCES-2008-319). Washington, DC: US Department of Education.

Braxton, J. M. (2000). Reworking the Student Departure Puzzle. Nashville, TN: Vanderbilt University Press.

Bridgeland, J. M., and Dilulio, J. J. (2006). The Silent Epidemic: Perspectives of High School Dropouts. Report by Civic Enterprises in association with Peter D. Hart Research Associates for the Bill & Melinda Gates Foundation.

Briggs, D. C. (2001). The effect of admission test preparation: Evidence from NELS:88. Chance, 14(1): 10-21.

Brown, K. M. (2006). The educational benefits of diversity: the unfinished journey from ‘mandate’ in Brown to ‘choice’ in Grutter and Comfort. Leadership and Policy in Schools 5: 325-354.

Caison, A. L. (2006). Analysis of institutionally specific retention research: a comparison between survey and institutional database methods. Research in Higher Education 48(4): 435-451.

Caldas, S. J., & Bankston, C. L. (2005). Forced to Fail: The Paradox of School Desegregation. Westport, CT: Praeger.

Card, D., and Payne, A. A. (2002). School finance reform, the distribution of school spending, and the distribution of student test scores. Journal of Public Economics 83: 49-82.

Carini, R. M., Kuh, G. D., and Klein, S. P. (2006). Student engagement and student learning: testing the linkages. Research in Higher Education, 47(1): 1-32.

Chang, M. J. (1999). Does racial diversity matter? The educational impact of a racially diverse undergraduate population. Journal of College Student Development, 40(4): 377-395.

Chang, M. J., Denson, N., Saenz, V., & Misa, K. (2006). The educational benefits of sustaining cross-racial interaction among undergraduates. The Journal of Higher Education, 77(3): 430-455.

Clayson, D. E., & Sheffet, M. J. (2006). Personality and the student evaluation of teaching. Journal of Marketing Education, 28(2): 149-160.

Clotfelter, C. T., Ladd, H. F., and Vigdor, J. L. (2006). Teacher-Student Matching and the Assessment of Teacher Effectiveness (Working Paper 11936). Cambridge, MA: National Bureau of Economic Research.

Cohen J., Cohen P., West, S. G., and Aiken, L. S. (2003). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Mahwah, NJ: Erlbaum Associates Publishers.

Coleman, A. L., and Palmer, S. C. (2006). Admissions and Diversity After Michigan: The Next Generation of Legal and Policy Issues. New York: The College Board.

Colom, R., and Flores-Mendoza, C. E. (2007). Intelligence predicts scholastic achievement irrespective of SES factors: Evidence from Brazil. Intelligence 35: 243-251.

Coren, S. (1998). Student evaluations of an instructor’s racism and sexism: Truth or expedience? Ethics & Behavior, 8(3): 201-213.

D’Amico, R. (1984). Does employment during high school impair academic progress? Sociology of Education 57(3): 152-164.

Deke, J. (2003). A study of the impact of public school spending on postsecondary educational attainment using statewide school district refinancing in Kansas. Economics of Education Review 22: 275-284.

Dounay, J. (2006, January). Ensuring rigor in the high school curriculum: what states are doing. ECS Policy Brief. Denver, CO: Education Commission of the States.

Education Policy Institute. (2005). Focus on Results: An Academic Impact Analysis of the Knowledge is Power Program (KIPP) Paper prepared for the KIPP Foundation. Virginia Beach, VA: Author.

Ehrenberg, R. G., and Zhang, L. (2004). Do tenured and tenure track faculty matter? NBER Working Paper No. W10695. Cambridge, MA: National Bureau of Economic Research.

Ehrenberg, R. G., and Brewer, D. (1995). Did Teachers' Race and Verbal Ability Matter in the 1960's? Coleman Revisited. NBER Working Papers 4293. Cambridge, MA: National Bureau of Economic Research.

Elliott, M. (1998). School financing and opportunities to learn: does money well spent enhance students’ achievement? Sociology of Education 71(3): 223-245.

Eno, D., and Sheldon, P. (1999, Summer). Predicting freshmen success based on high school record and other measures. AIR Professional File No 72, Association for Institutional Research.

Feeley, T. H. (2002). Evidence of halo effects in student evaluations of communication instruction. Communication Education, 51(3): 225-236.

Fryer, R. G., and Levitt, S. D. (2005). The black-white test score gap through third grade (Working Paper 11049). Cambridge, MA: National Bureau of Economic Research.

Gappa, J. M., and Leslie, D. W. (1993). The Invisible Faculty: Improving the Status of Part-Timers in Higher Education. San Francisco: Jossey-Bass.

Gentzkow, M., and Shapiro, J. M. (2006). Does television rot your brain? New Evidence from the Coleman Study (Working Paper 12021). Cambridge, MA: National Bureau of Economic Research.

Goldhaber, D. (2002). The mystery of good teaching. Education Next (Spring): 50-55.

Gonyea, R. M. (2005). Self-reported data in institutional research: review and recommendations. In P. D. Umbach (ed.), Survey Research: Emerging Issues. New Directions for Institutional Research, no. 127, pp. 73-89. San Francisco: Jossey-Bass.

Greene, J. P., Forster, G., and Winters, M. A. (2003). Apples to Apples: An Evaluation of Charter Schools Serving General Student Populations (Education Working Paper). New York: Manhattan Institute.

Greene, J. P., and Winters, M. A. (2006). Getting ahead by staying behind: an evaluation of Florida’s program to end social promotion. Education Next (Spring): 65-69.

Greene, J. P., and Winters, M. A. (2005). Public High School Graduation and College-Readiness Rates: 1991-2002 (Education Working Paper). New York: Manhattan Institute.

Greenwald, R., Hedges, L. R., and Laine, R. D. (1996a). The effect of school resources on student achievement. Review of Educational Research 66(3): 361-396.

Greenwald, R., Hedges, L. R., and Laine, R. D. (1996b). Interpreting research on school resources and student achievement: a rejoinder to Hanushek. Review of Educational Research 66(3): 411-416.

Hagedorn, L. S., Chi, W., Cepeda, R. M., and McLain, M. (2007). An investigation of critical mass: The role of Latino representation in the success of urban community college students. Research in Higher Education, 48(1): 73-91.

Häkkinen, I., Kirjavainen, T., and Uusitalo, R. (2003). School resources and student achievement revisited: new evidence from panel data. Economics of Education Review 22: 329-335.

Hancox, R. J., Milne, B. J., and Poulton, R. (2005). Association of television viewing during childhood with poor educational achievement. Archives of Pediatrics and Adolescent Medicine 159: 614-618.

Hanushek, E. A. (Ed.). (2003). The Economics of Schooling and School Quality Volume II: Efficiency, Competition and Policy. Northampton, MA: Edward Elgar Publishing.

Hanushek, E. A. (2002). Evidence, politics, and the class size debate. In E. A. Hanushek, Lawrence Mishel, and Richard Rothstein (eds.) The Class Size Debate, pp. 37-65. Washington, DC: Economic Policy Institute.

Hanushek, E. A. (1998). Conclusions and controversies about the effectiveness of school resources. Economic Policy Review 4(1): 11-27.

Hanushek, E. A. (1997). Assessing the effects of school resources on student performance: an update. Educational Evaluation and Policy Analysis 19(2): 141-164.

Hanushek, E. A. (1996a). School resources and student performance. In G. Burtless (Ed.), Does Money Matter? (pp. 43-73). Washington, DC: Brookings Institution Press.

Hanushek, E. A. (1996b). A more complete picture of school resource policies. Review of Educational Research 66(3): 397-409.

Hasler, L. (2008, August 21). Was an Gymnasien verschlaften wird. Weltwoche 76(34): 24-29.

Herzog, S. (2005). Measuring determinants of student return vs. dropout/stopout vs. transfer: a first-to-second year analysis of new freshmen. Research in Higher Education 46(8): 883-928.

Herzog, S. (2007a). The ecology of learning: the impact of classroom features and utilization on student academic success. In N. Valcik (ed.) Space: The Final Frontier for Institutional Research. New Directions for Institutional Research, no. 135. San Francisco: Jossey-Bass.

Herzog, S. (2007b, November). Diversity and Educational Benefits: Moving Beyond Self-Reported Questionnaire Data. Education Working Paper Archive, University of Arkansas. Fayetteville, AR.

Hess, F. M. (2008). Still At Risk: What Students Don’t Know, Even Now. Washington, DC: Common Core.

Hiller, R. B. (1996). School district wealth and participation in college preparatory courses. High School Journal 80(1): 48-58.

Hinshaw, S. P. (1992). Externalizing behavior problems and academic underachievement in childhood and adolescence: causal relationships and underlying mechanisms. Psychological Bulletin 111(1): 127-155.

Hoffer, T., Greeley, A. M., and Coleman, J. S. (1985). Achievement in public and catholic schools. Sociology of Education 58(2): 74-97.

Hoxby, C. M., and Rockoff, J. E. (2005). Findings from the city of big shoulders. Education Next (Fall): 52-58.

Hoxby, C. M. (2002, August). Peer Effects in the Classroom: Learning from Gender and Race Variation. (Working Paper No. 7867). National Bureau of Economic Research, Cambridge, MA.

Hu, S., & Kuh, G. D. (2003). Diversity experiences and college student learning and personal outcomes. Journal of College Student Development, 44(3): 320-334.

Hunter, J. E., and F. L. Schmidt. (2000). Racial and gender bias in ability and achievement tests: Resolving the apparent paradox. Psychology, Public Policy, and Law, 6(1): 151-158.

Ishitani, T. T., and Snider, K. G. (2006). Longitudinal effects of college preparation programs on college retention. IR Applications Vol 9, May 3, Association for Institutional Research.

Jaccard, J. (2001). Interaction Effects in Logistic Regression. Sage University Papers Series on Quantitative Applications in the Social Sciences, 07-135. Thousand Oaks, CA: Sage.

Jacoby, D. (2006). The effects of part-time faculty employment upon community college graduation rates. Journal of Higher Education 77(6): 1081-1103.

Jencks, C. (1998). Racial bias in testing. In C. Jencks and M. Phillips (Eds.), The Black-White Test Score Gap, pp. 55-85. Washington, DC: Brookings Institution Press.

Jepsen, C., and Rivkin, S. (2002). Class Size Reduction, Teacher Quality, and Academic Achievement in California Public Elementary Schools. San Francisco, CA: Public Policy Institute of California.

Jones, J. T., and Zimmer, R. W. (2001). Examining the impact of capital on academic achievement. Economics of Education Review 20: 577-588.

Kao, G. (2001). Race and ethnic differences in peer influences on educational achievement. In E. Anderson & D. S. Massey (Eds.), Problem of the Century: Racial Stratification in the United States, pp. 437-460. New York: Russell Sage.

Kazal-Thresher, D. M. (1993). Educational expenditures and school achievement: when and how money can make a difference. Educational Researcher 22(2): 30-32.

Kirst, M. W., and Venezia, A. (Eds.). (2004). From High School to College - Improving Opportunities for Success in Postsecondary Education. San Francisco: Jossey-Bass.

Klopfenstein, K. (2004). Advanced placement: do minorities have equal opportunity? Economics of Education Review 23: 115-131.

Koenig, K. A., Frey, M. C., and Detterman, D. K. (2008). ACT and general cognitive ability. Intelligence 36: 153-160.

Krueger, A. B. (2003). Economic considerations and class size. The Economic Journal 113(February): F34-F63.

Kuh, G. D., Kinzie, J., Schuh, J. H., Whitt, E. J., and Associates. (2005). Student Success in College: Creating Conditions That Matter. San Francisco: Jossey-Bass.

Lewin, T. (2005, August 17). Many going to college are not ready, report says. The New York Times. Retrieved August 17, 2005, from

Lillydahl, J. (1990). Academic achievement and part-time employment of high school students. Journal of Economic Education 21(3): 307-316.

Lisbon Council. (2006, March 13). The evidence is clear: education pays. Press release retrieved June 18, 2006, from

Lohfink, M. M., and Paulsen, M. B. (2005). Comparing the determinants of persistence for first-generation and continuing-generation students. Journal of College Student Development 46(4): 409-428.

Luo, J., and Jamieson-Drake, D. (2005). Linking student precollege characteristics to college development outcomes: the search for a meaningful way to inform institutional practice and policy. IR Applications Vol 7, November 3, Association for Institutional Research.

Marlow, M. L. (2000). Spending, school structure, and public education quality: evidence from California. Economics of Education Review 19: 89-106.

Milem, J. F., M. J. Chang, and A. L. Antonio. (2005). Making Diversity Work on Campus: A Research-Based Perspective. Washington, DC: Association of American Colleges and Universities.

Montmarquette, C., and Mahseredjian, S. (1989). Does school matter for educational achievement? A two-way nested-error components analysis. Journal of Applied Econometrics 4: 181-193.

Morgan, S. P., and Teachman, J. D. (1988). Logistic regression: Description, examples, and comparisons. Journal of Marriage and the Family, 50(4): 929-936.

Mosteller, F., and Moynihan, D. P. (Eds.). (1972). On Equality of Educational Opportunity: Papers Deriving from the Harvard University Faculty Seminar on the Coleman Report. New York: Random House.

Murray, V. E. (2008, June). The High Price of Failure in California: How Inadequate Education Costs Schools, Students, and Society. San Francisco: Pacific Research Institute.

National Endowment for the Arts. (2007, November). To Read or Not To Read: A Question of National Consequence. Research Report No. 47. Washington, DC: Author.

Nevada Department of Education. (2008). Nevada Annual Reports of Accountability, 2005-2006 [Data file]. Available at

Nolen, S. B. (2003). Learning environment, motivation, and achievement in high school science. Journal of Research in Science Teaching 40(4): 347-368.

Okagaki, L., and Frensch, P. A. (1998). Parenting and children’s school achievement: a multiethnic perspective. American Educational Research Journal 35(1): 123-144.

Pascarella, E. T., and Terenzini, P. T. (2005). How College Affects Students: A Third Decade of Research. San Francisco, CA: Jossey-Bass.

Pedhazur, E. J. (1997). Multiple Regression in Behavioral Research. New York: Harcourt Brace.

Peng, C. J., So, T. H., Stage, F. K., and St. John, E. P. (2002). The use and interpretation of logistic regression in higher education journals: 1988-99. Research in Higher Education 43(3): 259-293.

Perna, L. W., and Titus, M. A. (2005). The relationship between parental involvement as social capital and college enrollment: an examination of racial/ethnic group differences. The Journal of Higher Education 76(5): 485-518.

Peter D. Hart Research Associates. (2005). Rising to the Challenge: Are High School Graduates Prepared for College and Work? Washington, DC: Author.

Pike, G. R., & Kuh, G. D. (2006). Relationships among structural diversity, informal peer interaction, and perception of the campus environment. Review of Higher Education, 29(4): 425-450.

Pike, G. R., and Saupe, J. L. (2002). Does high school matter? An analysis of three methods of predicting first-year grades. Research in Higher Education 43(2): 187-207.

Pike, G. (1999). The constant error of the halo in educational outcomes research. Research in Higher Education, 40(1): 61-86.

Plüss, M., and Scheytt, S. (2006). Das ist doch hirnrissig. Die Weltwoche 74(23): 56-59.

Podgursky, M. (2006). Is there a ‘qualified teacher’ shortage? Education Next (Spring): 27-32.

Pohlmann, J. T., & Beggs, D. L. (1974). A study of the validity of self-reported measures of academic growth. Journal of Educational Measurement, 11(2): 115-119.

Porter, S. (2005). What can multilevel models add to institutional research? In M. A. Coughlin (ed.), Applications of Intermediate/Advanced Statistics in Institutional Research, pp. 110-131. Tallahassee, FL: Association for Institutional Research.

Ramsey, J. (2008, June). Creating a High School Culture of College-Going: The Case of the Washington State Achievers. Issue Brief. Washington, DC: Institute for Higher Education Policy.

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical Linear Models: Applications and Data Analysis Methods. Thousand Oaks, CA: Sage.

Raymond, M. E., and Hanushek, E. A. (2003). High-stakes research. Education Next (Summer): 48-55.

Reason, R. D., Terenzini, P. T., & Domingo, R. J. (2006). First things first: Developing academic competence in the first year of college. Research in Higher Education, 47(2): 149-175.

Reinking, D., and Wu, J. (1990). Reexamining research on television and reading. Reading Research and Instruction 29(Winter): 30-43.

Rhee, B. (2008). Institutional climate and student departure: a multinomial multilevel modeling approach. The Review of Higher Education 31(2): 161-183.

Rivkin, S. G., Hanushek, E. A., and Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica 73(2): 417-458.

Ritterband, P. (1973). Race, resources, and achievement. Sociology of Education 46(2): 162-170.

Robbins, S. B., K. Lauver, L. Huy, D. Davis, and R. Langley. (2004). Do psychosocial and study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130(2): 261-288.

Rooney, P., Hussar, W., and Planty, M. (2006). The Condition of Education (NCES 2006-071). Washington, DC: US Department of Education.

Rose, H., Sonstelie J., and Reinhard, R. (2006). School Resources and Academic Standards in California: Lessons from the Schoolhouse. San Francisco, CA: Public Policy Institute of California.

Rumberger, R. W., & Palardy, G. J. (2003). Does segregation still matter? The impact of student composition on academic achievement in high school. Teachers College Record, 107(9): 1999-2045.

Schibik, T., and Harrington, C. (2004). Caveat Emptor: Is There a Relationship between Part-Time Faculty Utilization and Student Learning Outcomes and Retention? AIR Professional File, Number 91, Spring.

Schuetz, P. (2002). Instructional practices of part-time and full-time faculty. Community College Faculty: Characteristics, Practices, and Challenges, New Directions for Community Colleges, 118: 39-46.

Schuster, J. H. (2003). The faculty makeover: What does it mean for students? New Directions for Higher Education, 123: 15-22.

Seidman, A. (ed.) (2005). College Student Retention: Formula for Success. Westport, CT: Greenwood Publishing Group.

Shaw, E. J. (2005). Researching the Educational Benefits of Diversity (Research Report No. 2005-4). New York: The College Board.

Shireman, R. (2004). Rigorous Courses and Student Achievement in High School: An Options Paper for the Governor of California (CSHE.13.04). Berkeley, CA: Center for Studies in Higher Education, University of California.

Soares, L., and Mazzeo, C. (2008, August). College-Ready Students, Student-Ready Colleges. Washington, DC: Center for American Progress.

St. John, E. P., and Wilkerson, M. (eds.) (2006). Reframing Persistence Research to Improve Academic Success. New Directions for Institutional Research, no. 130. San Francisco, CA: Jossey-Bass.

St. John, E. P. (ed.) (2006). Improving Academic Success: Using Persistence Research to Address Critical Challenges. New Directions for Institutional Research. San Francisco, CA: Jossey-Bass.

Steinberg, L. (1996). Beyond the Classroom: Why School Reform has Failed and What Parents Need To Do. New York: Simon & Schuster.

Sternberg, R. J. (2006). The Rainbow Project: Enhancing the SAT through assessments of analytical, practical, and creative skills. Intelligence, 34: 321-350.

Terenzini, P. T., Cabrera, A. F., Colbeck, C. L., Bjorklund, S. A., and Parente, J. M. (2001). Racial and ethnic diversity in the classroom: does it promote student learning? The Journal of Higher Education, 72(5): 509-531.

The Education Trust. (2005, November). Gaining Traction, Gaining Ground: How Some High Schools Accelerate Learning for Struggling Students. Washington, DC: Author.

The Urban Institute. (2005, February). What Do We Know: Seeking Effective Math and Science Instruction. Washington, DC: Author.

Thernstrom, A., and Thernstrom, S. (2003). No Excuses: Closing the Racial Gap in Learning. New York: Simon & Schuster.

Tinto, V. (1987). Leaving College: Rethinking the Causes and Cures of Student Attrition. Chicago: University of Chicago Press.

US Department of Health, Education, and Welfare. (1966). Equality of Educational Opportunity. Two volumes. Washington, DC: US GPO.

Walters, A. K. (2006, June 30). Most states seek to define ‘rigorous curriculum’ options for new federal grant program. The Chronicle of Higher Education. Retrieved June 30, 2006.

Wenglinsky, H. (1997). How money matters: the effects of school district spending on academic achievement. Sociology of Education 70(3): 221-237.

Wenglinsky, H. (1998). Finance equalization and within-school equity: the relationship between education spending and the social distribution of achievement. Educational Evaluation and Policy Analysis 20(4): 269-283.

West, M. R., and Woessmann, L. (2003). Crowd control: an international look at the relationship between class size and student achievement. Education Next (Summer): 56-62.

Woodhead, C. (2002). The Standards of Today and How to Raise Them to the Standards of Tomorrow. London, UK: Adam Smith Institute.

Fan, X., and Chen, M. (2001). Parental involvement and student academic achievement: a meta-analysis. Educational Psychology Review 13(1): 1-22.

Young, J. R. (2002, December 6). Homework? What homework? The Chronicle of Higher Education, A35-A37.

Zhou, M., and Bankston, C. L. (1998). Growing Up American: How Vietnamese Children Adapt to Life in the United States. New York: Russell Sage Foundation.


Department of Education Reform
University of Arkansas
201 Graduate Education Building
Fayetteville, AR 72701
Ph: 479/575-3172
Fax: 479/575-3196