Transforming Postsecondary Remediation Webinar Follow-Up Questions: Part 1 – Placement Test Models and Effectiveness

On August 15, the College and Career Readiness and Success Center at the American Institutes for Research and the American Youth Policy Forum co-hosted a webinar, “Transforming Remediation: Understanding the Research, Policy, and Practice.” A brief summary of the webinar is available here.

This post is the first in a two-part series in which presenters Bruce Vandal, Vice President of Complete College America; Katie Hern, Director of the California Acceleration Project; Cynthia Liston, Associate Vice President, Policy Research & Special Projects at the North Carolina Community College System; and Michelle Hodara, Senior Researcher at Education Northwest respond to questions submitted by participants.

Can you expand on the placement tests used to place students into remediation? What different models exist, and how effective are they at predicting a student’s academic ability and needs?

Katie Hern: The most common placement tests used in community colleges nationwide are standardized tests like Accuplacer and Compass. Accuplacer is the instrument at my own college, and we use two versions of the test for placement in English. There is a test on reading comprehension, which gives students short reading excerpts on a variety of topics (typically a paragraph or two) and asks them to respond to questions by choosing from multiple-choice options. There is also a test of what’s called “sentence skills.”

Tests like these can lead us to underestimate students’ real capacity. There are many reasons why a capable student – one who could do fine in college coursework – might get a placement test question wrong. For example, a sample question from the Accuplacer practice test includes a sentence about a writer who is “freed from the necessity of selling his pen for the political purposes of others.” Students are asked to rewrite the sentence in their heads, beginning with the phrase “The author was not obliged” and then choosing among four multiple-choice options for other phrases that would make sense in the new sentence. The prompt includes several phrases that are unlikely to be part of students’ everyday language, and its content will also be unfamiliar to many of them (how many 18-year-olds have spent time thinking about the ethical dilemma of writers-for-hire who express others’ political views?). The item’s language and references may lead students to choose the wrong answer and be judged deficient in sentence skills, but if asked to produce their own writing about a topic in which they had some knowledge, their actual sentences might be clear and competent.

As noted by Michelle Hodara in the webinar, standardized placement tests like Accuplacer are inexpensive and efficient, but they are poorly aligned with the curriculum of college English, which is unlikely to include tasks like the one above. After all, how often do reading and writing involve multiple-choice answers? The skills that can be measured efficiently and inexpensively – such as awareness of discrete grammatical topics – are not in fact the most essential skills for success in college. Academic literacy involves grappling with longer, complex texts, engaging in higher-order thinking, and producing extended written work that integrates information and ideas from readings. And being successful in school also involves a host of other affective, behavioral, and external factors that are not measured by current placement tests, such as students’ willingness to persist with difficult material, consistency in completing assignments, and external life circumstances (especially important in community college populations with higher rates of poverty).

My colleague Myra Snell and I have tried to approach the placement question quantitatively, with a study of eight semesters of student data from remedial English courses at Chabot and Las Positas colleges. We examined students’ Accuplacer scores and their course pass rates, and our goal was to determine whether some students should be directed to the one-semester, accelerated English course and others to the non-accelerated, two-semester remedial path. Perhaps students scoring in the bottom 20% should be discouraged or blocked from the accelerated course? The bottom 15%? 10%? What we found surprised us. Low-scoring students did have lower pass rates than their higher-scoring peers, but they did not perform any better in the slower curriculum. Their pass rates in the accelerated course were as high as or higher than in the first course of the longer sequence. Even students with scores in the bottom 5% at both colleges still passed the accelerated course at a rate of 48% and saw no gains from being slowed down. The study indicated to us that Accuplacer was meaningless as an instrument for sorting students into the number of semesters of remediation needed – ironically, the primary use of this test across the country. In addition to our descriptive analysis, Irvine Valley College researcher Craig Hayward conducted a regression analysis (Hern & Snell, 2010) of the same data set and concluded that Accuplacer scores explained just 3% of the variation in students’ pass rates.
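To make the logic of this kind of analysis concrete, here is a minimal sketch in Python of the two steps described above: grouping pass rates by placement-score percentile band, and fitting a simple regression to see how much of the variation in passing the score explains. It is not the study’s actual code; the file name and column names ("placement_outcomes.csv", "accuplacer_score", "passed") are hypothetical placeholders.

```python
# Sketch only: assumes a hypothetical CSV with one row per student,
# an "accuplacer_score" column, and a 0/1 "passed" column for the course outcome.
import pandas as pd
from scipy import stats

df = pd.read_csv("placement_outcomes.csv")  # hypothetical file name

# Descriptive step: pass rates for students in the bottom 5%, 10%, and 20% of scores.
for pct in (0.05, 0.10, 0.20):
    cutoff = df["accuplacer_score"].quantile(pct)
    band = df[df["accuplacer_score"] <= cutoff]
    print(f"Bottom {pct:.0%} (score <= {cutoff:.0f}): "
          f"pass rate = {band['passed'].mean():.0%}, n = {len(band)}")

# Regression step: a simple linear probability model; rvalue**2 is the share of
# variation in pass/fail outcomes explained by the placement score.
result = stats.linregress(df["accuplacer_score"], df["passed"])
print(f"R^2 = {result.rvalue ** 2:.3f}")
```

A very small R² in the final line would correspond to the kind of finding reported above: the placement score tells you almost nothing about who will pass.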

In math, there are several discipline-specific problems with placement testing. I’ll discuss two here.

First, recall of math terminology and procedures deteriorates rapidly when not used, and most community college students take placement tests without any review or other preparation. They often don’t realize that this single test could add multiple semesters of non-credit-bearing coursework to their already long degree program, and that the lower down they are placed, the less likely they are to make progress on that degree (see Venezia’s study “One Shot Deal”, WestEd 2010). As a result, they perform more poorly than they might have.

In response to the first problem, some community colleges have focused on having students complete pre-testing reviews. This solution, however, does not address a more significant problem: the curricular misalignment between algebra-based definitions of “college readiness” and the college-level coursework for students in pathways that aren’t math-intensive (e.g., students who’ll take statistics or quantitative reasoning courses). Our current placement tests assess students on their recall of a long list of arithmetic and algebra procedures, knowledge that is generally agreed to be prerequisite for the study of calculus (required for students in STEM and business fields). However, for students in other pathways, most of these topics will never be needed in their college-level courses. Why, then, is knowledge of algebra being used to determine placement into college statistics or liberal arts math?

An analogy might be to use a Latin exam to qualify students for a biology course. Proponents of Latin might argue that many biological terms originate in Latin, or that studying Latin fosters analytic thinking, or that Latin is inherently valuable for an educated human being (indeed, it used to be an entrance requirement for higher education). But the fact remains that students can be perfectly successful in a rigorous biology class without being fluent in Latin. And while a Latin test may be relevant for placement into more advanced Latin courses, we no longer believe it is a valid way to determine access to other college-level courses.

Michelle Hodara: Katie Hern provides a great answer to this question. I’d also like to refer you to page 40 of a study I conducted. In this study, we examined assessment and placement policies and practices in seven states across the country. The table on page 40 provides some idea of the types of placement exams and policies used in those states.

Bruce Vandal: States concerned about remedial education should not spend time trying to build the perfect test for assessing college readiness. It is simply infeasible and unwise for states to spend the resources necessary to create new assessments. Instead, they should emphasize placing significantly more students into gateway courses and providing them with additional academic support. Placement policies should use multiple measures, such as high school GPA, high school transcripts, and non-cognitive measures like “grit” (discussed further in next week’s post), to determine the courses and instructional models that will increase student success in gateway courses.

Check back on Tuesday, September 10, when the presenters address grit, the Common Core State Standards, and special populations in relation to remediation and share what they see as future needs for research in this area.

Erin Russ is a Program Associate at the American Youth Policy Forum.
