86 THE JOURNAL OF SPECIAL EDUCATION VOL. 39/NO. 2/2005/PP. 86–96
Accounting for the Performance of Students With
Disabilities on Statewide Assessments
Kimber W. Malmgren, University of Wisconsin–Madison
Margaret J. McLaughlin, University of Maryland
Victor Nolet, Western Washington University
The current study investigates school-level factors that affect the performance of students with disabilities on statewide assessments. Data were collected as part of a larger study examining the effects
of education policy reform on students with disabilities. Statewide assessment data for students with
disabilities from 2 school districts within 1 state were analyzed. Assessment results in reading and
math in 3rd, 5th, and 8th grades across 2 school years were analyzed using a series of hierarchical
linear regressions. Of the variables considered, only the performance of schools’ general education
students on the assessments added any predictive value to the regression model after accounting for
school demographic indicators.
The passage in 2002 of the No Child Left Behind Act (NCLBA)
signaled a new era in accountability for students with disabilities. While the 1997 amendments to the Individuals with Disabilities Education Act (IDEA, 2001) required that students
with disabilities participate in state and local assessments and
that results be reported, the IDEA did not require that the results be factored into accountability indexes (McLaughlin &
Thurlow, 2003). Even after passage of the 1997 amendments,
state policies regarding the performance of students with disabilities were either dismissive (e.g., assessment exemptions
were permitted) or undermining (e.g., scores were not included
in accountability indexes because of the use of accommodations or alternate assessments; Elliott, Erickson, Thurlow, &
Shriner, 2000; Thurlow, Lazarus, Thompson, & Robey, 2002).
The NCLBA has reinforced the necessity of including all students in state assessments and created the mandate for universal accountability. The act and accompanying regulations
require states to have a single accountability system that is
based on challenging content and achievement standards in
reading, language arts, and math in Grades 3 through 8. In
addition, states must have an assessment that is aligned with
the grade-level standards, and the results of those assessments
must be reported in terms of the proportion of students who
performed at basic, proficient, and advanced levels. Results
must also be disaggregated and reported by specific subgroups
of students, including five major racial and ethnic groups, poverty, English language learners (ELL), and disability. As part
of the accountability requirements of the NCLBA, states must
establish annual performance objectives for each subgroup
of students that enable all students to reach the state standard
of proficiency or advanced on the state assessments within
12 years. The annual progress objectives, referred to as Adequate Yearly Progress (AYP), require increased percentages of
students within each subgroup to meet the proficient level.
Failure to make AYP for any one subgroup can result in a
school’s facing a series of consequences (McLaughlin & Thurlow, 2003). As the requirements of the NCLBA are being implemented in school districts across the United States, the
performance and progress of the subgroup of students with
disabilities appears to be one of the most problematic (“Quality Counts,” 2004).
In the 2002–2003 academic year, states reported varying levels of proficiency on assessments in reading and math
for students with disabilities in the various grade levels with
one common theme: Many were well below the initial performance objectives that had been set for them. In Washington State, for example, only 25% of the fourth-grade students with disabilities
met the criteria for proficiency on the math assessment, and
31% met the criteria for proficiency on the reading assessment. In Maryland, only 23% of third-grade students with
disabilities met the criteria for proficiency on the reading assessment. In Wisconsin, only 31% of the eighth-grade students with disabilities met the criteria for proficiency on the
math assessment. In addition, during that same school year,
about a third of the schools in Maryland that failed to meet
the AYP goals did so solely because of the performance of
students with disabilities.
Performance deficits are only one of several challenges
associated with including students with disabilities in the
NCLBA. One of the more difficult issues concerns how to
consider assessment accommodations within calculations of
AYP. While some states have adopted very permissive accommodation policies that allow individual students with disabilities to have any of a number of accommodations without affecting their scores, other states invalidate or automatically count as basic the scores of students who receive certain assessment accommodations.

Address: Kimber W. Malmgren, University of Wisconsin, Rehab Psych and Special Ed Dept., 432 N. Murray St., Madison, WI 53706

Another factor that contributes to
the complexity of the issues surrounding students with disabilities pertains to alternate assessments and the recently released
regulation concerning students with the most significant cognitive disabilities. States and local districts may now count as
proficient up to 1% of the scores of students who are held to
alternative achievement standards (typically through the use
of an alternative assessment). Obvious from these examples
is the unique complexity of including students with disabilities in the type of assessment-based accountability defined by
the NCLBA.
In response to these unique challenges, the Educational Policy Reform
Research Institute (EPRRI) was established in 2000 and funded
by the U.S. Department of Education to investigate the impact of educational accountability reforms on students with
disabilities and the programs and systems that serve them. The
impact and interpretation of the NCLBA has become an important area of investigation. The EPRRI’s overall program of
research includes both qualitative and quantitative investigations at the state, district, and building levels and is focused
on documenting performance trends among students with disabilities as well as identifying factors that affect that performance. The analyses reported here pertain to one research
question: What building-level factors account for the performance of students with disabilities on statewide assessments?
In this study, we set out to examine the effect of various school-level factors on the achievement of students with disabilities on statewide assessments. In framing our
analyses, we chose to examine three types of school-level
variables: demographic characteristics, school characteristics,
and special education characteristics. In our conceptualization
of factors to consider, school-level demographic characteristics
(e.g., overall socioeconomic status of the school population,
percentage of students who qualify as ELL) were included because they have been shown to influence student achievement
above and beyond individual student characteristics (e.g., Caldas & Bankston, 1997; Ma & Klinger, 2000). In addition to
school-level demographic characteristics, other school characteristics have also been shown to affect or predict student
achievement at the building level. School size, for example, has
been repeatedly linked to overall student achievement (e.g., Alspaugh & Gao, 2003; Borland & Howsen, 2003; Driscoll,
Halcoussis, & Svorny, 2003; Ho & Willms, 1996). In addition,
the level of preparation of the teaching staff (e.g., Darling-Hammond, 1997), level of parental participation (e.g., Goldring
& Shapira, 1996), and levels of school funding (e.g., Namboodiri, Corwin, & Dorsten, 1993) have also all been linked
to student achievement at the building level. Heretofore, these
relationships have remained largely unexplored for students
with disabilities.
Because our interest was in examining factors associated with
the performance of students with disabilities, we also considered a set of school-level special education variables. Our
consideration of school-level special education factors was
guided by our identification of school-level variables that are
linked theoretically to achievement in ways that also apply
to the subset of students receiving special education services.
For example, the concept of economies of scale has been
applied to education for decades (Driscoll et al., 2003); the
observed relationships between school size and student achievement are based on this notion. The same notion could be applied to the delivery of special education services. Therefore,
we hypothesized that schools with a larger special education population might have a larger special education staff, be
more efficient, and ultimately be more effective academically.
In addition to theoretically derived special education factors, we also felt it important to consider factors emerging as
problems as school personnel began to grapple with the logistics of moving students with disabilities squarely into the
school accountability arena. For example, the number of students with disabilities being exempted from participation in
state assessments was considered an extremely important variable in our initial discussions with state and local education
agency (LEA) directors of special education from our participating states, as well as our discussions with federal Department of Education staff and nationally known experts in the
legal aspects of assessing students with disabilities and in Title
I research. It is important to recognize, however, that these initial discussions occurred prior to the full implementation of
the NCLBA, which holds schools accountable for assessment
participation as well as for the performance of students with
disabilities.
We acknowledge that additional variables likely related
to performance on assessments at the building level (e.g., percentage of students in various categories of disability, data reflecting percentage of time students with disabilities spend in
general education, indicators of school climate and teacher
quality) were not included in our analyses. These additional
variables were not available in disaggregated form at the
school level when the data for these analyses were collected.
As schools struggle to increase the percentage of their students with disabilities who reach proficiency on their state assessments, understanding the variables linked to exemplary
performance may influence the current policy debates regarding whether these students should be included in the same
accountability system as other students and may allow education professionals to direct their attention and resources to
those elements of schools that are most likely to have an impact. At the same time, exposing elements commonly believed
to affect student performance but that in actuality have little
impact on the performance of students with disabilities is also
a positive outcome in that it forces schools and districts to recognize that the performance of students with disabilities is not
simply or completely a reflection of each individual child’s
disability.
Method
The EPRRI conducts its research in four states: California,
Maryland, New York, and Texas. These four states were purposely selected for their demographic diversity and their history of implementing standards-based reform. In addition, all
four states maintained comprehensive databases on students
with disabilities and their participation in and performance
on state assessments. Within each state, EPRRI staff and the
state director of special education identified and secured the
participation of two school districts. As a group, the school
districts were selected to vary across several key accountability features, including high-stakes versus low-stakes accountability consequences, recentness of reforms, stability versus
instability of reform efforts, participation of students with disabilities in all accountability reports, and use of alternate assessments. However, a key criterion was that the districts have
an efficient and comprehensive data system that would permit analysis of school-level variables related to students with
disabilities. Again, it is important to note that since the inception of the study, it is likely that all LEAs in these four
states have increased their reporting capabilities in response
to NCLBA requirements.
In this study, we chose to analyze the data from the participating school districts in one state (i.e., Maryland). We felt
it was important to analyze data from one state for our initial
analyses because the participating states varied in terms of
their assessments, their assessment grades, their accommodation and participation policies, and even the ways in which
they categorized their low-income students. We felt it was important to maintain this consistency in our initial tests and intend to replicate these analyses with the other participating
states. In addition, the Maryland data set was complete, provided us with enough range in performance data to reliably
analyze variance, and represented large numbers of students
with disabilities in grades ranging from lower elementary to
middle school. The two participating Maryland districts reflect demographic diversity—within as well as between the
two districts—and include students with disabilities who are
also members of minority and culturally and linguistically different groups.
Sample School Districts and Schools
The school districts ranged in size from 27,528 (District 1) to
134,180 (District 2) students in the 2000–2001 school year.
The percentage of students qualifying for free or reduced-price meals in the 2000–2001 school year was 23% in District 1 and 8% in District 2. The mean percentage of non-White students at the school level was 51% in District 1 and 4.4% in District 2. The mean percentage of students identified for special education services at the building level was 10.9% in District 1 and 13.5% in District 2. Quantitative data
were collected for every school in District 1 and for a subset
(i.e., 33%) of randomly selected schools in District 2. This
sampling strategy was adopted because of the particularly
large size of District 2.
Data Sources
As part of the EPRRI’s goal of documenting impacts of accountability, school-level demographic and reading and math
performance data were collected within the eight study districts over 3 years at every testing grade in elementary and
middle schools. Data were collected over the years of 1999–
2000, 2000–2001, and 2001–2002. Students with disabilities
attending special schools or in nonpublic settings were not included in the database unless the student’s participation and
performance were reported with the home school and included
in their accountability system.
Data were initially retrieved by EPRRI staff from publicly reported data sources (e.g., state and LEA Web sites).
Data entry was carried out by graduate students employed by
the EPRRI. The data were double-checked for entry errors by
EPRRI staff. Subsequent to the data entry reliability checks,
all data imported from the public sources were summarized
and shared with state and local district stakeholders. As missing and erroneous data were identified by these stakeholders
and by EPRRI researchers, custom reports documenting the
performance of students with disabilities on the various indicators were generated by the local districts for the EPRRI’s
use and analysis.
Demographic data were collected at the state and LEA
levels via Web site searches, on-site interviews, phone interviews, and e-mail communication. School-level data collected
by EPRRI staff within each of the eight LEAs included in the
overall research project included percentage of students receiving special education services, number of students by ethnicity, number of students by gender, number of students by
grade, percentage of students receiving free or reduced-price
meals, percentage of students qualifying as ELL, total number of students enrolled, performance of general education
and special education students on statewide assessments,
number of students exempted from testing by grade, number
of students receiving accommodations on the statewide assessments by grade, and graduation rates for high schools.
Although the percentage of students receiving free or
reduced-price meals at a school is an admittedly crude proxy
for socioeconomic status, we utilize this percentage as our
measure of low SES (a School Demographic factor) in our analysis
and the discussion of results that follows.
Analyses
A series of hierarchical linear regressions was conducted to
determine which building-level factors predicted the performance of students with disabilities on statewide assessments,
above and beyond the other predictors. Analyses were conducted initially with data available for the statewide reading
and math assessments in the testing grades in Maryland for
the 1999–2000 and 2000–2001 school years. In the regressions, the performance of students with disabilities (at the various grade levels and in the two content areas of interest) was
utilized as the dependent variable. Performance was operationalized as the percentage of students meeting the criteria
for proficiency as defined by the state. Separate regression
analyses were conducted for third-, fifth-, and eighth-grade
data across the two school years in the two content areas of
interest. This resulted in a total of 12 separate sets of regressions using 12 separate measures of performance (of students
with disabilities) as criterion variables.
A set of seven school-level predictor variables representing school demographics, general school factors, and special education factors was chosen for these analyses. The
predictors that were ultimately selected and entered into the
regression analyses included (a) percentage of students with
disabilities not tested or exempted from the individual content
area assessments (Special Education factor); (b) percentage
of students without disabilities who met proficiency for that
test (General School factor); (c) school’s total enrollment (General School factor); (d) percentage of students at the school
qualifying for free or reduced-price meals (School Demographic); (e) percentage of minority students at the school
(School Demographic); and (f) percentage of students receiving special education services enrolled at the school (Special
Education factor). A seventh predictor variable, percentage of
students identified as ELL at the school (School Demographic),
was entered in the analyses of 2000–2001 school year performance data. Data reflecting the percentage of students identified as ELL in the 1999–2000 school year were incomplete
for one of the grade levels of interest, so this variable was not
included. The performance of general education students was
identified as a general school factor and included as a predictor to investigate the presumed independence of these two
groups of students. Obviously, in most analyses of school-level factors influencing achievement, the performance of the
general education population is used as the outcome variable
and therefore not as a predictor. Given our interest in factors
affecting achievement of students receiving special education
services, it was deemed important to consider the general culture of performance at the school in our analyses.
The independent (predictor) variables were entered in
the regression equations in blocks. Each of the predictors was
initially entered in the first position, and then, in a second set
of regressions, in the last position so that we could determine
how much variance in the dependent variable could be accounted for by each independent variable of interest after controlling for the others.
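The first-position/last-position entry procedure described above can be sketched in code. The following is a minimal illustration with simulated data; the variable names, the simulated values, and the use of ordinary least squares are assumptions for illustration only, as the article does not specify its software or implementation.

```python
import numpy as np

def r_squared(X, y):
    # R^2 from an OLS fit of y on X, with an intercept added.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def entry_first_and_last(X, y, j):
    # "First position": predictor j entered alone -> its initial R^2.
    # "Last position": R^2 gained when j is added after all other
    # predictors, i.e., the change in R^2 attributable to j.
    others = [k for k in range(X.shape[1]) if k != j]
    initial = r_squared(X[:, [j]], y)
    change = r_squared(X, y) - r_squared(X[:, others], y)
    return initial, change

# Simulated stand-in for 61 schools and three predictors (e.g., low
# SES %, minority %, general education % proficient -- hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(61, 3))
y = 0.6 * X[:, 2] + rng.normal(size=61)  # outcome driven by predictor 2
initial, change = entry_first_and_last(X, y, j=2)
```

Repeating this for each of the seven predictors, for each of the 12 criterion variables, reproduces the structure of the analyses reported in Tables 5 and 6.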
Results
Descriptive summary statistics were generated for all continuous variables of interest and are reported in Tables 1 through 4. Data in these tables were generated from each school's December 1 count data. The total enrollment at the sample elementary schools was approximately 540 on average in each data collection year. The mean percentage of students receiving special education services in both school years was approximately 11%, which is similar to the national average for children aged 6 to 17 years (U.S. Department of Education, 2002). As can be seen in Tables 1 and 2, however, the range of proportions in enrollment varied greatly, with some schools reporting as few as 3% of their enrollment receiving special education services, and others reporting over 33%. Whereas the percentage of students receiving services as ELL in our sample schools was relatively low (mean of 5% or lower for elementary schools and mean of 2.8% for middle schools), the percentage of students coded as minority students was higher (i.e., 31%–35% for all sample schools) and much more variable (i.e., ranging from just over 1% to over 90%). The percentage of students receiving free or reduced-price meals at the sample schools ranged from just over 1% up to 70.7% in one school in 1999–2000, with a mean in both data collection years of approximately 18%.

TABLE 1. Summary Descriptive Statistics for Variables Entered Into the Analyses of Elementary School Performance

School characteristic      n      M      Minimum   Maximum     SD
Total enrollment
  1999–2000               61    540.0     298.0      807     123.8
  2000–2001               61    542.9     271.0      820     126.5
Special education (%)
  1999–2000               61     11.3       3.3      33.6      4.7
  2000–2001               61     11.8       3.3      33.9      4.6
Low SES (%)
  1999–2000               61     18.7       1.2      70.7     16.0
  2000–2001               61     18.4       1.4      66.7     15.5
Minority (%)
  1999–2000               61     31.0       1.5      90.6     27.1
  2000–2001               61     32.4       1.3      92.2     27.8
ELL (%)
  1999–2000               50      4.9       0.0      24.3      5.3
  2000–2001               57      5.1       0.0      27.7      5.9

Note. Low SES = students receiving free or reduced-price meals; Special education = students receiving special education services; ELL = students qualified as English language learners.

TABLE 2. Summary Descriptive Statistics for Variables Entered Into the Analyses of Middle School Performance

School characteristic      n      M      Minimum   Maximum     SD
Total enrollment
  1999–2000               19    814.5      509      1,192    208.3
  2000–2001               20    809.6      526      1,244    181.2
Special education (%)
  1999–2000               19     12.8       7.6      19.2      3.6
  2000–2001               20     13.8       9.5      18.3      2.8
Low SES (%)
  1999–2000               19     21.3       3.5      54.4     15.6
  2000–2001               20     18.0       2.8      54.0     14.2
Minority (%)
  1999–2000               19     34.8       2.3      76.8     29.2
  2000–2001               20     32.8       2.1      79.3     29.3
ELL (%)
  1999–2000               14      3.6       0.0      10.2      3.4
  2000–2001               20      2.7       0.0      11.2      3.4

Note. Low SES = students receiving free or reduced-price meals; Special education = students receiving special education services; ELL = students qualified as English language learners.
Results of the regression analyses conducted with the
1999–2000 school year data are summarized in Table 5. Results of the regression analyses conducted with the 2000–2001
data are reported in Table 6. The predictor variables identified
and entered into the analyses accounted for significant amounts
of the variance in the criterion variables in all cases. In the
analyses of the 1999–2000 data, the predictors combined accounted for 51% of the variance in the reading performance of third-grade students with disabilities, and 39%
of the variance in the performance of the same students on the
math assessment. For the fifth-grade participants, the set of predictor variables accounted for less of the variance in the reading performance of students with disabilities (i.e., 26%) but a
higher percentage of the variance in math performance (i.e.,
47%). In the analyses of the 2000–2001 data, the combined
predictors accounted for 33% of the variance in the percentage of students with disabilities scoring proficient on the third-grade reading assessment, 51% for third-grade math, 26% for
fifth-grade reading, and 33% for fifth-grade math. Between
59% and 64% of the variance in the performance of the eighth-grade participants was explained by the set of predictor variables in both reading and math in both data collection years.
Simple Model Regression Analyses
In the simple model regression analyses, the percentage of
students receiving free or reduced-price meals at the school
predicted a significant amount of the variance in the performance of students with disabilities in 8 of the 12 analyses.
The percentage of students receiving free or reduced-price
meals (referred to in the tables as “Low SES”) accounted for
a significant amount of the variance in the performance of the
third-grade students with disabilities in reading and in math
in both the 1999–2000 and 2000–2001 school years (see Tables 5 and 6). This same predictor variable also accounted for
a significant amount of the variance in the performance of
fifth-grade students with disabilities in reading in 1999–2000
and in math in both 1999–2000 and 2000–2001. However,
with regard to the performance of the eighth-grade students
with disabilities, low SES accounted for a significant amount
of variance in only one content area (i.e., math) in one data
collection year. When all other predictor variables were controlled for, the contribution of the variable capturing SES to
TABLE 3. Performance Summaries for Sample Elementary Schools

Grade                        n      M     Minimum   Maximum     SD
Grade 3 Reading
 Gen. Ed. proficient (%)
  1999–2000                 61    48.5     23.6      78.3      11.9
  2000–2001                 61    41.8     24.6      66.2      11.1
 Spec. Ed. proficient (%)
  1999–2000                 49    32.5       —       85.7      20.4
  2000–2001                 44    26.6       —       80.0      18.7
Grade 3 Math
 Gen. Ed. proficient (%)
  1999–2000                 61    52.2     42.8      93.3      15.8
  2000–2001                 61    48.5     25.9      84.5      13.9
 Spec. Ed. proficient (%)
  1999–2000                 56    21.1      9.1     100.0      22.4
  2000–2001                 54    18.5       —       90.0      20.2
Grade 5 Reading
 Gen. Ed. proficient (%)
  1999–2000                 61    55.8     25.0      83.0      13.7
  2000–2001                 61    53.5     28.1      91.1      14.3
 Spec. Ed. proficient (%)
  1999–2000                 58    32.6       —       77.8      18.6
  2000–2001                 52    25.2       —       80.0      17.6
Grade 5 Math
 Gen. Ed. proficient (%)
  1999–2000                 61    64.6     23.7     100.0      17.9
  2000–2001                 61    58.7     20.9     100.0      18.0
 Spec. Ed. proficient (%)
  1999–2000                 60    38.1       .0      85.7      19.1
  2000–2001                 59    23.4       .0      55.6      15.3

Note. Gen. Ed. = students receiving general education services; Spec. Ed. = students receiving special education services. Dashes indicate data not available.
the explanation of variance in the performance outcomes for
students with disabilities became negligible in all but one
case. When “% Low SES” was entered in the last position, it
accounted for a significant amount of additional variance in
only 1 out of 12 outcome variables: third-grade reading performance in the 1999–2000 school year.
The percentage of students in general education who
met the criteria for proficient on the statewide assessments (referred to in the tables as “Gen. Ed. proficient”) was the only
predictor variable that consistently predicted a significant amount of the variance in the performance of students
with disabilities in the simple model analyses. In all 12 sets
of analyses, the percentage of general education students meeting the criteria for proficient was statistically significant as an
explanatory variable.
The percentage of non-White students at each school
was significant in predicting variance in the simple model in
4 of the 12 outcome variables (i.e., third-grade math in both
1999–2000 and 2000–2001, fifth-grade reading in the 1999–
2000 school year, and eighth-grade math in 2000–2001). The
only other predictor variables that reached significance in the
simple model were the percentage of special education students considered exempt and the percentage of students receiving special education services at the school. The variable
capturing the percentage of special education students considered exempt accounted for a significant amount of the variance in the performance of the third-grade students with
disabilities in the 2000–2001 reading assessment only. The
percentage of students receiving special education services
at the school level was a significant predictor in the analysis
of the performance data for the fifth-grade students with
disabilities on the 1999–2000 math assessment. No other
predictor variables reached significance in the simple model
regressions.
TABLE 4. Performance Summaries for Sample Middle Schools

Grade 8                      n      M     Minimum   Maximum     SD
Reading
 Gen. ed. proficient (%)
  1999–2000                 18    33.8     18.8      56.0       9.7
  2000–2001                 20    31.7     13.7      56.5      10.3
 Spec. ed. proficient (%)
  1999–2000                 18     8.7      0        22.7       5.4
  2000–2001                 20     8.0      0        24.0       6.8
Math
 Gen. ed. proficient (%)
  1999–2000                 18    71.2     51.1      91.4      10.6
  2000–2001                 20    66.4     40.5      92.0      11.9
 Spec. ed. proficient (%)
  1999–2000                 18    29.9     12.5      60.0      13.5
  2000–2001                 20    20.6      0        48.6      12.5

Note. Gen. ed. = students receiving general education services; Spec. ed. = students receiving special education services.
Hierarchical Regression Analyses
In the full model analyses, when the contribution of each predictor was examined after controlling for all other predictors,
the percentage of students receiving free or reduced-price
meals was significant for only 1 out of the 12 outcome variables (see Tables 5 and 6). The percentage of minority students
at the schools accounted for a significant amount of variance in
2 out of the 12 outcome variables (i.e., third-grade reading in
1999–2000 and third-grade math in 2000–2001). The percentage of students with disabilities served at the building level
and the percentage of students with disabilities whose scores
were considered exempt accounted for a significant amount of
the variance in the performance of third-grade students with
disabilities in math in 2000–2001, even though those two predictors had not been significant in the simple model.
The variable capturing the performance of general education students (i.e., Gen. Ed. proficient) on the same tests as those reflected in the outcome variable of interest was the only
variable that was significant in a large majority of analyses,
across all three grades and both subject areas.
While the changes in R2 ranged from modest (i.e., .070
for fifth-grade math in the 2000–2001 school year) to marked
(i.e., .490 in eighth-grade reading in the 1999–2000 school
year), the value of the general education performance variable
was statistically significant in the complex model in 10 out of
12 cases. The only discrepant finding was in the analyses of
the 2000–2001 eighth-grade performance data, where none of
the predictors—including the performance of the general education students—was determined to add unique explanatory
power once the other predictors had been entered into the
equation as a control.
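The significance of such a change in R2 is conventionally evaluated with a partial (hierarchical) F test, F = (change in R2 / q) / ((1 - full-model R2) / (n - k - 1)), where q is the number of predictors added and k the number of predictors in the full model. The article does not spell out its test statistic, so the sketch below is an assumption about the standard procedure, and the example values are hypothetical, chosen only to be of the same magnitude as those reported.

```python
from scipy.stats import f as f_dist  # SciPy's F distribution

def f_change(r2_full, r2_reduced, n, k_full, q=1):
    # Partial F test for the R^2 gained by adding q predictors,
    # yielding a full model with k_full predictors fit to n cases.
    df1, df2 = q, n - k_full - 1
    f_stat = ((r2_full - r2_reduced) / df1) / ((1.0 - r2_full) / df2)
    p_value = f_dist.sf(f_stat, df1, df2)
    return f_stat, p_value

# Hypothetical magnitudes: full-model R^2 of .51, R^2 of .26 with the
# predictor of interest removed, 50 schools, 6 predictors in the model.
f_stat, p_value = f_change(0.51, 0.26, n=50, k_full=6)
```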
Discussion
Implications
In our analyses, the single most consistently significant predictor variable across LEAs, grade levels, and content areas
was the performance of the general education students. This
finding is striking because it suggests a school effect, whereby
schools that show good results for students without disabilities also show good results for students with disabilities.
When considering the achievement of students receiving special education services, the ownership of those students’ performance is typically considered to be the domain of special
educators. The success of students with disabilities, as well as
their difficulties, is usually linked to special education variables such as the qualifications of the special education teaching staff or the model of special education service delivery
embraced by the particular school. Viewing the achievement
of students with disabilities as the result of general schoolwide variables shifts the “ownership” of special education students’ success to a broader set of educators. This finding of a
relationship between the performance of students with disabilities on statewide assessments and that of their general education peers merits further exploration and is the focus of
ongoing research by EPRRI researchers.
An additional noteworthy finding was the lack of significance of certain predictors once other predictors were controlled for. Specifically, the percentage of students qualifying for free and reduced-price meals at each school was, in all cases but one, nonsignificant when other demographic and performance variables were accounted for. This was startling, given the strong association between a whole host of variables tapping SES and the achievement of students in general. That this school-level variable was not particularly predictive of the performance of students with disabilities is heartening, because SES is not a variable that schools or LEAs have much direct control over.

TABLE 5. Results of Regression Analyses of the Performance of Third-, Fifth-, and Eighth-Grade Students with Disabilities on Maryland Statewide Assessment, 1999–2000 School Year

Reading

Predictor                Grade 3 (df = 48)     Grade 5 (df = 57)     Grade 8 (df = 17)
                         Init. R²   ΔR²        Init. R²   ΔR²        Init. R²   ΔR²
Low SES (%)              .136**     .072*      .067*      .000       .014       .000
Minority (%)             .025       .064*      .077*      .010       .000       .002
Spec. ed. (%)            .008       .009       .001       .006       .024       .012
Enrollment (total)       .000       .021       .030       .038       .023       .155
Gen. ed. prof. (%)       .414**     .227**     .126**     .089*      .328**     .490**
Spec. ed. exempt (%)     .027       .007       .030       .053       .015       .076

Math

Predictor                Grade 3 (df = 55)     Grade 5 (df = 59)     Grade 8 (df = 17)
                         Init. R²   ΔR²        Init. R²   ΔR²        Init. R²   ΔR²
Low SES (%)              .199**     .003       .179**     .000       .152       .000
Minority (%)             .088*      .000       .056       .002       .015       .009
Spec. ed. (%)            .003       .000       .078*      .039       .072       .014
Enrollment (total)       .040       .002       .027       .000       .004       .010
Gen. ed. prof. (%)       .381**     .152**     .426**     .146**     .404**     .196*
Spec. ed. exempt (%)     .000       .001       .006       .005       .117       .034

Note. Init. R² = initial R²; ΔR² = change in R²; Low SES = students receiving free or reduced-price meals; Spec. ed. = students receiving special education services; Gen. ed. prof. = students in general education whose scores were at the proficient level or above; exempt = scores were not included in the accountability index.
*p ≤ .05. **p ≤ .01.

TABLE 6. Results of Regression Analyses of the Performance of Third-, Fifth-, and Eighth-Grade Students with Disabilities on Maryland Statewide Assessment, 2000–2001 School Year

Reading

Predictor                Grade 3 (df = 41)     Grade 5 (df = 47)     Grade 8 (df = 18)
                         Init. R²   ΔR²        Init. R²   ΔR²        Init. R²   ΔR²
Low SES (%)              .116*      .013       .009       .004       .108       .076
Minority (%)             .084       .026       .010       .019       .000       .035
Spec. ed. (%)            .078       .033       .010       .015       .030       .009
Enrollment (total)       .001       .002       .029       .010       .049       .003
Gen. ed. prof. (%)       .250**     .109*      .145**     .112*      .478**     .077
Spec. ed. exempt (%)     .100*      .000       .004       .026       .000       .004
ELL (%)                  .048       .002       .003       .033       .006       .036

Math

Predictor                Grade 3 (df = 51)     Grade 5 (df = 54)     Grade 8 (df = 18)
                         Init. R²   ΔR²        Init. R²   ΔR²        Init. R²   ΔR²
Low SES (%)              .152**     .009       .142**     .000       .459**     .128
Minority (%)             .136**     .134**     .059       .009       .220*      .000
Spec. ed. (%)            .038       .064*      .042       .034       .003       .002
Enrollment (total)       .017       .033       .008       .020       .005       .000
Gen. ed. prof. (%)       .208**     .071*      .212**     .070*      .428**     .001
Spec. ed. exempt (%)     .066       .167**     .016       .046       .032       .001
ELL (%)                  .042       .035       .040       .001       .228       .073

Note. Init. R² = initial R²; ΔR² = change in R²; Low SES = students receiving free or reduced-price meals; Spec. ed. = students receiving special education services; Gen. ed. prof. = students in general education whose scores were at the proficient level or above; exempt = scores were not included in the accountability index; ELL = students qualified as English language learners.
*p ≤ .05. **p ≤ .01.
With respect to special education variables, it was noteworthy that the proportion of students with disabilities within
a school was not a significant predictor. This means that large
populations of students with disabilities within a building did
not have the effect of bringing down the level of performance
of students with disabilities in that school, nor did they have
the effect of raising the level of performance of students with
disabilities by virtue of concentrated special education resources. Likewise, the other special education variable that was
examined in this study, the percentage of students with disabilities exempted from specific tests, did not predict a significant amount of the variance in the performance of students
with disabilities on those tests. That is, schools that exempted more students with disabilities from the assessments did not show higher performance among the students who did take the test. Because high rates of exemption were
spread across schools and grade levels, and not confined to
schools that housed special education centers, we do not believe that those schools with high exemption rates were simply the schools with the highest needs populations of special
education students. This was notable, given current concerns
about excluding the scores of some students with disabilities
from accountability. We do not deny that decisions to exclude
certain students’ scores from accountability could conceivably affect a school’s reported level of proficiency with regard
to students with disabilities (especially in very small or rural
schools where the population of students with disabilities is
very small). However, because of our large sample size, we
were able to control for varying rates of participation and verify that the percentage of students with disabilities participating in a specific assessment did not, in any case, predict level
of proficiency for that subgroup of students.
The finding in the study reported here that schools appear
to make a difference confirms one of the underlying assumptions of the new accountability system as formalized through
the NCLBA; that is, student performance is cumulative and is
influenced by the entire school (Goertz, 2001). Focusing accountability solely on individual children’s performance can
end up “blaming the victim” for failure as opposed to recognizing the responsibility and impact of all the faculty and staff
in a school. Instead, educational results are “co-produced”
(Goertz, 2001, p. 43) by students and teachers. Our findings
support this concept, as they make evident the complex mix of
factors that must be considered as states and school districts
endeavor to improve the results for students with disabilities.
Limitations
We readily acknowledge some potentially significant limitations of this research. First, we used publicly reported school-level data. Through contact with state and LEA administrators,
we made every effort to identify errors and to explain any
anomalies in the data (e.g., exceptionally large numbers of
specific groups of children, large fluctuations in participation
rates or demographic characteristics from one year to the
next). However, we did not collect the data at the school level.
In addition, any time performance on statewide assessments is used as a criterion variable in large-scale analyses, a
number of issues related to reporting of performance scores
must be considered. For example, in many schools throughout the country, the actual number of students with disabilities in a given grade level is so small that performance scores
are either not reported or must be treated with skepticism because of the variability inherent in the performance of small
numbers of students. This is especially important when
changes in performance trends are of interest, as changes that
appear to be trends may be natural but misleading fluctuations
in the data. In the case of the results presented here, the issue
of low ns is not a pressing one, as the schools in our sample
had fairly large enrollments of students with disabilities in the
data collection years. For example, in the 1999–2000 school
year, the mean enrollment for students with disabilities in the
sample elementary schools was 14 for third grade and 18 for
fifth grade, with a mode of 12 in both third and fifth grade. No
school had fewer than 6 students with disabilities in a testing
grade in either data collection year.
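The instability of proficiency rates computed over a handful of students can be illustrated with a quick calculation. This is a hedged sketch, not an analysis from the study: treating each tested student as an independent proficient/not-proficient outcome, the standard error of a school's observed rate shrinks with the square root of the number tested. The 30% rate and school sizes below are hypothetical.

```python
import math

def proficiency_rate_se(p, n):
    """Standard error of an observed proficiency rate when n students are tested
    and each is proficient with probability p (simple binomial model)."""
    return math.sqrt(p * (1.0 - p) / n)

# A school testing 6 students with disabilities vs. one testing 60,
# both with a true proficiency rate of 30%.
se_small = proficiency_rate_se(0.30, 6)    # about .19
se_large = proficiency_rate_se(0.30, 60)   # about .06
```

Under this model, a 15-point year-to-year "gain" at the small school sits well within ordinary sampling fluctuation, which is why low-n subgroup results are typically suppressed or treated with skepticism in public reporting.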
Another difficulty inherent in examining factors related
to the performance of students with disabilities is the changing policy environment. In Maryland, for example, a new statewide assessment was adopted after the collection of the data
presented here. The new assessment included new accommodation requirements. Also, at the time of the study, the mandate that at least 95% of students with disabilities participate in the assessments was not in force, nor was there a requirement to include scores from alternate assessments in accountability.
Finally, we do not know why certain students’ scores were not
reported. For example, we do not know whether these scores
were possibly invalidated because of the use of nonstandard
accommodations. As all states are examining and altering assessment and accountability policies, for example, by changing the criteria for inclusion in alternate assessments and
changing accommodations policies, what is referred to and reported as “performance” continues to be in flux.
AUTHORS’ NOTE
This manuscript was produced under Cooperative Agreement
H324P000004 from the Office of Special Education Programs,
U.S. Department of Education. The views presented herein do not
necessarily represent those of the U.S. Department of Education.
REFERENCES
Alspaugh, J. W., & Gao, R. (2003). School size as a factor in elementary
school achievement. Columbia: University of Missouri. (ERIC Document Reproduction Service No. ED475062)
Borland, M. V., & Howsen, R. M. (2003). An examination of the effect of elementary school size on student academic achievement. International
Review of Education, 49, 463–474.
Caldas, S. J., & Bankston, C., III. (1997). Effects of school population socioeconomic status on individual academic achievement. The Journal of Educational Research, 90, 269–277.
Darling-Hammond, L. (1997). Doing what matters most: Investing in quality teaching. New York: National Commission on Teaching and America’s Future.
Driscoll, D., Halcoussis, D., & Svorny, S. (2003). School district size and student performance. Economics of Education Review, 22, 193–201.
Elliott, J. L., Erickson, R., Thurlow, M., & Shriner, J. (2000). State-level accountability for the performance of students with disabilities: Five years
of change? The Journal of Special Education, 34, 39–47.
Goertz, M. E. (2001). Standards-based accountability: Horse trade or horse whip? In S. Fuhrman (Ed.), From the capitol to the classroom: Standards-based reform in the states (One hundredth yearbook of the National Society for the Study of Education, pp. 39–59). Chicago: University of Chicago Press.
Goldring, E. B., & Shapira, R. (1996). Principals’ survival with parental involvement. School Effectiveness and School Improvement, 7, 342–360.
Ho, S., & Willms, J. D. (1996). Effects of parental involvement on eighth-grade achievement. Sociology of Education, 69, 126–141.
Ma, X., & Klinger, D. A. (2000). Hierarchical linear modeling of student and
school effects on academic achievement. Canadian Journal of Education, 25(1), 41–55.
McLaughlin, M. J., & Thurlow, M. (2003). Educational accountability and students with disabilities: Issues and challenges. Educational Policy, 17, 431–451.
Namboodiri, K., Corwin, R. G., & Dorsten, L. E. (1993). Analyzing distributions in school effects research: An empirical illustration. Sociology of
Education, 66, 278–294.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Quality counts: Special education in an era of standards. (2004, January 8).
Education Week, 23(17).
Skrla, L., Scheurich, J. J., Johnson, J. F., & Koschoreck, J. W. (2001). Accountability for equity: Can state policy leverage social justice? International Journal of Leadership in Education, 4, 237–260.
Thurlow, M., Lazarus, S., Thompson, S., & Robey, J. (2002). 2001 state policies on assessment participation and accommodations (Synthesis Report 46). Minneapolis: University of Minnesota, National Center on
Educational Outcomes.
U.S. Department of Education. (2002). Twenty-fourth annual report to Congress on the implementation of the Individuals with Disabilities Education Act. Washington, DC: U.S. Government Printing Office.