Trends in Basic Reading Skills Among School-Age Children
- Of the three age groups assessed by the NAEP long-term trend (LTT) assessment, the youngest students (nine-year-olds) experienced the greatest gains from 1971 to 2012, the most recent year in which all three groups were assessed. In 2012, 74% of nine-year-olds (up 15 percentage points from 1971) demonstrated at least partially developed reading skills (Indicator I-01a). From 2012 to 2022, however, some of that ground was lost, with only 67% of nine-year-olds demonstrating that level of reading skill in the latter year. The bulk of the decline occurred during the COVID-19 pandemic period.
- From 1971 to 2012, the share of 13-year-olds scoring at the mid-range skill level or better (the ability to interrelate ideas and make generalizations) increased from 58% to 65% (the largest share recorded over the time period), and the share demonstrating understanding of complicated information increased from 10% to 15% (Indicator I-01b). By 2023, however, the share with at least mid-range skills had shrunk back to its 1971 size. The share scoring at the highest skill level also decreased somewhat. As with nine-year-olds, much of the decline occurred after 2020.
- For 17-year-olds, the percentage of students demonstrating the ability to interrelate ideas and make generalizations (the basic performance level for this age) rose from 79% in 1971 to a high of 86% in 1988 (Indicator I-01c). Subsequently, the trend reversed, with scores falling back to near 1971 levels by the early 2000s. Then, from 2004 to 2012 (as of this update, the last time students in this age group were tested), the share demonstrating basic skills or better increased to 82%.
- The share of students leaving high school with the ability to extend and restructure ideas drawn from specialized or complex texts (competencies associated with the highest performance level) was 6% in 2012, having not changed by a statistically significant amount since the assessment was first administered in 1971.
- On every assessment since 1971, only a small minority of students demonstrated advanced reading skills. Nine-year-olds were more likely to read at a high level (for their age) than their older counterparts.
* Data for years 2004 and later are based on a revised assessment. See https://nces.ed.gov/nationsreportcard/ltt/howdevelop.aspx for an overview of the differences between the original and revised assessments.
Source: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, NAEP Data Explorer (Long-Term Trend), accessed 11/30/2023. Data analyzed and presented by the American Academy of Arts and Sciences’ Humanities Indicators (www.humanitiesindicators.org).
The National Assessment of Educational Progress (NAEP) includes two assessments in reading. The first, currently administered every two years and usually referred to as the “main” NAEP reading assessment, changes in response to the current state of curricula and educational practices. The second test generates long-term trend data. Administered every two to five years, this examination has remained essentially unchanged since it was first given to students in 1971; it features shorter reading passages than the main NAEP assessment and gauges students’ ability to locate specific information, make inferences, and identify the main idea of a passage. (For a detailed comparison of the two assessments, see http://nces.ed.gov/nationsreportcard/about/ltt_main_diff.asp.)
The NAEP long-term trend exam (LTT), upon which this indicator is based, is taken by a nationally representative sample of students in each of three age groups: nine-year-olds, 13-year-olds, and 17-year-olds. The NAEP LTT uses a single scale, referred to as a “performance scale,” to assess all students. What constitutes “basic,” “proficient,” and “advanced” performance depends on the age of the examinee. Both a nine-year-old and a 17-year-old may score at Level 250 (able to interrelate ideas and make generalizations). Such a score would constitute an advanced level of performance on the part of the nine-year-old and a basic level of performance on the part of the 17-year-old. The percentages indicated on the graphs displaying LTT data (Indicators I-01a, I-01b, and I-01c) are cumulative totals; they indicate the percentage of students in each age group scoring at or above each performance level. (The LTT performance thresholds are constructed at 50-point intervals and range from 150 to 350; the three performance levels at which the bulk of students scored are included on the graph.)
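The cumulative “at or above” tallies described above can be sketched in a few lines of code. The scores below are hypothetical, purely for illustration, not actual NAEP data:

```python
# Hypothetical individual scores on the LTT performance scale (not real NAEP data).
scores = [180, 210, 230, 255, 260, 275, 290, 310, 240, 265]

# LTT performance thresholds are set at 50-point intervals from 150 to 350.
thresholds = [150, 200, 250, 300, 350]

# For each threshold, report the share of students scoring at or above it.
# Because the tallies are cumulative, the shares can only decrease (or stay
# flat) as the threshold rises.
for level in thresholds:
    share = 100 * sum(score >= level for score in scores) / len(scores)
    print(f"At or above Level {level}: {share:.0f}%")
```

This is why, on the LTT graphs, the line for the most basic level always sits at or above the lines for the higher levels: every student counted at Level 250 is also counted at Levels 200 and 150.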
Although the performance levels that constitute “basic,” “proficient,” and “advanced” are different for each of the age groups, to facilitate interpretation of the LTT graphs the color-coding of the levels is consistent across them. Green represents the percentage of students scoring at or above the most basic performance level for that age group. Orange represents the percentage scoring at or above the intermediate performance level. Teal represents the percentage scoring at or above the advanced performance level.
In 2004, the LTT was updated in several ways. Content and administration procedures were revised, and, for the first time, accommodations were made for English-language learners and students with disabilities that would allow these students to be included in the assessment (they have been included in the main NAEP reading assessment since 1996). Both the original and revised formats were administered in 2004 so the National Center for Education Statistics (NCES) could investigate the effects of the new format on scores. This “bridge” study indicated that differences in average student scores between the two formats were solely attributable to the inclusion of students with disabilities and English-language learners in the testing population. On the basis of these findings, NCES concluded, “bearing in mind the differences in the populations of students assessed (accommodated vs. not accommodated), future assessment results could be compared to those from earlier assessments based on the original version.” (U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, n.d. [article revised 30 March 2009], “2004 Bridge Study,” http://nces.ed.gov/nationsreportcard/ltt/bridge_study.asp.)