Predicting which fifth-grade students will quit band before the year is over

Based on Klinedinst (1991): Predicting performance achievement and retention of fifth-grade instrumental students.  ·  17 min read.


In a rush?

We've broken down the insights into easy-to-digest Blini-bites at the end of this article. Jump straight to any question.

TL;DR
  • Reading achievement, math achievement, and scholastic ability were the strongest predictors of performance in beginning band, outperforming music aptitude scores after seven months of instruction.
  • Socioeconomic status was the single best predictor of whether a student would quit, which means dropout risk is partially visible in data your school already has.
  • Self-concept in music, not attitude toward music or general motivation, showed up as a significant retention predictor, suggesting how students feel about themselves as musicians matters more than how much they like music in the abstract.
  • The model correctly identified 97% of students who would stay in the program, but only 17% of students who would drop out, which tells us something important about the limits of prediction and why dropout prevention can't rely on data alone.

Every fall, band directors watch a group of fifth-graders walk in for the first time, instruments in hand, and wonder the same thing: which of these kids will still be here in the spring? Some directors have learned to read the room, to notice which students lean forward during demonstrations and which ones are already scanning the exits. But that kind of intuition is hard to act on, and it doesn't tell you much until the habits are already set.

Richard E. Klinedinst spent a full school year trying to answer that question with data. His 1991 study, published in the Journal of Research in Music Education, followed 205 fifth-grade beginning instrumental music students in the Cumberland Valley School District in Pennsylvania across 32 weeks of instruction. He collected 11 different predictor variables before instruction began, then measured three outcomes at the end: adjudicator ratings of performance, teacher ratings of achievement, and who was still enrolled. The goal was to find out which of those early variables actually predicted what would happen.

The results are more useful than most recruitment lore, and more sobering in a few places too.

Why a 30-year-old study still matters

Studies from 1991 can feel dated, but this one holds up for a specific reason: the variables Klinedinst examined are still the same variables directors are working with today. Your school still administers reading and math assessments. Your students still come from families across a range of socioeconomic situations. They still walk into fifth grade with a self-image already formed about whether they are "a music person." None of that has changed.

What has changed is how much easier it is to access some of these data points, and how much harder it has become to hold onto beginning students in an environment of competing activities, shorter attention spans, and programs under constant pressure to justify their existence.

Klinedinst drew on a body of prior research that had identified musical aptitude, intelligence, and academic achievement as predictors of success, but he went further than most. He included variables that had not previously been studied together in a single predictive model: self-concept in music, attitude toward music, home musical environment, motivation for achievement, socioeconomic status, and a detailed assessment of physical characteristics relative to specific instruments. That combination made it the most comprehensive study of its kind at the time, and it still covers more ground than most follow-up research has attempted.

What the study actually measured

The 11 predictor variables were:

  • Musical aptitude (Gordon's Intermediate Measures of Music Audiation)
  • Scholastic ability (Otis-Lennon School Ability Test)
  • Math achievement (Stanford Achievement Test)
  • Reading achievement (Stanford Achievement Test)
  • Classroom music teacher rating of student potential
  • Attitude toward music (Hedden's Attitude Towards Music Scale)
  • Self-concept in music (Svengalis's Self-Concept in Music scale)
  • Music background (Svengalis's Music Background Inventory, measuring home music activity)
  • Motivation to achieve in music (Asmus instrument)
  • Socioeconomic status (Hollingshead's Two-Factor Index of Social Position, based on parent occupation and education)
  • Instrument adaptation assessment (a researcher-designed scale evaluating physical characteristics relevant to specific instruments)

Each of these was assessed before instruction began in September 1988. Seven months later, in April 1989, Klinedinst measured outcomes three ways. Student performances on researcher-composed etudes were tape-recorded and independently rated by three adjudicators, each with more than 20 years of experience teaching beginning instrumental music. Separately, each student's own instrumental music teacher completed a global achievement rating. And teacher records determined which students were still enrolled.

The adjudicator and teacher ratings correlated strongly with each other (r = .68 to .72), which gave confidence that both measures were capturing something real about student achievement, not just teacher favoritism.

Academic ability beat music aptitude

This is probably the most counterintuitive finding for directors who spend real time and money on music aptitude testing during recruitment.

After seven months of instruction, the variables most strongly correlated with performance achievement were reading achievement, math achievement, and scholastic ability (r = .36 to .42 for adjudicator ratings, r = .43 to .49 for teacher ratings). Music aptitude, while statistically significant at the .01 level, accounted for less than 10% of the variance in both adjudicator and teacher ratings.
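
(A quick note on what "less than 10% of the variance" means: the share of variance one measure explains in another is the square of their correlation, so the claim holds for any correlation weaker than √.10 ≈ .32. An r of .30, for example, works out to r² = .09, or 9% of the variance, compared with r² of roughly .13 to .24 for the academic variables above.)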

When Klinedinst ran the stepwise multiple regression, reading achievement was the only variable to enter the equation as a significant predictor of adjudicator ratings. For teacher ratings, scholastic ability came out on top. The three academic variables were so highly intercorrelated with each other (r = .71 to .79) that only one could enter the regression model at a time, which is why they traded places depending on the outcome measure. But all three pointed in the same direction: academic performance is a better predictor of early instrumental success than a dedicated music aptitude test.
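
To make that "traded places" behavior concrete, here is a toy simulation, not the study's data, with numbers invented to mirror the reported correlation structure. It shows how one of several highly intercorrelated predictors soaks up the shared signal in a forward stepwise regression, leaving little for the others to add:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 205  # cohort size borrowed from the study, purely for flavor

# One latent "academic" factor drives three intercorrelated scores,
# mirroring the r = .71 to .79 intercorrelations Klinedinst reports.
academic = rng.normal(size=n)
scores = {
    "reading": academic + 0.6 * rng.normal(size=n),
    "math":    academic + 0.6 * rng.normal(size=n),
    "ability": academic + 0.6 * rng.normal(size=n),
}
rating = 0.5 * academic + rng.normal(size=n)  # simulated performance outcome

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

# Step 1 of a forward stepwise regression: the best single predictor enters.
first = max(scores, key=lambda k: abs(r(scores[k], rating)))
print("enters first:", first)

# Step 2: residualize the outcome on the winner. The remaining academic
# scores now have much less to add, so they never enter the model.
x = scores[first]
slope = r(x, rating) * np.std(rating) / np.std(x)
residual = rating - slope * (x - x.mean())
for name, score in scores.items():
    if name != first:
        print(f"{name}: zero-order r = {r(score, rating):+.2f}, "
              f"partial r = {r(score, residual):+.2f}")
```

Whichever score happens to correlate best in a given sample enters first, and the partial correlations of the other two shrink sharply once it is in the model, which is exactly the trading-places pattern across Klinedinst's two regressions.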

Key numbers at a glance:

  • r = .42: reading achievement vs. adjudicator ratings, the strongest academic predictor
  • <10%: variance in adjudicator and teacher ratings explained by music aptitude
  • 97%: the five-variable model's accuracy at predicting who stays

The reason is probably not mysterious. After seven months, what gets assessed in a beginning band context is mostly music reading: rhythmic accuracy, tonal accuracy from notation, tempo consistency. These are skills that require the same cognitive processing as reading text and working with symbolic systems. Musical aptitude in the Gordon sense measures something more innate, but the beginning band curriculum doesn't primarily test innate musicality. It tests how well a student can decode and reproduce notation-based material on a new instrument. Students who are stronger at symbolic processing have an early advantage.

This doesn't mean music aptitude is irrelevant. Klinedinst notes that earlier researchers, including Hufstader (1974), Mitchum (1969), and Young (1971), found similar results, and the consensus interpretation is that music reading criteria are more closely tied to intelligence and academic ability than to raw musical aptitude. The implication isn't "stop testing aptitude." It's "stop treating aptitude scores as the main selection tool" and recognize that the school records you can already pull from the front office may tell you as much or more.

There's also a practical upside here. Reading and math scores are already collected, already available, and already in your school's data system. You don't need to administer an additional assessment to identify which incoming fifth-graders are likely to perform well in the first year of instruction.

Retention was harder to predict than achievement

At the end of the seven months, 155 of the 205 students (76%) were still enrolled. Fifty students (24%) had dropped out.

Klinedinst found that five variables predicted retention: socioeconomic status, self-concept in music, reading achievement, scholastic ability, and math achievement. All five entered the stepwise discriminant analysis as significant predictors (F values ranging from 6.82 for the first variable entered down to 3.17 for the last, p < .01).

But the classification results reveal a problem worth sitting with. The model correctly identified 147 of 152 students who stayed (97% accuracy for the stay-in group). For the dropout group, it correctly identified only 8 of 47 students (17% accuracy). Overall classification accuracy was 78%.
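
The arithmetic behind those three numbers is easy to reproduce from the reported counts, and doing so surfaces a baseline the headline accuracy hides. The counts below are as published; the naive-baseline comparison is our own arithmetic:

```python
# Classification counts as reported by Klinedinst (1991)
stayed, stayed_correct = 152, 147   # students who persisted
dropped, dropped_correct = 47, 8    # students who discontinued

print(f"stay accuracy:    {stayed_correct / stayed:.0%}")    # 97%
print(f"dropout accuracy: {dropped_correct / dropped:.0%}")  # 17%
correct = stayed_correct + dropped_correct
print(f"overall:          {correct / (stayed + dropped):.0%}")  # 78%

# Naive baseline: predict "stays" for every student.
print(f"baseline:         {stayed / (stayed + dropped):.0%}")   # 76%
```

Predicting "stays" for everyone would already score 76%, so the model's 78% overall accuracy is only a modest improvement on guessing the majority class.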

The asymmetry problem

97% accuracy predicting who stays vs. 17% accuracy predicting who leaves

What that asymmetry tells you is that the model is quite good at predicting who is likely to persist, but much weaker at flagging who is likely to quit. Klinedinst is direct about why: "reasons given by students for discontinuing are many in number and are difficult to measure. Many times student dropout is influenced by external reasons, including peer pressure, conflicts with other activities, student-teacher relationships, and family considerations."

Those external reasons don't show up in any of the 11 predictor variables. A student with strong academic ability, a good self-concept in music, and a family with stable socioeconomic resources can still quit band because a coach scheduled practice on the same day, because their best friend quit, or because a single humiliating lesson experience tipped the balance. Those events aren't predictable from pre-enrollment data.

This is an important limitation to carry forward, and it comes up in the Blini section below.

Self-concept in music: the quiet variable

Of the five retention predictors, self-concept in music is the one that deserves special attention because it's the one a teacher can actually influence during the year.

Socioeconomic status, reading achievement, scholastic ability, and math achievement are all set before a student walks through your door. Self-concept in music is more malleable. It describes how a student views themselves as a musician, and Klinedinst's data suggest it plays a meaningful role in whether they stay.

The intercorrelation table adds texture to this. Attitude toward music and self-concept in music were moderately correlated with each other (r = .45), and both were moderately correlated with home musical background (r = .41 and .49 respectively). Students from musical households tended to have more positive musical self-concepts and better attitudes toward music. Motivation to succeed in music also correlated with attitude and self-concept. The picture is of a cluster of mutually reinforcing factors: music at home builds self-concept, self-concept builds attitude, attitude builds motivation.

None of these cluster variables correlated strongly with achievement in the regression analysis, but self-concept made it into the retention model. The interpretation Klinedinst offers is that self-concept may not determine how well you play after seven months, but it does influence whether you keep going. A student who has been told (explicitly or implicitly) that they are not a music person may achieve adequately but still leave.

The practical implication is straightforward, even if it's hard to execute: how you talk to beginning students about their progress, and how you frame early failures and corrections, may matter more for retention than for achievement. A student who leaves with their musical self-concept intact is more likely to come back or to continue outside your program.

Socioeconomic status and what it means for equity

Socioeconomic status entered the discriminant analysis first, with the highest F value (6.82), making it the strongest individual predictor of retention among the five variables.

The direction of the relationship was that students from lower-SES families were more likely to drop out. This replicates findings from earlier work by McCarthy (1980) and Mitchum (1969). Klinedinst measured SES using the Hollingshead Two-Factor Index, based on parent occupation and education level, which is a standard sociological instrument.

There was also a complication worth noting: SES varied significantly between the seven schools in the study (p < .01). Two schools had notably lower SES means (24.27 and 24.32) compared to the other five schools (33.33 to 39.50). Because SES was both a strong retention predictor and unevenly distributed across schools, Klinedinst acknowledged that the predictive value of SES may be somewhat reduced by those between-school differences.

None of this is surprising to directors who have worked in diverse districts. The reasons SES predicts dropout are multiple and not all directly visible: families with fewer resources face more scheduling pressure, more competing financial demands on activity fees, a higher likelihood of gaps in instrument rental, and in some cases less cultural familiarity with band programs as something worth fighting to keep a child in. The student's own self-concept in music may also be shaped by whether music is a regular part of family life.

An equity signal, not just a retention metric

If you know that lower-SES students leave at higher rates, and the research has confirmed this across multiple studies, a program that doesn't actively compensate for that disadvantage will gradually self-select for students whose families have more resources. That's a problem for the program and for the students who leave.

What the model missed and why that matters

The 17% accuracy rate for predicting dropouts is worth returning to because it contains a lesson about what data can and can't do.

Klinedinst's five variables were genuinely predictive of staying. They were not genuinely predictive of leaving. The difference matters because dropout prevention programs built on predictive models tend to focus on identifying high-risk students in advance. If the model misses 83% of the students who will leave, an intervention strategy built around that prediction will also miss most of them.

The factors that drove departure in individual cases (peer pressure, schedule conflicts, teacher relationships, a bad week followed by a decision that quietly hardened into permanence) are inherently harder to measure from baseline data. Some of them can only be observed in real time, in the actual behavior of students during the year.

This is one of the strongest arguments for ongoing monitoring during the instructional period rather than just pre-enrollment prediction. A student who was predicted to stay can still be at risk by January if their practice frequency dropped to zero in November. That signal isn't in any pre-enrollment data. It only shows up if someone is watching.
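
As a sketch of what that watching could look like in data terms, here is a minimal disengagement flag. The field names and thresholds are invented for illustration, not taken from any particular tool:

```python
from typing import Dict, List

def flag_disengagement(weekly_sessions: Dict[str, List[int]],
                       window: int = 3) -> List[str]:
    """Flag students whose weekly practice counts have gone to zero after
    an active start, or declined for `window` consecutive weeks."""
    flagged = []
    for student, counts in weekly_sessions.items():
        recent = counts[-window:]
        if len(recent) < window:
            continue  # not enough history to judge yet
        went_quiet = max(recent) == 0 and max(counts) > 0
        declining = all(a > b for a, b in zip(recent, recent[1:]))
        if went_quiet or declining:
            flagged.append(student)
    return flagged

# A student predicted to stay in September can still go quiet by November:
history = {"Avery": [3, 3, 2, 1, 0, 0, 0], "Sam": [2, 3, 2, 3, 2, 3, 2]}
print(flag_disengagement(history))  # ['Avery']
```

The thresholds here are arbitrary; the point is that this signal is computable from in-year behavior alone. None of it exists in pre-enrollment data.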

Practical implications for directors today

Klinedinst's own conclusions section is worth reading closely because he moves directly from the research to what directors should do differently. A few of his recommendations translate well to current practice.

  • Use school records in recruitment. Academic achievement and scholastic ability scores are already collected. They're sitting in your student information system. Cross-referencing those scores against your recruitment pool can help identify students with high potential who haven't shown strong interest yet. Klinedinst cites Froseth (1974) on this point: these high-potential, low-interest students are often easy to recruit, tend to do well, and are less likely to drop out once enrolled. A sketch of what this cross-reference can look like follows the list.

  • Don't recruit on interest alone. Many beginning band programs focus almost entirely on generating excitement: demos, instrument petting zoos, parent nights. These are valuable, but they recruit based on enthusiasm rather than fit. The students who show up buzzing with excitement may not be the same students who are still there in April. Directing some recruitment energy toward identifying students likely to succeed, even if they haven't raised their hand yet, may produce better retention outcomes.

  • Pay attention to musical self-concept, especially early. The first few months of instruction are when self-concept in music is most fragile and most formable. Feedback that is framed around effort and growth rather than talent or natural ability may be particularly important in this window. A student who leaves the first semester believing they "just aren't musical" is a retention loss that didn't have to happen.

  • Know your SES distribution and plan accordingly. If your program draws from a diverse socioeconomic range, you already know that some of your students face real financial barriers to continued participation. Identifying those students early and connecting them with fee waiver programs, instrument loan programs, or booster support isn't just good policy. It's targeted retention practice grounded in what the research says about who leaves.

  • Don't try to predict dropout from enrollment data alone. The 17% accuracy rate for the dropout group is a clear signal that pre-enrollment variables are insufficient. The students who will leave are not uniformly identifiable in advance. Monitoring what happens during the year, specifically practice behavior and engagement patterns, gives you a real-time layer of information that enrollment data cannot provide.
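
As a minimal sketch of the first recommendation above, here is one way the cross-reference could work, assuming test scores and signups export as CSV from a student information system. The file and column names are invented for illustration:

```python
import pandas as pd

# Hypothetical exports; real SIS files and column names will differ.
scores = pd.read_csv("district_scores.csv")  # columns: student_id, reading_pct
signups = pd.read_csv("band_signups.csv")    # column: student_id

# "High potential" here means top quartile in reading, the strongest
# performance predictor in Klinedinst's data. The cutoff is arbitrary.
cutoff = scores["reading_pct"].quantile(0.75)
high_potential = scores[scores["reading_pct"] >= cutoff]

# Froseth-style recruitment targets: high potential, not yet signed up.
targets = high_potential[~high_potential["student_id"].isin(signups["student_id"])]
print(targets.sort_values("reading_pct", ascending=False))
```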

Source: Klinedinst, R. E. (1991). Predicting performance achievement and retention of fifth-grade instrumental students. Journal of Research in Music Education, 39(3), 225-238.


How we think about this at Blini

Reading this paper, what strikes us is how early the trajectory forms. Klinedinst's data was collected before instruction even started. By September, before a single lesson, the combination of academic background, self-concept in music, and family socioeconomic context already pointed toward who would be playing in April. That's not fatalistic. It's a call to act early and keep watching.

The part of Klinedinst's findings that shaped how we think about Blini most directly is the 17% figure. The model was good at predicting retention. It was poor at predicting dropout. The factors that drive individual dropout decisions (a schedule conflict, a bruising lesson, three weeks of not practicing followed by too much ground to make up) aren't visible in enrollment data. They become visible during the year, in behavior.

Where Blini fits

Blini is a practice tracking tool. It attaches to a student's music stand and passively detects when they practice, without requiring them to open an app, log in, or interact with a screen. After practice, session data syncs to the teacher's ensemBlini dashboard: how long the student practiced, how consistently they practiced across the week and month, and early pattern signals in their playing.

In the context of Klinedinst's research, what Blini addresses is the real-time monitoring gap. You can't predict from September data which specific students will leave by April. But you can notice in November if a student who was practicing three times a week has dropped to zero. You can notice if a student's session durations have been declining for three consecutive weeks. Those are behavioral signals of disengagement, and they appear before the student has made a final decision to quit.

The students in Klinedinst's dropout group weren't identifiable in advance. But some of them almost certainly had an observable behavior change in the weeks before they stopped showing up. Blini's job is to make that window visible.

The screen problem

Klinedinst's study is partly a story about self-concept. Students who felt worse about themselves as musicians were more likely to leave. Adding a practice-logging app to the already fragile habit of home practice introduces a new source of friction and potential failure. If a student forgets to log, or finds the app confusing, or just dislikes the feeling of being tracked through a screen, the app itself can contribute to disengagement.

Blini removes the screen from the practice moment entirely. The student clips the device to the stand, picks up the instrument, and plays. Nothing to open. Nothing to log. The device handles the rest. For a student whose musical self-concept is already shaky, not having to also manage a digital task during practice is a meaningful reduction in cognitive load.

What the teacher gets

The ensemBlini dashboard gives you a class-level view of practice participation: who practiced this week, how often, for how long. It also gives you a student-level view for any student whose guardian has activated an account. That combination lets you see two kinds of signals.

The first is the aggregate signal: if practice participation drops across the whole class after a concert or a school break, you know before the next rehearsal that the group needs to rebuild momentum. The second is the individual signal: if one student's practice frequency has been declining for three weeks while everyone else is holding steady, that's a quiet flag worth a five-minute conversation before the student goes quiet in other ways too.
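
The aggregate signal in particular is simple to compute. A minimal sketch with invented numbers and an arbitrary threshold, not Blini's actual dashboard logic:

```python
def class_momentum_dropped(weekly_class_avgs: list[float],
                           threshold: float = 0.6) -> bool:
    """True if the latest week's class-wide practice average fell below
    `threshold` times the average of the four weeks before it."""
    *prior, latest = weekly_class_avgs[-5:]
    return latest < threshold * (sum(prior) / len(prior))

# Average sessions per student per week, before and after a concert:
print(class_momentum_dropped([2.8, 3.0, 2.9, 3.1, 1.2]))  # True
```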

This connects directly to Klinedinst's finding that retention is more predictable from behavioral patterns than from one-time enrollment data. Real-time practice data gives you the behavioral layer that pre-enrollment assessments can't provide.

If you want to see what the practice data layer looks like for yourself, try the demo app. No login required. It shows a sample teacher dashboard with realistic data patterns so you can decide whether it's worth adding to your program.

Blini-bites

Quick answers, one question at a time.

1. What are the best predictors of success for beginning band students in the first year?

According to Klinedinst (1991), who studied 205 fifth-grade beginning instrumentalists over seven months in the Journal of Research in Music Education (Vol. 39, No. 3), reading achievement, math achievement, and scholastic ability had the strongest relationship to performance achievement as rated by both independent adjudicators and classroom teachers. Music aptitude scores were statistically significant but accounted for less than 10% of the variance in performance ratings, suggesting that academic skills matter more than music-specific aptitude in the first year of instruction.

2. Does music aptitude testing predict which students will do well in beginning band?

Klinedinst (1991) found that music aptitude, while statistically significant, was a weaker predictor of first-year performance than reading achievement, math scores, and general scholastic ability. The likely explanation is that beginning band instruction emphasizes music reading and notation decoding, skills that rely more on general symbolic processing than on raw musical aptitude. Aptitude tests remain useful for long-term potential, but they appear to be less predictive than school records in the short term.

3. What predicts whether a fifth-grade student will drop out of band?

Klinedinst (1991) found that five variables significantly predicted student retention: socioeconomic status, self-concept in music, reading achievement, scholastic ability, and math achievement. Together these variables correctly classified 78% of students overall. Socioeconomic status entered the model first and was the single strongest predictor, followed by self-concept in music.

4. How accurately can you predict band dropouts before the year starts?

The prediction is asymmetric. Klinedinst (1991) found that the five-variable model correctly identified 97% of students who would stay in the program but only 17% of students who would drop out. This is because dropout is often triggered by external factors during the year, including peer pressure, schedule conflicts, and student-teacher dynamics, that are not captured in pre-enrollment data.

5. Does a student's socioeconomic background affect whether they stay in band?

Yes. Klinedinst (1991) identified socioeconomic status as the strongest single predictor of retention among beginning instrumentalists, replicating earlier findings by McCarthy (1980) and Mitchum (1969). Students from lower-SES families were more likely to discontinue. The effect may relate to financial barriers, competing family pressures, and reduced access to home music environments that support continued participation.

6. Does how a student feels about themselves as a musician affect whether they quit?

Klinedinst (1991) found that self-concept in music was a significant retention predictor, separate from attitude toward music or motivation to achieve. Students with a weaker musical self-concept were more likely to drop out even when controlling for academic ability. Self-concept in music also correlated moderately with attitude toward music and home musical background, suggesting it is part of a broader cluster of reinforcing factors that influence persistence.

7. Can you use school records to identify which students have the best potential for beginning band?

Klinedinst (1991) explicitly recommends consulting school records for academic ability and achievement test results when identifying recruitment targets, particularly for students who have high potential but have not self-selected into the program. His data showed reading achievement and scholastic ability were the strongest performance predictors, both of which are already collected in standard school testing programs.

8. Why does reading ability predict instrumental music achievement?

Klinedinst (1991) interpreted this relationship as reflecting the nature of beginning band instruction itself. Early instrumental music heavily emphasizes music reading from notation, a task that shares cognitive demands with general reading and symbolic processing. Students who are stronger in these areas can decode musical notation more readily, giving them an early advantage that compounds across the instructional year.

9. What percentage of fifth-grade band students quit within the first year?

In Klinedinst's (1991) study of 205 fifth-grade beginning instrumentalists in a suburban Pennsylvania school district, 24% of students (50 of 205) discontinued within the seven-month study period. This figure reflects a specific district context (predominantly white, middle to upper-middle income) and is likely a conservative estimate compared to districts with greater socioeconomic diversity or fewer program resources.

10. Are attitude toward music and motivation to achieve good predictors of whether students stay in band?

Neither attitude toward music nor motivation to achieve in music emerged as significant predictors of retention in Klinedinst's (1991) discriminant analysis, despite both being included in the 11-variable model. This is somewhat counterintuitive, but may reflect the fact that students who initially enroll in band already have above-average positive attitudes toward music, reducing the predictive power of that variable. Self-concept in music, a more internalized measure, showed up as the more relevant predictor.

11. How can teachers use practice monitoring to improve retention if dropout is hard to predict?

Klinedinst (1991) found that pre-enrollment data correctly identified only 17% of eventual dropouts, suggesting that enrollment-time prediction alone is insufficient for prevention. What the research implies is that real-time behavioral monitoring during the instructional year, specifically tracking whether students are actually practicing at home, may provide early warning signals that static baseline data cannot. A student whose practice frequency drops sharply in the weeks before quitting is showing a behavioral signal before the final decision is made.


See who is still practicing before they go quiet

The students most likely to quit are already showing you signs. The demo shows what the data layer looks like.

Try the demo free