Tuesday, April 22, 2014

Was proctoring the FCAT worth $50+ million of your tax dollars every year?

It currently costs $12,700 a year to educate a student in US public schools.  That works out to about $70/day.  Here in Florida, that number shrinks to $8,887/student, or about $49.40/day (let's just call it $50/day).  Today's FCAT glitch probably just cost a half-day for each student; "thousands" of students were affected.  That's ~$25/student/test that was probably completely wasted.  So are we talking $50,000 of tax dollars down the drain (2,000 students)?  $100,000 (4,000 students)?
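For anyone checking my math, here's the back-of-the-envelope arithmetic in Python.  (The ~180-day school year is my assumption; exact calendars vary by district.)

```python
# Per-day cost of educating a student, assuming a ~180-day school year.
DAYS_PER_YEAR = 180  # assumption; varies by district

us_per_year = 12_700   # US average $/student/year
fl_per_year = 8_887    # Florida $/student/year

print(f"US:          ${us_per_year / DAYS_PER_YEAR:.2f}/day")    # ~$70.56
print(f"Florida:     ${fl_per_year / DAYS_PER_YEAR:.2f}/day")    # ~$49.37
print(f"FL half-day: ${fl_per_year / DAYS_PER_YEAR / 2:.2f}")    # ~$24.69 -- call it $25
```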


Is all of this time spent collecting data (often of dubious quality, to be polite...) a waste, with every half-day spent testing another $25/student thrown away?  (And I'm guessing that between VAM tests, progress monitoring tests, and the FCAT, the average Florida student spends close to a full week of school testing.)   The total opportunity cost of lost class time and proctoring these standardized tests must be staggering.

Or, put another way--is proctoring the FCAT worth $50 million?*  Oh, and don't forget Pearson's contract with the state--that costs you another ~$51 million a year.  I have a hard time believing this is worth it; combined, that's roughly $100 million a year, probably enough to run a small school district.



*  There are 2,587,000 students in Florida's public schools.  If that distributes evenly amongst grades K-12, that works out to ~199,000 students/grade.  It looks like nearly every grade above 2nd tests (at least once), so that's 10 grades, or ~2,000,000 students tested every year.  Two million tests multiplied by $25/student/test yields ~$50,000,000.  (This is admittedly a pretty rough estimate with some mutant statistics.)
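The same mutant statistics in code form (all the caveats above still apply):

```python
# Rough statewide estimate of what FCAT testing time costs Florida.
students = 2_587_000            # Florida public school enrollment
grades = 13                     # K-12
per_grade = students / grades   # ~199,000 students/grade
tested_grades = 10              # grades 3-12, each tested at least once
cost_per_test = 25              # ~half a day of instruction, in dollars

tests = per_grade * tested_grades   # ~2,000,000 tests/year
total = tests * cost_per_test       # ~$50,000,000
print(f"~{tests:,.0f} tests/year, ~${total:,.0f} in instructional time")
```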

Wednesday, November 20, 2013

"I'm not even going to read the question."

...So said one of my students today, prior to my handing out the wonderful test for my VAM score.  And while this student wound up actually reading (at least) some of the questions, others wanted to start filling out the bubble-sheets before they even got the test.  Still others were done with the 40-problem test within ~5-10 minutes.  So please, tell me, highly-paid "educational experts": why is it a good idea to judge me on these test scores?

How many millions of dollars are being spent collecting junk data for a statistically flawed analysis--one that has been shown to be inappropriate and has essentially no demonstrated effectiveness?  It's stupid bogus sophistry (BS) like this that makes me think the schools probably would have enough money if the people at the top (Federal, State, District) knew what they were doing.


Wednesday, May 29, 2013

Visible Learning, Invisible Evidence

So I'm "done" (returning) Hattie's Visible Learning tomorrow.  I read over the first two chapters; didn't really focus on the actual "meat" of the book as I don't think the numbers mean squat.  They are at best extremely unreliable; I'd love to see someone try to test some of these numbers.  (i.e., focus on one strategy, test it repeatedly, and see if the results come back anywhere near the average Hattie presents.  Or even take a few [large] random samples of older research and see if the same number comes back up.)

A few of my questions/comments/concerns:

1.  If these effect sizes are accurate, why can't a teacher focus on 2-3 things and thus be a more-or-less "great" teacher?  And if these evaluations' [e.g., Marzano] checklists aren't really checklists, as claimed--that is, "It's stuff [we're] already doing in class"--well, with all these great effects, why isn't virtually every teacher great?  I see three possibilities (not mutually exclusive):
i.    Virtually every teacher is not doing them (and there are a LOT of them) enough.
ii.   Virtually every teacher sucks at virtually every one of them.
iii.  The numbers suck.

(Technically I can think of a fourth but I excluded it; there is the--illogical--possibility that the numbers are somehow not cumulative.  But if that's the case, it destroys the whole argument for implementing these strategies.)

2.  Hattie states that d = 0.4+ is the "zone of desired effects".  Yet he also states, "Further, there are many examples that show small effects may be important" and goes on to mention a study with a d = 0.07 wherein "34 out of every 1,000 people would be saved from a heart attack if they used low dose aspirin on a regular basis".  Well, if it affects 34 out of 1,000 people, it would save 1.9 million out of ~55 million.  I use this latter number because that's how many K-12 students there are in the US.  Obviously this wouldn't be as significant as a life-or-death situation, but if something is going to help (rather than save) that many kids, is it worth looking into?  To quote Hattie, "This sounds worth it to me." (pg 9)  Hattie's "hinge point" seems purely arbitrary.  This also highlights the difference between the (pseudo)scientific approach of meta-analysis in the medical field and in education, which leads to...
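Spelling out that arithmetic (~55 million is my round number for US K-12 enrollment):

```python
# Scaling the aspirin example's rate (34 per 1,000) to the US K-12 population.
rate = 34 / 1_000
k12_students = 55_000_000  # approximate US K-12 enrollment
print(f"{rate * k12_students:,.0f} students")  # 1,870,000 -- i.e., ~1.9 million
```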

3.  Applying a scientific approach to unscientific data yields unscientific results.  And seeing as how this whole book strikes me as just yet another attempt to latch onto science's credibility (something educational research, generally speaking, does not have), that's a big deal.  In fact, there's something absurd about even having to discuss whether the quality of the data matters (pg 11).  Case in point:  He cites Torgerson et al. (2004), who used 29 out of 4,555 potential studies on a subject area.  These were chosen as "quality" (Torgerson's definition) studies because they used randomized controlled trials.  That helps improve the quality of your data, alright, but...what about the other 4,526?  99.4% of the research didn't use randomized trials?  The best education can typically do (not faulting education; it's just the nature of the beast) is "quasi-experimental" studies.

4.  Another problem with the data that puts a big red flag on all these numbers (again, GIGO):  There are no real (scientific) controls in educational research.  A control is a "yes/no" situation; Group A gets the experimental treatment (e.g., a drug) and Group B does not (e.g., a placebo). Obviously you can't do this without doing something tantamount to child abuse (i.e., standing there and doing absolutely nothing)...but frighteningly, that's the only meaningful "control" there could be.  (And that's one reason why education data will never be scientific in nature.)

5.  Barring a strictly regimented routine (one that could probably be automated via presentation software), it's highly unlikely two teachers using the same "technique" will apply it identically.  (And the same goes for the "controls" above; what teachers replace the experimental technique with will differ, rendering comparisons dicey.)  This leads to another "apples and oranges" scenario for meta-analysis (albeit admittedly a relatively weak one).

6.  More apples and oranges:  One technique may be effective at one grade level but not another.  I have no problem accepting that having a learning goal may help first graders.  They may need the focal point, and their goals are (I believe...) general subjects/topics.  I have a hard time accepting that writing "students are going to factor trinomials" on the board is going to have a significant impact on seniors in algebra.  (Anecdote:  My students have repeatedly mocked the learning goals or made derisive comments when they see me changing them.  For example:  "You know we never look at those, right?"  "Yes, I know, it's just something I have to do."  Very empowering, let me tell you...)  Mushing multiple grades together into one statistic is just a bad idea.  Ditto for different subjects (at higher grades).

Sunday, January 27, 2013

Is Teaching Still a Viable Career Path?

One of the worst questions I'm asked nowadays is any question along the lines of "What should I major in?"  To which I have no good answer--I don't know what I'd do in this generation's shoes.  About the only "safe" majors are probably finance/accounting.  A few students seem to be trying to go into teaching, to which I offer two main pieces of advice:

1.  Get a degree in the subject you want to teach, not a degree in education of that subject (if applicable).

2.  Get a second major or minor--or in some way, shape, or form, start preparing for "Plan B"--because there's nearly a 50% chance you'll need it within five years.

I offer this advice for one main reason:  teacher turnover is insane.  The reasons are myriad, but they probably all fall under "burnout" (or "stress", financial or otherwise) in some fashion (whether it's being blamed for all of society's woes, the attacks on their benefits, or what have you).  And it got me thinking: 

Is becoming a teacher a viable career path anymore?

No one goes into teaching for the pay; it's always been too low (way too low, in my admittedly biased, humble opinion).  But at this point, after salary stagnation and massive increases in health care costs, is it really viable to go into teaching as a career?  I see it more as a second income for couples at this point; I could not in good conscience recommend it as a "primary" (sole income) career path for college-bound students.  Update:  Though this may finally make it worthwhile: $10,000 bachelor's degrees in science and math education.  (Though I'd still recommend a backup plan.)

Like most middle-class salaries, teaching salaries have stagnated (below).  But again, given that they were already low, has teaching fallen out of the "middle class" (financially) because of the constant erosion of salaries by rising health care costs?

Average teacher salaries (constant dollars):  (Note:  I wish I had an "average salary" for teachers' first five years; I imagine the figures below are greatly skewed upward by the fact that the teaching population is aging.  The current average age is ~41, which means these figures are probably for teachers averaging 15-20 years of experience.)


Ventingmycynicism...now with graphics!

Thursday, January 17, 2013

Hattie vs. Willingham (and science)

So I'm currently trying to read Dr. Willingham's When Can You Trust the Experts and Dr. Hattie's Visible Learning....  Granted, my bias is for the scientist (Willingham), not the education major.  But I thought it funny that last night I read in Trust a warning to watch out for marketing buzzwords preying on Enlightenment-era thinking:  "research [or evidence] based", "unlocking potential", etc.  Turn to Hattie:

Reveals teaching's Holy Grail (right off the cover) 

Yeah, um...no.  To paraphrase Willingham:  there is no "magic bullet", no "hidden potential".  (Also, it's...odd...that education's "Holy Grail" would have garnered a total of 12 reviews on Amazon after 3+ years.)

Turn to the back of Visible:

"...represents the largest ever collection of evidence-based reaserch...."

"Evidence-based" was one of the meaningless buzz phrases Willingham said to watch out for.  (What exactly is NON evidence-based research???)

Oddly enough, this shows up just two paragraphs later (also from the rear cover of Visible):

"Although the current evidence-based fad has turned into a debate about test scores...."

Wait, what?  Did the book's back cover just call this book part of a fad???  Methinks they probably should have used "trend" or "movement" if they wanted to promote this book.

I'm going to have an extremely difficult time giving Hattie's book a fair shake; I've already read a few things that make me dubious.  And one thing that, while not discrediting the whole book, shoots a pretty big hole in it:

"Matching style of learning" (pg 195)

d = 0.41.  Wow, so matching students to their learning styles has an "average" effect?  In other words, something that does not exist has a sizable (average) impact?  Neat.  What does this say about the methodology?

A few other quick points:

1.  This is a synthesis of meta-analyses.  Which I think is the same thing as saying it's a meta-analysis of meta-analyses--except phrased that way, everyone would see the immediate problem: you're now two levels away from the raw data.  (To borrow a banking analogy, this is like mortgage-backed derivatives; their value collapsed because no one knew what they were really based on after being sliced and diced, repackaged, and so on.)  And you've now "massaged" the numbers twice, introducing error the first time, then compounding it.

2.  Who peer-reviewed this work?  (I'm pretty sure the answer is "no one".)

3.  I'm still not convinced these effect sizes mean a whole lot in and of themselves.  As of right now, I see no measurement of whether or not the effects are real, just that they are "big".  And the definition of "big" is ambiguous (see the sketch below for what these numbers actually are).  More on this later; I'm still trying to really figure these out before I open my mouth.
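For reference, since I keep throwing "d" around:  the effect sizes in the book are (syntheses of) standardized mean differences, i.e., Cohen's d.  A minimal sketch of the computation--my illustration, not Hattie's code:

```python
# Cohen's d: the difference between two group means, in units of their
# pooled standard deviation.
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference between two groups."""
    n1, n2 = len(treatment), len(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Toy example: two small sets of test scores (made up).
print(cohens_d([75, 80, 85, 90], [70, 75, 80, 85]))  # ~0.77
```

Note that how "big" d is depends entirely on the quality and comparability of the underlying means and standard deviations--which is exactly my worry in points 1 and 3.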

Monday, January 14, 2013

What do Banks and Bad Ideas in Education Have in Common?

They're too big to fail.

As I was reading through Dr. Willingham's When Can You Trust the Experts last night, it hit me:  many of these bad ideas (e.g., Marzano, learning styles, no-zero policies) are now "too big to fail."  Too many people have made their careers on these ideas; too many more have careers based on them (see below).  So even if you could get through the "it must be right because everyone believes it" mentality, you would still have an uphill fight--displacing people in administration who base their livelihoods on (essentially) wasting time and money.  (And of course, these people have an added incentive NOT to believe that what they're doing is useless, and they're in power...which means getting through that "this doesn't work/this isn't real" barrier may very well be impossible.)

My district alone has a seven-person "Accountability and Assessment" department that includes:

Director of Assessment and Accountability
Program Manager for Testing, Grants, Development & Evaluation
Program Manager for Assessment and Data Analysis
Program Evaluator and Data Analyst
Test Development Specialist
Test Warehouse Operator
Clerical Assistant

We won't ask why a seven-person department needs a Director and two Program Managers.  Gotta love that near-1:1 ratio of managers to non-managers!  (And I've got $5 that says the clerical assistant does more real [honest] work than the managers and director combined--and for half their salary.  ;) )

Potential solution:  Education grad schools need to focus more on how to conduct and/or review research.  This will (hopefully...) start putting more knowledgeable people in power down the road--people who won't fall for these bad ideas.

Update:  1/19/13

Just heard this and thought it apropos:  "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"--Upton Sinclair

Saturday, December 22, 2012

Arming teachers?

I'm not going to pretend to have an answer to this cultural problem, but I did want to comment on the reaction of "let's arm the teachers".  Overall, I am pretty strongly against this idea.  Here is my take:

Pros:

1.  It could be a deterrent.
2.  It could limit casualties.
3.  Relatively cheap and easy implementation.

Cons:

1.  It could be a deterrent, but that's about it.
2.  It could limit casualties, but barring highly idealized situations, won't actually stop these shootings--making it a bit of a half-assed "solution".
3.  Relatively cheap and easy implementation--if you get full buy-in, which is extremely unlikely; it would pretty much require every teacher to be armed to be remotely effective in stopping these shootings.

To expand on the cons a bit (since I'm trying to show why I do NOT support this idea):


Realistically, it most likely won't actually stop a shooting; it can only hope to limit the casualties.  Even if the teacher is armed, a rampaging shooter will most likely get off several shots before a teacher inexperienced in firearms and self-defense could draw the gun.  Further, if said shooter(s) think the teacher has a gun, that may just make said teacher the first target; again, without extensive training, they'd be down before they could do anything (see video above).

While the theory is sound--armed citizens keep people safe--the reality is that it doesn't seem to hold much water.  This is another solution that requires several ideal assumptions to be true--every time.  (e.g., the shooter has to come into a room where the teacher is armed, and the teacher has to react faster than the shooter...otherwise, it won't stop the shooting; it will at best minimize casualties.)  And how much of a deterrent is the possibility of getting killed when many of these massacres end with the shooter killing themselves?

How many teachers would want to carry a gun?  I like guns--grew up shooting them--which probably (I have no data to back this up; just an assumption) makes me unusual as far as teachers go.  However, I would never take one onto a campus--way too much liability for something to go wrong, for starters.  (God help the first teacher whose weapon, for whatever reason, accidentally discharges on campus, regardless of whether anyone is hit.)  But most importantly--have you seen kids these days?  Half of my male students--and a handful of female students--could easily overpower me and take the gun with little problem.  Between that and the potential for accidents, this situation has "mistake" written all over it.

The last one is more of a cultural observation.  When you think of a place where teachers have to be armed to keep kids safe, what/where do you think of?  I think of stereotypical third-world places.  Is this really what America has come to?  As the president said, surely we can do better than this.

Tuesday, December 11, 2012

How do you sell a solution to a nonexistent problem?

You make the problem up, that's how!

In the case of (self-described) educational "experts" and their corporate-centric "reforms", nothing could be worse than any data indicating our schools are doing fine.  And the media seems to be right along for the ride.  Case in point:

International test scores expose U.S. educational problems

Now, to their credit over at the Huffington Post, the headline was changed sometime between this morning and this afternoon to the less-antagonistic International Tests Show East Asian Students Outperform World As U.S. Holds Steady.  The problem is that this still leaves the (false) impression that our schools suck compared to the rest of the world.  The article itself states:


Overall, the U.S. ranked sixth in fourth-grade reading, ninth in fourth-grade math, 12th in eighth-grade math, seventh in fourth-grade science and 13th in eighth-grade science.

This is out of 60 countries taking the TIMSS.  Considering the challenges we face that other, smaller, culturally-homogeneous countries do not, this is amazing.*  How did Secretary of Education Arne Duncan handle this fabulous news that we're (far above) average in many categories?

U.S. Secretary of Education Arne Duncan called the U.S. scores encouraging, but described older students' performance as "unacceptable."

Encouraging?  Really?  Yes, we need to address the falling-off in later grades (which is, I'm betting, a cultural problem, not a schooling problem), but that's it?  Not even a "This shows we're on the right track" or a "Congratulations to our teachers for making us competitive"?  Does education "reform" have ANYTHING to do with, well, actually educating people, or is it just a money-making (and union-busting) scheme?  (Or, to paraphrase the late Dr. Gerald Bracey, "It is important to remember that to 'reform' does not necessarily mean improve, just to reshape.")


*  And let's not forget our horrible funding of schools, which presents all sorts of challenges, as this (5+ year-old) data shows:
But in the Progress in International Reading Literacy Study, American kids in low poverty schools stomped the top-ranked Swedes. Even kids in schools with up to 50% of the students in poverty attained an average score that, had they constituted a nation, would have ranked 4th. Only American students attending schools with 75%+ poverty scored below the international average of the 35 participating countries.