Will Jason Kamras be the next RPS Superintendent?

Editor's Note: Several highly placed and trusted sources within Richmond Public Schools (RPS) and in the business community are whispering about the possibility that Jason Kamras, currently employed by Washington, D.C. Public Schools, will become the next Superintendent of Richmond Public Schools. The following is from Wikipedia, the free encyclopedia:

Jason Kamras

Jason Kamras, 2005 Teacher of the Year, and President George W. Bush in the White House Rose Garden.

Rose Garden ceremony honoring the National Teacher of the Year Jason Kamras, April 20, 2005.
Jason Kamras (born December 12, 1973) was selected as the 2005 National Teacher of the Year[1] and was an adviser on education policy to the 2008 Barack Obama presidential campaign. He now serves as Chief of Instructional Practice for DC Public Schools (DCPS).[2]
Prior to his current role, Mr. Kamras served as Chief of Human Capital for DCPS, overseeing the district’s efforts to attract and retain outstanding teachers, principals, and central office staff. His teacher and principal quality work was instrumental to DCPS achieving unprecedented gains in student learning, student and staff satisfaction, graduation rates, and enrollment, prompting former Secretary of Education Arne Duncan to highlight DC Public Schools as one of the fastest improving urban districts in the nation.[3]
Mr. Kamras began his career in education in 1996 as a seventh and eighth grade mathematics teacher at John Philip Sousa Middle (formerly Junior High) School (DCPS), a National Historic Landmark for its role in desegregating public education in the nation’s capital.[4] He taught at Sousa for eight years, receiving numerous awards, including the Mayor’s Arts Award for infusing photography into his mathematics instruction.[5]
Mr. Kamras views public education as a means of promoting equity and justice for all children in the United States.[6] In his various roles, he has advocated for policies to eliminate the “opportunity gap” that disadvantages many low-income children and children of color, and has led efforts to dismantle institutionalized racism in public schools and other public institutions.
Mr. Kamras holds a bachelor's degree in public policy from Princeton University[7] and a master's degree in education from the Harvard Graduate School of Education.[8]
His wife Miwa also works in the field of public education, and he has two elementary school-aged sons, both of whom are DCPS students.

Neither RPS school officials nor Jason Kamras has returned several telephone calls for comment. The following is reposted, with permission, from a May 9, 2011 story in The Washington Post. See far-right sidebar* for more articles that quote Jason Kamras. ~ Carol A.O. Wolf

By Bill Turque

I sat down recently with Jason Kamras, chief of human capital for DCPS and the principal architect of the IMPACT teacher evaluation system that is now in its second year. We discussed possible changes in IMPACT and the use of “value-added” methodology--holding teachers accountable for students reaching or exceeding predicted growth on standardized test scores--to measure effectiveness. This is the first of two parts on our conversation, edited for length and clarity.
BT: The number of teachers who actually had value-added as part of their evaluation last year was 476. It’s small.
JK: It is small. You know the grades we test [on the DC CAS, 3 through 8 and 10]. You have to knock off third grade because we don’t test second grade, so you have no benchmark. You have to knock out 10th grade because we don’t test in ninth grade, although that’s changing. You always have some [teachers] drop out because they didn’t have enough kids.
BT: How do you expect that group to grow this year?
JK: It won’t be this coming year, not significantly. We’re piloting the ninth grade assessment. We’re looking at expanding some testing in the lower elementary grades and expanding end-of-course exams in the middle and high school grades. We don’t have everything pinned down yet, but this is part of the conversation, to figure out the three-to-five-year plan to fold in all those pieces. We’d like to get north of 75 percent coverage. It’s going to take us five years to get there.
BT: What’s your take on the fact that only 60 percent of the teachers rated “highly-effective” under IMPACT accepted bonuses for which they were eligible? [Teachers who accept IMPACT performance bonuses give up certain job protections.]
JK: It was about two-thirds. We never expected everybody to take the bonus. The guidelines around the bonus were set out in the contract. It was very clear. It was a contract that was ratified overwhelmingly. Teachers are professionals and professionals get to make choices. I’m delighted that two-thirds took it. The one-third, they made a calculus for themselves and I totally respect that. I think probably some folks didn’t actually think we were going to pay the money. We don’t have a great track record on that. And we did. And so I think more folks this year may think twice about turning it down. The base increases will begin to kick in this year as well, and I think that’s going to be significant for people and they’ll recognize that’s something they’ll want to be a part of. But if people want to keep the extra security, then so be it.
BT: What other student-generated work besides test scores could be used to evaluate teacher effectiveness? [Under IMPACT, 10 percent of the evaluation for teachers not eligible for value-added is based on other forms of student work.] Portfolios of student work are one thing people mention.
JK: We looked at portfolios and lots of other things. And then you’ve got to push a little bit. How do you do portfolios? Everybody has a different idea of what a portfolio is, number one. Actually it’s really hard to demonstrate growth clearly and quantitatively. Who is assessing the portfolios and under what standards? When we looked at it in depth, what we came to was that the operational burden of doing this well was probably beyond the capacity of the school system at this point.
BT: Operational burden?
JK: If you’re going to really dig into looking at all these pieces of student work, and let’s say we’re going to ask principals to now do this, when exactly is all that going to happen? Then how is it going to be standardized, so that the way they’re looking at your portfolio isn’t different from the way they’re looking at another teacher’s portfolio. So this isn’t to say that this can’t ultimately be worked out. But for where we are right now we felt we wanted to go with the thing we felt strongest about [such as value-added]. 
BT: Here’s the problem. How do you explain value-added to a lay audience, or even beyond a lay audience, when you’re dealing with Ph.D.-level statistics and math?
JK: Basically think of it this way. A teacher has a set of students. We can, through this formula, figure out what the typical ending score is for kids like this. By kids like this I mean kids who had a similar performance history and some of the similar demographic characteristics, like free-and-reduced-price lunch, special education, ELL and so forth. And through this regression formula [a statistical tool for studying relationships between multiple variables] we can figure out that on average, kids like this tend to end up here. Then we can calculate how your kids actually ended up, and then we compare the two. And that’s it. That is essentially what it comes down to. Now that first piece is a bit of a black box, surely. And there’s a lot of math in there. But the math is in there to make it as fair as possible, so that we’re taking into consideration where the kids started, what they’ve done previously and the other things we know are outside the teacher’s control. By doing all that we’re isolating the impact of the teacher.
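The logic Kamras describes can be sketched in a few lines of code. This is a toy illustration with made-up numbers, not DCPS's actual IMPACT formula: fit a regression predicting each student's ending score from the controls (prior score, a demographic flag), then average each teacher's students' actual-minus-predicted residuals.

```python
# Toy value-added sketch (hypothetical data, NOT the IMPACT model):
# predict ending scores from controls only, then credit each teacher
# with the average residual of his or her students.
import numpy as np

rng = np.random.default_rng(0)

n = 200
prior = rng.normal(500, 50, n)             # prior-year test score
lunch = rng.integers(0, 2, n)              # free/reduced-price lunch flag
teacher = rng.integers(0, 4, n)            # which of 4 teachers
effect = np.array([5.0, 0.0, -5.0, 10.0])  # hypothetical true teacher effects
actual = 0.8 * prior + 100 - 10 * lunch + effect[teacher] + rng.normal(0, 10, n)

# Fit expected scores from the controls alone (teacher deliberately excluded).
X = np.column_stack([np.ones(n), prior, lunch])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
predicted = X @ beta

# Value-added estimate = mean residual of each teacher's students.
residual = actual - predicted
va = np.array([residual[teacher == t].mean() for t in range(4)])
print(np.round(va, 1))
```

With enough students per teacher, the estimates recover the (simulated) teacher effects relative to the average; the critiques quoted later in this article are about how much of the real world this simple setup leaves out.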
BT: And you’re comfortable with this, even with what some experts say is the uncertainty and potential for error?
JK: There is uncertainty of course. But you’ve seen the Brookings Report on value-added. A lot of value-added models actually do a better job of predicting future performance than, for example, the SATs do of predicting future performance in college. And yet we use those to make decisions about who gets into what schools. And there’s evidence that they also do a better job of actually predicting performance than principals’ evaluations themselves. And so what I would try to encourage people to think about is to look in the context of all the measures that are out there. It is imperfect as all measures are, but it’s been shown to be pretty highly predictive. On top of which, it’s not the only thing we look at. Yes, it is 50 percent, but it’s just 50 percent and nobody is going to get fired because they have low value-added alone.
BT: That’s not possible?
JK: No. You have to have low observations [low scores from principals and master educators who sit in on classes] and low commitment to school community [another key IMPACT criterion]. So that’s just not going to happen. There’s actually, as you’ve seen from the documents, a decent correlation between our observations and the value-added. I’ve been to these classrooms [with high value-added teachers]. These are good teachers. These are places where you want to put your children. So if it was totally crazy, if you walked into the high value-added classrooms and you saw terrible teaching, it would give you more pause. But to me there’s something certainly solid there.
BT: As I recall it was not a huge correlation, but a modest correlation.
JK: Yes, a modest correlation, but as these things go, relative to similar studies, it’s right in the ballpark of what one would expect. It’s also right in the ballpark of similar things in the social sciences.
BT: So you don’t envision a day when that 50 percent might roll back to 30 percent because you’re using some other measure?
JK: I can’t predict the future. We’re always open to thinking about what the right balance is. We are looking at student measures. The Gates study has done some interesting work on that. They found a strong correlation or at least a decent correlation between student responses and value-added.
BT: Student responses?
JK: Student perceptions of the teacher. It’s not, ‘Do you like your teacher?’ It’s things like, ‘Do you feel your teacher pushes you? Do you feel safe to make mistakes in his classroom? If you get something wrong, does he help you understand why?’ That sort of thing.
BT: So you could see that being a part of IMPACT?
JK: I think eventually. There’s a lot of work that would need to be figured out. You can’t do it with all students. You can’t do it with the little ones. But it’s something certainly to explore.

From a real Ph.D. in math, John Ewing, Director of Math for America:
"Whether naïfs or experts, mathematicians need to confront people who misuse their subject to intimidate others into accepting conclusions simply because they are based on some mathematics. Unlike many policy makers, mathematicians are not bamboozled by the theory behind VAM, and they need to speak out forcefully. Mathematical models have limitations. They do not by themselves convey authority for their conclusions. They are tools, not magic. And using the mathematics to intimidate—to preempt debate about the goals of education and measures of success—is harmful not only to education but to mathematics itself."
To read the full article de-mystifying VAM: http://www.ams.org/notices/201105/rtx110500667p.pd... 
Again, VAM and IMPACT were created to hide the truth and justify anything the Rheeformers wanted to do, not to gain any valid data.
BT: Here’s the problem. How do you explain value-added to a lay audience, or even beyond a lay audience, when you’re dealing with Ph.D.-level statistics and math? 
Ph.D.-level statistics? Really? IMPACT was created and administered by imbeciles like Kamras (a lousy teacher, by all accounts, much like Rhee). All it does is create random, meaningless numbers as an excuse to fire teachers. When you add that to the observational criteria, which are designed to identify bad teaching technique as good and vice versa, and the ability of principals and administrators to manipulate all of the above, you end up with a system that allows anyone to be fired or kept at the whim of the administration, thus ensuring loyalty above competence and plenty of $$$ for temp agencies like TFA. That was the point, after all, wasn’t it?
This evaluation is just like any other. It is 50 percent one thing and 50 percent so many other things that anything in the latter half seems almost inconsequential. If student work makes up only 10 percent, why bother with it in the first place?
How would Jason fare if IMPACT were used as his evaluation tool with the various union leadership? If they were to offer their perceptions of him as the Chief of Human Capital, would he receive an excess-letter or a job-well-done?
If we had a competent head of schools and a more democratic system, Kamras would be out on his *&^% after publicly embarrassing himself like this, revealing such a breathtaking lack of knowledge about education and equally breathtaking lack of respect for teachers, students and families. Of course, if we had those things in place he never would have been hired to begin with. 
Kamras strikes me as a guy who is tied up in his own underwear - trying to sound smart, but having no idea, really, what he's talking about and just counting on the continuing good will of central office to keep him in his position, hoping that it would look bad for the system if he were thrown out of it. 
[originally mis-posted on another turque blog]
5/10/2011 2:33 PM EDT
I have spoken often with Mr. Kamras, and while I do not doubt his sincerity, I find that he is inflexible in regard to teachers' concerns. He listens, but I haven't seen any change in attitude or approach with Ms. Henderson in charge. DC is still a very difficult place to teach, and little things--like firing teachers during National Teacher Appreciation Month--do matter. It seems that DCPS doesn't care.
5/10/2011 5:16 PM EDT
JK's ultimate goal was policymaking. His master's was in the subject, and he attained the first rung on the ladder when he became the Chief of Teacher Capital or something or other. Good job, Jason, maybe you'll be a really important person one day when you become a Superintendent or Chancellor.
From the field, I can say you are despised and discredited. People laugh at your prescriptions and you've become a joke in the hallways. I don't consider you an inspiring leader in any way and I wish you would go back into the classroom where you could continue making a difference. And this is from someone who was open minded and gave you and TFA an honest hearing. No more. 
What kind of teacher would opt not to participate in IMPACT? Do they have legitimate concerns regarding actual progress or are they simply trying to hang on to their job in the least threatening theater? How effective have subjective administrative evaluations been historically in determining the quality of a teacher? Under IMPACT, will there be more or less pressure on administrators/peers to actually get into the classroom and make the necessary observations? Will the DCPS bureaucracy allow IMPACT the necessary five years to be deemed valid or will politics win out, as usual?
5/10/2011 8:02 AM EDT
Paulhoss, I'll try my best to answer some of your interesting questions. For me personally, IMPACT is not such a bad deal. I teach mostly affluent kids and they do well on the DC CAS. I have more issues with the observations, which are based on agreeable enough ideas, but which are packaged irrationally and imposed upon teachers who are already overloaded with the gritty workings of the teaching profession. Imagine yourself as an athlete trying to win a game in a pressure situation and having your coach yelling at you to do it his way while you are trying to perform. Doable, but not ideal. 
But for teachers in less affluent schools this is a really difficult challenge. Their students are not from affluent homes and there is little to no history of educational attainment. They are pressured to the nth degree to get the test scores up but the skill base of the students is really low. They did not get into the profession to teach to the test but that is what they are forced to do daily. Yes, they are probably scared for their jobs, and rightly so. Most of the minimally effective teachers on the chopping block this year (some 700 I believe) are in the poorest wards of the city. 
Why do I care? Because as time goes on I see how DCPS sets up its most important asset (teachers) for failure. "Do it this way, and I'm going to hold you accountable if you don't do it this way. You are not here to do your own thinking, you are here to get these test scores higher." That is an affront to the idea of free thinking, public education, and to the citizens of this city who send their children to public schools. The American state is built on a foundation of free thinking, not narrow measures of reading and math (they are necessary but not sufficient to a solid education). 
5/10/2011 8:44 AM EDT
Excellent explanation, Tension.  
Too bad that the folks at Central Office would not take this to heart even if they heard it straight from other thoughtful, competent teachers like you.
5/10/2011 9:06 AM EDT
Further thoughts on some of Hoss' questions: 
"What kind of teacher would opt not to participate in IMPACT?"  
>For instance, an experienced, competent teacher who sees that IMPACT is not measuring or increasing his/her skill as a teacher 
"Do they have legitimate concerns regarding actual progress or are they simply trying to hang on to their job in the least threatening theater?" 
>This is not an either/or situation. Teachers could have both reactions or a different set of reactions. Also, teachers shouldn't be demonized for wanting to "hang on" to their jobs. Most people do, right? In the case of DCPS teachers, they also know that doing their jobs right and conforming to IMPACT are not one and the same. They also know that many people look down on them and are more likely to assume that teachers are incompetent rather than thinking that the system devised to evaluate them is poorly designed. 
Nice spin on how modest correlations are OK in the social sciences, especially in light of Kamras using IMPACT to turn over the teaching corps. Bill, please remember to ask him about the discrepancy in Highly Effectives between Ward 3 and Ward 8. I'm assuming the number of Minimally Effectives is much higher there and that many of these folks will be fired over the summer partly as a result of modest correlations.
Here's a reason why we see the likes of LATimes, Michelle Rhee et. al. in favor of expansive testing: 
When she learned this year that her 10-year-old daughter Erin is considered proficient in fourth-grade math, Nicolette "Nikki" Moberly felt upset and offended. 
Erin, who has Aicardi syndrome, cannot walk, talk or write, and is slowly learning to indicate that she is aware of her surroundings. The rare genetic disorder causes a variety of physical problems and developmental delays. 
"She is like a 3-month-old baby," said Moberly, a Woodsboro resident.
And yet, Erin's teachers at Rock Creek School — which educates children with the most severe types of disabilities in Frederick County — are required under federal and state laws to test her using the alternative Maryland School Assessment, also known as the alt-MSA. 
The test requires Erin's teachers to test her on 20 learning objectives: 10 in math and 10 in reading. Teachers have to get Erin to answer grade-level questions about letters and geometric shapes by pressing one of two buttons on an assistive communication device. It is an intense and painfully slow process of testing and retesting, which occupies most of Erin's time and energy at school from October through March.
Erin's sessions with her teachers are all recorded on video, and the results indicate that she was proficient in fourth-grade math. But when Moberly reviewed the videos, she saw that most of the time Erin didn't know the correct answer — she was just randomly hitting the right button at the right time.
"I watched about 20 minutes of it and I cried," said Moberly. "She is not at that place yet. Sometimes the teachers have to wait for 20 minutes for her to answer. They know she can't answer this [correctly] either." 
The way Moberly sees it, taking the test is a waste of time for Erin. It's an unnecessary distraction that takes her away from focusing on skills that she really needs, such as learning how to stand on her feet or respond to her surroundings. 
That is why Moberly and an increasingly vocal coalition of Frederick County parents of severely disabled children are fighting to change the rules which impose the alt-MSA exams on their children. 
How valid can value-added scores be if the results of DCCAS scores in over 100 schools in 2008, 2009, AND 2010 are questionable due to flaggings for high erasures from wrong to right? Next time you talk to Jason Kamras, Bill, please ask him this question. 
Also, Jason Kamras seems confident that that observations correlate (at least decently) to value-added scores. So why spend so much MONEY on crunching the numbers?
5/9/2011 9:47 PM EDT
Great point on how strong correlations for observations would weaken the need for such a heavy data component of IMPACT.
5/10/2011 8:53 PM EDT
Phoney "educators" like Rhee and Kamras rely on phoney "data." No surprise.
Regarding other student-generated work, Kamras says: "Actually it’s really hard to demonstrate growth clearly and quantitatively." 
Actually, the "clearly" part is not really hard if you entrust the assessment to knowledgeable professionals working with good standards. The "quantitatively" part is appropriately hard, because the outcomes most worth pursuing are not quantifiable.  
Regarding VAM formulas, Kamras states, "the math is in there to make it as fair as possible, so that we’re taking into consideration where the kids started, what they’ve done previously and the other things we know are outside the teacher’s control. By doing all that we’re isolating the impact of the teacher." 
This statement contains a number of flaws. The formulas take into consideration a very limited idea of "where the kids started" and reduce that idea to an imperfect number. Then, they attempt to assign numeric values to the limited information they have about a few of the many relevant "things we know are outside the teacher's control." Then, they say they can isolate the teacher effect, which is only true to the extent that you believe every element of the formula is "true" and appropriate for inclusion, and that every element excluded from the equation is either (a) not statistically significant, or (b) equally applicable to all students and teachers each year.
5/9/2011 9:50 PM EDT
It's easy to throw stones at IMPACT and its quantitative focus. But on the other hand, prior observation systems let ineffective teachers survive and clog the system. How would you rid the system of ineffective teachers without IMPACT? What other assessments would you use?
I believe it's fair and necessary to criticize DCPS and IMPACT for a whole host of reasons. My starting point, however, is transparency. From what it sounds like, this statistical "black box" will make it very hard for teachers and the public to understand why favorite teachers get fired. Seems to me that student and parent satisfaction should be a heavy component in any evaluation system. Aren't they the people being served? And aren't they smart enough to know whether their teachers are good enough if they are smart enough to make an informed school choice?
5/9/2011 11:04 PM EDT
Seems to me that student and parent satisfaction should be a heavy component in any evaluation system. Aren't they the people being served? And aren't they smart enough to know whether their teachers are good enough if they are smart enough to make an informed school choice? 
Not according to the professional reform crowd.
IMPACT is here to stay. Here's a question to ask: The whole curriculum is going to be changing once you go to a Core Curriculum model. Why is it worth it to ramp up IMPACT now at a time when many metrics are going to have to be totally revamped come the new test? Why not introduce IMPACT slowly, carefully and with broad political support across the aisle?  
It's not that I mind many of the changes that DC makes; I just mind how these changes may be politically unstable. They've survived the Gray transition so far. But they may not survive the next mayoral transition (and given Gray's age and the changing demographics of the city, that could be in less than 4 years) if they are implemented through a direct assault on WTU.
So much of DCPS' sustainability with reform depends on how skillful they are in managing WTU politics and a Nathan Saunders who is likely to be far more resistant to IMPACT than the previous union head was.
5/10/2011 2:39 PM EDT

IMPACT is not only an inexact science, it is being used as a tool to belittle teachers, remove others, and to push an agenda that actually hurts teaching. If we are expected to allow our students multiple ways to access information, why are teachers being limited to a small "script" of what "excellent teaching looks like?" Many teachers have complained that they are getting less done because they have to waste time fulfilling the obligations of IMPACT. Sometimes direct teaching is necessary. Do it when you are being observed? You could lose your job.