Dec 15, 2009
 

[note: this was originally posted April 30, 2008 — back when I apparently used to blog more often. I’m resuscitating it as part of a #edublogBT meme begun by Jon Becker]

All this talk about writing, grade books, and “the unthinking habits of grading” has given me so much to think about. My mind is swimming.

The thing is, I think about this stuff all the time. It is only recently, after reading hordes of comments and postings (and all the bits in between), that I have begun to understand my naivety. Or is it ignorance? (Hint: not everyone thinks about this stuff all the time.)


First, a bit of background, for the sake of context

I grew up in Calgary, Alberta, Canada and attended Catholic, publicly funded schools. The teachers I had, with two notable exceptions1, all used criterion-referenced assessment to grade my work. I always (other than with the two notable exceptions) knew how I was being graded, even if they did average my scores and turn them into percentages. I graduated from an unusual work-at-your-own-pace high school in 1992.2

After completing an English Lit degree on the West coast, I entered Education. I did not realize at the time (1997) that the program I was in was progressive compared to most Ed programs out there. Thinking, ignorantly, that what I learned was what all teachers-to-be learned, I eagerly entered the world of K-12 education, armed with what I thought was Everything A Beginning Teacher Should Know.

One Epiphany (of many)

Fast-forward to 2001: I entered the realm of international education, working at an MYP school. Before this moment, what I knew about MYP could have filled an ant’s mouth. Sitting in an MYP training session, my then-mentor flashed the subject-specific criteria for Language A (MYP’s equivalent to English Language Arts) on a projector screen.

Thought #1: “Hey, that’s cool! That’s the same criteria my grade 7 teacher used to grade my writing, and it’s the same criteria I have always used to assess student work.”

[insert hmms and haws of other training participants here, as they ponder the criteria on the screen]

Thought #2: “Wait… doesn’t everyone use this?”

It wasn’t long after Thought #2 occurred that I learned the answer: No, not everyone is using this. Plenty of conversation and interaction with my then-colleagues (from various backgrounds in education, as expected in an international setting) taught me that what I had taken for granted my entire (short) life was indeed not “the norm.”

The Interim and a Confession

Over the past 7 years, plenty more colleagues, students, and their parents have shown me that other ways of assessing are indeed rife and plentiful. Just yesterday I engaged in three different conversations with three different families about this very topic (parent conferences were timely). Witness a verbatim quote from one of those discussions:

“Wow, this is so different from what we’re used to. You mean you want your students to come show you their work before they finish? You won’t take points off?”

[I won’t even get into the connotations implied by the use of the words “want”, “before”, and “points.”]

Don’t get me wrong — I do not think the same way about this issue as I did 10 or even 3 years ago. I have learned more than I can express on this small page about how to assess meaningfully. I have spent many, many teacher days fantasizing about not assessing at all, and like Dana Huff, I still have those days. I am guilty, in past years, of assigning my students the most boring five-paragraph essay you’ve ever read, just so I could be bored to death reading it and they could be bored to death writing it.

A Question … and Answers?

I have offered some of my thoughts about assessment before — indeed, the reason I initially began this blog was to reflect on what I was learning in an IBO PD course on MYP Objectives and Assessment. Now, having learned so much, I feel my philosophy of assessment is still evolving, and I do think long and hard about why I assess my students’ work and how I do it.

(And, please know that I mention MYP only because I feel it is one of the best educational systems out there for student learning. Is it the only one? No. Are there others that do the same? Yes. Is it just about best practice? Yes.)

So here’s the thing: I know there are other methods of assessment. I know about them well enough because I took the required courses in university, and I have seen them used in classrooms. But here’s what I still don’t understand — and please don’t mistake this for a rhetorical question:

Why are we still using them? (Do they facilitate learning?)

I’m starting, today, with just this question about criterion-referenced assessment, but know that I’m not limiting my thoughts to only this aspect of assessment. I anticipate that those thoughts — and more questions — will follow as my assessment philosophy further evolves.


Mid-evolution

So far, here is what I believe. Assessment is…

  • primarily for learning; the assessment of learning is secondary.
  • real and not “fabricated” just to put a number on a paper or in a box.
  • goal-focused, with those goals based on where students are in their learning.
  • varied, with a wide variety of opportunities given for students to reach their goals.
  • frequent and woven into every aspect of what we do, while we are learning. (I am uncomfortable with the thought of students being either too excited or filled with dread at the mention of assessment; I want my students to see assessment as something we do all the time.)
  • part of the natural learning process, not something tacked onto the end.
  • not driven by reporting terms, boxes that need to be filled, administrative software, or any other nonsense that has nothing to do with the learner.
  • applied when needed for learning, and not at calendar dates specified a year in advance.

1. Okay, so really it was three notable exceptions. And they were notable because they were exceptionally bad teachers. I’m not naming names, it’s water under the bridge, yadda-yadda-yadda — and the truth is I learned many life lessons from these poor teachers.

2. The dates are important because I refuse to believe that the concept of criterion-referenced assessment is “new” and “progressive”. The dates, although they reflect only my personal experience and not bodies of research, lend further credence to my belief that education is painfully, mind-bogglingly slow to change.

Photo Credits: Nice Hat by cwalkatron; Question mark by Leo Reynolds


Oct 10, 2009
 

From Clark and Salomon (1986):

General media comparisons and studies pertaining to their overall instructional impact have yielded little that warrants optimism. Even in the few cases where dramatic changes in achievement or ability were found to result from the introduction of a medium such as television, . . .  it was not the medium per se that caused the change, but rather the curricular reform that its introduction enabled.

Photo Credit: I am Here for the Learning Revolution by Wesley Fryer (Attribution-ShareAlike License)

This is why, in my opinion, the state of education is so sucky today. Our (educators’) use of technology for learning is hampered by the glass ceiling of curriculum. Only when the curriculum changes will dramatic changes in learning occur. Currently, too many schools are trying to fit square pegs into round holes; that is, teachers are using fabulous technology (IWBs, Tablet PCs, iPod Touch, VoiceThread, and more) to teach curriculum that is still content-based.

These technologies should be reforming curriculum. Why aren’t they?

How can we move this forward? How can we change curricula so that they allow teachers and students “dramatic change”? What is standing in the way, and how can we overcome it?

Clark, R. E., & Salomon, G. (1986). Media in teaching. In M. Wittrock (Ed.), Handbook of Research on Teaching (3rd ed., pp. 464-478). New York: Macmillan.


Feb 23, 2009
 

I was reading a recent post on Bridging Differences about assessment, and in particular, testing. I respect Deborah Meier and Diane Ravitch greatly, and will take a short minute first to say that if you’re an educator and you don’t follow their epistolary-style blog, you really should.  Anyway, the post is about testing and the need for data in schools.  Deborah talks about how to address the “data problem” and how teachers can (and should) avoid turning their classrooms into testing settings. 


Photo Credit: 070305 by COCOEN daily photos (Attribution-NonCommercial-ShareAlike License)

I always read posts like these with only half-interest, I must admit. Why? Because I am philosophically opposed to standardized testing, particularly as it is used in American schools. Where I am from (Canada), standardized tests are linked directly to curriculum and used in an entirely different manner. I had no idea what US-style standardized tests were about until I moved overseas and began having conversations with my American colleagues. They later took on a whole new meaning for me when I had to write one myself: the GRE was required for applying to my top-choice graduate schools. Ugh! I learned very quickly in my preparation that these kinds of standardized tests have nothing whatsoever to do with teaching and learning.

I’ve been lucky, I guess, that I’ve also never had to teach in a school where standardized testing has been emphasized. In Canada, my students wrote mandatory government exams in grades 3, 6, 9, and 12 (or 4, 7, 10, and 12 in B.C.), but again, these were always connected to the provincial curriculum. My students also wrote the Canadian Achievement Tests in grade 7, but schools never used these to “pin” teachers. In fact, such tests (in my experience) were never about the teachers at all. Schools I taught in used the CAT to help identify students who might need learning support or a gifted & talented program. And that is how the international schools I have worked in have used standardized tests like the ITBS and the ISA.


Photo Credit: slide.012-002 by keepps (Attribution-NonCommercial-ShareAlike License)

Internationally, I have only ever taught at MYP schools. And this comment, left on the Bridging Differences post I mention above, is one of the reasons why:

To get the kind of reliability that a multiple choice test delivers, the kids would have to spend a week to answer all the open-ended response questions, rather than the hour or two that the multiple choice test takes.

The writer of this comment, ceolaf (who leaves no URL with his/her comment), wrote a lengthy explanation of why, whether we like them or not, we need some kind of standardized test because of the reliability issue. He further states:

The failure of THOSE tests that we hate does not in any way prove the superiority of our assessments. Our assessments have their own flaws.

I have two things to say in response to these two bits:

  1. I beg to differ.  And, 
  2. This is why I love MYP.
MYP assessments, while certainly not perfect, do exactly what ceolaf’s first comment implies: they are project-based, for the most part, and so they DO have that kind of reliability. Our students take a week (if not longer) to “answer” (I prefer “respond to”) oodles of open-ended questions. Further, the assessments are criterion-referenced, with specific descriptors for each criterion and each task, so that students know exactly where they sit within the achievement levels. And, as if that’s not enough — in MYP, no single assessment is an indicator of a student’s achievement! As teachers, we must see multiple pieces of evidence before we can report on a student’s achievement.
 

Photo Credit: Image by in da mood (Attribution-NonCommercial License)
Lest you start thinking, “Wait a minute. So the teachers are doing everything? Doesn’t that make it unreliable?” allow me to go on. In MYP, although teachers adapt the given criteria (set out in each subject guide) to be grade-specific and task-specific, we are not left to our own devices, so to speak, to assess our students randomly or unchecked. About two-thirds of the way through each school year, we send our Grade 10 work (Grade 10 is the fifth and final year of MYP) to be moderated by a complete stranger, also an educator, somewhere else in the world. The moderator’s job: to make sure that the assessments we are doing, as teachers, are in line with the standards set by the IBO worldwide.

 

Of course, all of what I’ve said above is really the nutshell version. It’s slightly more complicated than what I’ve described here (yes, there is paperwork and there are discussions, and more), but this is the quick-and-dirty explanation that emphasizes one of the many reasons I love MYP: we assess for learning, and of learning, in ways that *are* reliable but don’t rely on tests! And that is completely in line with my philosophy.
