Showing posts with label assessment practices.

Wednesday, December 4, 2013

The Varied Functions of Digital Badges in the Educational Assessment BOOC


by Dan Hickey and Tara Kelley


This extended post details how open digital badges were incorporated into the Educational Assessment Big Open Online Course (BOOC).  In summary, there were four types of badges:
  • Assessment Expertise badges for completing peer-endorsed wikifolios and an exam in each of the three sections of the course (Practices, Principles, and Policies)
  • An Assessment Expert badge for earning the three Expertise badges and succeeding on the final exam
  • Leader versions of the Expertise and Expert badges for earning the most peer promotions in one's networking group
  • A Customized Assessment Expert badge for completing a term paper that assembles the insights gained across the 11 wikifolio assignments into a coherent professional paper.  This badge lets earners indicate the state, domain, or context in which they have developed local expertise in assessment.
Along the way, this post explores (a) how open badges differ from grades and other static (i.e., non-networked, evidence-free) credentials, (b) how we incorporated evidence of learning directly into the badges, and (c) the role of badges in making claims about general, specific, and local expertise.
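For readers who want a concrete sense of what "incorporating evidence directly into the badges" means, the sketch below shows roughly what an Open Badges assertion carrying an evidence link looks like. It is a minimal, hypothetical illustration written as a Python dictionary: the field names follow the public (1.0-era) Open Badges assertion format, but every URL, identifier, and value is invented and does not come from the actual BOOC badges.

```python
import json

# Hypothetical sketch of an Open Badges assertion with an evidence link.
# Field names follow the public Open Badges assertion format; all URLs,
# names, and identifiers below are invented for illustration only.
assertion = {
    "uid": "booc-practices-0001",
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "learner@example.edu",
    },
    # Points to the badge class (name, description, criteria, issuer).
    "badge": "https://example.edu/badges/assessment-practices-expertise.json",
    "issuedOn": "2013-10-15",
    # What makes the credential evidence-rich: a link back to the earner's
    # peer-endorsed wikifolios rather than a bare grade.
    "evidence": "https://example.edu/booc/wikifolios/learner/practices",
    "verify": {
        "type": "hosted",
        "url": "https://example.edu/badges/assertions/booc-practices-0001.json",
    },
}

print(json.dumps(assertion, indent=2))
```

The point of the sketch is simply that, unlike a grade, an open badge is a networked object that can carry a pointer back to the learner's actual work, which anyone viewing the badge can follow.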

Previous posts describe the BOOC, the peer promotion and endorsement features, the role of the textbook, and how one student experienced the course and the badges.  Future posts will describe the code and interface used to issue the badges in Course Builder, the entire corpus of badges issued, how earners shared them, what we learned by analyzing the evidence they contained, and the design principles for recognizing, assessing, motivating, and studying learning that the BOOC badges illustrate.

Sunday, October 21, 2012

Initial Questions About Digital Badges and Learning

by Daniel Hickey
This post suggests some initial questions about learning that you might want to ask if you are considering using digital badges.  A version of this post is being prepared for the November 2012 edition of EvoLLLution magazine.  That article will consider how digital badges can be used to both enhance learning and recognize learning in ways that might help colleges and universities attract larger numbers of adult learners back to school.  This post poses these same questions in a more general context.

Wednesday, October 3, 2012

Incorporating Open Badges into a Hybrid Course Context

By Dan Hickey
I recently incorporated digital badges into the online aspects of my doctoral course on educational assessment (“Capturing Learning in Context”).  There are two aspects of this effort that readers might find useful.  The first aspect concerns the way students award simple “stamps” to highlight significant contributions or insights from classmates. I use those stamps to award three “one-star” badges each week, and I will use the one-star badges to determine how to award three two-star badges at the end of the semester.  I will elaborate on this in a later post.  I also moved the section on using the Mozilla Open Badges Backpack to another post. This post is already going to be pretty long!
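For readers who like to see the mechanics, here is a rough sketch of the kind of tally involved. It is purely illustrative: the names, numbers, and the idea of a purely mechanical count are mine (in the course I also exercise judgment), and none of this is ForAllBadges code.

```python
from collections import Counter

# Illustrative only: tally peer-awarded "stamps" into weekly one-star picks,
# then tally weekly one-star picks into end-of-semester two-star picks.

def weekly_one_star(stamps_this_week, n=3):
    """stamps_this_week: list of student names, one entry per stamp awarded."""
    return [name for name, _ in Counter(stamps_this_week).most_common(n)]

def semester_two_star(weekly_picks, n=3):
    """weekly_picks: list of weekly one-star lists accumulated over the term."""
    totals = Counter(name for week in weekly_picks for name in week)
    return [name for name, _ in totals.most_common(n)]

# Invented example data:
weeks = [
    weekly_one_star(["Ana", "Ana", "Ben", "Cal", "Ben", "Dee"]),
    weekly_one_star(["Ben", "Dee", "Dee", "Ana", "Cal", "Cal"]),
]
print(semester_two_star(weeks))
```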

In this post I want to describe how I used ForAllBadges (from ForAllSystems, a small Chicago firm) to issue digital badges within a typical online course management system (CMS).  Anyone who wants to issue badges that comply with Mozilla’s Open Badge Infrastructure (OBI) can easily sign up for a free account at http://www.forallbadges.com/.  The account can be used as a stand-alone site, or it can be accessed from within any CMS that lets you access outside websites.  I am using OnCourse, the Sakai-based open-source CMS that Indiana University helped develop.

Wednesday, July 4, 2012

Responding to Michael Cole’s Question about Badges


By Dan Hickey

I was involved in an exchange on XMCA, the listserv established by Michael Cole’s Laboratory of Comparative Human Cognition and the journal Mind, Culture, and Activity.  I mentioned digital badges in a post, and Mike wrote back to ask:

You know there appear to be several people who appear from time to time on xmca involved in the Mac Arthur initiatives where badges are all the rage. For anyone interested in multi-modal representational practices, it is certainly interesting as a subject of CHAT analysis.

Question:  if you are right in your assumption that the BADGE movement will start a trend what do you think that the trend promises or portends more broadly?

I have taken some time to respond, in part because I wanted to get caught up on the latest work by Cole and his students regarding their successes and challenges around the Fifth Dimension after-school computer clubhouses.  The Fifth Dimension is precisely the kind of educational innovation that should be easier to create, sustain, and study when digital badges are widely used.


Wednesday, June 13, 2012

Three Firsts: Bloomington’s First Hackjam, ForAllBadges’ App, and Participatory Assessment + Hackasaurus


Dan Hickey and Rebecca Itow
On Thursday, June 7, 2012, the Center for Research on Learning and Technology at Indiana University, in conjunction with the Monroe County Public Library (MCPL) in Bloomington, IN, put on a Hackjam for local youth. The six-hour event was a huge success. Students were excited and engaged throughout the day as they used Hackasaurus’ web editing tool X-Ray Goggles to “hack” Bloomington’s Herald Times. The hackers picked up some HTML and CSS, developed some web literacies, and learned about writing in different new media contexts. We did some cool new stuff that we think others will find useful and interesting. We are going to summarize what we did in this post and elaborate on some of these features in subsequent posts, while trying to keep this one short and readable.

WHY DID WE DO A HACKJAM?
We agreed to do a Hackjam with the library many months ago. MCPL Director Sara Laughlin had contacted us in 2011 about partnering with them on a MacArthur/IMLS proposal to bring some of Nicole Pinkard’s YouMedia programming to Bloomington. We concluded that a more modest collaboration (like a Hackjam) was needed to lay the groundwork for something as ambitious as YouMedia.

Our ideas for extending Mozilla’s existing Hacktivity Kit were first drafted in a proposal to the MacArthur Foundation’s Badges for Lifelong Learning initiative. Hackasaurus promised to be a good context for continuing our efforts to combine badges and participatory assessment methods. While our proposal was not funded, we decided to do it anyway. MCPL initially considered making the Hackjam part of the summer reading program sponsored by the local school system. Even though we were planning to remix the curriculum to make it more “school friendly,” some school officials could not get past the term “hacking.”


Sunday, June 10, 2012

Digital Badges as “Transformative Assessment”

By Dan Hickey
The MacArthur Foundation's Badges for Lifelong Learning competition generated immense interest in using digital badges to motivate and acknowledge informal and formal learning. The 366 proposals submitted in the first round presented a diverse array of functions for digital badges. As elaborated in a prior post, the various proposals used badges to accomplish one or more of the following assessment functions:

Traditional summative functions. This is using badges to indicate that the earner previously did something or knows something. This is what the educational assessment community calls assessment of learning.

Newer formative functions. This is where badges are used to enhance motivation, feedback, and discourse for individual badge earners and broader communities of earners. This is what is often labeled assessment for learning.

Groundbreaking transformative functions. This is where badges transform existing learning ecosystems or allow new ones to be created. These assessment functions impact both badge earners and badge issuers, and may be intentional or incidental. I believe we should label this assessment as learning.

This diversity of assessment functions was maintained in the 22 badge content awardees who were ultimately funded to develop content and issue badges, as well as in the various entities associated with the Hive collectives in New York and Chicago, who were funded outside of the competition to help their members develop and issue badges.  These awardees will work with one of the three badging platform awardees, who are responsible for creating open (i.e., freely available) systems for issuing digital badges.
Along the way, the Badges competition attracted a lot of attention.  It certainly raised some eyebrows that the modestly funded program (initially just $2M) was announced by a cabinet-level official at a kickoff meeting attended by heads of numerous other federal agencies.  The competition and the idea of digital badges were mentioned in articles in the Wall Street Journal, the New York Times, and The Chronicle of Higher Education.  This attention in turn led to additional interest and helped rekindle the simmering debate over extrinsic incentives.  It also led many observers to ask the obvious question: “Will it work?”
This post reviews the reasons why I think the various awardees are going to succeed in their stated goals for using digital badges to assess learning.  In doing so I want to unpack what “success” means and suggest that the initiative will provide a useful new definition of “success” for learning initiatives.  I will conclude by suggesting that the initiative has already succeeded because it has fostered broader appreciation of the transformative functions of assessment.

Thursday, March 1, 2012

Open Badges and the Future of Assessment

Of course I followed the rollout of MacArthur’s Badges for Lifelong Learning competition quite closely. I have studied participatory approaches to assessment and motivation for many years.

EXCITEMENT OVER BADGES
While the Digital Media and Learning program committed a relatively modest sum (initially $2M), it generated massive attention and energy.  I was not the only one who was surprised by the scope of the Badges initiative.  In September 2011, one week before the launch of the competition, I was meeting with an education program officer at the National Science Foundation.  I asked her if she had heard about the upcoming press conference/webinar.  It turned out she had been reading the press release just before our meeting.  She indicated that the NSF had learned about the competition and that many of the program officers were asking about it.  Like me, many of them were impressed that Education Secretary Duncan and the heads of several other federal agencies were scheduled to speak at the launch event at the Hirshhorn Museum.

THE DEBATE OVER BADGES AND REWARDS
As the competition unfolded, I followed the inevitable debate over the consequences of “extrinsic rewards” like badges on student motivation.  Thanks in part to Daniel Pink’s widely read book Drive, many worried that badges would trivialize deep learning and leave learners with decreased intrinsic motivation to learn. The debate was played out nicely (and objectively) at the HASTAC blog via posts from Mitch Resnick and Cathy Davidson.  I have been arguing in obscure academic journals for years that sociocultural views of learning call for an agnostic stance towards incentives.  In particular, I believe that the negative impact of rewards and competition says more about the lack of feedback and opportunity to improve in traditional classrooms than about the rewards themselves.  There is a brief summary of these issues in a chapter on sociocultural and situative theories of motivation that Education.com commissioned me to write a few years ago.  One of the things I tried to do in that chapter and the other articles it references is show why rewards like badges are fundamentally problematic for constructionists like Mitch, and how newer situative theories of motivation promise to resolve that tension.  One of the things that has been overlooked in the debate is that situative theories reveal the value of rewards without resorting to simplistic behaviorist theories of reinforcing and punishing desired behaviors.

Monday, November 16, 2009

Join this discussion on Grading 2.0

Over at the HASTAC forum, a conversation has begun around the role of assessment in 21st-century classrooms.

The hosts of this discussion, HASTAC scholars John Jones, Dixie Ching, and Matt Straus, explain the impetus for this conversation as follows:
As the educational and cultural climate changes in response to new technologies for creating and sharing information, educators have begun to ask if the current framework for assessing student work, standardized testing, and grading is incompatible with the way these students should be learning and the skills they need to acquire to compete in the information age. Many would agree that it's time to expand the current notion of assessment and create new metrics, rubrics, and methods of measurement in order to ensure that all elements of the learning process are keeping pace with the ever-evolving world in which we live. This new framework for assessment might build off of currently accepted strategies and pedagogy, but also take into account new ideas about what learners should know to be successful and confident in all of their endeavors.

Topics within this forum conversation include:
  • Technology & Assessment ("How can educators leverage the affordances of digital media to create more time-efficient, intelligent, and effective assessment models?");
  • Assignments & Pedagogy ("How can we develop assignments, projects, classroom experiences, and syllabi that reflect these changes in technology and skills?");
  • Can everything be graded? ("How important is creativity, and how do we deal with subjective concepts in an objective way, in evaluation?"); and
  • Assessing the assessment strategies ("How do we evaluate the new assessment models that we create?").

The conversation has only just started, but it's already generated hundreds of visits and a dozen or so solid, interesting comments. If you're into technology, assessment, and participatory culture, you should take a look. It's worth a gander.

Here's the link again: Grading 2.0: Assessment in the Digital Age.

Tuesday, October 27, 2009

The Void Between Colleges of Education and University Teaching and Learning

In this post, I consider the tremendous advances in educational research I am seeing outside of colleges of education and ponder the relevance of mainstream educational research in light of the transformation of learning made possible by new digital social networks.

This weekend, the annual conference of the International Society for the Scholarship of Teaching and Learning took place at Indiana University. ISSOTL is the home of folks who are committed to studying and advancing teaching and learning in university settings. I saw several presentations that are directly relevant to what we care about here at Re-Mediating Assessment. These included a workshop on social pedagogies organized by Randy Bass, the Assistant Provost for Teaching and Learning at Georgetown, and several sessions on open education, including one by Randy and Toru Iiyoshi, who heads the Knowledge Media Lab at the Carnegie Foundation. Toru co-edited the groundbreaking volume Opening Up Education, of which we here at RMA are huge fans. (I liked it so much I bought the book, but you can download all of the articles for free—ignore the line at the MIT Press site about sample chapters).

I presented at a session about e-Portfolios with John Gosney (Faculty Liaison for Learning Technologies at IUPUI) and Stacy Morrone (Associate Dean for Learning Technologies at IU). John talked about the e-Portfolio efforts within the Sakai open source collaboration and courseware platform; Stacy talked about e-Portfolio as it has been implemented in OnCourse, IU’s instantiation of Sakai. I presented about our efforts to advance participatory assessment in my classroom assessment course using the newly available wiki and e-Portfolio tools in OnCourse (earlier deliberations on those efforts are here; more will be posted here soon). I was flattered that Maggie Ricci of IU’s Office of Instructional Consulting interviewed me about my post on positioning assessment for participation and promised to post the video this week (I will update here when it is available).

I am going to post about these presentations and how they intersect with participatory assessment as time permits over the next week or so. In the meantime, I want to stir up some overdue discussion over the void between the SOTL community and my colleagues in colleges of education at IU and elsewhere. In an unabashed effort to direct traffic to RMA and build interest in past and forthcoming posts, I am going to first write about this issue. I think it raises issues about the relevance of colleges of education and suggests a need for more interdisciplinary approaches to education research.

I should point out that I am new to the SOTL community. I have focused on technology-supported K-12 education for most of my career (most recently within the Quest Atlantis videogaming environment). I have only recently begun studying my own teaching, in the context of developing new core courses for the doctoral program in Learning Sciences and of trying to develop online courses that take full advantage of new digital social networking practices (initial deliberations over my classroom assessment course are here). I feel sheepish about my late arrival, given the tremendous innovations I found in the SOTL community that have mostly been ignored by educational researchers. My departmental colleagues Tom Duffy, who has long been active in SOTL here at IU, and Melissa Gresalfi have recently gotten seriously involved as well. The conference was awash with IU faculty, but I only saw a few colleagues from the School of Education. One notable exception was Melissa’s involvement on a panel on IU’s Interdisciplinary Teagle Colloquium on Inquiry in Action. I could not attend because it conflicted with my own session, but this panel described just the sort of cross-campus collaboration I am aiming to promote here. I also ran into Luise McCarty from the Educational Policy program, who heads the school’s Carnegie Initiative on the Doctorate.

My search of the program for other folks from colleges of education revealed another session that was scheduled against mine and that focused on the issue I am raising in this post. Karen Swanson of Mercer University and Mary Kayler of George Mason reported the findings of their meta-analysis of the literature on the tensions between colleges of education and SOTL. The fact that there is enough literature on this topic to meta-analyze indicates that this issue has been around for a while (and suggests that I should probably read up before doing anything more than blogging about it). From the abstract, it looks like they focused on the issue of tenure, which I presume refers to a core issue in the broader SOTL community: that SOTL researchers outside of schools of education risk being treated as interlopers by educational researchers while being treated as dilettantes by their own disciplinary communities. This same issue was mentioned in other sessions I attended as well. But significantly from my perspective, it looks like Swanson and Kayler examined this issue from the perspective of Education faculty, which is what I want to focus on here. I have tenure, but I certainly wonder how my increased foray into the SOTL community will be viewed when I try to get promoted to full professor.

I will start by exploring my own observations about educational researchers who study their own university teaching practices. I am not in teacher education, but I know of a lot of respected education faculty who seem to be conducting high-quality, published research about their teacher education practices. However, there is clearly a good deal of pretty mediocre self-study taking place as well. I review for a number of educational research journals and conferences. When I am asked to review manuscripts or proposals for educational research carried out in classrooms in colleges of education, I am quite skeptical. Because I have expertise in motivation and in formative assessment, I get stacks of submissions of studies of college of education teaching that seem utterly pointless to me. For example, folks love to study whether self-______ is correlated with some other education-relevant variables. The answer is always yes (unless their measures are unreliable), and then there is some post hoc explanation of the relationships with some tenuous suggestions for practice. Likewise, I review lots of submissions that examine whether students who get feedback on learning to solve some class of problems learn to solve those problems better than students from whom feedback is withheld. Here the answer should be yes, since this is essentially a test of educational malpractice. But the studies often ignore the assessment maxim that feedback must be useful and used, and instead focus on complex random assignment so that the study can be more “scientific.” I understand the appeal, because such studies are so easy to conduct and there are enough examples of them actually getting published to provide some inspiration (while dragging down the overall effect size of feedback in meta-analytic studies). While it is sometimes hard to tell, these “convenience” studies usually appear to be conducted in the author’s own course or academic program. So, yes, I admit that when that looks to be the case, I do not expect to be impressed. I wonder if other folks feel the same way or if perhaps I am being overly harsh.

Much of my interest in SOTL follows from my efforts to help my college take better advantage of new online instructional tools and to take advantage of social networking tools in my K-12 research. While my colleagues at IU Bloomington and IUPUI are making progress, I am afraid that we are well behind the curve. Although I managed to attend only a few SOTL sessions, I saw tremendous evidence of success that I will write about in subsequent posts. Randy Bass and Heidi Elmendorf (also of Georgetown) showed evidence of deep engagement on live discussion forums that simply can’t be faked; here at IU, Phillip Quirk showed some very convincing self-report data about student engagement in our new interdisciplinary Human Biology Program, which looks like a great model of practice for team-teaching courses. These initial observations reminded me of the opinion of James Paul Gee, who leads the MacArthur Foundation’s 21st Century Assessment Project (which partly sponsors my work as well). He has stated on several occasions that “the best educational research is no longer being conducted in colleges of education.” That is a pretty bold statement, and my education colleagues and I initially took offense at it. Obviously, it depends on your perspective; but in terms of taking advantage of new digital social networking tools and the movement towards open education and open-source curriculum, it seems like it may already be true.

One concern I had with SOTL was the sense that the excesses of “evidence-based practice” that have infected educational research were occurring there as well. But I did not see many of the randomized experimental studies that set out to “prove” that new instructional technology “works.” I have some very strong opinions about this that I will elaborate on in future posts; for now I will just say that I worry that SOTL researchers might get so caught up in doing controlled comparison studies of conventional and online courses that they completely miss the point that online courses offer an entirely new realm of possibilities for teaching and learning. The “objective” measures of learning normally used in such studies are often biased in favor of traditional lecture/text/practice models that train students to memorize numerous specific associations; as long as enough of those associations appear on a targeted multiple-choice exam, scores will go up. The problem is that such designs can’t capture the most important aspects of individual learning or any aspects of the social learning that is possible in these new educational contexts. Educational researchers seem unwilling to seriously begin looking at the potential of these new environments that they have “proven” to work. So, networked computers and online courses end up being used for very expensive test preparation…and that is a shame.

Here at RMA, we are exploring how participatory assessment models can foster and document all of the tremendous new opportunities for teaching and learning made possible by new digital social networks, while also producing convincing evidence on these “scientific” measures. I will close this post with a comment that Heidi Elmendorf made in the social pedagogies workshop. I asked her why she and the other presenters were embracing the distinction between “process” and “product.” In my opinion, this distinction is based on outdated individual models of learning; it dismisses the relevance of substantive communal engagement in powerful forms of learning, while privileging individual tests as the only “scientific” evidence of learning. I don’t recall Heidi’s exact response, but she immediately pointed out that her disciplinary colleagues in Biology leave her no choice. I was struck by the vigorous nods of agreement from her colleagues and the audience. Her response really brought me back down to earth and reminded me how much work we have to do in this regard. In my subsequent posts, I will try to illustrate how participatory assessment can address precisely the issue that Heidi raised.

Thursday, October 1, 2009

Positioning Portfolios for Participation

Much of the work in our 21st Century Assessment project this year has focused on communicating participatory assessment to the broader audiences whose practices we are trying to inform. These audiences include:

  • classroom teachers whose practices we are helping reshape to include more participation (like those we are working with in Monroe County right now);

  • other assessment researchers who seem to dismiss participatory accounts of learning as “anecdotal” (like my doctoral mentor Jim Pellegrino, who chaired the NRC panel on student assessment);

  • instructional innovators who are trying to support participation while also providing broadly convincing accounts of learning (like my colleagues Sasha Barab and Melissa Gresalfi, whose Quest Atlantis immersive environment has been a testbed for many of our ideas about assessment);

  • faculty in teacher education who are struggling to help pre-service teachers build professional portfolios while knowing that their score on the Praxis will count for much more (and whose jobs are being threatened by efforts in Indiana to phase out teacher education programs and replace them with more discipline-based instruction);

  • teachers in my graduate-level classroom assessment course who are learning how to do a better job assessing students in their classrooms, as part of their MA degree in educational leadership.


It turns out that participatory approaches to assessment are quite complicated, because they must bridge the void between the socially-defined views of knowing and learning that define participation, and the individually-defined models of knowing and learning that have traditionally been taken for granted by the assessment and measurement communities. As our project sponsor Jim Gee has quite succinctly put it: Your challenge is clarity.

As I have come to see most recently, clarity is about entry. Where do we start introducing this comprehensive new approach? Our approach itself is not that complicated, really. We have it boiled down to a more participatory version of Wiggins' well-known Understanding by Design. In fact we have taken to calling our approach Participation by Design (or, if he sues us, Designing for Participation). But the theory behind our approach is maddeningly complex, because it has to span the entire range of activity timescales (from moment-to-moment classroom activity to long-term policy change) and characterizations of learning (from communal discourse to individual understanding to aggregated achievement).

Portfolios and Positioning
Now it is clear to me that the best entry point is the familiar notion of the portfolio. Portfolios consist of any artifacts that learners create. Thanks to Melissa Gresalfi, I have come to realize that portfolios, and the artifacts they contain, are ideal for explaining participatory assessment. This is because portfolios position (where position is used as a verb). Before I get to the clarity part, let me first elaborate on what this means.

It turns out that portfolios can be used to position learners and domain content in ways that bridge this void between communal activity and aggregated attainment. In a paper with Caro Williams about the math project that Melissa and I worked on together, Melissa wrote that

“positioning, as a mechanism, helps bridge the space between the opportunities that are available for participation in particular ways and what individual participants do”

Building on the ideas of her doctoral advisor Jim Greeno (e.g., Greeno and Hull, 2002), Melissa explained that positioning refers to how students are positioned relative to content (called disciplinary positioning) and how they are positioned relative to others (called interpersonal positioning). As I will add below, positioning also refers to how instructors are positioned relative to the students and the content (perhaps called professorial positioning). This post will explore how portfolios can support all three types of positioning in more effective and in less effective ways.

Melissa further explained that positioning occurs at two levels. At the more immediate level, positioning concerns the moment-to-moment process in which students take up the opportunities they are presented with. Over the longer term, students become associated with particular ways of participating in classroom settings (these ideas are elaborated by scholars like Dorothy Holland and Stanton Wortham). This post will focus on identifying two complementary functions for portfolios that help them support both types of positioning.

Portfolios and Artifacts
Portfolios are collections of artifacts that students create. Artifacts support participation because they are where students apply what they are learning in class to something personally meaningful. In this way they make new meanings. In our various participatory assessment projects, artifacts have included:

  • the “Quests” that students complete and revise in Quest Atlantis’ Taiga world, where they explain, for example, their hypothesis for why the fish in the Taiga river are in decline;
  • the remixes of Moby Dick and Huck Finn that students in Becky Rupert’s class at Aurora Alternative High School create in their work with the participatory reading curricula that Jenna McWilliams is creating and refining;
  • the various writing assignments that the English teachers in Monroe and Greene County have their students complete in both their introductory and advanced writing classes;
  • the wikifolio entries that students in my graduate classroom assessment course complete, in which they draft examples of different assessment items for a lesson in their own classrooms and state which of the several item-writing guidelines in the textbook they found most useful.

In each case, various activities scaffold student learning as they create their artifacts and make new meanings in the process. As a caveat, this means that participatory assessment is not really much use in classrooms where students are not asked to create anything. More specifically, if your students are merely being asked to memorize associations and understand concepts in order to pass a test, stop reading now. Participatory assessment won’t help you. [I learned this the hard way trying to do participatory assessment with the Everyday Mathematics curriculum. Just do drill and practice. It works.]


Problematically Positioned Portfolios
Probably the most important aspect of participatory assessment has to do with the way portfolios are positioned in the classroom. We position them so they serve as a bridge between the communal activities of a participatory classroom and the individual accountability associated with compulsory schooling. If portfolios are to serve as a bridge, they must be firmly anchored. On one side they must be anchored to the enactment of classroom activities that support students’ creation of worthwhile portfolios. On the other side they must be anchored to the broader accountability associated with any formal schooling.



To keep portfolio practices from falling apart (as they often do), it is crucial that they rest on these two anchors. If accountability is placed directly on the portfolio artifacts, the portfolio practice will collapse. In other words, don’t use the quality of the actual portfolio artifacts for accountability. Attaching consequences to the actual artifacts means that learners will expect precise specifications regarding those artifacts, and then demand exhaustive feedback on whether the artifacts meet particular criteria. And if an instructor’s success is based on the quality of the artifacts, that instructor will comply. Such classrooms are defined by an incessant clamor from learners asking “Is this what you want???”

When portfolios are positioned this way (and they often are), they may or may not represent what students actually learned and are capable of. When positioned this way, the portfolio is more representative of (a) the specificity of the guidelines, (b) the students’ ability to follow those guidelines, and (c) the amount of feedback they get from the instructor. Accountability-oriented portfolios position disciplinary knowledge as something to be competitively displayed rather than something to be learned and shared, and they position students as competitors rather than supporters. Perhaps most tragically, attaching consequences to artifacts positions instructors (awkwardly) as both piano tuners and gatekeepers. As many instructors (and ex-instructors) know, doing so generates massive amounts of work. This is why it seems that many portfolio-based teacher education programs rely so heavily on doctoral students and adjuncts who may or may not be qualified to teach the courses. The more knowledgeable faculty members simply don’t have the time to help students with revision after revision of their artifacts as students struggle to create the perfect portfolio. This is the result of positioning portfolios for production.

Productive Positioning Within Portfolios
Portfolios are more useful when they are positioned to support reflection. Instead of grading the actual artifacts that students create, any accountability should be associated with student reflection on those artifacts. Rather than giving students guidelines for producing their artifact, students need guidelines for reflecting on how that artifact illustrates their use of the “big ideas” of the course. We call these relevant big ideas, or RBIs. The rubrics we provide students for their artifacts essentially ask them to explain how their artifact illustrates (a) the concept behind the RBI, (b) the consequences of the RBI for practice, and (c) what critiques others might have of this characterization of the RBI. For example:

  • Students in my classroom assessment course never actually “submit” their wikifolios of example assessments. Rather, three times a semester they submit a reflection that asks them to explain how they applied the RBIs of the corresponding chapter.
  • Students in Taiga world in Quest Atlantis submit their quests for review by the Park Ranger (actually their teacher, but they don’t know that). But the quest instructions (the artifact guidelines) also include a separate reflection section that asks students to reflect on their artifact. The reflection prompts are designed to indirectly cue them to what their quest was supposed to address.
  • Students in Becky Rupert’s English class are provided a rubric for their remixes that asks them to explain how that artifact illustrates how an understanding of genre allows a remix to be more meaningful to particular audiences.
Assessing the resulting reflections positions portfolios, students, and teachers in ways that strongly support participation. For example, if a particular student’s artifact does not actually lend itself to applying the RBIs, my classroom assessment students can simply indicate that in their reflection. This is important for at least three reasons:

  1. it allows full individualization for students and avoids a single ersatz assignment that is only half-meaningful to some students and mostly meaningless to the rest;
  2. understanding if and how ideas from a course do not apply is a crucially important part of developing expertise in those ideas; and
  3. the reflection itself provides more valid evidence of learning, precisely because it can include very specific guidelines. We give students very specific guidelines asking them to reflect on the RBIs conceptually, consequentially, and critically.

For example, the mathematics teachers in the classroom assessment course are going to discover that it is very difficult to create portfolio assessments for their existing mathematical practices. Rather than forcing them to do so anyway (and giving them a good grade for an absurd example), they can instead reflect on what it is about mathematics that makes it so difficult, and gain some insights into how they might more readily incorporate project-based instruction into their classes. The actual guidelines for creating good portfolios are in the book when they need them; reflecting on those guidelines more generally will set them up to use them more effectively and meaningfully in the future.

Another huge advantage of this way of positioning portfolios is that it eliminates a lot of the grading busywork and allows more broadly useful feedback. In the Quest Atlantis example, our research teacher Jake Summers of Binford Elementary discovered that whenever the reflections were well written and complete, the actual quest submission would also be well done. In the inevitable press for time, he eventually stopped looking at the artifacts themselves. Similarly, in my classroom assessment course, I only need to go back and look at the actual wikifolio entries when a reflection is incomplete or confusing. Given that the 30 students each have 8 entries, it is impossible to carefully review all 240 entries and provide meaningful feedback. Rather, throughout the semester, each of the students has been getting feedback from their group members and from me (as they specifically request and as time permits). Because the artifacts are not graded, students understand the feedback they get as more formative than summative, and not as instructions for revision. While some of the groups in class are still getting the hang of it, many of the entries are getting eight or nine comments, along with comments on comments. Because the entries are wikis, it is simple for the originator to go in and revise as appropriate. These students are starting to send me messages that, for me, suggest that the portfolio has indeed been positioned for participation: “Is this what you meant?” (emphasis added). This focus on meaning gets at the essence of participatory culture.

In a subsequent post, I will elaborate on how carefully positioning portfolios relative to (a) the enactment of classroom activities and (b) external accountability can further foster participation.

Wednesday, September 9, 2009

Q & A with Henry Jenkins' New Media Literacies Seminar

New media scholar Henry Jenkins is teaching a graduate seminar on new media literacies at the University of Southern California's Annenberg School for Communication. The participants had raised the issues of assessment and evaluation, especially related to educational applications of new media. Henry invited Dan Hickey to Skype into their class to field questions about this topic. They perused some of the previous posts here at Re-Mediating Assessment and proceeded to ask some great questions. Over the next few weeks, Dan and other members of the participatory assessment team will respond to these questions and seek input and feedback from others.


The first question was one we should have answered months ago:



Your blog post on what is not participatory assessment critiqued prevailing assessment and testing practices. So what is participatory assessment?

The answer to this question has both theoretical and practical elements. Theoretically, participatory assessment is about reframing all assessment and testing practices as different forms of communal participation, embracing the views of knowledgeable activity outlined by media scholars like Henry Jenkins, linguists like Jim Gee, and cognitive scientists like Jim Greeno. We will elaborate on that in subsequent posts, hopefully in response to questions about this post. But this first post will focus more on the practical answer.

Our work in participatory assessment takes inspiration from the definition of participatory culture in the 2006 white paper by Project New Media Literacies:
not every member must contribute, but all must believe they are free to contribute when ready and that what they contribute will be appropriately valued.

As Henry, Mimi Ito, and others have pointed out, such cultures define the friendship-driven and interest-driven digital social networks that most of our youth are now immersed in. This culture fosters tremendous levels of individual and communal engagement and learning. Schools have long dreamed of attaining such levels but have never even come close. Of course, creating (or even allowing) such a culture in compulsory school settings requires new kinds of collaborative activities for students. Students like those in Henry’s class and students in our Learning Sciences graduate program are at the forefront of creating such activities. Participatory assessment is about creating guidelines to help students and teachers use those activities to foster both conventional and new literacy practices. Importantly, these guidelines are also intended to produce the more conventional evidence of the impact of these practices on understanding and achievement that will always be necessary in any formal educational context. Such evidence will also always be necessary if there is to be any sort of credentialing offered for learning that takes place in less formal contexts.


Because successful engagement with participatory cultures depends as much on ethical participation (knowing how) as it does on information proficiency (knowing what), at the most basic practical level participatory assessment is intended to foster both types of know-how. More specifically, participatory assessment involves creating and refining informal discourse guidelines that students and teachers use to foster productive communal participation in collaborative educational activities, and then in the artifacts that are produced in those activities. Our basic idea is that before we assess whether or not individual students understand X (whatever we are trying to teach them), they must first be invited to collectively “try on” the identities of the knowledge practices associated with X. We do this by giving ample opportunities to “try out” discourse about X, by aggressively focusing classroom discourse on communal engagement in X, and by discouraging a premature focus on individual students’ understanding of X (or even their ability to articulate the concept of X). A premature focus on individual understanding leaves the students who are struggling (or have perhaps not even been trying) self-conscious and resistant to engagement. This will make them resist talking about X. Even more problematically, they will resist even listening to their classmates talk about X. Whatever the reason an individual is not engaging, educators must help all students engage with increased meaningfulness.
To do participatory assessment for activity A, we first define the relevant big ideas (RBIs) of the activity (i.e., X, Y, and perhaps Z). We then create two simple sets of Discourse Guidelines to ensure that all students enlist (i.e., use) X, Y, and Z in the discourse that defines the enactment of that activity. Event Reflections encourage classrooms to reflect on and critique their particular enactment of the activity. These are informal prompts that are seamlessly embedded in the activities. A paper we just wrote for the recent meeting of the European Association for Research on Learning and Instruction in Amsterdam discussed examples from our implementation of Reading in a Participatory Culture, developed by Project New Media Literacies. That activity, Remixing and Appropriation, used new media contexts to teach conventional literary notions like genre and allusion. One of the Event Reflection prompts was

How is the way we are doing this activity helping reveal the role of genre in the practice of appropriation?


Given that the students had just begun to see how this notion related to this practice, they struggled to make sense of such questions. But it set the classroom up to better appreciate how genre was just as crucial to Melville’s appropriation of the Old Testament in Moby-Dick as it was to the music video "Ahab" by nerdcore pioneer MC Lars. The questions are also worded to introduce important nuances that will help foster more sophisticated discourse (such as the subtle distinction between a concept like genre and a practice like appropriation).
Crucially, the event guidelines were aligned to slightly more formal Activity Reflections. These come at the end of the activity, and ask students to reflect on and critique the way the particular activities were designed, in light of the RBIs:

How did the way that the designers at Project New Media Literacies made this activity help reveal the role of genre in the practice of appropriation?


Note that the focus of the reflection and critique has shifted from the highly contextualized enactment of the activity to the more fixed design of the activity. But we are still resisting the quite natural tendency to begin asking ourselves whether each student can articulate the role of genre in appropriation. Rather than ramping up individual accountability, we first ramp up the level of communal discourse by moving from the rather routine conceptual engagement in the question above into more sophisticated consequential and critical engagement. While these are not the exact questions we used, the following capture the idea nicely:

Consequential Reflection: How did the decision to focus on both genre and appropriation impact the way this activity was designed?

Critical Reflection: Can you think of a different or better activity than Moby-Dick or Ahab to illustrate genre and appropriation?


We are still struggling to clarify the nature of these prompts, but have found a lot of inspiration in the work of our IU Learning Sciences colleagues Melissa Gresalfi and Sasha Barab, who have been writing about consequential engagement relative to educational video games.


The discourse fostered by these reflections should leave even the most ill-prepared (or recalcitrant) participant ready to meaningfully reflect on their own understanding of the RBIs. And yet, we still resist directly interrogating that understanding, in order to continue fostering discourse. Before jumping to assess the individual, we first focus on the artifacts that the individual is producing in the activity. This is done with Reflective Rubrics that ask the students to elaborate on how the artifact they are creating in the activity (or activities) reflects consequential and critical engagement with the RBI. As will be elaborated in a subsequent post, these are aligned to formal Assessment Rubrics of the sort that teachers would use to formally assess and (typically) grade the artifacts.

Ultimately, participatory assessment is not about the specific reflections or rubrics, but about the alignment across these increasingly formal assessments. By asking increasingly sophisticated versions of the same questions, we can set remarkably high standards for the level of classroom discourse and the quality of student artifacts. In contrast to conventional ways of thinking about how assessment drives curriculum, former doctoral student Steven Zuiker helped us realize that we have to think about the impact of these practices using the anthropological notion of prolepsis. It helps us realize that anticipation of the more formal assessments motivates communal engagement in the less formal reflective process. By carefully refining the prompts and rubrics over time, we can attain such high standards for both that any sort of conventional assessment of individual understanding or measure of aggregated achievement just seems…well…ridiculously trivial.
So the relevant big idea here is that we should first focus away from individual understanding and achievement if we want to confidently attain them with the kinds of participatory collaborative activities that so many of us are busily trying to bring into classrooms.

Wednesday, July 22, 2009

I'm bringing sexyback: some thoughts on formative assessment

Immersed as I am lately in the world of participatory assessment, I go through cycles of forgetting and then remembering and then forgetting again that not everybody in educational research thinks assessment is sexy.

I was reminded of this again recently while reading Lorrie Shepard's excellent 2005 paper, "Formative Assessment: Caveat Emptor." The piece argues that the notion of "formative assessment" has been twisted in unfortunate ways as a result of the excessive hammering kids get from high-stakes standardized tests.

I helpfully plugged the entire paper into the wordle machine for you and got this:


In theory, then, assessment should be easy to understand: All of the most frequently used words in Shepard's paper are fairly common and comprehensible. In practice, though, assessment research is complicated by the impulse to put a fine point on things. Here's a sample paragraph from Shepard's piece, which starts out okay but descends into chaos before the end:
“Everyone knows that formative assessment improves learning,” said one anonymous test maker, hence the rush to provide and advertise “formative assessment” products. But are these claims genuine? Dylan Wiliam (personal communication, 2005) has suggested that prevalent interim and benchmark assessments are better thought of as “early-warning summative” assessments rather than as true formative assessments. Commercial item banks may come closer to meeting the timing requirements for effective formative assessment, but they typically lack sufficient ties to curriculum and instruction to make it possible to provide feedback that leads to improvement.


I'm not saying the language is unnecessary; I'm not saying that assessment types are putting too fine a point on things. What I will argue here is that assessment research has, for lots of good and not-so-good reasons, been divorced so thoroughly from other aspects of educational research that it's decontextualized itself right into asexuality. It's like that guy in the corner booth at the bar on Friday night who wants to talk about Marxism when everybody else just wants to make sure everybody gets the same amount of beer before closing time.

Think about that guy for a second. Let's call him Jeff. Jeff has been single for a long time now, and he's spent a lot of that time reading. Maybe he's grown nostalgic for the early days before his girlfriend cheated on him and then moved in with some guy she met in her Econ class. His friends miss those days, too, mainly because he was so much goddamn fun back then. They're nice enough; they want to take him out and help him snap out of it. But the minute the beers come he's back on the Marxism soapbox again and NOBODY. FREAKING. CARES. It's Friday night, late July, and everybody just wants to get stupid drunk. They drop him some hints. Sully slaps him on the back and asks him to tell that one joke he told last week.

"In a minute," Jeff says. "I'm explaining where Marxism went wrong."

Eventually his friends will tell him to either cut it out or go home. If he wants to keep hanging out with these guys, he'll shut up. Or maybe he'll tell that one joke he executes so well. If the girls around him laugh, he might tell another one. Girls like funny guys, he'll suddenly remember. They don't necessarily like Marxists.

All of this is what we might call "formative assessment." This guy wants to be accepted by his friends, which means he needs to pay attention to his behavior. He learns (or re-learns) how to act at the bar on Friday night by paying attention to the feedback he gets from his friends, from other people at the bar, from his memories of having a social life all those years ago.

If we wanted to, we could spend some time talking about better ways to help Jeff learn the social skills he needs. For example, his friends could have sat him down before they went out and explained that his primary goal was to be the funniest guy in the room. "Because girls like funny guys," his buddy Rufus might remind him. They might also set deadlines: By 11:30 you better have told at least three jokes. Then, over the course of the evening, they could check in with him and get a joke-count.

The point is that everybody's on board with the evening's goals. Everybody--Jeff, his friends--wants Jeff to have a good time, and they want to have a good time with him.

Haha! I tricked you into caring about formative assessment.

This is what assessment is, even if it doesn't always feel that way to students, teachers, or researchers. There is an end goal, an objective, and formative assessment is a way of getting everyone on board with this goal and keeping them on board. When it works right, everybody involved actually wants to achieve the objective and the assessment is valuable because it helps them get where they want to go.

But as Shepard's piece points out, too often the insanity of NCLB substitutes test scores for real, intrinsic motivation. Too often and too easily, students learn skills it takes to attain high test scores without actually learning anything. Though "(the) idea of being able to do well on a test without really understanding the concepts is difficult to grasp," Shepard writes, she gives as evidence a 1984 study performed by M.L. Koczor, which focused on two groups of children learning about Roman numerals:
One group learned and practiced translating Roman to Arabic numerals. The other group learned and practiced Arabic to Roman translations. At the end of the study each group was randomly subdivided again (now there were four groups). Half of the subjects in each original group got assessments in the same format as they had practiced. The other half got the reverse. Within each instructional group, the drop off in performance, when participants got the assessment that was not what they had practiced, was dramatic. Moreover, the amount of drop-off depended on whether participants were low, middle, or high achieving. For low-achieving students, the loss was more than a standard deviation. Students who were drilled on one way of translation appeared to know the material, but only so long as they were not asked to translate in the other direction.

Because NCLB and other insane policies that mandate high-stakes testing for accountability have pushed assessment out of its natural home--as Jim Gee explains it, "in human action"--assessment researchers have themselves been backed into a separate corner of the room.

This is not okay. It doesn't help anybody to take the sexy out of assessment by tossing it into a corner. What we need, more than anything, is to push assessment back where it belongs: inside of the participation structures that support authentic learning.

Participatory assessment is, at its core, about social justice, about narrowing the participation gap that keeps our society stratified by race and class, about motivating learners to achieve real goals and overcome real obstacles to their own learning. Participatory assessment, if we do it right, can make almost anything possible for almost anyone.

Monday, July 20, 2009

making universities relevant: the naked teaching approach

I feel sorry for college deans, I really do*. They face the herculean task of proving that the brick-and-mortar college experience offers something worth going into tens of thousands of dollars of debt for, a task made even more difficult by the realities of a recession that's left nearly a quarter of Americans either unemployed or underemployed.

Then there's the added challenge of proving colleges have anything other than paper credentials to offer in a culture where information is free and expert status is easily attainable. Only in a participatory culture, for example, would it be possible for time-efficiency guru Timothy Ferriss to offer a set of instructions on "How to Become a Top Expert in 4 Weeks." "It's time to obliterate the cult of the expert," Ferriss writes in his mega-bestseller, The 4-Hour Workweek. He argues that the key is to accumulate what he calls "credibility indicators." It is possible, he writes,
to know all there is to know about a subject--medicine, for example--but if you don't have M.D. at the end of your name, few will listen.... Becoming a recognized expert isn't difficult, so I want to remove that barrier now. I am not recommending pretending to be something you're not... In modern PR terms, proof of expertise in most fields is shown with group affiliations, client lists, writing credentials, and media mentions, not IQ points or Ph.D.s.

Ferriss then offers five tips for becoming a "recognized expert" in your chosen field. None of them include earning the credential through formal education.

Just like that, we've gone from the position that expertise takes a decade, at minimum, to develop, to the argument that a person can become an expert in just four weeks.

In the face of this qualitative shift in how we orient to expertise, colleges--the educational institutions that have made their bones by offering a sure path to credentialing--are struggling to remain viable. One strategy--and the one chosen by José A. Bowen, dean of the Meadows School of the Arts--is to offer "naked teaching." Bowen's approach, as described in a recent piece in the Chronicle of Higher Education, is to actually remove networked technologies from the classroom. The article makes it clear that Bowen is not anti-technology; he just thinks technologies are being misused by faculty who over-rely on PowerPoint and technology-supported lecturing techniques. He favors using technologies like podcasting to deliver lecture materials outside of the classroom, then using class time itself to foster group discussion and debate.

To support this approach, all faculty were recently given laptops and support for creating podcasts and videos.

According to the Chronicle piece, the group that's most upset about the shift away from the traditional lecture format is...students. According to Kevin Heffernan, an associate professor in the school's division of cinema and television, students

are used to being spoon-fed material that is going to be quote unquote on the test. Students have been socialized to view the educational process as essentially passive. The only way we're going to stop that is by radically refiguring the classroom in precisely the way José wants to do it.


For all the griping we do about No Child Left Behind, test-centered accountability, and high-stakes assessment practices, the roaring success of decontextualized accountability structures lies in their astounding ability to keep formal education relevant. "Success" at the primary and secondary level means high achievement on high-stakes tests; and achievement depends on the learner's ability to internalize the value systems and learning approaches implicit in this kind of testing structure. Do well on a series of state-mandated tests and you'll probably also do well on the SAT; do well on the SAT and you're well positioned for the lecture-style, knowledge-transfer and, in general, highly decontextualized experience of most undergraduate-level classes. We gravitate toward the kinds of experiences that make us feel successful, which means the testing factory churns out its own customer base.

While Bowen's experiment (one he's been moving toward for years; see this 2006 piece in the National Teaching and Learning Forum) may garner attention for its apparent anti-technology stance, the impetus behind his "naked teaching" approach is an effort to reshape the role of institutions of higher education. In truth, learning can happen anywhere, and Bowen's embrace of that truth, using technologies to support out-of-class information transfer, seems like a low-risk, high-yield slant on the role of the university.

If learning can happen anywhere, then the physical community of learners gathered together within four walls, engaged in the act of collaborative knowledge-building: That's the rare commodity. In a world where everyone can be an expert, the promise of credentials becomes just another strategy for bringing that community together.



*jk I really don't.

Thursday, July 9, 2009

Participatory Assessment for Bridging the Void between Content and Participation.

Here at Re-Mediating Assessment, we share our ideas about educational practices, mostly as they relate to innovative assessment practices and, in turn, as those relate to new media and technology. In this post, I respond to an email from a colleague about developing on-line versions of required courses in graduate-level teacher education programs.

My colleague and I are discussing how we can ensure coverage of “content” in proposed courses that focus more directly on “participation” in actual educational practices. This void between participation (in meaningful practices) and content (as represented in textbooks, standards, and exams) is a central motivation behind Re-Mediating Assessment. So it seems worthwhile to expand my explanation of how participatory assessment can bridge this void and post it here.

To give a bit of context, note that the course requirements of teacher education programs are constantly debated and adjusted. From my perspective it is reasonable to assume that someone with a Master’s degree in Ed should have taken a course on educational assessment. But it also seems reasonable for them to have had a course on, say, Child Development. It simply may not be possible to require students to take both classes. Because both undergraduate and graduate teacher education majors have numerous required content-area courses (math, English, etc.), there are few slots left for the other courses that most agree they need. So the departments that offer these other required courses have an obvious obligation to maintain accountability over the courses they offer.

I have resisted teaching online because previous courseware tools were not designed to foster participation in the kind of meaningful discourse that I think is so important to a good course. Without a classroom context for discourse (even conversations around a traditional lecture), students have few cues for what matters. Without those cues, assessment practices become paramount in communicating the instructor’s values. And that is a lot to ask of an assessment.

This is why, in my observation, online instruction has heretofore mostly consisted of two equally problematic alternatives. The first is the familiar use of on-line tools for pushing content out to students: “Here is the text, here are some resources, here is a forum where you can post questions, and here is the exam schedule.” The instructors log on to the forums regularly and answer any questions, students take exams, and that is it. Sometimes these courses are augmented with papers and projects, and perhaps with collaborative projects; hopefully students get feedback, and they might even use that feedback to learn more. But many, many on-line courses are essentially fancy test prep. My perceptions are certainly biased by my experiences back in the 90s, in the early days of on-line instruction. The Econ faculty where I was working could not figure out why the students who took the online version of Econ 101 always got higher exam scores than the face-to-face (FTF) students, but almost always did far worse in the FTF Econ 201. This illustrates the problem with instruction that directly prepares students to pass formal exams. Formal exams are just proxies for prior learning, and framing course content entirely around tests (especially multiple-choice ones) is just a terrible idea. Guessing which of four associations is least wrong is still an efficient way of reliably comparing what people know about a curriculum or a topic. But re-mediating course content to fit that format makes it nearly useless for teaching.

The other extreme of on-line instruction is “project-based” classes that focus almost entirely on developing a portfolio of course-related projects. These approaches seem particularly popular in teacher education programs. The problem with on-line portfolios is that the lack of FTF contact requires the specifications for the portfolios to be excruciatingly detailed. Much of the learning that occurs tends to be figuring out what the instructor wants in order to get a good grade. The most salient discourse in these classes often surrounds the question “Is this what you want?” These classes are usually extremely time-consuming to teach, because the accountability attached to the artifacts leads students to demand, and instructors to provide, tons of detailed feedback on each iteration of the artifacts. So much so that the most qualified faculty can’t really afford to teach many of these courses. As such, these courses are often taught by graduate students and part-time faculty who may not be ideal for communicating the “Relevant Big Ideas” (RBIs, or what a learning scientist might call “formalisms”) behind the assignments, and who instead just focus on helping students create the highest-quality artifacts. This creates a very real risk that students in these classes may not actually learn the underlying concepts, or may learn them in a way so bound to the project that they can’t be used in other contexts. In my observation, such classes seldom feature formal examinations. Without careful attention, lots of really good feedback, and student use of that feedback, students may come away from the class with a lovely portfolio and little else. Given the massive investment in e-Portfolios in e-learning platforms like Sakai, this issue demands careful attention. (I will ask my friend Larry Mikulecky in Indiana’s Department of Culture, Communication, and Language Education, who I understand has been teaching non-exam online courses for years and has reportedly developed considerable evidence of students’ enduring understanding.)

A Practical Alternative
I am teaching on-line for the first time this summer. The course is P540, Cognition and Learning, a required course for many M.Ed. programs. I am working like crazy to take full advantage of the new on-line social networking resources that are now available in OnCourse, IU’s version of Sakai (an open-source collaborative learning environment designed for higher education). In doing so I am working hard to put into place an on-line alternative that balances participation and content. I also plan to use some of the lessons I am learning in my Educational Assessment course this Fall—which is partly what prompted the aforementioned conversation with my colleague. I want to put some of these ideas out there as they unfold in that class and seek input and feedback, including from my current students, who are (so far) patiently hanging with me as I refine these practices as I go.

In particular I am working hard to incorporate the ideas about participatory culture that I have gained from working with Henry Jenkins and his team at Project New Media Literacies over the last year. Participatory assessment assumes that you can teach more “content” and gather more evidence that students “understand” that content by focusing more directly on participation and less directly on content. Theoretically, these ideas are framed by situative theories of cognition that say participation in social discourse is the most important thing to think about, and that individual cognition and individual behavior are “secondary” phenomena. These ideas come to me from three Jims: Greeno (whose theorizing has long shaped my work), Gee (who also deeply influences my thinking about cognition and assessment, and whose MacArthur grant funded the aforementioned collaboration and indirectly supports this blog), and Pellegrino (with whom I did my doctoral studies of assessment, transfer, and validity, but who maintains an individual differences approach to cognition).

Per the curriculum committee that mandated a cognition and learning course for most master’s degrees for teachers, my students are just completing ten tough chapters on memory, cognition, motivation, etc. I use Roger Bruning’s text because he makes the material quite clear and puts 5-7 “implications for teaching” at the end of each chapter. But it is a LOT of content for these students to learn, especially if I just have them read the chapters.

I break students up into domain groups (math, science, etc.), and in those groups they go through the 5-7 implications for teaching. Each group must use the forum to generate a specific example of each implication, then rank-order the implications in terms of relevance, warrant those rankings, and post them to the OnCourse wiki. The level of discourse in the student-generated forums around the content is tremendous. Each week the lead group then synthesizes the postings of all five groups to come up with a single list. I have now also asked them to do the same with the “things worth being familiar with” in the chapter (essentially the bolded items and any highlighted research studies). What I particularly like about the discussions is the way that the discourse around agreeing that an implication or topic is less relevant actually leads to a pretty deep understanding of that implication or idea. This builds on ideas I have learned from my colleague Melissa Gresalfi about “consequential engagement.” Struggling to conclude that an implication is least likely to impact practice makes it more likely that students will remember that implication if they find themselves in a situation where it is more relevant.

This participatory approach to content is complemented by four other aspects of my class. Illustrating my commitment to content, I include three formal exams that are timed and use traditional multiple-choice and short-answer items. But I prioritize the content that the class has deemed most important, and don't even include the content they deem least important.

The second complement is the e-Portfolio entry each student has to post each week in OnCourse. Students have to select the one implication they think is most relevant, warrant the selection, exemplify and critique it, and then seek feedback on that post from their classmates. Again following Melissa’s lead, the e-Portfolio asks students for increasingly sophisticated engagement with the implication relative to their own teaching practice: procedural engagement (basically, explain the implication in your own words), conceptual engagement (give an example that illustrates what this implication means), consequential engagement (what are the consequences of this implication for your teaching practice, and what should you do differently now that you understand this aspect of cognition?), and critical engagement (why might someone disagree with you, and what would happen if you took this implication too far?). I require them to request feedback from their classmates. While this aspect of the new OnCourse e-Portfolio tools is still quite buggy, I am persevering, because the mere act of knowing that a peer audience is going to read a post pushes students to engage more deeply. Going back to my earlier point, it is hard for me to find time to review and provide detailed feedback on 220 individual submissions across the semester. When I do review them (students submit them for formal review after five submissions), I can just look at the feedback from other students and the students' own reflections on what they have learned for pretty clear evidence of consequential and critical engagement.

The third complement is the e-Portfolio that each student completes during the last five weeks of class. While each of the groups leads the class through the chapter associated with their domain (literacy, comprehension, writing, science, and math), students will be building an e-Portfolio in which they critique and refine at least two web-based instructional resources (educational videogames, webquests, the kind of stuff teachers increasingly are searching out and using in their classes). They select two or more of the implications from that chapter to critique each activity and suggest how it should be used (or whether it should be avoided), along with one of the implications from the chapter on instructional technology and one of the implications from the other chapters on memory and learning. If I have done my job right, I won't need to prompt them toward consequential and critical engagement at this stage, because they should have developed what Melissa calls a “disposition” towards these important forms of engagement. All I have to do is require that the portfolio include a justification of why each implication was selected, the feedback from their classmates, and their reflection on what they learned from that feedback. It turns out that consequential and critical engagement is remarkably easy to recognize in discourse. That seems partly because it is so much more interesting and worthwhile to read than the more typical class discourse that is limited to procedural and conceptual engagement. Ultimately, that is the point.