Monday, June 6, 2011

664 Blog 5: New Assessment Practices for New Texts

Yancey, Kathleen Blake. “Looking for Sources of Coherence in a Fragmented World: Notes toward a New Assessment Design.” Computers and Composition 21 (2004): 89-102. Rpt. in Computers in the Composition Classroom: A Critical Sourcebook. Ed. Michelle Sidler, Richard Morris, and Elizabeth Overman Smith. Boston: Bedford/St. Martin’s, 2008. 293-307. Print.

Kathleen Blake Yancey speaks to a gap (though one that is now shrinking) between the common promotion of intertextuality in the digital age and our assessment of the resulting multimodal texts. We often still rely on print sensibilities to evaluate digital texts and are, as she says, “held hostage to the values informing print, values worth preserving for that medium, to be sure, but values incongruent with those informing the digital” (293). Therefore, she calls for a new language of assessment for the new types of texts our students may be producing.

The key area she addresses here is coherence. She first emphasizes that coherence is a matter of relationships. In print texts, these relationships depend mostly on how words relate to each other and to the context in which they appear. Coherence in such texts often remains relatively stable, and this stability establishes certain values about what constitutes “good” writing. This leads her to a slight (but useful) tangent on word-processing software’s effects on assessment, including the surface correctness promoted by grammar and spell checkers and the instructor’s dominance over student texts, aided by the ease of commentary such software allows. Recognizing these matters is an important move toward stronger assessments of digitally produced texts.

Building on this awareness, instructors must recognize that coherence in digital texts is more complex than what we find in most print texts. Yancey’s solution for assessing these multiple and multimodal coherences is a heuristic that establishes a fixed schema yet remains flexible enough to respond to the various types of multimodal texts instructors may encounter. Her heuristic consists of four questions:

1. What arrangements are possible?
2. Who arranges?
3. What is the intent?
4. What is the fit between the intent and the effect? (301)

She then applies this heuristic to examine the coherence of emails and, more valuably for instructors, of a digital portfolio, demonstrating (convincingly, at least on her own terms) how these questions foreground digital values of coherence.

Yancey’s article is certainly valuable to instructors who plan to assign writing projects that ask students to compose with various digital tools and produce multimodal texts. First, those instructors do need to consider how they assess digital texts and whether their values still come from print or whether they have made the shift to more digital values. Second, her heuristic provides a starting template that instructors can tweak to fit more local contexts as needed, and one that reminds them to focus on the multiple coherences of digital texts. However, Yancey does not provide much detail regarding the assessment of content or language. Should our assessment of these areas rely on print values, or should this change as well? And if so, what would those changes look like? While these questions are beyond the scope of her article, instructors must still remember to consider more than coherence as they reevaluate their assessment practices.