Wednesday, June 13, 2012

A Week's Worth of Articles--What They Suggest about Digital Writing Research in Practice


This week, I emphasized articles from Computers and Composition in my blog, for two reasons. First, as my attention for the past two weeks has been on digital writing research methods, Computers and Composition was an obvious choice for research articles related to this subfield of writing studies. Second, and more important, this journal’s articles employ a wide variety of research designs. Having such variety to analyze, I felt, would enable me to address (if rather simplistically and generally) how research in this area adheres to the methodological theories and approaches that I have been examining for the past month. What I have noticed in reading these articles and thinking about their methods is that they apply traditional research methods in ways appropriate for the complexity of digital writing research.

I want to begin by discussing how some of these articles adhere to more traditional research methods. The studies I selected for this week’s readings covered a variety of research methods, but as an example of how these studies rely on and differ from traditional formulations, I will address Jin and Zhu’s case study approach. A case study is exploratory in nature, attempting not to establish cause/effect relationships but to identify potential variables to explore more fully (Lauer and Asher 23). In addition, case studies tend to rely on either broad or representative samples of populations to study, a variety of sources for data, careful coding and reliability examinations, and “descriptive accounts” to report their findings (Lauer and Asher 25, 26-27, 31, 32). Jin and Zhu are clear that their study is exploratory, especially in terms of discovering potential ways computer-mediated tools affect students’ activity (286). They also demonstrate selective sampling by choosing two students who have different levels of computer skills/knowledge (287-88). However, while Jin and Zhu do address their coding procedures, they do not address reliability issues related to their coding. This lapse does not seem to weaken their study. Their focus on motives, along with the numerous streams of data (video observations, interviews, IM transcripts, etc. [288]) that they triangulated to better understand and validate participants’ claims about their motives (289), suggests an effective methodology for making sense of the data they collected.

These studies also adhere to the principles addressed throughout Heidi A. McKee and Danielle Nicole DeVoss’s anthology on digital writing research, especially approaches to theory, ethics, and collecting data effectively. As is clear in McKee and DeVoss’s collection, many scholars in digital writing studies rely on theory as a guiding principle to understand and explore their research topics (e.g., Hilligoss and Williams 232-36; Kimme Hea 273-74; Romberger 251-54). Jin and Zhu, for example, embrace activity theory to establish and explicate their focus on motivation and how technology can affect these motives (285-86). In addition, some articles in McKee and DeVoss’s text emphasize the complexity of ethical issues related to digital research (see Banks and Eble; Pandey for a couple of examples). In my readings this week, Stedman’s article addresses most explicitly the complexity of these issues, noting his concerns about the treatment of fan communities by previous researchers, his decision that being ethical in the eyes of this community meant being explicit about his intentions, and his sense that his IRB permissions did not precisely address the ethical considerations he had to make (110-11, 117). Finally, many of the authors in McKee and DeVoss’s anthology note the need for considerable flexibility in terms of their research methods to adapt to the malleability of digital research (e.g., DePew; Rickly). One particularly representative example of this from my reading this week was Jin and Zhu’s article. Though they did not rely on quantitative methods, they did use multiple approaches to collect their data (see above). Many of these methods (video recording and chat transcripts, for example) allowed them to gain insight into their participants’ motives less obtrusively than traditional observation and possibly even yielded more accurate information.

What these articles demonstrate in terms of research methods, then, is both an attention to the research tradition in maintaining high intellectual standards and a willingness to add to or tweak these practices in response to changing research contexts as a result of computer-mediated technologies. As I have noted previously, such flexibility is important for research in the digital age if it is to be ethical and rigorous, and despite some lapses in methods or analysis (see my entry on Garrison’s article), the research I examined this week seems to illustrate such ethics and rigor.

Works Cited
Banks, Will, and Michelle Eble. “Digital Spaces, Online Environments, and Human Participant Research: Interfacing with Institutional Review Boards.” McKee and DeVoss 27-47. Print.

Garrison, Kevin. “An Empirical Analysis of Using Text-to-Speech Software to Revise First-Year College Students’ Essays.” Computers and Composition 26.4 (2009): 288-301. Print.

Hilligoss, Susan, and Sean Williams. “Composition Meets Visual Communication: New Research Questions.” McKee and DeVoss 229-47. Print.

Jin, Li, and Wei Zhu. “Dynamic Motives in ESL Computer-Mediated Peer Response.” Computers and Composition 27.4 (2010): 284-303. Print.

Kimme Hea, Amy. “Riding the Wave: Articulating a Critical Methodology for Web Research Practices.” McKee and DeVoss 269-86. Print.

Lauer, Janice M., and J. William Asher. Composition Research: Empirical Designs. New York: Oxford UP, 1988. Print.

McKee, Heidi A., and Danielle Nicole DeVoss, eds. Digital Writing Research: Technologies, Methodologies, and Ethical Issues. New Dimensions in Computers and Composition. Ed. Gail E. Hawisher and Cynthia Selfe. Cresskill: Hampton, 2007. Print.

Pandey, Iswari. “Researching (with) the Postnational ‘Other’: Ethics, Methodologies, and Qualitative Studies of Digital Literacy.” McKee and DeVoss 107-25. Print.

Romberger, Julia E. “An Ecofeminist Methodology: Studying the Ecological Dimensions of the Digital Environment.” McKee and DeVoss 249-67. Print.

Stedman, Kyle D. “Remix Literacy and Fan Compositions.” Computers and Composition 29.2 (2012): 107-23. Print.

Research Ethics Remix


Stedman, Kyle D. “Remix Literacy and Fan Compositions.” Computers and Composition 29.2 (2012): 107-23. Print.

Because of participatory Web technologies, remixing has become a more common creative enterprise. As Stedman defines it, a remix is “any act of composition that involves the deliberate manipulation of previous passages, clips, or samples throughout a majority of the work” (108). Stedman argues that studying remixes is not especially new, but the focus tends to be on an analysis of the product. He wants to explore how the remixers do what they do—how they function in their communities and what remix literacy might entail.

This study is more ethnographic in nature, attempting to capture the processes of remixers in the context of their communities, in this case online fan communities. Stedman began his work with surveys distributed online to various fan communities that he was familiar with. He followed the surveys up with more in-depth interviews (via email, phone, or private online messages) with certain respondents and with analyses of their texts to highlight their rhetorical and aesthetic considerations. But perhaps one of the most important parts of his methodology is how he positioned himself as a participant-observer within these communities. He discusses at some length how online fan communities responded to a survey dubbed “SurveyFail.” This survey was dishonest in its objectives, and the fan communities quickly labeled its researchers as outsiders, stifling their study. Stedman, instead, made the very conscious choice to be clear in his intent and to use his knowledge of these fan communities to position himself as both a researcher and an insider. His results revealed deep feelings of creativity and originality, significant attention to detail to create particular effects or reach certain goals, a strong sense of community and collaboration, various sources of inspiration, considerations of the relation of form/medium and content, the use of a variety of appeals to audiences’ intellect and emotions, and attention to multiple purposes in composing (119). Ultimately, such considerations might be something instructors could incorporate into their classrooms to illustrate to students how some of them already employ rhetorical principles and/or how these principles exist outside the academic essay.

Stedman’s ethnographic approach certainly suited his goals of learning what remixers do and why/how they do it, relying on the producers’ insights rather than his own interpretations. Had he not done so, he might not have discovered how deeply committed these authors are to originality and creativity, for example. Though his sample population may not allow him to postulate remix literacy’s characteristics definitively, his conclusions about their classroom applicability are tempered by his claim that remix literacy is one skill among many needed by students in the digital age. Finally, his attention to ethics provides other researchers with a host of considerations if they are to conduct online research effectively and ethically.

Studying Teaching Time


Reinheimer, David A. “Teaching Composition Online: Whose Side Is Time On?” Computers and Composition 22.4 (2005): 459-70. Print.

Reinheimer is responding to the (still) common concern about the amount of time online courses require of their instructors, especially compared to the time requirements of face-to-face (F2F) courses. Through his literature review, he explains that the evidence that online courses require more time of instructors is too sparse, too anecdotal to be administratively useful. While he says time-use research in distance education fields is a bit more detailed, these studies focus on teacher-centered pedagogies instead of the student-centered pedagogies more typical of composition instruction. Therefore, he hopes a quantitative study will produce some firmer conclusions about composition faculty workload related to online courses.

To collect his data, Reinheimer relied on the time-use recording of his participants. The first participant (the F2F instructor) was a teaching assistant who had taught in two previous semesters; the second participant (the online instructor) was the researcher. The data collection began in the semester the F2F course and the researcher’s first online section were offered. Reinheimer continued compiling data from his online courses in two subsequent semesters. The participants kept track of time spent only on student contact activities (e.g., direct communication with students, grading/assessing work, office hours). His initial results showed the F2F course requiring almost one-third more time than the online courses. Since the F2F course had set course meetings (and thus a pre-established minimum of contact time) and more students, he divided the time spent on each course by the number of students, arriving at time spent per student. This new calculation demonstrated that in the first semester the online instructor spent more than twice as much time per student as the F2F instructor. This disparity did shrink dramatically in the second and third semesters of the online course. Reinheimer attributes the decrease to improved technology, course maturity, and instructor experience. Based on these results, he ultimately argues that faculty and administrators should discuss possible solutions to these workload concerns.

Anyone who has taught an online course and tried to demonstrate to others the amount of work and time needed to develop and deliver such courses can appreciate what Reinheimer is attempting to do here. However, I do have a few quibbles with his methods. First, the fact that his quantitative data come from his participants’ self-reporting makes assessing their accuracy difficult. (On the other hand, I’m not entirely sure how else one might measure this more objectively.) Second, he used only one F2F course as a basis for comparison. Having data from the same number of online and F2F courses might allow one to make more definitive claims. Finally, since the F2F course and online courses were taught by different instructors, we could easily argue that the differences in time spent per student were based wholly or in part on the instructor and not necessarily on the method of delivery. That being said, recognizing these as concerns may serve to improve future studies.

Peer Review and IM


Jin, Li, and Wei Zhu. “Dynamic Motives in ESL Computer-Mediated Peer Response.” Computers and Composition 27.4 (2010): 284-303. Print.

Often, technology’s pedagogical effectiveness is examined in terms of the products students create (e.g., see my posting on Garrison’s article). Jin and Zhu are more interested in the process. Their study explores how a tool like instant messaging (IM) affects L2 students’ participation in, and their motives during and across, three peer review sessions. L2 students and their interactions in peer review have received plenty of scholarly attention, but as Jin and Zhu note, these studies often focus on face-to-face interactions instead of computer-mediated ones. Their study thus attempts to examine both how L2 students interact in peer review activities mediated by IM and how these interactions and the mediating technology affect their motives and participation.

They relied on a case study design informed by activity theory. Activity theory “provides a cultural historical view of human behaviors that result from socially and historically constructed forms of mediation through mediational artifacts in all human activities” (286). It is a means to explore how and why humans act as they do in a certain context or system and how certain tools/artifacts affect their actions, making it well suited to exploring how IM affects students’ actions in a peer review context. Jin and Zhu chose as their cases two students who worked together on two of the three peer review tasks and whose technology skills represented two ends of a continuum: Anton, who had considerable computer and IM experience, and Iron, whose experiences with technology were somewhat limited. Jin and Zhu relied on multiple streams of data: Web cams to capture facial expressions and off-screen activities, transcripts of IM exchanges, observation, screen capture software, click-tracking software, and extensive interviews with the participants. Through these data, the authors discovered that both students came in with the desire to improve their English skills, but these motives changed because of their first interaction. For example, Iron’s poor typing skills frustrated Anton. For the second session, Iron hoped to improve those skills, while Anton, who wanted to have fun in the IM sessions, became condescending and rude. In short, the artifact (IM) became a determining factor in how the students worked (or did not work) together.

Discovering and examining human motives is a complicated task, as the authors note. The case study approach, with its multiple streams of data and the researchers’ triangulation of that data to discover what the students’ motives might have been, seems a sensible way to approach this issue. However, the limits of the approach are also apparent in terms of the possible conclusions the researchers can suggest. Since they addressed the experiences of only two students, they are left with suggestions of what issues instructors might consider in terms of computer-mediated peer review. At the same time, this study does provide a number of variables for future researchers to consider, such as computer experience/skills. This in itself is a valuable element of this case study, despite the limited conclusions we can draw from it directly.

Text-to-Speech as a Revision Tool? Well, Kind Of


Garrison, Kevin. “An Empirical Analysis of Using Text-to-Speech Software to Revise First-Year College Students’ Essays.” Computers and Composition 26.4 (2009): 288-301. Print.

In this article, Garrison explores the computerized version of reading essays aloud as part of the revision process. He says that text-to-speech (TTS) programs may provide some new possibilities in this regard. However, such technology has been underutilized and under-researched because of cost, limited access, and technological inadequacies (particularly the quality of the computerized voices), despite improvements in all three of these areas. Furthermore, most of the previous studies of TTS programs in written communication contexts focused on younger students (K-5) or students with disabilities and addressed how TTS programs helped these students correct proofreading errors. Garrison’s study seeks to fill these gaps in experimental research on the effectiveness of TTS programs for global and local revisions by college-age students without disabilities.

His study employs a quasi-experimental design. He began with a pilot test with six students from one of his classes. After revising the study, he expanded it to 52 students (again from his classes). The students were assigned to either a control group or a test group. Those in the control group simply used Microsoft Word to revise, while the test group used a TTS program. Once the test was over, he coded the data, comparing original and revised drafts. The results showed that the control group and the test group made about the same number of positive proofreading changes, but the control group made more positive global and local changes. The control group did make more neutral changes, suggesting the TTS program added some efficiency, but since the control group did better in terms of positive changes, the limited efficiency added by TTS programs seems to me a small reason to incorporate them (unless, of course, doing so meets a specific need of the students). But Garrison clearly notes that, based on his results, we cannot assume TTS programs to be a sort of revision magic wand. For TTS programs to be successful, they must be made a clear part of the pedagogy, and their benefits/uses must be clearly explained to students before they are adopted.

Given the narrow scope of his study (52 students from his own classes), Garrison is appropriately cautious about his results. He neither rejects nor wholeheartedly endorses TTS programs. He was also careful in his methodology in many ways: for example, he conducted a pilot study and tested the reliability of his codes. But his sampling was perhaps a little suspect; I wonder how free his students felt to volunteer (or not) for the study. He also discussed some concerns at the end of the study that he might have considered at the beginning. For instance, when he discusses the use of Microsoft Word for the control group, he points to students’ familiarity with the program to mitigate any concerns unfamiliarity might create. However, he gave the test group only a few minutes’ explanation of the TTS program and noted that their unfamiliarity with it might have skewed some of the results. Such a variable might have been addressed in the preparation for the study.

Tubing, Theoretically


Carter, Geoffrey V., and Sarah J. Arroyo. “Tubing the Future: Participatory Pedagogy and YouTube U in 2020.” Computers and Composition 28.4 (2011): 292-302. Print.

Carter and Arroyo’s article is from a special issue of Computers and Composition on the future of computer-mediated writing and its pedagogy. In this selection, the authors focus on the participatory nature of online composing and how it may affect instructional methods in the next decade. Their primary exigency is to address what they see as the need for more “theorizing about the participatory practices found in online video culture” (292).

To fill this gap in theorizing, Carter and Arroyo begin by developing their sense of participatory pedagogy. They see participatory pedagogy “as a productive collision of post-critical, postpedagogical, and participatory thought” (293). This collision puts special emphasis on creativity and invention, relying on an emergent rather than a fixed pedagogy. They add to this a discussion of Ulmer’s notion of “electracy,” or electronic literacy. Electracy is open-ended, flexible, and exploratory—precisely the kinds of participatory elements promoted by online spaces like YouTube. To explore how online video culture expands participation and its potential significance for composition pedagogy, the authors next address the development and alteration of memes. Memes are “viral content that is interactive and re-purposed” whose “goal is to create more content with which other users will connect and invest time in re-purposing, thus participating in spreading ideas and making them more complex” (296). Memes, then, represent the possibility for an individual to take already existing content and alter it in ways that make it meaningful to her while still maintaining enough of the original that others can recognize it as an extension/recreation of it. For Carter and Arroyo, though, to be effective pedagogically, participatory pedagogy based on online video culture cannot stop at performance alone; it must also have an element of critique. They claim that memes have the ability to “raise awareness, and elicit massive cultural participation that expands in a malleable network” (297). By adding critique to the performative nature of memes, Carter and Arroyo add the possibility for stronger intellectual work than what some might see in simply creating memes or participating in online video activities “for fun.” In short, participatory pedagogy that relies on online video culture for its framework can serve academic, even civic, purposes.

Since this study is a prediction (but not a predictive study) of possible directions for composition pedagogy in the future, I think the authors do well to devote their research to constructing a theoretical framework for “tubing” (i.e., the participatory actions associated with online video culture). They establish quite clearly what this pedagogy will entail (open-endedness, critique combined with performance) and how considering issues like the balance of humor with a serious issue can add to the rhetorical awareness of students. However, some of their claims about how educators might enact this pedagogy, or why it is especially necessary for the pedagogy of the future, seemed a little hazy. In some ways, they may have emphasized the theory at the expense of the practice.

Wednesday, June 6, 2012

Research Methods and Digital Writing


Few (if any) would be willing to deny that digital technologies are changing writing and writing studies research persistently and significantly. Because of this, we cannot ignore the consequences for research methods and methodologies. (Citing Patricia Sullivan and James Porter, Stuart Blythe describes methods as “procedures, techniques” and methodologies as more epistemological viewpoints [205], a distinction I will maintain here.) Addressing these consequences is precisely the focus of Digital Writing Research: Technologies, Methodologies, and Ethical Issues, edited by Heidi A. McKee and Danielle Nicole DeVoss. What this text emphasizes throughout its chapters are ways by which researchers might build upon and adapt more traditional research methods to work more effectively with digital technologies and sites.

Underlying many of the chapters in this text is a general sense of the research methods tradition. Given the more humanistic/social sciences approaches discussed throughout, the emphasis here is certainly on qualitative methods. John Creswell describes qualitative research as that which “involves emerging questions and procedures, data typically in the participant’s setting, data analysis inductively building from particulars to general themes, and the researcher making interpretations of the meaning of the data” (4). These methods are at the fore throughout the majority of the articles here, as many authors discuss the importance of engaging in research in the particular context in which text production or interactions occur (e.g., Sapienza 90; Smith 134). In addition, many of the scholars in Digital Writing Research discuss their research in digital spaces and on digital texts in terms of traditional qualitative research practices. Perhaps one of the more common practices discussed in the text is ethnography. As Janice Lauer and William Asher explain, “ethnographers observe many facets of writers in their writing environments over long periods of time in order to identify, operationally define, and interrelate variables of the writing act in its context” (39). Beatrice Smith’s chapter, for example, seeks to bring ethnographic practices into a hybrid workplace (where work occurs at a physical office as well as an online space). This chapter, and indeed all of the others, raises the key questions of the text. Rebecca Rickly puts these questions perhaps most succinctly: “Are traditional research methods sufficient to capture the complexity when studying writing/writing scenarios? What happens when we add technology to the mix? Are traditional methods (or our understanding and/or application of them) enough, or do we need new ones?” (381).

The short version is that many of the authors represented here feel that some of the traditional methods as often taught, and some of the rigidity prescribed (or implied) in their traditional manifestations, are indeed not adequately meeting the needs of researchers exploring digital technologies and writing. In nearly every chapter, the author or authors point to the complexities digital technologies and spaces create relative to traditional methods. In their introduction to the anthology, McKee and DeVoss note the vast changes that have occurred in what constitutes “writing,” how writers and audiences interact, and how collaboration takes place (9-10). As a consequence, researchers must approach traditional practices more critically and reflexively.

One area that requires more careful attention is ethical considerations. Working in digital spaces creates a number of complications related to treating participants ethically. First, Will Banks and Michelle Eble remind us that though these spaces are virtual, we are still dealing with human participants, and we must consider how our interactions with these participants can have ramifications for them not typical of research conducted in physical environments, especially since online interactions can leave lasting digital traces (31-32, 39). Furthermore, online and digital environments make distinguishing between “published discourse and private discourse” all the more difficult (Sidler 78), and we must wonder whether the digital spaces we examine are texts (or products) or sites where people interact (DePew 55). In other words, how we treat the texts we are examining and how we treat the people who produce those texts take on new complications in digital research contexts. The new digital modes of text production also raise ethical concerns in terms of copyright and intellectual property rights. Because of the ease of access to what researchers produce, we must consider ownership in new ways (McIntire-Strasburg 288). Copyright has multiple layers of complexity, and what constitutes fair use, or alteration to the point of significant difference, becomes especially fuzzy (294, 296).

Another area of complexity created by digital technologies and environments is how to address the mercurial nature of these technologies and spaces. As anyone who has saved a link and tried to use it months later or who has relied on a particular tool for something only to have that tool mysteriously go offline can attest, things on the Web change rapidly. Because the technologies are malleable, our methods need to be flexible to adapt to those changes (Rickly 379). Often, this requires a greater commitment to applying multiple methods in our research. Kevin DePew advocates a strategy of triangulation of both the types of data we rely on and the types of methods we employ in our research, which can help researchers avoid the “god trick,” or a biased, one-sided view of the data (53, 54).

In response to such complexities, a number of scholars here suggest the explicit use of theory to guide and frame the work researchers do. Certainly, this is not unique to research in digital writing. Creswell notes that certain worldviews or philosophies guide research (5-10), and Lauer and Asher note that research often uses theory to validate or produce new theories (5). What is most compelling in the discussions of theory in Digital Writing Research is the breadth of the theories employed and their unabashed political nature (which I think is a good thing). Digital technologies are often touted as the great equalizers of knowledge. This has not proven to be the case, and researchers are right to bring critical frameworks to their work, be that the ecofeminism proposed by Julia Romberger, the use of articulation theory that allows critical analysis and re-articulation of digital technologies as discussed by Amy Kimme Hea, or the use of visual and aesthetic theory to analyze digitally produced visuals advocated by Susan Hilligoss and Sean Williams.

These uses and adaptations of traditional methodologies in response to the complexities generated by research in digital writing point to a recognition of the value of traditional methods, though not an unquestioning devotion to them. Instead, the chapters in this text point to a discipline that is theory-driven, deeply concerned with ethical considerations (especially as they pertain to research participants), flexible in response to ever-changing digital technologies, and comfortable with certain levels of indeterminacy. Such an epistemology seems to me well suited to carry research methods and methodologies well into the twenty-first century.

Works Cited
Banks, Will, and Michelle Eble. “Digital Spaces, Online Environments, and Human Participant Research: Interfacing with Institutional Review Boards.” McKee and DeVoss 27-47. Print.

Blythe, Stuart. “Coding Digital Texts and Multimedia.” McKee and DeVoss 203-27. Print.

Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 3rd ed. Los Angeles: Sage, 2009. Print.

DePew, Kevin Eric. “Through the Eyes of Researchers, Rhetors, and Audiences: Triangulating Data from the Digital Writing Situation.” McKee and DeVoss 49-69. Print.

Hilligoss, Susan, and Sean Williams. “Composition Meets Visual Communication: New Research Questions.” McKee and DeVoss 229-47. Print.

Kimme Hea, Amy. “Riding the Wave: Articulating a Critical Methodology for Web Research Practices.” McKee and DeVoss 269-86. Print.

Lauer, Janice M., and J. William Asher. Composition Research: Empirical Designs. New York: Oxford UP, 1988. Print.

McIntire-Strasburg, Janice. “Multimedia Research: Difficult Questions with Indefinite Answers.” McKee and DeVoss 287-300. Print.

McKee, Heidi A., and Danielle Nicole DeVoss, eds. Digital Writing Research: Technologies, Methodologies, and Ethical Issues. New Dimensions in Computers and Composition. Ed. Gail E. Hawisher and Cynthia Selfe. Cresskill: Hampton, 2007. Print.

McKee, Heidi A., and Danielle Nicole DeVoss. Introduction. McKee and DeVoss 1-24. Print.

Rickly, Rebecca. “Messy Contexts: Research as a Rhetorical Situation.” McKee and DeVoss 377-97. Print.

Romberger, Julia E. “An Ecofeminist Methodology: Studying the Ecological Dimensions of the Digital Environment.” McKee and DeVoss 249-67. Print.

Sapienza, Fil. “Ethos and Researcher Positionality in Studies of Virtual Communities.” McKee and DeVoss 89-106. Print.

Sidler, Michelle. “Playing Scavenger and Gazer with Scientific Discourse: Opportunities and Ethics for Online Research.” McKee and DeVoss 71-86. Print.

Smith, Beatrice. “Researching Hybrid Literacies: Methodological Explorations of ‘Ethnography’ and the Practices of the Cybertariat.” McKee and DeVoss 127-49. Print.