How to Evaluate Student Writing and Create Writing Success

The Research Shows Series: Responding to and Evaluating Student Writing

How do you respond to and evaluate student writing? Rubrics? Checklists? Peer review and evaluation? A straight six-traits rubric? A holistic six-traits rubric? Do you focus on mechanics or content? Are you a red-pen person?

Let’s find out what some of the most important and influential names in teaching writing research have said about evaluating and responding to student writing! Also, be sure to read How to Use the Six Traits of Writing and the Common Core Traits to Teach Writing Across the Curriculum.


Evaluating Student Writing: Methods and Measurement

Those working in the field of composition have sought continuously over the past two decades to shape and refine discourse theory and to develop more effective classroom methods of evaluation. A careful look at these efforts suggests that the material on evaluating writing is not unlike the body of a hydra: one theoretical body supporting two heads. With one head, we develop methods to critique or respond to students’ written products (even as those products represent a stage in the writing process); with the other, we devise ways to measure or assess the quality of the written product according to some value system. This digest will consider (1) the methods of response and (2) the measurement of quality as represented by effective classroom teaching methods.

METHODS OF RESPONSE

Responding to student writing is probably the most challenging part of teaching writing. Not only does it take a tremendous amount of time and demand a great deal of intellectual energy; it also largely shapes how students feel about their ability to write. It becomes increasingly obvious that teachers can become less pressured and more effective in dealing with response only as they redefine their role: from an examiner who must spend enormous amounts of time grading every paper to a facilitator who helps students recognize and work on their own strengths and weaknesses (Grant-Davie and Shapiro, 1987).

Effective time-saving techniques that reflect this philosophy were gathered from research articles by Fuery and Stanford and classified by Krest (1987). Peer revision, peer editing, peer grading, computer programs, conferences, and a system of error analysis are presented as effective measures that enhance individual development and encourage more student writing.

Noting that research has shown teacher comments have little effect on the quality of student writing, Grant-Davie and Shapiro (1987) suggest that teachers view comments as rhetorical acts, think about their purpose for writing them, and teach students to become their own best readers. To achieve this goal, teachers should respond to student drafts with fewer judgments and directives and more questions and suggestions. Grant-Davie and Shapiro also outline a workshop format built around peer editing and revision.

Similarly, Whitlock (1987) explains how Peter Elbow’s concepts of “pointing,” “summarizing,” “telling,” and “showing” can form the basis of an effective method for training students to work in writing groups and give reader-based feedback to peer writing.

MEASURING WRITING QUALITY

According to the “Standards for Basic Skills Writing Programs” developed by the National Council of Teachers of English and reprinted in “National Standards: Oral and Written Communications” (1984), when we measure the quality of students’ writing we should focus on before-and-after samplings of complete pieces of writing.

To measure growth in the use of writing conventions, an analytic scale of skills (Cooper and Odell, 1977) can be developed and used effectively with samples of students’ writing. This instrument describes briefly, in non-technical language, what constitutes high, mid, and low levels of quality in the following areas: (1) the student’s ability to use words accurately and effectively; (2) the ability to use standard English; (3) the ability to use appropriate punctuation; and (4) the ability to spell correctly. Each of these skills is ranked for each paper on a continuum from 1 (low) to 6 (high) (Hyslop, 1983).
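For teachers who track these scores in a short script or spreadsheet, here is a minimal sketch of how the four skill areas and the 1-to-6 continuum described above might be tabulated and compared across a before sample and an after sample. The field names, the validation, and the simple averaging are illustrative assumptions only; they are not part of Cooper and Odell’s instrument.

# A rough illustration (an assumption, not Cooper and Odell's instrument) of
# recording analytic-scale scores and checking growth between two samples.
from dataclasses import dataclass, asdict

SKILLS = ("word_choice", "standard_english", "punctuation", "spelling")

@dataclass
class AnalyticScore:
    word_choice: int       # ability to use words accurately and effectively
    standard_english: int  # ability to use standard English
    punctuation: int       # ability to use appropriate punctuation
    spelling: int          # ability to spell correctly

    def __post_init__(self):
        # Every skill is ranked on the 1 (low) to 6 (high) continuum.
        for skill in SKILLS:
            value = getattr(self, skill)
            if not 1 <= value <= 6:
                raise ValueError(f"{skill} must be on the 1-6 continuum, got {value}")

    def average(self) -> float:
        return sum(asdict(self).values()) / len(SKILLS)

def growth(before: AnalyticScore, after: AnalyticScore) -> dict:
    """Per-skill change between a before sample and an after sample."""
    return {skill: getattr(after, skill) - getattr(before, skill) for skill in SKILLS}

# Hypothetical scores for one student's September and January papers.
september = AnalyticScore(word_choice=3, standard_english=2, punctuation=3, spelling=4)
january = AnalyticScore(word_choice=4, standard_english=4, punctuation=3, spelling=5)
print("average before:", september.average())   # 3.0
print("average after: ", january.average())     # 4.0
print("growth by skill:", growth(september, january))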

In addition to these instruments, various teacher/writers in the field share the following strategies they have developed for measuring writing quality.

Teale (1988) insists that informal observations and structured performance sample assessments are more appropriate than standardized tests for measuring quality in early childhood literacy learning. For example, when young children are asked to write and then read what they write, the teacher can learn a great deal about their composing strategies and about their strategies for encoding speech in written language.

Krest (1987) provides helpful techniques of a general nature to show teachers how to give students credit for all their work and how to spend less time doing it. These techniques involve using holistic scoring, using a somewhat similar technique of general comments, and using the portfolio.

Harmon (1988) suggests that teachers withhold measuring students’ progress until a suitable period of time has elapsed to allow for measurable growth, and then measure the quality of selected pieces of writing at periodic intervals.

Cooper and Odell (1977) suggest that teachers can eliminate much of the uncertainty and frustration of measuring the quality of these samples if they identify limited types of discourse and create exercises that stimulate writing in the appropriate range but not beyond it. In their model, the explanatory, persuasive, and expressive extremes are represented by the angles of a triangle. Each point is associated with a characteristic of language related to a goal of writing, and assignments and the resulting measure of quality are focused on that particular goal.

CURRENT DIRECTIONS

Writing teachers are moving increasingly toward this type of assessment of writing quality. Hittleman (1988) offers the following four-part rating scale, to be used after the characteristic to be evaluated has been established: (1) little or no presence of the characteristic; (2) some presence of the characteristic; (3) fairly successful communication…through detailed and consistent presence of the characteristic; and (4) highly inventive and mature presence of the characteristic.

Krest (1987) presents an interesting modification of this process by measuring the quality of students’ papers with the following levels of concern in mind: higher order concerns (HOCs), which cover focus, details, and organization; middle order concerns (MOCs), which cover style and sentence order; and lower order concerns (LOCs), which cover mechanics and spelling.
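As a small illustration only (my sketch, not Krest’s own procedure), the lines below show how a set of response comments might be ordered so that higher order concerns reach the student before middle and lower order ones; the example comments and labels are invented.

# A minimal sketch of prioritizing response comments by level of concern.
# The labels HOC/MOC/LOC follow the description above; the comments are invented.
LEVEL_ORDER = {"HOC": 0, "MOC": 1, "LOC": 2}   # HOCs first, then MOCs, then LOCs

comments = [
    ("LOC", "Check the spelling of 'definately' in paragraph two."),
    ("HOC", "The essay loses its focus after the second paragraph."),
    ("MOC", "Several sentences in a row begin the same way; vary the sentence order."),
    ("HOC", "Add a concrete detail to support the claim in the opening."),
]

# Stable sort keeps comments within a level in their original order.
for level, comment in sorted(comments, key=lambda item: LEVEL_ORDER[item[0]]):
    print(f"[{level}] {comment}")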

SKILLS ANALYSIS

One of the 29 standards for assessment and evaluation in the NCTE report states that control of the conventions of edited American English (spelling, handwriting, punctuation, and grammatical usage) should be developed primarily during the writing process and secondarily through related exercises.

All in all, it appears that true growth in writing is a slow and seldom linear process. Writing teachers have a wide variety of responses they can offer students before making formal evaluations of the text (Harmon, 1988).

REFERENCES

Cooper, Charles R., and Lee Odell. Evaluating Writing: Describing, Measuring, Judging. Urbana, IL: National Council of Teachers of English, 1977, 37-39. [ED 143 020]

Grant-Davie, Keith, and Nancy Shapiro. “Curing the Nervous Tick: Reader-Based Response to Student Writing.” Paper presented at the Annual Meeting of the Conference on College Composition and Communication, March 1987. [ED 282 196]

Harmon, John. “The Myth of Measurable Improvement.” English Journal, 77(5) September 1988, 79-80. [EJ 376 076]

Hittleman, Daniel R. Developmental Reading, K-8: Teaching from a Whole-Language Perspective. 3rd ed. Columbus, OH: Merrill, 1988.

Hyslop, Nancy B. “A Study to Test the Effects of Daily Writing upon Students’ Skills in Explanatory Discourse at the Eleventh Grade Level.” Unpublished dissertation, 1983.

Krest, Margie. “Time on My Hands: Handling the Paper Load.” English Journal, 76(8) December 1987, 37-42. [EJ 367 295]

National Standards: Oral and Written Communications. Olympia, WA: Washington Office of the State Superintendent of Public Instruction, 1984. [ED 297 351]

Teale, William H. “Developmentally Appropriate Assessment of Reading and Writing in the Early Childhood Classroom.” Elementary School Journal, 89(2) 1988. [EJ 382 620]

Whitlock, Roger. “Making Writing Groups Work: Modifying Elbow’s Teacherless Writing Groups for the Classroom.” Paper presented at the Annual Meeting of the Conference on College Composition and Communication, March 1987. [ED 284 284]

Author: Hyslop, Nancy B.