News

2021年07月07日

[17th UTokyo FFP] Nabetan Journal DAY 3

Chapter 3 “Evaluation: A Hot Topic in High School Education”
(Sorry for writing a long article again!)

“Nabetan Journal” is a series of articles that shows you what the UTokyo FFP classes (conducted every other week) are like. I apologize for the long gap since the previous article (DAY 2), which was due to personal reasons. I’m eager to catch up from now on. Let’s go!

DAY 3 was conducted on May 6th and 7th (right after the Golden Week Holidays…). The goals, objectives, and agenda of the class are as follows.

[Goals]
To obtain basic knowledge in evaluating student learning, to understand the significance and features of evaluation, and to be able to apply evaluation to student learning.
[Objectives]
1. To be able to explain the significance of evaluation.
2. To be able to contrast formative evaluation and summative evaluation.
3. To be able to explain any given evaluation method based on the features of evaluation.
4. To be able to create a rubric.
5. To be able to express one’s thoughts on the merits and demerits of rubrics.
[Agenda]
0_Feedback from participants on the previous class (message)
1_Review of two topics (work)
2_Evaluation (lecture)
3_Consultation about evaluation (work)
4_Rubrics (lecture)
5_Exercises in creating rubrics and evaluating with them (work)
6_The merits and demerits of rubrics (work)
7_Today’s class design (i.e., how the instructor designed today’s class) (message)

With respect to the overall structure, the proportion of participant activities (marked “work” on the agenda) was larger than on DAYS 1 and 2.

As mentioned in the section “The Significance of Active Learning” of “DAY 2 Class Design,” what I would especially like you to take away is the importance of outputting what you learned (i.e., giving it a try) in addition to inputting it. The objective for DAY 3 is to understand what “evaluation” is, and what a “rubric,” an important tool for evaluation, is by creating and using one. From here on, the instructor deliberately reduces the support given to learners (i.e., “scaffolding”) little by little and leaves more time to them; this will be explained in the next article, “DAY 4 Course Design.”

As mentioned earlier, taking this course itself equals experiencing the knowledge you acquire through the course.

 

The Significance of Evaluation

“Evaluation” is a hot topic in my field recently.

New curriculum guidelines (PDF) will come into effect from the next academic year at high schools, where I was engaged in education for a long time. The guidelines are revised every 10 years, and according to the “Arguing Points” (Central Council for Education) (PDF), the coming revision is expected to have a large impact on the way education is conducted. The revision will also affect evaluation methods, and I have been discussing the issue with teachers working at high schools. Our discussion included “the significance of evaluation,” the very topic that appeared on DAY 3.

“The significance of evaluation” is explained from the following three perspectives in UTokyo FFP.

 ◯ Significance for students ① Grasp one’s level, ② Support one’s learning
 ◯ Significance for instructors ③ Check and support students’ comprehension, ④ Reform one’s classes
 ◯ Significance for institutions (schools/boards of education) ⑤ Quality assurance, ⑥ Accountability
 *Evaluation is not only for grading students.
 *Evaluation is not a goal but a starting line.

High school teachers have been provided with “The Handbook on the Method of Learning Evaluation: Upper Secondary School Edition” (National Institute for Educational Policy Research, Ministry of Education, Culture, Sports, Science and Technology) ahead of the implementation of the new curriculum guidelines. The Handbook clearly states the following:

“Teachers should grasp learning outcomes (i.e., what kind of skills students acquired) accurately, improve their guidance, and let students proceed to the next learning by reflecting on their own learning.”

The subject of “grasp,” “improve,” and “let” is “teachers,” so let’s first look at this sentence through the lens of “significance for instructors.”

Significance of Evaluation ③ “Check and support students’ comprehension”
・(Instructors should) grasp learning outcomes (i.e., what kind of skills students acquired) accurately. (check students’ comprehension)
・(Instructors should) let students proceed to the next learning. (support)
Significance of Evaluation ④ “Reform one’s classes”
・Instructors should grasp learning outcomes and improve their guidance.

Next, let’s extract the section where the subject is “students” from this sentence.
・reflecting on their own learning
・students proceed to the next learning
To realize these things, the evaluation must be meaningful to students, as described below.

Significance of Evaluation ① “Grasp one’s level”
・Students can grasp by themselves what and how much they were able/unable to accomplish.
Significance of Evaluation ② “Support one’s learning”
・Students can identify by themselves how they should improve their next learning by grasping the level of accomplishment.

When applying the “evaluation methods” indicated in the Handbook to high school classroom settings, it is important to look at “the significance of evaluation,” the topic of DAY 3, from the perspectives of both students and instructors in order to appreciate the value expressed in the sentence “Evaluation is not only for grading students.” (The significance of evaluation for institutions (⑤ & ⑥) is also a contested issue in high schools, so I would like to take it up in another article.)

Now, let’s get back to UTokyo FFP classes from high school education. “The significance of evaluation” is also connected to “the ADDIE model” we learned on DAY 2.

The fifth stage of this model is “evaluation.” The evaluation here means “evaluation for class reform,” conducted by instructors at various intervals ranging from every class to every unit or every semester. When the series of activities (Analysis/Design/Development/Implementation) comes to an end, you need to reflect on each of the ADDI activities and connect that reflection to the next round, as expressed by the phrase “Close the Loop!” This is exactly what “Evaluation is not a goal but a starting line” means. Incidentally, before reaching I (= Implementation), you need to continually go back and forth between “A-D-D” and reflect on those activities; in other words, you need to conduct “formative evaluation,” as described in the next section.

 

Summative Evaluation and Formative Evaluation

Dr. Kurita says, “You should come up with these two at the same time when you hear the word ‘evaluation.’” Then, how do these two differ from each other? Answering this question is exactly the second item of the objectives: “To be able to contrast formative evaluation and summative evaluation.”

 Summative evaluation: “For measuring the learning outcomes or deciding whether to pass or fail a learner”
  ・It can be used for deciding whether to pass or fail a learner and is conducted after learning.
 Formative evaluation: “For improving the learning process of learners and providing feedback to instructors for small-scale improvement of learning activities”
  ・It mainly functions as feedback which helps learners modify their learning activities one by one and is conducted during learning.

What Dr. Kurita emphasized is as follows: “You should keep in mind that actual evaluation is conducted by combining these two elements. The most important thing for the instructor is to keep thinking about how these two evaluations should be combined to help smooth students’ learning process.”

I used to distinguish the two by regarding summative evaluation as evaluation for grading and formative evaluation as evaluation for feedback during classes, and I thought they were incompatible. Dr. Kurita, however, says, “It’s not that you can only choose one or the other. Some types of summative evaluation have formative aspects, and vice versa.”

Evaluation can of course be used to measure accomplishment, and you can pass, fail, or grade a learner based on it; but you can also provide feedback that improves the learner’s next step by clarifying what he/she needs. We should not view the two types of evaluation as a binary, but rather from the perspective of “the purpose of evaluation.” And here, again, the subject of the sentences describing these two types of evaluation may be not only instructors but also students.

What to Evaluate

We started with how to view “evaluation.” Now, considering the specific procedure for conducting evaluation, we are forced to answer the following question: “What should we evaluate?” It became another hot topic in the discussion with the high school teachers.

UTokyo FFP listed what to evaluate as follows: “Knowledge/Comprehension,” “Thinking/Decision-making,” “Skills/Expression,” and “Interest/Motivation/Attitude.” These are almost perfectly aligned with the evaluation categories indicated in the present curriculum guidelines for elementary and secondary education. These categories are rooted in Bloom’s Taxonomy (described on DAY 2): “Knowledge (cognitive domain),” “Skills (psychomotor domain),” and “Attitudes (affective domain).”

In the new curriculum guidelines, the evaluation categories on “what to evaluate” have changed into “Knowledge/Skills,” “Thinking/Decision-making/Expression,” and “Attitude for learning actively”; please refer to the “Arguing Points” given at the beginning of this article for background information. “Attitude for learning actively,” in particular, largely overlaps with Fink’s Taxonomy of Significant Learning, shown below.

【Foundational Knowledge】
   Understanding and remembering: ・Information ・Ideas
【Application】
   ・Skills ・Thinking: Critical, creative, & practical thinking ・Managing projects
【Integration】
   Connecting: ・Ideas ・People ・Realms of life
【Human Dimension】
   Learning about: ・Oneself ・Others
【Caring】
   Developing new: ・Feelings ・Interests ・Values
【Learning How to Learn】
   ・Becoming a better student ・Inquiring about a subject ・Self-directing learners

The “Period for Integrated Studies” and the “Period for Inquiry-Based Cross-Disciplinary Study” are conducted at elementary/junior high schools and at high schools, respectively. Look carefully at the new curriculum guidelines, and you will find that these “Periods” should be closely connected to the educational goals of each school; in other words, they are given roles different from the traditional learning of subjects. It is also remarkable that what these “Periods” aim at aligns with much of the Taxonomy of Significant Learning.

When it comes to actual practice in the classroom, it is necessary to address specific “objectives” in addition to the large framework of “evaluation categories” shown above.

Even when instructors are the only ones who evaluate the learners, they should identify, in the design phase (i.e., Design in the ADDIE model), what specific objectives the learners should work on and what exactly will be evaluated. Moreover, learners also evaluate themselves, so instructors need to explicitly share the specific objectives and what exactly will be evaluated (along with the evaluation methods) with the learners. This requirement is stated in elementary and secondary education as the “integration of objectives and evaluation”; the topic will be taken up in the next article, “DAY 4 Syllabus/Course Design.” It was also touched on in DAY 2, where there was a thorough explanation and group activity on “objectives”: “Objectives should be expressed with observable verbs (i.e., expressions of output), and they become the evaluation categories.”

[By the way…] I recently read a book called “Your Biology” (Hill, S.; Japanese translation supervised by Matsuda, R., and Okamoto, T.; Hakusuisha Publishing). It is a textbook for 13–14-year-olds in the Netherlands, and every Unit ends with a “Wrap-up” listing the “Objectives,” followed by a “Test.” As an ex-teacher of biology, I am interested in the content of the textbook itself, but what also intrigues me, from the perspective of “evaluation,” is that it presents “objectives” and “measurement of knowledge comprehension” as a set.

Dr. Kurita explained the effect that the cycle of practice and feedback has on enhancing the quality of learning in actual classes: combining “practice” (activities in which learners put their knowledge to use) based on objectives with accurate “feedback” (information given to the learner, based on the outcome of practice, that serves as a guide to the next step). Chapter 5 of How Learning Works (Japanese translation published by Tamagawa University Press) elaborates on this topic, which I consider important for classroom teaching.

 

How to Evaluate

Evaluation methods and learning objectives are two sides of the same coin. It is necessary to adopt evaluation methods that align with the preset learning objectives.

DAY 3 focused on “Exercises in evaluating evaluation” and “Exercises in creating rubrics,” as shown below, and skipped explaining specific evaluation methods. If you are interested in the details of such methods, please refer to “Learning Assessment Techniques: A Handbook for College Faculty” (a Japanese translation was published by University of Tokyo Press in 2020), which presents diverse evaluation methods organized around Fink’s Taxonomy of Significant Learning. I am sure this book will be of great help to teachers as the “Period for Integrated Studies” and the “Period for Inquiry-Based Cross-Disciplinary Study” come to be positioned at the center of school educational activities. It also shows many rubric examples.

Let’s get back to “Exercises in evaluating evaluation (Consultation about evaluation).”

In this exercise, participants were asked to advise someone who says something like this: “I am in charge of XXX course. I give students assignments like this and evaluate them in this way, and I am worried about XXX.” They discussed ideas from the viewpoints they learned in the lecture, namely evaluation methods, evaluators, and the evaluation of evaluation (shown below), and submitted their proposals via a Google Form.

 <Evaluation of evaluation>
 Reliability: To what degree you could obtain the same results no matter how many times you conduct the same examination with the same group (Reproducibility of outcomes and accuracy of tests)
 Validity: Whether the evaluation method you have adopted can really measure the skills and behaviors you are focusing on (Appropriateness of the evaluation method)
 Efficiency: Whether it is easy to conduct and grade (The practicality of the evaluation method in terms of time and economy)

Participants were asked to identify which of the three aspects above was problematic and to propose ideas on what the client should do.

After each participant filled in the Google Form, the responses were shared in the classroom. Advising on a clearly defined problem is relatively simple: change the evaluation method, and you can improve reliability, validity, or efficiency. Accordingly, all participants were able to develop appropriate responses.

And this is what Dr. Kurita said to everyone:
“It’s easy to advise someone else, but once you become the one who delivers classes, many people are likely to overlook these aspects, so please do not leave the basic knowledge you learned here as some kind of trivia, but make sure that you can apply it to your own classes.”
Hearing her words, I broke out in a cold sweat as an ex-teacher at high school, haha!

 

What Is a Rubric?

Of the three and a half hours of class time, one hour and fifteen minutes were devoted to the lecture and activities on “evaluation,” followed by a break with stretching exercises; the remaining two hours were allotted to this topic. That is quite a long time for one activity, but as Dr. Kurita says, “Creating something is far from just knowing it.” To leave as much time as possible for creating rubrics, participants learned the basics of rubrics before the session through videos (a strategy called a “flipped classroom”).

 <Video clips on rubrics>

The “Period for Inquiry-Based Cross-Disciplinary Study” has made rubrics a hot topic in high school, and there is an increasing demand for creating rubrics, so here I would like to elaborate on what I think is important in that context.

“Rubrics divide an assignment into its component parts and provide a detailed description of what constitutes acceptable or unacceptable levels of performance for each of those parts” (Stevens & Levi, 2013). Dr. Kurita said, “This is where the value of rubrics lies,” and cited the following sentence, which describes the first step in using a rubric.

“The first step in constructing or adapting any rubric is quite simply a time of reflection, of putting into words basic assumptions and beliefs about teaching, assessment, and scholarship.”

Rubrics are for learners, but before that, they are for instructors. Their value lies in the fact that, through putting a rubric into words, instructors can ask themselves what assumptions and beliefs they hold when giving learners an assignment and what they are actually trying to evaluate.

Rubrics can be aligned with every evaluation method except multiple-choice questions, which have clear-cut right or wrong answers. They can also improve all three aspects used for evaluating evaluation: reliability, validity, and efficiency.

Here’s the typical way of using rubrics:

1. An instructor creates a rubric.
2. The instructor hands out an assignment and a rubric to students.
3. The students complete the assignment while referring to the rubric as a guiding principle.
4. The students submit the assignment. (They can also attach the rubric with which they self-evaluated their assignment.)
5. The instructor or another person grades the submitted assignments. (The rubric enables people besides the instructor to grade the assignments.)
6. The instructor returns the assignments to the students with the rubric attached.
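As a minimal sketch of the workflow above (the dimensions, level descriptors, and marks below are invented for illustration, not taken from the class), a rubric can be thought of as a table of dimensions and performance levels, and grading with it (step 5) as simply recording a level per dimension:

```python
# Hypothetical rubric for a short writing assignment.
# Dimension names and descriptors are illustrative only.
RUBRIC = {
    "Clarity of argument": {3: "Claim is explicit and consistently supported",
                            2: "Claim is stated but support is uneven",
                            1: "Claim is hard to identify"},
    "Use of evidence":     {3: "Evidence is relevant and cited",
                            2: "Evidence is partly relevant",
                            1: "Little or no evidence"},
    "Organization":        {3: "Logical flow with clear transitions",
                            2: "Mostly ordered, with some jumps",
                            1: "Hard to follow"},
}

def score(marks: dict) -> int:
    """Sum the level chosen for each dimension (step 5 of the workflow)."""
    assert marks.keys() == RUBRIC.keys(), "every dimension must be marked"
    return sum(marks.values())

# A grader (instructor, peer, or the student self-evaluating in step 4)
# records one level per dimension; the rubric makes the total explainable.
total = score({"Clarity of argument": 3, "Use of evidence": 2, "Organization": 3})
print(total)  # 8
```

Because each mark is tied to a written descriptor, anyone besides the instructor can grade with the same table, which is exactly what makes step 5 delegable.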

Students can also join the process of creating a rubric. (In that case, creating a rubric itself becomes a learning process.) They can also peer-evaluate the assignments with each other. (Evaluation using a rubric deepens their understanding of the learning material.)

Rubrics are not a tool an instructor should keep hidden as his/her private evaluation criteria. The instructor should present rubrics to the students and use them to involve students in learning activities. I believe that would further enrich student learning in various ways. And of course, none of this can succeed without a good relationship between the instructor and students, and among the students themselves (i.e., a good learning environment).

In relation to that, when used in the middle of learning activities, rubrics put the state of student learning into words and thus serve as a tool for formative evaluation. When used to assess the final outcome, on the other hand, they serve as a measurement method for summative evaluation. Here again, it is important to see things from the two perspectives of formative and summative evaluation.

 

Creating a Rubric and the Merits and Demerits of Using Rubrics

Following the explanation of the key points of rubrics, participants created a rubric according to a set procedure. For details of the procedure, please refer to the lecture materials available on OCW (PDF).

The activity is not just about creating a rubric; it is structured as follows:

1. Assign marks to four writing assignments on a scale of 1–10, briefly considering what dimensions each one has. (Google Form)
2. Create a rubric for this assignment in groups.
(Participants work in breakout rooms and create a rubric on Google Slides. They bring the dimensions they came up with individually, then narrow them down and finalize them through discussion.)
3. Walk around (gallery walk) and explore other groups’ rubrics.
4. Refine the rubric in groups.
5. Assign marks to the four writing assignments on a scale of 1–10 using the final rubric. (Google Form)

Creating and using a rubric, and contrasting the evaluation process with and without one, effectively gave participants material (i.e., experience) for considering the advantages and disadvantages of rubrics.
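One way to see what the before/after comparison in steps 1 and 5 can reveal is to look at how much raters disagree on the same assignment. As a hedged sketch with invented marks (the actual class data are not shown here), a shared rubric should shrink the spread of scores across raters, i.e., improve reliability:

```python
from statistics import stdev

# Invented marks (1-10) from five raters for one writing assignment,
# before and after the group agreed on a shared rubric.
before = [4, 9, 6, 3, 8]
after = [6, 7, 6, 5, 7]

def spread(marks):
    """Sample standard deviation across raters: smaller = more agreement."""
    return stdev(marks)

print(round(spread(before), 2), round(spread(after), 2))  # 2.55 0.84
```

A smaller spread after adopting the rubric corresponds to the “reliability” criterion from the evaluation-of-evaluation lecture: the same examination, graded by different people, yields closer results.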

Finally, participants discussed the advantages and disadvantages of rubrics in groups and shared their conclusions via Google Forms. Here again, they examined the significance of evaluation for both “students” and “instructors” and put it into words. This process helped them realize that the subject of sentences describing evaluation activities can be both instructors and students.

 

That’s all for today. Sorry for writing another long article, but I didn’t want to separate it into two parts.

A slide called “Design” was presented at the end of the session, as usual, showing how Dr. Kurita designed the session. There were four points:

・Flipped classroom
・・Participants learned the basics of rubrics before the session by watching videos.
・Work based on cases
・・The instructor provided situations of advising on evaluation and evaluating evaluation, which allowed participants to apply highly abstract knowledge to specific cases.
・・Participants created and examined a rubric by assigning marks to particular writing assignments.
・Gallery walk (variation)
・・Participants were able to explore other groups’ work online.
・Various ways of sharing ideas
・・Participants experienced various ways of exchanging ideas online.

Participating in this session as a learner is itself an opportunity to learn as an instructor, so I think this slide is highly meaningful to the participants.

 

To be continued in the next article on the session “DAY 4 Course Design (Syllabus).”
(By the way, we have already finished DAY 7 “Microteaching session (FINAL)”…)

See you next time!

 

Here are the recommended websites related to UTokyo FFP. For more details on the course materials and AY2020 course schedule, please click the following links!
(Official) UTokyo FFP Website
UTokyo OCW “Teaching Development in Higher Education” (UTokyo FFP AY2020)  Interactive Teaching (Video Clips)
Osami Nabeta
Research Support Staff
Center for Research and Development of Higher Education
