COVID-19 AND THE SCHOOL ASSESSMENT REFORM? THE MISSED EFFECTS ON FORMATIVE ASSESSMENT PRACTICE
University of Bari A. Moro, Italy
The present paper focuses on formative assessment, and more specifically on the ways in which teacher education and educational policy have responded, over the last year, to the challenge of Covid-19.
The unprecedented circumstances of the Covid-19 pandemic have had a significant impact on schooling. In this context, educational assessment has emerged as one of the most evident critical issues. To be fair, assessment has always represented a challenging task in teachers' professional practice (especially formative assessment practice). However, the difficulties related to implementing remote instructional activities have made it more evident how teachers, despite the broad literature on educational assessment, struggle to navigate old and new instructional circumstances in their assessment practice. While researchers and teacher educators have demonstrated a strong commitment to helping teachers face the new difficulties resulting from the global pandemic (first of all, the practical issues of moving classroom teaching online), illiteracy in the assessment domain, and more specifically in formative assessment practice, remains a crucial and sometimes unresolved problem.
During the 2019-2020 school year, a new reform was launched in Italy: formative assessment practices were made mandatory for primary and middle school.
However, despite this emphasis on formative assessment, teachers in Italy tend to avoid using it and continue to perceive it as improper and ineffective. Reporting a qualitative study on teachers' conceptions of formative assessment, this paper aims to shed light on the teacher education policy reform and on its implementation from the macro to the micro level.
COLLECTING EVIDENCE OF LEARNING: INCREASING STUDENT ENGAGEMENT IN THESE TIMES
Educators work hard every day on behalf of their students. They understand that helping learners take ownership and become actively involved is important for everyone's success, and this is even more true in these times. Deliberately engaging students in collecting evidence of learning beyond products, to include conversations and observations, is one way to make this happen.
And yet, in these times of hybrid/blended or online/virtual learning and teaching environments, teachers are asking, ‘How can students and teachers collect reliable and valid evidence of learning during the learning when they are not physically sharing the same space?’
As a result, considering student evidence of learning beyond products such as tests and assignments has taken on a new level of urgency.
During this session, some of the powerful, practical, and classroom-tested strategies that students and teachers are using in blended/hybrid as well as online/virtual learning environments will be shared.
EFFECTS OF ONLINE SELF-ASSESSMENT ON PLANNING IN THE SELF-REGULATED LEARNING PROCESS
The Education University of Hong Kong, Hong Kong S.A.R. (China)
While self-regulated learning (SRL) models differ, most share a cyclical process with at least three phases of sub-processes: preparatory, performance, and appraisal. Successful SRL requires actively engaging in all three phases with appropriate cognitive and metacognitive strategies, and sustainable SRL requires using feedback from the previous cycle to initiate and plan the next round of SRL. However, most SRL research has focused on the performance phase rather than the preparatory or appraisal phases, with even less attention to the links between SRL phases. This study explored how self-assessment at the appraisal phase influenced students' propensity to plan for future learning in an English course. The two specific research questions are: (1) Does self-assessment affect the propensity to plan for learning on similar tasks? and (2) Do demographics or other SRL aspects influence the propensity to plan?
Twenty-five eighth-grade students completed a survey measuring their self-assessment practice and self-efficacy in learning English, followed by a five-week intervention. During the intervention, the students were asked to complete seven online self-assessment worksheets, which guided them to self-assess and plan. During English class, they completed each online self-assessment worksheet in 5-8 minutes. Due to absences or non-completion, the 25 students completed 158 of the 175 worksheets (90%).
Multi-level structural equation modeling of the pre-intervention survey and the online worksheets showed that five factors were linked to whether or not students planned for future learning: age, self-efficacy, self-assessment criterion, and reflections on their strengths and on their weaknesses. Older students were more likely than others to plan for learning. By contrast, students with more self-efficacy, those who used their own standard to self-assess, and those who viewed analysis as their best skill or reading as their worst skill were less likely than others to plan for learning. Hence, a comprehensive model of demographics, self-beliefs, criteria, and self-knowledge is needed to inform the strengthening of links between SRL phases for sustainable SRL.
FORMATIVE ASSESSMENT IN REMOTE AND HYBRID LEARNING ENVIRONMENTS
UCL Institute of Education, United States of America
The widespread closure of schools around the world caused by the SARS-CoV-2 pandemic has thrown into sharp focus issues that were always relevant—but more easily ignored—in face-to-face instructional settings. Teachers have always used evidence from their students to make decisions about the next steps in instruction, for example by using “checks for understanding”, but the evidence that they collect for those decisions is generally of poor quality in terms of both depth (defined as the extent to which the evidence reveals important misconceptions that students have) and breadth (defined as the proportion of students in an instructional group from whom the data is elicited).
This presentation will offer a general theoretical framework for understanding formative assessment in any instructional setting as an aspect of the regulation of learning processes, showing why formative assessment is a subset of what is generally called ‘Assessment for Learning’ and will illustrate why equating formative assessment with assessment for learning (or assessment as learning) is unhelpful (Bennett, 2011). The framework involves consideration of whether the regulation of learning processes is proactive, interactive or reactive (Allal, 1988), the length of the formative assessment cycle of evidence, inference and action, and the roles of different agents in the classroom (teacher, learner, peer).
Consideration of the role of the agents, in terms of three key pedagogical processes—determining where the learning is going, establishing where the learner is currently, and how to move the learning forward—yields a model that operationalizes formative assessment as consisting of five ‘key strategies’:
- Clarifying, sharing and understanding learning intentions and criteria for success
- Eliciting evidence of learning
- Providing feedback that moves learning forward
- Activating students as learning resources for one another
- Activating students as owners of their own learning
The presentation will provide a number of examples of practical techniques for each of the five strategies, focusing in particular on those techniques that are particularly relevant in remote and hybrid instruction, and will conclude by offering protocols that have been useful in supporting teachers in their development of their formative assessment practice.
Allal, L. (1988). Vers un élargissement de la pédagogie de maîtrise: processus de régulation interactive, rétroactive et proactive. In M. Huberman (Ed.), Maîtriser les processus d'apprentissage: Fondements et perspectives de la pédagogie de maîtrise (pp. 86-126). Paris, France: Delachaux & Niestlé.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5-25.
SECONDARY EDUCATION STUDENTS’ SELF-ASSESSMENT: WHAT ACTIONS DO STUDENTS FOLLOW WHEN THEY SELF-ASSESS IN SECONDARY EDUCATION?
1Universidad de Deusto; 2Universidad Autónoma de Madrid; 3Universidad Complutense de Madrid
Evidence has shown the influence of self-assessment on students’ academic performance (Brown & Harris, 2013) and self-regulation (Andrade, 2018). Research on self-assessment has mainly focused on the accuracy of students’ self-assessments and their effect on academic performance (Andrade, 2018). However, the process that students follow when they self-assess remains unexplored (Panadero et al., 2016; Yan & Brown, 2017). This study aims to describe the actions that students follow when they self-assess Spanish and Mathematics tasks in Secondary Education and to identify the predictors for each profile. A total of 64 students from three different year levels (first year K7, fourth year K10, and first year of university preparatory level K11) participated in this study. Participants, while being video-recorded, were asked to self-assess a set of Mathematics and Spanish tasks twice: (1) after they performed them and (2) after receiving feedback. Right after both self-assessments, they filled in the self-efficacy and emotion scales. Data came from the video recordings (think-aloud protocols and observations), self-reported instruments, and academic performance grades. The findings revealed four different self-assessment profiles for the Spanish and Mathematics subjects: no self-assessment, superficial, intermediate, and advanced. Each profile is described by a pattern of different actions performed by the students: the no self-assessment profile consists of an absence of information processing; the superficial profile involves reading the task and assigning a value to it; the intermediate profile includes reading, assigning a value and explaining it; and the advanced profile implies a deeper processing of the information (reading, redoing the task, assigning a value and explaining it).
The relationship between the profiles and year level was also studied: in Mathematics, a higher percentage of students in K7 had a “superficial” profile, while the largest percentage in K11 had a “no self-assessment” profile. The explanatory variables analysed as predictors were academic performance, self-regulation and motivation. The ordinal logistic regression models predicting the profiles indicated no significant explanatory variables for the Spanish profiles, but for the Mathematics profiles the Deep Learning Strategies Questionnaire (DLS-Q) score was a significant predictor. Importantly, this study breaks self-assessment down into more specific actions, opening a new line of self-assessment research, as it has historically been treated as a unified strategy.
Andrade, H. (2018). Feedback in the context of self-assessment. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 376–408). Cambridge University Press.
Brown, G., & Harris, L. R. (2013). Student self-assessment. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 367-393). SAGE.
Panadero, E., Brown, G. T. L., & Strijbos, J. W. (2016). The future of student self-assessment: A review of known unknowns and potential directions. Educational Psychology Review, 28(4), 803-830. https://doi.org/10.1007/s10648-015-9350-2
Yan, Z., & Brown, G. T. (2017). A cyclical self-assessment process: Towards a model of how students engage in self-assessment. Assessment & Evaluation in Higher Education, 42(8), 1247–1262. https://doi.org/10.1080/02602938.2016.1260091
FORMATIVE PEER ASSESSMENT IN SUPPORT OF REMOTE LEARNING
1University at Albany, SUNY, United States of America; 2Brooklyn NY Public Schools, USA
Formative peer assessment engages students in providing feedback to each other on their works-in-progress for the purposes of revision and improvement. It is not a matter of students assigning grades or scores that contribute to the official grades assigned by teachers: That is summative assessment, which is not the subject of this presentation because of the inappropriate role it assigns to students. Formative peer assessment is about feedback followed by revision, and has been shown to promote learning by both the giver and receiver of feedback in a wide variety of disciplines (Topping, 2021). Peer feedback on academic work is especially effective when students are trained to provide constructive critique (Li et al., 2020). Two key elements of training in peer assessment are the use of rating criteria or a scoring guide by which a peer’s work-in-progress is to be judged, and a constructive process of critique.
These features of useful peer assessment have been understood for a long time. In 1998, Edward White wrote about the value of rating criteria for peer assessment of writing:
When students have both a vocabulary and a scale to use in discussing and evaluating the writing they examine, they... can (and in fact do) hold the other students’ essays to the standards set out in the [scoring] guide. Moreover, by learning how to read and evaluate the papers written by other students, they learn how to read their own. This procedure has the magical double value of increasing student learning at the same time that it decreases the teacher’s paper load…. Many teachers find that students write better drafts for peer groups than for the instructor and that they gain more from a peer group’s critique (when it is related to a scoring guide) than from the instructor’s comments. (pp. 18-19)
Feedback protocols have also been in use for a long time. In 2003, David Perkins introduced a process called the Ladder of Feedback. When students are taught how to use the Ladder (or a similar protocol), their feedback tends to be supportive. In combination, rating criteria and feedback protocols like the Ladder can be used by students to provide feedback that is more frequent and timely than their teacher’s, while also of high quality.
The challenge now is to implement peer feedback in remote, virtual settings, where lack of student engagement is often a problem. In this presentation, we provide an overview of peer assessment and how it is implemented, including a detailed example from an asynchronous visual arts class.
Li, H., Xiong, Y., Hunter, C., Guo, X., & Tywoniw, R. (2020). Does peer assessment promote student learning? A meta-analysis. Assessment & Evaluation in Higher Education, 45(2), 193-211. https://doi.org/10.1080/02602938.2019.1620679
Perkins, D. (2003). King Arthur’s round table: How collaborative conversations create smart organizations. John Wiley & Sons.
Topping, K. (2021). Peer assessment: Channels of operation. Education Sciences, 11(3), 91. https://doi.org/10.3390/educsci11030091
White, E. (1998). Teaching and assessing writing: Recent advances in understanding, evaluating, and improving student performance (2nd ed.). Calendar Islands.
USING LEARNING TARGETS AND SUCCESS CRITERIA IN REMOTE LEARNING
Duquesne University, United States of America
A lesson has a learning target when students know what they are trying to learn during the lesson and are working to get there. Students may ask three questions: “Where am I going? Where am I now? Where to next?” (Hattie & Timperley, 2007), sometimes called the formative learning cycle (Brookhart & McTighe, 2017). The formative learning cycle is the principal way that students regulate their learning (Andrade & Brookhart, 2020) during classroom lessons. Formative assessment improves learning (Lee et al., 2020). The presence of clear and focused success criteria is important in this process. Success criteria can be performance criteria or product criteria in any of several formats (Andrade & Heritage, 2018). Importantly, success criteria must refer to evidence of learning, not simply evidence of following directions for an assignment.
The New York City Department of Education Office of Arts and Special Projects (OASP) has been working on student-involved formative assessment for over a decade. In the 2020-2021 school year, a Professional Learning Community (PLC) of six arts teachers took on the challenge of studying how learning targets and success criteria can be used in remote learning. They met monthly, and their discussions were videotaped. They concluded that using learning targets and success criteria in lessons is especially important in remote learning for at least five reasons:
- Learning targets and success criteria help focus students and teachers on learning, instead of just getting something done, which can be an issue in remote learning.
- Learning targets and success criteria make lessons more active and engaging because students use them throughout their lessons.
- Success criteria help make the learning targets more explicit to students, which helps them envision what they are supposed to learn.
- Success criteria are an agent for equity because all students have access to the keys to successful work. Explicit success criteria help students take the next step in their creative work, no matter what their entry level, and improve on their previous knowledge.
- Learning targets and success criteria help teachers feel more grounded, which helps address some of the loss of control of lessons teachers may feel in remote learning.
The PLC and OASP produced a resource for teachers demonstrating strategies for using learning targets and success criteria in remote lessons. The proposed presentation will describe this work and its research base and show a video example from the resource the PLC created.
Andrade, H., & Brookhart, S. M. (2020). Classroom assessment as the co-regulation of learning. Assessment in Education: Principles, Policy & Practice, 27(4), 350-372.
Andrade, H. L., & Heritage, M. (2018). Using formative assessment to enhance learning, achievement, and academic self-regulation. New York: Routledge.
Brookhart, S. M., & McTighe, J. (2017). The formative assessment learning cycle. Alexandria, VA: ASCD.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112.
Lee, H., Chung, H. Q., Zhang, Y., Abedi, J., & Warschauer, M. (2020). The effectiveness and features of formative assessment in US K-12 education: A systematic review. Applied Measurement in Education, 33, 124-140.