During the last two decades, there has been growing interest in providing evidence-based, quality educational coverage for every child and teenager through programs such as the United Nations Millennium Development Goals. For instance, one of these goals was to ensure that, by 2015, children everywhere would be able to complete a full course of primary schooling.
In 1970, 72% of primary-school-aged children worldwide attended school; today that figure has risen to 91% (Roser & Ortiz-Ospina, 2018). While this progress is promising, we must also focus on the quality of education. Learning outcomes have improved considerably over the years. Nevertheless, there is still much to do to ensure everyone receives the education needed to live a happy and successful life (Hanushek & Wößmann, 2012).
Evidence in learning: key factors to consider.
One of the critical factors that can aid the progression and advancement of students’ learning outcomes is evidence. Data-based decision-making (DBDM) is the collection and analysis of data that guides all the parties involved in the educational process towards improving education quality and learner outcomes (Van Geel, Keuning, Visscher, & Fox, 2016; Schildkamp, Poortman, & Handelzalts, 2016).
Another key factor is assessment for learning (AFL), which focuses on the quality of the learning and feedback process rather than on learners’ outcomes. In other words, DBDM focuses on what students must learn, while AFL focuses on how students learn (Van der Kleij, Vermeulen, Schildkamp, & Eggen, 2015).
Different types of evidence.
There are two different forms of evidence: summative and formative. The former usually measures learning outcomes through standardized tests such as monthly, semester, or final exams, the Programme for International Student Assessment (PISA), or the Trends in International Mathematics and Science Study (TIMSS).
On the other hand, formative assessments focus on gaining insight into the teaching-learning process. Hence, they meet teachers’ and students’ needs to analyze, redesign, implement, and monitor continuous improvement. Consequently, they rely more heavily on observational and work-sampling techniques that continually focus on students’ learning processes over selected periods of time and in specific contexts (Brown et al., 2000; Van Geel et al., 2016; Heitink, Van der Kleij, Veldkamp, Schildkamp, & Kippers, 2016).
Further reading on formative and summative assessment.
Why data-based assessment?
In recent years, studies have taken a more formative and formal approach. Educational researchers and practitioners have come to understand the importance of making evidence-informed decisions to improve education. For instance, they base formative assessments on a set of specific formative instruments, processes, and strategies that provide better insight into teachers’ and students’ needs, making it possible to continuously redesign and adapt the teaching-learning process (Brown, Schildkamp & Hubers, 2017).
As an educator, you might argue that you already collect data from your students. However, ask yourself whether your process is systematic. What type of data are you collecting: qualitative or quantitative? How do you collect your data? Where do you store it? How do you categorize it? What techniques and tools do you use to analyze it? How do you interpret it? What channels of communication do you use to share it? Questions like these help us reflect on the importance of formal assessments.
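To make the idea of systematic collection concrete, here is a minimal Python sketch of one way to log assessment evidence in a single, consistent format. The schema (student_id, instrument, kind, value, notes) and the file name are illustrative assumptions, not a prescribed standard:

```python
# A sketch of a systematic assessment log. The field names below are
# hypothetical examples, not a standard schema.
import csv
from dataclasses import dataclass, asdict, fields
from typing import Optional

@dataclass
class AssessmentRecord:
    student_id: str
    date: str               # ISO format, e.g. "2024-03-15"
    instrument: str         # e.g. "quiz", "observation sheet", "portfolio"
    kind: str               # "quantitative" or "qualitative"
    value: Optional[float]  # numeric score, if quantitative
    notes: str              # free-text observations, if qualitative

def save_records(records: list, path: str) -> None:
    """Append records to a CSV file so evidence accumulates in one place."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fl.name for fl in fields(AssessmentRecord)]
        )
        if f.tell() == 0:  # new or empty file: write the header once
            writer.writeheader()
        writer.writerows(asdict(r) for r in records)

# Example usage with made-up data:
save_records(
    [AssessmentRecord("s001", "2024-03-15", "quiz", "quantitative", 5.8, ""),
     AssessmentRecord("s001", "2024-03-16", "observation sheet", "qualitative",
                      None, "struggles with word problems on fractions")],
    "assessment_log.csv",
)
```

Keeping every piece of evidence in one consistent format is what later makes categorizing, analyzing, and sharing it straightforward.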
Now that you know why evidence-based assessment is an essential approach to improving students’ learning processes and outcomes, let’s focus on effectively implementing a DBDM and AFL process. First, you must become aware of your own attitudes and behaviors toward data-driven decision-making.
As Datnow and Hubbard (2015) accurately assert, building educators’ capacity for data use has sparked controversy throughout the years. This might be because teachers feel unprepared or unwilling to use data, given their prior knowledge and beliefs.
Examples of successful DBDM implementation.
A study by Poortman and Schildkamp (2016) describes how schools solved identified student achievement problems by developing an extensive and intensive professional development program focused on a data use intervention. Teachers (4-6 per school) and school leaders (1-2 per school) worked together in teams, following a systematic procedure. Additionally, a data coach guided them in learning how to use data to solve educational problems related to student achievement. Consequently, teachers and leaders understood how to use the data they collected and analyzed; they then developed a hypothesis about the cause of the learning issue and evaluated the actions they took.
Another example of a successful DBDM implementation is the study by Van Geel et al. (2016), which analyzed the effects of a DBDM intervention on student achievement growth. The intervention was a long-term (two-year) training course for primary school teams (all teachers, school leaders, and the deputy director) designed to help them acquire the necessary DBDM competencies and to implement and sustain DBDM in the school organization.
The intervention focused on the four components of data-based decision-making: analyzing results, setting goals, determining a strategy for accomplishing those goals, and executing that strategy. The findings of this study indicate that DBDM can improve student achievement.
According to these studies, the use of data teams has positive effects, such as improving student achievement and promoting teacher collaboration. More than that, it enhances leaders’ and teachers’ knowledge and skills in handling data, prompts a collective focus, and creates a sense of ownership. Furthermore, it encourages reflective dialogue and aids in designing context-specific solutions. However, for data teams to be successful, the development plan should be of high quality and implemented as a long-term plan (Poortman & Schildkamp, 2016; Brown et al., 2017).
How can you implement evidence-based assessment in your own context?
You might now be asking yourself how to implement this in your own context. Below are characteristics, guidelines, and strategies you can follow for an effective implementation.
Guidelines/Characteristics for a successful DBDM implementation.
- A professional development intervention that is of high quality in both design and execution is indispensable for an effective DBDM process.
- Professional development is more effective when it takes place in teams and in collaboration.
- The participants (directors, leaders, and teachers) should be competent in transforming evidence into useful information (a professional development plan can build this competence).
- DBDM should be a long-term commitment. Generally, it takes one to two years for teachers to internalize the promoted beliefs and practices and change their behavior; it takes five to ten years for a school to reform completely.
- The approach should promote a reflective process, rather than a judgmental one.
- DBDM should prompt a collective focus on a shared goal.
- There should be planned collection, analysis, and interpretation of data.
- There should be formulations and re-formulations of hypotheses concerning the underlying causes of learning problems and outcomes.
- School organizational characteristics, user characteristics, and data characteristics influence the process of data use.
- The use of specific protocols, documents, and planning aids should be established.
The ideal scenario would be to have an external expert coach in DBDM who provides initial training and development courses throughout the ongoing process. However, for practical purposes, we will outline some of the things you can start implementing with your fellow teachers.
Further reading on data-driven decision making in education.
Recommendations.
- Involve your coordinator/academic leaders.
- Create a team of 4-6 teachers (don’t forget to include your team leader).
- As a team, create a timetable and schedule meetings approximately every three weeks.
- Arrange a specific place for the meetings to take place: ideally, a quiet place where you can work collaboratively and have all the required technological resources at your disposal.
- Design and establish specific meeting protocols, documents, and planning aids.
After the initial collection, analysis, and interpretation of evidence, you will follow an 8-step data intervention (Poortman & Schildkamp, 2016).
8-Step Data Intervention.
1. During the first meetings, focus on formulating a concrete and measurable problem statement backed up by prior related data. For example: ‘We are not satisfied with the final examination grades in mathematics from the last three years for our students in the fourth grade, because these grades are lower than 6.0 on average; we would like this to be at least 6.5 in three years’ (Poortman & Schildkamp, 2016, p. 426).
2. In the following meeting(s), discuss possible causes for the problem you defined in Step 1. After that, choose the most plausible causes that you can influence and research.
3. Collect all the prior quantitative (e.g. standardized exam results, term papers, projects) and qualitative (e.g. observation sheets, portfolios, anecdotal records) evidence at your disposal related to the hypothesis formulated in Step 2.
4. Most importantly, check the quality of the data collected in Step 3. If the data are not valid and reliable, collect additional or new data.
5. Develop descriptive or correlational analyses of the data to test your hypothesis (a minimal sketch follows this list).
6. Interpret and conclude. If the hypothesis has to be rejected, the team needs to formulate a new hypothesis; in other words, it continues with a ‘new round’ starting again from Step 2. If the hypothesis is accepted, the team moves on to Step 7.
7. Design and implement improvement measures based on the conclusions drawn in Step 6 and the evidence gathered. The team formulates the measures needed to address the cause of the problem and the goals related to those measures.
8. Evaluate the measures taken. This step completes the ‘circle’ by evaluating the effectiveness of the measures taken in Step 7. To measure effectiveness, gather the same type of evidence as collected for Step 1 to determine whether you have solved the problem.
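To make Steps 1, 5, and 6 concrete, here is a minimal Python sketch using fabricated grades and a hypothetical explanatory factor (homework completion). It illustrates the kind of analysis a data team might run; it is not part of the cited studies, and a real team would use its own evidence:

```python
# Toy illustration of Steps 1, 5, and 6. All numbers below are fabricated.
from statistics import mean, correlation  # statistics.correlation needs Python 3.10+

# Step 1: final mathematics grades for the fourth grade (made-up data).
grades = [5.2, 6.1, 5.8, 5.5, 6.3, 5.0, 5.9, 6.0]
print(f"Average grade: {mean(grades):.2f} (goal: at least 6.5)")

# Step 2 (hypothesis): low homework completion contributes to low grades.
homework_completion = [0.4, 0.8, 0.6, 0.5, 0.9, 0.3, 0.7, 0.75]  # fraction completed

# Step 5: a simple correlational analysis to test the hypothesis.
r = correlation(homework_completion, grades)
print(f"Pearson correlation between homework completion and grades: {r:.2f}")

# Step 6: interpret. A strong positive correlation supports the hypothesis;
# a weak one suggests returning to Step 2 with a new hypothesis. Correlation
# alone does not establish causation, which is why Step 7 designs measures
# whose effects are then evaluated in Step 8.
```

Even a small analysis like this keeps the team’s discussion anchored in evidence rather than impressions.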
Final remarks.
To conclude, remember that this is an ongoing process. After these 8 steps, you start all over, reformulating the hypothesis if it was rejected. Conversely, if the implementation works, you formulate a new hypothesis that focuses on another issue. Bear in mind that focusing on formative assessment and the learning process gives you the best chance of improving learner outcomes.
We hope this article helps you provide a better learning experience for your learners. Don’t forget to share this article if you found it useful, and subscribe to our newsletter to receive more tips like this.