Evidence Use in Education.

  • Post category: Teaching
Why is Formal Assessment important?

Over the last two decades, there has been considerable interest in providing quality educational coverage for every child, teenager, and adult through programs such as UNESCO's Education for All movement or the United Nations Millennium Development Goals. For instance, one of the main goals of the latter was to ensure that by 2015, children everywhere would be able to complete a full course of primary schooling. This goal has driven significant progress in terms of quantity: in 1970, 72% of primary-aged children worldwide were attending school; today the figure has increased to 91% (Roser & Ortiz-Ospina, 2018). While this seems promising, and while providing education for everyone is an important goal that governments and policymakers have focused on for many years, as teachers we must not forget that it is not only about the quantity but also about the quality of education. Even though learning outcomes have improved significantly over the years, there is still much to do (Hanushek & Wößmann, 2012).

One of the key factors that could aid the progression and advancement of students' learning outcomes is the use of data. Data-based decision making (DBDM) is the collection and analysis of data to guide all the parties involved in the educational process toward improving education quality and learner outcomes (Van Geel, Keuning, Visscher, & Fox, 2016; Schildkamp, Poortman, & Handelzalts, 2016). Another key factor is assessment for learning (AFL), which focuses on the quality of the learning and feedback process rather than on learners' outcomes. In other words, DBDM focuses on what students must learn, and AFL focuses on how students learn (Van der Kleij, Vermeulen, Schildkamp, & Eggen, 2015).

There are two different forms of assessment data: summative and formative. Summative assessments usually involve standardized tests such as the Programme for International Student Assessment (PISA) or the Trends in International Mathematics and Science Study (TIMSS). Formative assessments, on the other hand, focus on gaining insight into the teaching-learning process so as to address teachers' and students' needs and to analyze, redesign, implement, and monitor continuous improvement. They are mainly viewed as informal assessments because they rely more heavily on observational and work-sampling techniques that continually focus on students' learning processes over selected periods of time and in specific contexts (Brown et al., 2000; Van Geel et al., 2016; Heitink, Van der Kleij, Veldkamp, Schildkamp, & Kippers, 2016).

In recent years, a more formative yet formal approach has gained ground. Educational researchers and practitioners have come to understand the importance of making decisions based on a body of precise formative instruments, processes, and strategies that provide better insight into teachers' and students' needs, so that the teaching-learning process can be continuously redesigned and adapted (Brown, Schildkamp & Hubers, 2017).

As an educator, you can argue that you already collect data from your students. However, you might want to ask yourself how systematic this process is. What type of data are you collecting, qualitative or quantitative? How do you collect your data? How and where do you store it? How do you categorize it? How do you analyze it? How do you interpret it? How do you share it? These questions are not easily answered, but they will definitely help us reflect on the importance of formal assessments.
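To make these questions concrete, here is a minimal sketch in Python of what systematically structured data collection might look like: every observation, qualitative or quantitative, gets the same fields, so it can later be stored, categorized, analyzed, and shared. The field names and records are invented examples, not a prescribed format.

```python
import csv
import io

# Invented example fields: a consistent schema is what makes the
# collection "systematic" rather than ad hoc.
FIELDS = ["date", "student", "data_type", "category", "evidence"]

records = [
    {"date": "2024-03-04", "student": "S01", "data_type": "quantitative",
     "category": "fractions quiz", "evidence": "7/10"},
    {"date": "2024-03-05", "student": "S01", "data_type": "qualitative",
     "category": "observation", "evidence": "struggles with word problems"},
]

# Store as CSV so the whole team can open, filter, and analyze it
# with whatever tools they already use.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(records)
print(buffer.getvalue())
```

Even a shared spreadsheet with agreed-upon columns would serve the same purpose; the point is the shared, repeatable structure, not the tool.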

Now that you know why evidence-based assessment is an important approach to improving students' learning processes and outcomes, let's focus on how to effectively implement a DBDM and AFL process. Before we begin, it's important to become aware of your own attitudes and behaviors toward data-driven decision making. As Datnow and Hubbard (2015) accurately assert, building educators' capacity for data use has been a taxing issue throughout the years. This might be the result of teachers feeling unprepared or unwilling to use data because of their prior knowledge and beliefs.

A study by Poortman and Schildkamp (2016) describes how, by developing an extensive and intensive professional development program focused on a data use intervention, schools were able to solve the student achievement problems they worked on. Teachers (4-6 per school) and school leaders (1-2 per school) worked together in teams. They used a systematic procedure and were guided by a data coach to learn how to use data to solve educational problems related to student achievement. Teachers and leaders had to use the data they collected, analyzed, and interpreted to accept or reject their hypotheses about the cause of the learning issue and to evaluate the actions to be taken.

Another example of a successful DBDM implementation is the study developed by Van Geel et al. (2016), which analyzed the effects of a DBDM intervention on student achievement growth. The intervention was a long-term (2 years) training course for primary school teams (all teachers, school leaders, and deputy director) aimed at acquiring the knowledge and skills related to DBDM, as well as implementing and sustaining it in the school organization. The intervention was focused on the four components of data-based decision making: analyzing results, setting goals, determining a strategy for goal accomplishment, and executing the chosen strategy. The findings of this study indicate that DBDM can improve student achievement.

The use of data teams has positive effects: it improves students' achievement, promotes teacher collaboration, enhances leaders' and teachers' knowledge and skills in handling data, prompts a collective focus, creates a sense of ownership, encourages reflective dialogue, and aids in designing context-specific solutions. However, for data teams to be successful, the development plan should be of high quality and implemented as a long-term plan (Poortman & Schildkamp, 2016; Brown et al., 2017).

You now might be asking yourself how you can implement this in your own context. We will now provide certain characteristics, guidelines, and strategies that you could follow for an effective implementation.

Guidelines/Characteristics for a successful DBDM implementation.

  • A high-quality professional development intervention, in both design and execution, is indispensable for a DBDM process to be effective.
  • Professional development is more effective when it’s developed in teams and in collaboration.
  • The participants (directors, leaders, and teachers) should be competent (this can be accomplished by a professional development plan) in the utilization of data to transform it into useful information. 
  • DBDM should be a long-term commitment (it usually takes one to two years for teachers to fully understand the promoted beliefs and practices and to change behaviors; it usually takes 5 to 10 years for a school to completely reform).
  • DBDM should be inquiry-based.
  • DBDM should promote a reflective process, rather than a judgmental one. 
  • DBDM should prompt a collective focus on a shared goal. 
  • There should be a planned collection, analysis, and interpretation of data.
  • Hypotheses about the underlying causes of learning problems and outcomes should be formulated and re-formulated throughout the process.
  • The process of data use is influenced by school organizational characteristics, user characteristics, and data characteristics. 
  • The use of specific protocols, documents, and planning aids should be established. 

Steps for a successful DBDM implementation.

The ideal scenario would be to have an external expert coach in DBDM who would provide initial training and development courses throughout the ongoing process. However, for practical purposes, we will outline some of the things you can start implementing with your fellow teachers.


  • Involve your coordinator/academic leader. 
  • Create a team of 4-6 teachers (don’t forget to include your team leader).
  • As a team, create a timetable and schedule meetings approximately every three weeks. 
  • Arrange a specific place for the meetings to take place. Ideally, a quiet place where you can work collaboratively and have all the required technological resources at your disposal.
  • Design and establish specific meeting protocols, documents, and planning aids.

After the initial collection, analysis, and interpretation of data, you will follow an 8-step data intervention (Poortman & Schildkamp, 2016).

8 Step Data Intervention

  1. During the first meetings, focus on formulating a concrete and measurable problem statement that is backed up by the prior related data. For example: ‘We are not satisfied with the final examination grades in mathematics from the last three years for our students in the fourth grade, because these grades are lower than 6.0 on average; we would like this to be at least 6.5 in three years’ time’ (Poortman & Schildkamp, 2016, p. 426).
  2. In the following meeting(s), discuss possible causes for the problem you defined in Step 1. Choose the most plausible cause that can be influenced and researched.
  3. Collect all the prior quantitative data (e.g. standardized exam results, term papers, projects) and qualitative data (e.g. observation sheets, portfolios, anecdotal records) at your disposal related to the hypothesis formulated in Step 2.
  4. Check the quality of the data collected in Step 3. If the data are not valid and reliable, additional or new data need to be collected.
  5. Develop descriptive or correlational analyses of the data to test your hypothesis.
  6. Interpret and conclude: If the hypothesis has to be rejected, the team needs to formulate a new hypothesis, that is, continue with a ‘new round’ starting again with Step 2. The team continues with Step 7 if the hypothesis can be accepted.
  7. Design and implement improvement measures, based on the conclusions drawn in Step 6. The team formulates the measures that are needed to address the cause of the problem, and the goals related to these measures.
  8. Evaluate the measures taken. This step makes the ‘circle’ complete by evaluating the effectiveness of the measures taken in Step 7. To measure effectiveness, the same type of data used in Step 1 to determine the problem is collected again to determine whether the problem has been solved.

Remember, this is an ongoing process, so the implementation does not end here. After these 8 steps, you start over, reformulating the hypothesis if it was rejected or formulating a new one that focuses on another issue. Furthermore, bear in mind that to have a better chance of improved learner outcomes, you have to focus on formative assessments and the learning process.