Can Big Data Bridge the Gap between Knowing and Doing?

By Susan Robertson

With the Education World Forum 2019 just around the corner, I took a quick look at its organising theme: What we should do with what we know: developing educational policy for implementation, impact and exponential success. There's little doubt that were our knowledge of education systems around the world stacked up, article upon article, book upon book, we'd be able to chart a path to the moon and back, and still have copy to spare. So is doing something with what we know really the issue? And given the large amounts of data, knowledge and computational capability at our disposal, what exactly is standing in the way?

I was reminded of the World Bank's efforts to maximise its use of knowledge in the early 2000s. The Bank, still smarting from widely shared criticism of its heavy-handed implementation of structural adjustment programmes in low-income countries, sought to reinvent itself as a Knowledge Bank. The line was that if the World Bank knew what the World Bank knew, it would be better placed to respond to client needs, and in particular to its major mission, the eradication of poverty. The Knowledge Sharing Program that emerged aimed to capture and systematically organise the wealth of knowledge amongst the Bank's staff, clients and partners, to make this knowledge available, and to encourage the right links between individuals and groups facing similar development challenges. This new knowledge agenda would deliver speed, quality and innovation.

New initiatives in the Bank, including the Knowledge for Development – or K4D – programme, also placed knowledge at the centre. The promise was that, through data-driven indicators, countries around the world would be able to chart their development into knowledge-based economies and societies.

The Bank was not alone in prioritising knowledge and its management. In the late 1990s, the Paris-based Organisation for Economic Co-operation and Development (OECD) launched its own knowledge management initiative. This included the development of a set of indicators, similar to the Bank's, on what being a knowledge-based economy meant, and how this knowledge could be used to help a country learn and develop into one. Research and knowledge management were to become a key means of both promoting and realising innovation.

Since those early days, the Bank and the OECD have invested considerable resources in developing indicators aimed at measuring, representing and shaping policy in national settings, so as to achieve impact. In the education world, the Bank's SABER project and the OECD's expanding programme of large-scale assessments of students, teachers and adults in up to 80 countries are seen as providing data to inform conversations about policy priorities and practice.

Given further ballast by the 'big data' trope – that with big data we can resolve 'big' problems in our systems, including education – we seem to have returned to the dreamy promise of the early 2000s: that as long as we can 'manage' knowledge, we can more effectively solve the problems 'out there' that continue to remain what some in the sector call wicked problems. That the black box of tricks now has BIG in front of the word 'data' has somehow managed to seduce us into believing that we are very close to finding the solution.

Don’t get me wrong. I’m not against data, or big data for that matter. Rather, what is missing from the Bank’s and the OECD’s analyses is any account of the claims to representation being made, of the tendency toward convergent notions of competence and policy solutions across what are incredibly diverse systems, and of the distinct effects that the instruments themselves have on the problem. Take, for example, the practice of ranking countries from top to bottom, as in the OECD’s PISA survey of 15-year-olds’ performance in mathematics, science and reading, and more recently critical thinking and global competences. To begin with, education systems are represented as national, and whilst some might be, many are not; constitutional responsibility for education often lies at a different scale, such as the state or province. How these rankings might then be used to shape policies for impact and success is the missing middle.

But the bigger and more important set of issues, as I see it, is that ‘what is known’ about education systems via these large data sets is a considerable distance from the complex realities on the ground. This is because what finally gets included in the assessment tool is something that all countries can live with: the lowest common denominator. In one fell swoop the education world is flattened, and worlds astonishingly different from each other are rendered equivalent and measurable. In one fell swoop, too, the complex knowledge about an education system is reduced to that which is measurable, and thus measured, and from there measured globally. No amount of bigness in the data can overcome that.

And then there is the problem of the policy instrument – as a tool of governing – which has effects of its own, as the researchers Pierre Lascoumes and Patrick Le Galès have pointed out. We might compare a barometer of student satisfaction with a vertically organised ranking of levels of student satisfaction. The barometer measures the ‘temperature’, or health, of an education system, flagging a country or institution only when it enters a zone that warrants attention. A vertically organised ranking using ordinal scales will present 1, 2, 3 and so on in descending order (best to worst), when in truth the differences between ranks 1, 2 and 3 may be so marginal as to make no difference at all. The absence of any real difference is replaced with a new ‘truth’: that there are distinct differences between those who have been ranked, and, further, that the education system or institution at the top is the model to be emulated. Through comparing, we learn that it is a competition to get to the top, and that there is only one set of rules that will count. The policies and practices to be emulated in this new science of data-driven education governance are conveniently packaged up by the OECD and the Bank, and delivered via reports, peer learning, and so on.
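To make the point concrete, here is a minimal sketch – in Python, with entirely invented scores, not real assessment results – of how an ordinal ranking turns marginal differences into an apparently decisive league table:

```python
# Hypothetical mean scores for five education systems (invented for
# illustration; these are not real PISA results).
scores = {"A": 502.1, "B": 501.8, "C": 501.5, "D": 476.0, "E": 475.7}

# Rank systems from best to worst on the raw score.
ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)

for rank, (system, score) in enumerate(ranked, start=1):
    print(f"{rank}. {system}: {score}")

# The output is a tidy league table, 1 to 5, as if each step were equally
# meaningful. Yet A, B and C differ by fractions of a point, well within
# any plausible margin of error, while the gap between C and D is large.
# The ordinal scale erases that distinction entirely.
```

Nothing in the data compels this presentation; the ranking is a choice of instrument, and the instrument manufactures the appearance of distinct, ordered differences.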

Can big data bridge the gap between knowing and doing? It could. But only if those collecting and using big data to shape national conversations fess up to the limitations of these instruments. They might also reflect on the damage that competition, as a tool for governing, does to education systems. If there are only winners and losers in a vertically organised race, then those at the bottom are likely to remain at the bottom no matter how hard they try. Being ‘strategically ignorant’ – a term coined by Linsey McGoey – of the effects of excessive competition in education systems suggests that, for agencies like the Bank and the OECD, some knowledge is an inconvenient truth. Let’s genuinely consider the consequences of an excess of competition in the sector, acknowledge its impact, and think of other ways in which success might be secured. Now there’s a real challenge: to really do something impactful and consequential with what we know.

This was originally published on the Education World Forum.