By Morten Hansen
Automation in education is a hot area of investment for venture capital. This is because automation holds the promise of taking existing education processes and delivering them “better, faster, cheaper,” which is a favoured maxim among early-stage funders. But what does the automation of education software and platforms look like, and how might we go about studying the commercial rationales of its development, and its wider effects on education and learning? These are some of the questions I explore with my co-author in Automating learning situations in edtech: techno-commercial logic of assetisation, as part of a forthcoming special issue on education and automation.
Automation on education platforms often relies on simple computations. Learning platforms may, for example, aggregate engagement data such as mouse movement, screen activity, and attendance for the purpose of ranking students. Students in the lowest-scoring percentiles could then ‘automatically’ be sent an email, in the hope that it would prompt a change in behaviour. Sociologists have rightly warned us that changing the conditions under which organisations group and assign worth to people and activities gives rise to a new moral economy. The specific contours of moral economies in concrete education technology companies are, of course, an empirical question. But, drawing on long and influential histories from the United States, Watters has shown how contemporary-sounding ideas of personalisation and automation can be traced back to pre-digital efforts among behaviourists such as Skinner and Pressey to improve the quality and efficiency of education. These days, behaviourist learning theories tend to be cloaked in more fashionable-sounding concepts and approaches such as nudging and machine learning, raising the need for researchers to stay attentive to what is and is not being automated, why, how, and grounded in what justificatory ideologies.
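To make the simplicity of such computations concrete, the percentile-and-email mechanism described above can be sketched in a few lines. This is a hypothetical illustration only: the field names, weights, cut-off, and intervention are assumptions for exposition, not taken from any specific platform.

```python
# Hypothetical sketch of a percentile-based 'automated' intervention.
# Field names, weights, and the 20% cut-off are illustrative assumptions.

def engagement_score(record):
    # Collapse crude engagement signals into a single number.
    return record["mouse_moves"] + record["screen_minutes"] + 10 * record["attendance"]

def flag_lowest_percentile(records, cutoff=0.2):
    """Return students in the lowest-scoring fraction of the cohort."""
    ranked = sorted(records, key=engagement_score)
    n_flagged = max(1, int(len(ranked) * cutoff))
    return ranked[:n_flagged]

students = [
    {"name": "A", "mouse_moves": 120, "screen_minutes": 300, "attendance": 9},
    {"name": "B", "mouse_moves": 10, "screen_minutes": 40, "attendance": 2},
    {"name": "C", "mouse_moves": 80, "screen_minutes": 200, "attendance": 7},
    {"name": "D", "mouse_moves": 5, "screen_minutes": 20, "attendance": 1},
    {"name": "E", "mouse_moves": 60, "screen_minutes": 150, "attendance": 6},
]

for student in flag_lowest_percentile(students):
    # On a real platform this step would queue an automated nudge email.
    print(f"Nudge email queued for student {student['name']}")
```

The point of the sketch is precisely its triviality: the ‘automation’ is a sort, a threshold, and a templated message, which is why the moral economy it enacts (who counts as disengaged, and what happens to them) deserves more scrutiny than the computation itself.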
Digitalisation—the process of constructing and capturing things or processes in the analogue world, and transforming them into bits and bytes in the digital world—is the first step in automating learning situations. The abilities of education technology companies to digitise things and processes are not just technical; they are also shaped by the powers of the groups whose things or activities they aim to digitise: powerful groups can contest digitisation, while more marginalised groups have to play with the cards they are dealt. Wealthy private publishers, for example, are more able and willing to mount a legal defence of the commercial rights associated with the digitalisation of their textbooks than individual students are in defending the capture of their essays. Our research suggests that this is one of many factors influencing what, how, and why education technology companies digitise.
Many types of automation further hinge on the idea that software can be designed in such ways that students can continuously, meaningfully, and autonomously engage with it, in ways that are seen to limit the need for additional input from other salaried humans. Education researchers have questioned the veracity of such claims, suggesting that the automation of computations does not necessarily result in less work for teachers or better learning for students. However, even in such cases, this does not mean that automated interventions do not occur.
In our research, we show that it is analytically useful to organise automatic interventions along two axes: computing temporality (i.e., does computational judgment emerge instantaneously through live cloud-based analytics, or does it rely on pre-emptive judgments?) and computing architecture (i.e., does the software rely on simple computations, or are more complex computational approaches used?). Examples of simple computations include decision trees and percentiles; more complex computations are associated with various forms of machine learning. Pre-emptive computing, which is currently more common among education technology companies than instantaneous computing, can only make computational judgments on past data that have been batch-aggregated, for example, the evening before. Examples of instantaneous computing include real-time relational dashboards [see the paper for a more detailed discussion, including our discussion of Figure 2].
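The temporality axis can be pictured with a minimal sketch contrasting the two modes. The data model, function names, and thresholds below are assumptions made for illustration; they do not describe any particular company's system.

```python
from datetime import date, timedelta

# Hypothetical sketch of the two computing temporalities.
# Data model, thresholds, and labels are illustrative assumptions.

def preemptive_judgment(batch_store, student_id):
    # Pre-emptive computing: the judgment rests on a snapshot that was
    # batch-aggregated earlier (e.g., the evening before) and is now fixed.
    yesterday = date.today() - timedelta(days=1)
    snapshot = batch_store[(student_id, yesterday)]
    return "at risk" if snapshot["logins"] < 3 else "on track"

def instantaneous_judgment(live_events, student_id):
    # Instantaneous computing: the judgment emerges from the live event
    # stream, as on a real-time relational dashboard.
    logins_today = sum(
        1 for e in live_events
        if e["student"] == student_id and e["type"] == "login"
    )
    return "at risk" if logins_today < 1 else "on track"
```

The architectural point carries through either way: whether the rule runs on yesterday's aggregate or today's stream, the computation itself here is a simple threshold, and the two axes can be varied independently.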
The two computing dimensions are helpful because they can guide educationalists seeking to understand the limits and possibilities of actually existing automation approaches. They helped us think through the differences and relationships between actual automation (the things software is currently doing) and imagined automation (the things people claim software is doing or will be able to do in the future). The relationship between actual and imagined computational approaches is constitutive of the strategic decisions and behaviours education technology companies engage in. As such, imaginations about tomorrow are informed by what is technically possible and impossible today, highlighting the importance of going beyond discursive approaches when studying the role of fictional expectations in modern markets and firms.
The commercial rationales for education automation are tricky to trace because they are dynamic (i.e., they change over time), are modulated by different timelines (e.g., six-month plans and five-year plans), and are aimed at different actors (e.g., regulators, educators, consumers, investors, and more). In our work, we considered how education technology was assetised, i.e., turned into an asset. While there are multiple aspects to such processes, we focused on the role automation played in framing education technology in such ways that it appeared meaningful for students, teachers, or institutions to engage with software in the asset state rather than consume it in the commodity state. This is a perspective that suggests we can think about digital products and processes as being designed to encourage continuous user engagement, centralised coordination, and the extraction of user data through surveillance practices, which in turn affects how control and ownership relations are structured. Going forward, more work is needed to unpack the spatial nature of data flows on education technology platforms, and the nexus between platforms’ centralised coordination and surveillance powers. Zuboff calls this the fundamental duality of technology:
“On the one hand, the technology can be applied to automating operations according to a logic that hardly differs from that of the nineteenth-century machine system—replace the human body with a technology that enables the same processes to be performed with more continuity and control. On the other, the same technology simultaneously generates information about the underlying productive and administrative processes through which an organization accomplishes its work. It provides a deeper level of transparency to activities that had been either partially or completely opaque” (Zuboff, 1988, p. 9).
With the rise of big tech, Zuboff, like many others, has only grown more alarmed about the role of surveillance in technology. However, the other part of the duality (i.e., ‘coordination’) never went away. It is a cornerstone of automation in education and its promise of ‘efficiency.’
At present, and on balance, it looks as if applications of artificial intelligence in education technology will also be reliant on centralised data infrastructures in order to amass the scale of user data needed for training algorithms. While companies do anonymise personal data, this way of learning still constitutes surveillance of students on an epic scale, raising a host of uncomfortable questions: What kind of citizens are we forming students to become if we as education institutions normalise and even shape our activities around a surveillance imperative? How will the decisions we make today in procurement and digital delivery shape the hearts and minds of students in the years to come? What kind of world are we co-creating here? A potential answer could be that we are socialising and desensitising the future workforce to incredible degrees of surveillance in their day-to-day jobs, as has been reported by journalists in the United States.
How does generative AI figure into this story?
We did not consider the implications of generative AI in the research because the companies we studied did not use the technology at the time we interviewed them. However, I would be remiss not to mention recent developments in generative AI, popularised by the launch of ChatGPT, DALL-E 2, and more. The former can answer questions posed in natural language (such as, ‘can you write a 4,000-word essay on the automation of education?’); the latter can generate pictures from natural language (such as, ‘make a picture of a dog eating a house’). While it is too early to say what the impact of these technologies will be on education, automation, and associated new moral economies, there is little doubt that new education assets will be built on the back of these technologies and that this will affect how education institutions organise and assess teaching. Many of the dilemmas will be practical in nature: how are institutions supposed to assess essays that could easily have been written by an algorithm? Others will be didactic, pedagogical, and philosophical: what will the implications be for learner agency in a context where algorithmic expression takes on a more proactive role in how students demonstrate comprehension, think, and solve problems? The importance of research about computational judgment will only increase, and we will need to reconsider old questions such as the value of human creation, as well as what skills we need to have, what skills are nice to have, and who should decide.
About the author: Morten is a KPP member and he tweets at @Hansen_edu