The final speakers in this session at the ZeMKI 20th anniversary conference in Bremen are Annekatrin Bock and Dan Verständig, whose focus is on programmed futures in education. We use complex technological systems every day, but must be aware of when they become dysfunctional; it is when routines break and crises happen that education happens. The promise of education is to address uncertainty and enable us to navigate it, but what education provides also serves to negate certain possibilities.
Uncertainty is the starting point for pedagogical action: it requires such action. But it is also an outcome of such action: education opens up a variety of options for the learner, through which they may regain certainty, but it should never prescribe a single option for them. This introduces a point of unpredictability that negates intention, and such unpredictability also reproduces inequalities; it further shapes our perceptions of the world we are living in.
In education, AI tools promise to optimise grading processes, but they are trained on past data and thereby reproduce past inequalities; this is an algorithmic reproduction of inequality that nonetheless makes claims of objectivity. When students are seen only as data points rather than as questioning subjects, the possibility of change is negated, and the result is a technocratic fantasy; we must find better concepts of the future.
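A minimal, hypothetical sketch (not part of the talk) of the mechanism being critiqued here: if a grading model is fitted to historical grades that were systematically biased against one group, its supposedly objective predictions simply reproduce that bias for new students. All names and numbers below are illustrative assumptions.

```python
# Illustrative only: a model trained on biased historical grades
# reproduces that bias when scoring new students.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

ability = rng.normal(0, 1, n)      # latent ability, identical across groups
group = rng.integers(0, 2, n)      # 0 = majority, 1 = marginalised group

# Hypothetical historical grading: group 1 was systematically marked down.
historical_grade = ability - 0.5 * group + rng.normal(0, 0.2, n)

# "Objective" model fitted to the historical record (ordinary least squares).
X = np.column_stack([ability, group, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, historical_grade, rcond=None)

# Two new students with identical ability, differing only in group membership:
new_students = np.array([[1.0, 0, 1.0],   # majority student
                         [1.0, 1, 1.0]])  # marginalised student
print(new_students @ coef)  # the second prediction is roughly 0.5 points lower
```

Two students of identical ability receive systematically different predicted grades, which is precisely the algorithmic reproduction of inequality described above.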
Problematic concepts of the future include the idea that what we learn from the past can shape our future for the better; this historical determinism is deeply problematic because it builds on survivorship bias and creates path dependencies. AI systems do not know this when they are trained on historical data: they assume that the past was rational and objective, and free of the biases and marginalisation that targeted specific groups in society.
A second problematic concept of the future is that we must prepare ourselves well for the future that is to come; this presents future developments as linear and predictable, positioning certain outcomes as inescapable. But the future is plural, and we must be open to many possibilities.
A third problematic concept is that educational technologies can predict such futures; in reality, they create them by embedding biases towards particular, highly valued learning styles or outcomes, and in doing so exclude students who learn and express themselves in different ways.
We should be interested in this because the promise of education ultimately lies in the courage to question – yet data-driven systems often undermine this. So we should not ask how such systems can predict the future, but focus instead on who gets to predict, challenge, and build the future. Researchers must intervene in the making of better possible futures, and ask whose futures are being built, supported, or erased in the process. We must reject the myths of the neutral algorithm, the singular past, and the predictable future.