When systems collide: The cautionary tale of Robodebt

Image generated by Amy Mowle using Midjourney

Labelled a 'crude and cruel mechanism, neither fair nor legal' by Royal Commissioner Catherine Holmes AC SC, the notorious Robodebt Scheme stands as a glaring example of public administration system failure1. But beyond the surface-level issues of legality and efficiency, it prompts deeper questions about the assumptions that underpin technological systems, and the values we implicitly endorse when such schemes are put into action.

In an increasingly complex world, public policy is faced with monumental challenges. The relentless digitisation of our social, political and economic lives has only accelerated and intensified this complexity, further transforming the landscape of policymaking. As governments grapple with this volatile context, the ability to understand and adapt to systemic effects becomes critical.

For many, the advent of automated decision-making (the process of making decisions using algorithms, artificial intelligence, or other computational methods, rather than through human judgement) has come to represent a promising advancement in fair, effective and efficient public administration2. Indeed, governments across the world are increasingly implementing automated decision-making tools to improve the delivery of various processes and services, optimise the allocation of resources, and minimise human error and subjectivity.

Yet despite the promises of objectivity and impartiality, automated decision-making systems are still developed and implemented within the prevailing political, social and cultural milieu, and as a result, are not immune to the broader systems of power and privilege that organise and give shape to our everyday lives3.

The recent release of the Royal Commission's report into the Robodebt Scheme serves as a cautionary tale about the promises and pitfalls of automated decision-making in the public service. Implemented in 2016, the Robodebt Scheme used an automated data-matching system to identify discrepancies between the income recipients reported to Centrelink (the Australian government agency that delivers social security payments) and the income recorded by the Australian Taxation Office. The system then issued debt notices to individuals suspected of owing money to the government, in a bid to recover supposed welfare overpayments.

Though initially lauded for its efficiency, the Robodebt system soon proved to be producing a high number of false debts against people in already precarious economic positions. Of course, the rigid and automated nature of the system, coupled with a lack of human oversight, was central to the Scheme's failure to adapt and respond to the nuances of the social system. Yet, as highlighted in the scathing report handed down by the Royal Commission, the enduring political narratives that frame welfare recipients as 'second-class citizens, criminals, and dole cheats' created fertile ground for the implementation of this unlawful method of 'income averaging' in the first place1.
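To make the mechanics of that failure concrete, the sketch below shows how 'income averaging' can manufacture a debt. It is a minimal illustration in Python, not the Scheme's actual code: the payment rate, income-free area and taper rate are hypothetical stand-ins for the real Centrelink rules.

```python
# Hypothetical illustration of income averaging (not the actual
# Robodebt code, and not real Centrelink payment rules).

FORTNIGHTS = 26
BASE_RATE = 560.0         # hypothetical full fortnightly payment
INCOME_FREE_AREA = 300.0  # hypothetical fortnightly income-free threshold
TAPER_RATE = 0.5          # hypothetical taper above the threshold

def entitlement(fortnightly_income: float) -> float:
    """Payment due for one fortnight under a simple means test."""
    reduction = max(0.0, fortnightly_income - INCOME_FREE_AREA) * TAPER_RATE
    return max(0.0, BASE_RATE - reduction)

# A casual worker: 16 fortnights on payment, correctly reporting $0
# income, then 10 fortnights of work at $1,300 each while off payment.
claimed_fortnights = 16
annual_ato_income = 10 * 1300.0  # $13,000 reported to the tax office

# What was actually (and correctly) paid across the claimed fortnights.
paid = claimed_fortnights * entitlement(0.0)

# Income averaging: smear the annual figure evenly over all 26
# fortnights, then recompute the entitlement as if that income had
# been earned while on payment.
averaged_income = annual_ato_income / FORTNIGHTS  # $500 per fortnight
recomputed = claimed_fortnights * entitlement(averaged_income)

print(f"Actually paid (correct):  ${paid:,.2f}")        # $8,960.00
print(f"Recomputed by averaging:  ${recomputed:,.2f}")  # $7,360.00
print(f"Phantom 'debt' raised:    ${paid - recomputed:,.2f}")  # $1,600.00
```

Nothing about the worker's circumstances differs between the two calculations; the $1,600 'debt' exists only because the averaging step assumes income was earned evenly across the year.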

Recognising the role played by political narratives in shaping the mental models, attitudes and values that give rise to norms in a system, one of the key recommendations outlined in the report is that the design of policy and process should avoid ‘language and conduct which reinforces stigma and shame associated with the receipt of government support when it is needed’1.

Mental models, from a systems perspective, are essentially the cognitive frameworks we use to process and understand our experiences4. These can be likened to internal maps that help us navigate and make sense of the world, filtering and organising incoming information based on various factors such as culture, education, and socialisation. While these models aid in interpreting and responding to stimuli, they can also inadvertently omit certain data or introduce biases, as they are shaped by our individual and collective experiences5.

Political narratives and discourse play a powerful role in shaping the mental models of citizens, providing a structure through which we make sense of complex political issues. They can shape how individuals perceive these issues, anchoring their understanding to the presented narrative, and normalising or sanctioning certain policies and practices, including potentially harmful ones6.

In her report, Royal Commissioner Holmes criticised the ‘easy populism’ of anti-welfare rhetoric that endures on both sides of the Australian political divide, and highlighted the role played by political narratives in sanctioning and normalising the implementation of punitive, often discriminatory systems like Robodebt1. Often, these narratives capitalise on an 'us versus them' divide, forging in-groups and out-groups and paving the way for the dehumanisation of those labelled 'others', making it simpler to rationalise potentially detrimental policies. This underscores a growing acknowledgement that wider public discourses and narratives considerably influence behaviour within a system, including determining which actions are accepted or sanctioned.

While the transformative potential of automated decision-making is clear, the fallout from the Robodebt Scheme should serve as a shot across the bow for governments attempting to grapple with an increasingly complex world. It's crucial to remember that the implementation of automated decision-making is not a mere technological consideration, but one intricately interwoven with social, political, and economic threads. In essence, a more holistic approach is needed, one that goes beyond technical fixes to consider the wider context in which these systems operate. It's essential that we scrutinise the potential consequences for those who rely on such services, acknowledging that technology, however advanced, is still deployed within the context of our deeply rooted and often unexamined mental models4.

 

Dr Amy Mowle, Research Officer

Dr Therese Riley, Associate Professor of Complex Community Interventions

Mitchell Institute, Victoria University


References

  1. Holmes, C., 2023, Royal Commission into the Robodebt Scheme, Online. Available from: <https://robodebt.royalcommission.gov.au/system/files/2023-07/report_of-the-royal-commission-into-the-robodebt-scheme.pdf>.

  2. van der Voort, H.J., Klievink, A.J., Arnaboldi, M., Meijer, A.J., 2019, ‘Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making?’, Government Information Quarterly, 36:27-38.

  3. Kuziemski, M., Misuraca, G., 2020, ‘AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings’, Telecommunications Policy, 44(6):101976.

  4. Senge, P.M., 1990, The Fifth Discipline: The Art and Practice of the Learning Organization, Doubleday, New York.

  5. Jones, N.A., Ross, H., Lynam, T., Perez, P., Leitch, A., 2011, ‘Mental models: An interdisciplinary synthesis of theory and methods’, Ecology and Society, 16(1):46.

  6. World Bank, 2015, ‘Thinking with mental models’, in World Development Report 2015: Mind, Society, and Behavior, World Bank, Washington, DC.
