The Next Frontier: Transparent, Explainable, and Contestable AI for Social Good
PhD project
PhD student: Amanda M. Horzyk
Supervisors:
Lachlan Urquhart (Edinburgh Law School), Burkhard Schafer (Edinburgh Law School), James Garforth (School of Informatics)
Publications:
Amanda M. Horzyk. 2026. The Next Frontier: Transparent, Explainable and Contestable AI for Social Good. Plenary Talk, 25th International Conference on Artificial Intelligence and Soft Computing (ICAISC), June 2026. https://doi.org/10.13140/RG.2.2.10170.66249
Amanda M. Horzyk. 2025. (In)adequate Data Protection and Algorithmic Predictive Analysis: An Unresolved Battle. In Proceedings of the 2025 International Joint Conference on Neural Networks (IJCNN), Rome, Italy, 1–8. https://doi.org/10.1109/IJCNN64981.2025.11228804
Amanda M. Horzyk. 2025. XAI Section Presentation: Policy, Regulation and Ethics in Explainable AI. Workshop Presentation, 2025 International Joint Conference on Neural Networks (IJCNN), Rome, Italy. https://doi.org/10.13140/RG.2.2.33349.56805
Amanda M. Horzyk. 2025. Challenging Obligations: Investigating the ‘AI Literacy’ Mandate for Human Oversight. In AI Governance Ethics: Artificial Intelligence with Shared Values and Rules, Christoph Stückelberger, María Rocamora Merchán, Pavan Duggal, and Divya Singh (Eds.). Globethics. https://doi.org/10.58863/20.500.12424/4318987
Forthcoming:
Amanda M. Horzyk. 2025. How You Implement Matters: Navigating the XAI Implementation Journey. Manuscript under review, November 2025.
Patrice Chazerand, Virginia Ghiara, Amanda M. Horzyk, Rónán Kennedy, Camilah Leclère, Xinpeng Liu, Flavia Massucci, Burkhard Schafer, Jacob Livingston Slosser, and Nicole Tschap. 2025. The AI4People Playbook: Implementing a Proportional Approach to Ethical AI Requirements. Whitepaper (work-in-progress). AI4People.
In today’s saturated black-box AI market, it is time to challenge the status quo and envision new architectures of trust. Speculating on futures for responsible NLP, this project draws fresh insights from global expert and diplomatic convenings at the intersection of law, policy, and ethics in AI. Under political, legal, and technical pressures, our practices and collective agenda continue to evolve. Through a series of expert-centred studies, we explore how these forces drive the emerging mandate for transparency, explainability, and contestability: principles that have become close to non-negotiable in responsible AI development. System design mirrors a decision-making pipeline, a sequence of architectural and organisational choices, each carrying its own value assumptions. How we make these choices determines what we optimise for and who participates in shaping outcomes. The project aims to develop a holistic framing of explainability that responds to today’s regulatory, societal, and technical momentum. Breakthroughs emerge through co-creation that is multi-sectoral, multilateral, multidisciplinary, and multi-stakeholder. Continuous improvement through implementation, feedback, and literacy is attainable when our systems are auditable, verifiable, and improvable. This forms the foundation for societal progress and adoption grounded in trust.
Collaborators: Royal Botanic Garden Edinburgh (RBGE), Woodland Trust Nature’s Calendar (WTNC), and Joint Nature Conservation Committee (JNCC)
Funder: UKRI AI Centre for Doctoral Training in Designing Responsible NLP
Project dates: 2024 –