UKRI Trustworthy Autonomous Systems Node in Governance and Regulation

Outputs from this project:

Lachlan D. Urquhart, Glenn McGarry, and Andy Crabtree. 2022. Legal Provocations for HCI in the Design and Development of Trustworthy Autonomous Systems. In Nordic Human-Computer Interaction Conference (NordiCHI ’22). Association for Computing Machinery, New York, NY, USA, Article 75, 1–12. https://doi.org/10.1145/3546155.3546690

Andy Crabtree, Glenn McGarry, and Lachlan Urquhart. 2024. AI and the Iterable Epistopics of Risk. AI and Society 40, 1425–1438. https://doi.org/10.1007/s00146-024-02021-y

Glenn McGarry, Andrew Crabtree, Alan Chamberlain, and Lachlan D Urquhart. 2024. Responsibility and Regulation: Exploring Social Measures of Trust in Medical AI. In Proceedings of the Second International Symposium on Trustworthy Autonomous Systems (TAS ’24). Association for Computing Machinery, New York, NY, USA, Article 27, 1–5. https://doi.org/10.1145/3686038.3686041

Yiwei Lu, Zhe Yu, Yuhui Lin, Burkhard Schafer, Andrew Ireland, and Lachlan Urquhart. 2022. An Argumentation and Ontology Based Legal Support System for AI Vehicle Design. Legal Knowledge and Information Systems, 213–218. IOS Press. https://ebooks.iospress.nl/doi/10.3233/FAIA220469

Yiwei Lu, Zhe Yu, Yuhui Lin, Burkhard Schafer, Andrew Ireland, and Lachlan Urquhart. 2022. Handling Inconsistent and Uncertain Legal Reasoning for AI Vehicles Design. In Proceedings of the Workshop on Methodologies for Translating Legal Norms into Formal Representations (LN2FR 2022), 76–89.

This Node is developing a framework for the regulation of autonomous systems, one that can adapt to the intricacies of the environments in which those systems operate.

For autonomous systems to become trustworthy, frameworks of informal and formal governance must be co-designed with the systems themselves. Our Node explores the design of frameworks for responsibility and accountability, and of tools that help the ecosystem of developers and regulators embed these values in systems.

Our Node brings together computer science and AI specialists, legal scholars, AI ethicists, as well as experts in science and technology studies and design ethnography. Together we are developing a novel software engineering and governance methodology that includes: 

  • New frameworks that help bridge gaps between legal and ethical principles (including emerging questions around privacy, fairness, accountability, and transparency) and an autonomous systems design process that entails rapid iterations driven by emerging technologies (including, e.g., machine learning in-the-loop decision making systems). 
  • New tools for an ecosystem of regulators, developers, and trusted third parties that address not only functionality and correctness, but also questions of how systems fail, and how the evidence associated with failure can be managed to facilitate better governance. 
  • An evidence base drawn from full-cycle case studies of taking autonomous systems through regulatory processes, as experienced by our partners, to inform policy discussion of reflexive regulation practices. 

Funder: EPSRC

Project dates: 1st November 2020 – 30th April 2023