AI Ethics for Researchers

Trustworthy AI

How can we make AI more transparent?

As AI becomes embedded in science, earning and maintaining trust is essential. That trust spans everything from an AI model's technical performance through to its alignment with human values and social responsibility. Trustworthy AI systems are reliable, safe, fair, inclusive, transparent, accountable and secure. In the research sector, this means models must be not only accurate but also intelligible and open to scrutiny. Trust is built when researchers can explain how an AI system arrived at its output, when they can test a model's robustness across diverse datasets, and when there is a mechanism in place for recording the model's limitations.
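
A minimal sketch of one such robustness check is shown below. It trains a single model and then evaluates it separately on several datasets (here synthetic "cohorts"); the cohort names, the accuracy metric and the 0.8 reporting threshold are illustrative assumptions, not a prescribed standard. The point is that per-dataset reporting surfaces weaknesses that an aggregate score would average away, so they can be documented as limitations.

```python
# A hedged sketch of a robustness check across diverse datasets.
# The cohorts, model and 0.8 threshold are illustrative assumptions.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.datasets import make_classification


def evaluate_across_cohorts(model, cohorts):
    """Report performance separately for each (name, X, y) cohort,
    so weak spots are visible rather than averaged away."""
    return {name: accuracy_score(y, model.predict(X)) for name, X, y in cohorts}


# Illustrative data: three synthetic "cohorts" with different characteristics.
cohorts = []
for seed in (0, 1, 2):
    X, y = make_classification(n_samples=500, n_features=10,
                               class_sep=1.5 - 0.5 * seed, random_state=seed)
    cohorts.append((f"cohort_{seed}", X, y))

# Train on the first cohort only, then test behaviour on all of them.
X_train, y_train = cohorts[0][1], cohorts[0][2]
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, score in evaluate_across_cohorts(model, cohorts).items():
    flag = "" if score >= 0.8 else "  <-- document as a limitation"
    print(f"{name}: accuracy = {score:.2f}{flag}")
```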

In certain applications, AI makes consequential decisions that trigger a cascade of further outcomes and decisions, all of which combine to produce a final result. If any one link in this chain is erroneous, every subsequent component of the workflow can be disrupted. In medical diagnostics, for example, treatment recommendations could benefit from a network of cooperating AI models, each constrained to a specific task and trained on specific data. Having multiple narrowly scoped models cooperate in this way reduces the chance that a single model hallucinates or follows an unfavourable sequence of decisions as a result of juggling many different tasks at once.
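
A minimal sketch of this kind of staged workflow is shown below. The stage names, the 0.9 confidence threshold and the escalation behaviour are illustrative assumptions rather than a prescribed architecture; the point is simply that each narrowly scoped stage is checked before its output feeds the next, and low-confidence results are escalated for human review instead of propagating downstream.

```python
# A hedged sketch of a chain of task-specific stages with a cross-check
# between them. Stage names, threshold and escalation are assumptions.
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable


@dataclass
class StageResult:
    label: str
    confidence: float


def run_pipeline(record: dict,
                 stages: list[tuple[str, Callable[[dict], StageResult]]],
                 min_confidence: float = 0.9) -> dict:
    """Run each task-specific stage in order; stop and escalate to a human
    reviewer as soon as any stage falls below the confidence threshold,
    so one weak link cannot silently corrupt downstream decisions."""
    outputs = {}
    for name, stage in stages:
        result = stage(record)
        outputs[name] = result
        if result.confidence < min_confidence:
            outputs["status"] = f"escalated: low confidence at '{name}'"
            return outputs
        record = {**record, name: result.label}  # pass output downstream
    outputs["status"] = "completed"
    return outputs


# Illustrative stand-ins for specialised models
# (e.g. imaging triage, lesion classification, treatment suggestion).
stages = [
    ("triage", lambda r: StageResult("abnormal", 0.97)),
    ("classification", lambda r: StageResult("type_A", 0.85)),  # weak stage
    ("treatment", lambda r: StageResult("protocol_1", 0.95)),
]

print(run_pipeline({"patient_id": "demo"}, stages))
```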

Summary of Trustworthy AI in Research

  • Explainability: The system can clarify how it reached conclusions
  • Robustness: Reliable performance across varying conditions
  • Transparency: Clear documentation of data sources, methods, and limitations
  • Specificity: Models trained for particular tasks rather than general-purpose use
  • Human oversight: Meaningful human review of critical decisions