AI Ethics for Researchers

AI Ethics Overview

"When we don't include everyone, we lose ideas, we lose innovation, we lose perspective."

- Joy Buolamwini

As with any data-driven application, AI is subject to bias and to the over- or underrepresentation of groups in its data. Ethical AI therefore means actively identifying and correcting for bias. Algorithmic bias is a particular concern when models are trained on datasets in which certain populations are underrepresented: if, for example, we trained our models mostly on one minority subgroup of a population, we could draw unrealistic conclusions about disease prevalence in the population as a whole. Generalising findings across populations requires diversity in the training datasets being used, so inclusivity and breadth are important ethical concerns.
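The sampling effect described above can be illustrated with a small sketch. All numbers here are invented for the example: a hypothetical population with two subgroups whose disease rates differ, where an estimate computed from a sample drawn mostly from one subgroup diverges from the population-wide prevalence.

```python
import random

random.seed(0)

# Hypothetical population: subgroup A (90%) with a 5% disease rate,
# subgroup B (10%) with a 20% rate. All figures are invented.
population = (
    [{"group": "A", "disease": random.random() < 0.05} for _ in range(9000)]
    + [{"group": "B", "disease": random.random() < 0.20} for _ in range(1000)]
)

def prevalence(sample):
    """Fraction of individuals in the sample with the disease."""
    return sum(p["disease"] for p in sample) / len(sample)

# A representative random sample tracks the population-wide rate...
representative = random.sample(population, 2000)

# ...while a sample drawn mostly from subgroup B overstates prevalence.
group_a = [p for p in population if p["group"] == "A"]
group_b = [p for p in population if p["group"] == "B"]
skewed = random.sample(group_b, 900) + random.sample(group_a, 100)

print(f"population prevalence:   {prevalence(population):.3f}")
print(f"representative estimate: {prevalence(representative):.3f}")
print(f"skewed-sample estimate:  {prevalence(skewed):.3f}")
```

The skewed estimate is driven by the sampling design, not by anything in the population itself; the same distortion carries over to any model trained on such a sample.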

AI may also shape access to scientific tools. If only well-funded institutions can afford the compute power to run large-scale AI models, the gap between resource-rich and resource-poor research settings may widen. Ethical deployment of AI in science includes advocating for accessible tools, fair resource allocation, and open-source alternatives wherever possible.