Successful AI Ethics & Governance at Scale: Bridging The Interpretation Gap

By Jason Tamara Widjaja | October 2024


AI ethics and governance has become a noisy space.

At last count, the OECD tracker lists over 1,800 national-level documents covering initiatives, policies, frameworks, and strategies as of September 2024 (and there seem to be consultants and influencers opining on every one).

However, as Mittelstadt (2021) puts it, with the succinctness that only academic understatement can deliver, principles alone cannot guarantee ethical AI.

Despite the abundance of high-level guidance, there remains a notable gap between policy and real-world implementation. But why is this the case, and how should data science and AI leaders think about it?

In this series, I aim to advance the maturity of practical AI ethics and governance within organizations by breaking this gap down into three components, drawing on research and real-world experience to propose strategies and structures that have worked in implementing AI ethics and governance capabilities at scale.

The first gap I cover is the interpretation gap, which arises from the challenge of applying principles expressed in vague language such as ‘human centricity’ and…
