Experts Discuss How International AI Principles Adopted by the OECD Are Yielding Trustworthy AI

In a recent webinar hosted by George Washington University’s Digital Trade and Data Governance Hub, panelists discussed how the Organization for Economic Cooperation and Development (OECD) has implemented and monitored its artificial intelligence (AI) principles over the past year. In May 2019, the OECD, an intergovernmental economic organization with 37 member countries, released a set of AI principles with the goal of fostering a “policy ecosystem for trustworthy AI that benefits people and the planet.” The principles, developed by the first multi-stakeholder AI expert group, were intended to serve as a non-binding but political commitment to the responsible implementation of AI. Since then, the principles have been adopted by the 37 OECD members plus 7 partner countries, making them the first internationally accepted principles on AI, and they have also been endorsed by other organizations, including the G20.

The OECD principles are a set of high-level principles that give countries directionally correct guidance for developing and deploying AI and help researchers identify gaps and areas not previously covered in AI research and implementation. However, as countries consider developing and deploying AI systems, it is important to recognize that risks to the principles can arise throughout the entire implementation process. To that end, policymakers and independent organizations should create systems that identify potential risks, along with risk mitigation measures, to improve the transparency and accountability of AI systems.

Implementation of the principles is monitored through a peer review process among OECD countries, and the OECD is currently forming three working groups to help move from implementation to practice. The first working group will look at the classification of AI systems so that governments can measure the development of AI. The second group will focus on implementation guidance for trustworthy AI systems, examining the governance processes used in different organizations, researching standards, practices, and codes already in use, and identifying gaps in regulation. The third working group will provide implementation guidance on national policies to build an understanding of what already exists and what still needs to be done with regard to AI. The panelists also discussed how the principles could more effectively move from implementation to practice through trusted third-party contributors such as the National Institute of Standards and Technology (NIST), which has played a major role in benchmarking performance on complex processing tasks to create a level playing field of understanding across providers and industry applications.

While the G7 does not yet have its own principles, it has a plan that stems from the best practices of the OECD principles. Nicolas Mialhe, Founder and President of The Future Society, spoke about how the G7 principles will come from shared democratic, liberal, and social values, with a strong emphasis on human rights and democracy. Because all G7 countries are members of the OECD, these principles will serve as a framework for identifying best practices in the development and deployment of AI technologies throughout the world. Furthermore, Mialhe explained that the OECD principles are not necessarily universal, and that their implementation and interpretation will look different at different levels and among different AI actors.

The webinar can be found at: https://www.youtube.com/watch?v=ssA_l5iXPeM.

More on the OECD AI principles can be found at: https://www.oecd.org/going-digital/ai/principles/.