Aligning on a Vision & Purpose for AI

It all starts with creating the right vision

When we look at the global conversations around AI taking place among governments and the private sector, there is a lack of clarity and consistency on what the guiding principles should be for lawmakers, researchers, and businesses. There is general agreement on the need for regulatory guidance and for protections to be put in place regarding the use of AI/ML for health; however, before effective regulation can be created, there needs to be alignment on the vision and a purpose for the technology.

Regulation is built on pre-established values and principles concerning what the technology is, what it hopes to accomplish, and how it should be used. This includes identifying ways in which it will not be used and considering harms that should be guarded against.

OECD AI principles

There are understandable challenges in trying to craft a vision for a technology whose full capacity we do not yet know. What we can determine, however, are the principles and values we want this technology to operate under. So, what should the principles be for AI? To answer this question beyond just healthcare, the Organisation for Economic Co-operation and Development (OECD) established five value-based AI principles for the responsible stewardship of trustworthy AI.¹ Over 20 governments and leaders across all major sectors contributed to developing this set of principles with a focus on human-centered values. Many countries, including the U.S., have officially adopted them. A key quality of the principles is that they are specific enough to be applied in practice, yet broad and flexible enough to remain relevant regardless of how the field evolves. The principles are:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

  • AI systems should be designed in a way that respects the rules of law, human rights, democratic values and diversity, and they should include appropriate safeguards — for example, enabling human intervention where necessary — to ensure a fair and just society.

  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

  • AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.

  • Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

The OECD also provides specific recommendations to governments on how to put these value-based principles into action:

  • Facilitate public and private investment in research & development to spur innovation in trustworthy AI.

  • Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.

  • Ensure a policy environment that will open the way to deployment of trustworthy AI systems.

  • Empower people with the skills for AI and support workers for a fair transition.

  • Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

Given the broad nature of the OECD principles, how are they being applied by governments and other institutions? Many organizations have released their own sets of principles for AI use. Among 36 prominent guidelines on the governance of AI, reviewers identified eight overarching themes. These themes call out key areas that need to be addressed in any future policy or law governing the use of AI: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.² However, across the various white papers and strategy documents from government agencies, stakeholder organizations, and businesses, priorities among these major themes remain inconsistent, and some themes are not addressed at all. Let's take a look now at the overarching themes among different AI strategic frameworks.

  1. Recommendation of the Council on Artificial Intelligence - OECD/LEGAL/0449. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

  2. Fjeld J, Achten N, Hilligoss H, Nagy A, Srikumar M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Published online January 15, 2020. doi:10.2139/ssrn.3518482