
5/13/19 - Requirements for AI Governance and the Role for Digital Ethics

By Leigh Feldman and Erik Larson

The deployment of advanced technology, including artificial intelligence (AI), is driving innovation and growth within organizations. At the same time, AI and related tools are creating new challenges for the responsible collection, use, and sharing of information. While new and emerging regulations, such as the EU General Data Protection Regulation, are beginning to place rules around AI and automated algorithmic processing, guidance for organizations can best be described as nascent.

In the rush to deploy new AI-enhanced products and services, we have seen many high-profile missteps and unintended consequences. AI presents risk and governance functions with challenges that differ in type or degree from those posed by non-AI models. Consequently, as organizations deploy AI more easily and widely, they need to review and modify their risk frameworks to ensure that they meet both customer and regulator expectations in the areas where AI risk may manifest, including but not limited to:

  • Data governance and privacy
  • Operational, model, and third-party risk
  • Technology governance and information security
  • Strategy management and business planning
  • Compliance, legal, and reputation risk
  • Financial risk

One approach to managing this new risk and uncertainty is for organizations to stake out their own principles regarding the use of these new tools and to ensure that those principles are adequately reflected in a statement of risk appetite. In developing AI principles, firms may want to apply a digital ethics lens. Digital ethics is a relatively new, ethics-based approach to evaluating risks and concerns related to data, algorithms, and corresponding practices. It is increasingly important because of an ever-evolving and challenging digital ecosystem marked by a combination of:

  • Vast collection of data via mobile and Internet of Things devices
  • Expansive use of low-cost cloud storage
  • Increased use of more powerful AI and big-data analytic tools
  • Increasing ability to (re)link data to individuals

Digital ethics addresses a gap in the risk decision-making approach historically common to most organizations. It encourages organizations to take a comprehensive view of the challenges of data processing to foster optimal decision-making, taking into account the organization, individuals, and society.

Recent European Commission Guidance on Trustworthy AI

Over the past couple of years, there have been numerous articles, principles, and commentaries on digital ethics and the responsible use of AI. In April 2019, the European Commission published ethics guidelines for trustworthy AI,1 a report that pulls many of these threads together into a comprehensive document. The EC report discusses both broad principles and specific requirements that organizations can use to build out trustworthy AI frameworks guided by ethical considerations. “Trustworthy AI,” the report says, should be lawful, ethical, and robust.

Broadly speaking, the EC report states that organizations should adhere to the following three ethical imperatives while utilizing AI:

  1. Respect for human autonomy: Organizations should ensure that AI systems allow individuals to maintain control of their lives and serve to enhance human abilities.
  2. Prevention of harm: AI systems should not cause harm to individuals.
  3. Fairness and explicability: Inputs and outputs of AI systems must be fair to individuals, and organizations should be transparent about how data is processed and how algorithms reach certain decisions.

These three guiding principles are supported by seven key requirements within the report’s “trustworthy AI assessment list.” These requirements should be taken into consideration when assessing and designing AI, from a privacy and ethics-by-design perspective, and when assessing ethical standards for data processing activities:

  1. Human agency and oversight: Organizations should take steps to mitigate any negative impact AI systems may have on fundamental human rights. Individuals should have control over how AI systems process their data if the processing could affect their behavior or actions.
  2. Technical robustness and safety: AI systems should, by default, prevent harm to companies, employees, and customers. AI systems should be resilient and secure, protecting against attacks that could involve the theft or manipulation of the individual data they process.
  3. Privacy and data governance: AI systems should protect the confidentiality, integrity, and accuracy of individual data they process.
  4. Transparency: Organizations should communicate to stakeholders how AI systems make decisions based on the individual data they process. Additionally, individuals should be made aware when they are communicating with AI systems.
  5. Diversity, non-discrimination, and fairness: AI systems should promote fairness both by ensuring that all individuals have access to the systems and by processing everyone’s data in a fair way.
  6. Environmental and societal well-being: Organizations should consider the direct and indirect societal impacts of the processing activities their AI systems conduct.
  7. Accountability: Organizations should remain accountable for all aspects of the data processing their AI systems conduct.

The guidance also clearly emphasizes the importance of communicating AI practices and policies to clients, stakeholders, and employees. As AI use quickly expands, many organizations have not yet incorporated the above considerations into their governance and risk frameworks and, therefore, may not be asking the right questions or performing the right analysis to support their desired outcomes.

How We Can Help

Based on recent digital ethics and AI governance reports, papers, and principles, including the EC report, Promontory has developed a methodology and framework that can assist organizations in identifying potential AI risk, and then developing and operationalizing AI principles and policies.

We do this by:

  1. Assessing an organization’s current state and desired AI risk appetite. We execute this through document reviews, interviews, and facilitated workshops. Our approach includes the use of a proprietary digital ethics impact assessment, which generally applies a broader lens than an organization’s current risk assessments. This assessment helps organizations document pre-emptive and proactive efforts for managing possible risks to their consumers, organizations, and society. It also provides a roadmap for organizations to better understand the impacts and consequences of their actions from a digital ethics perspective, and to inform digital ethics choices around AI, governance, policy, and process.
  2. Building a tailored set of reference standards for AI governance, including ethical considerations, to assess specific areas.
  3. Developing strategic enhancements to governance, policies, procedures, and processes.
  4. Assisting with the deployment and implementation of an AI risk and ethics assessment as well as a control process to monitor existing and review new products, services, and processes.

Contact Us

Should you have any questions regarding AI governance, digital ethics, or our methodology, please do not hesitate to contact us.

Leigh Feldman
Managing Director
+1 212 365 6976

C. Erik Larson, PhD
Managing Director and Global Lead for Quantitative Methodologies and Analytics
+1 202 384 1029

Robert Grosvenor
Managing Director
+44 207 997 3407

Julie Williams
Managing Director
+1 202 384 1087

Nick Kiritz
Director and Lead Expert for Model Risk Management
+1 202 370 0401

Jeremy Berkowitz
+1 202 294 6550


  1. “Ethics guidelines for trustworthy AI,” European Commission (April 8, 2019).