Companies yearn for rigorous AI accountability tools. Following a burst of generative AI advances, many are racing forward to develop and deploy the technology before they miss out on market opportunities. But criticism that AI outcomes are risky and unpredictable has also grown louder, and some tech leaders have called for more restraint.
Amidst this mainstreaming of AI, a five-year-old nonprofit organization called the Responsible Artificial Intelligence Institute (RAII) is launching what it says is the first independent certification program for responsible AI implementation.
Developed with the help of RAII’s 18 corporate members and other experts, the organization’s certification program is now wrapping up an initial pilot evaluation of automated lending. Canada’s national standards body has overseen the pilot and appears likely to approve the RAII’s conformity assessment program by summer 2023 as a standard set of requirements, which can be configured for specific AI systems. On April 11, 2023, the U.S. federal government cited the RAII’s certification several times in its new request for public comments on increasing AI accountability.
The Cybersecurity Law Report spoke with the RAI Institute’s director of partnerships and market development, Alyssa Lefaivre Škopac, about the certification program’s features, its focus on specific use cases, and the three ways that companies can use it – plus, how RAII aims to keep up with the fast changes in generative AI.
See “Compliance Checklist for AI and Machine Learning” (Jan. 5, 2022).
A Snapshot of the RAII’s Certifications
Certifications Only for Specific AI Uses
RAII's program will certify only a specific AI use as meeting standards for responsible AI, rather than an entire company. It calls its review a System-Level Assessment (SLA). The RAII's first three SLA certifications are for automated employment decisions (a major issue for all industries), lending (finance), and skin-disease detection (health care).
Responsible AI is a blanket label for AI efforts that are trustworthy, reliable, law-abiding, ethical and exhibit other virtues. The RAII is also developing certifications for procurement of AI (all industries), automated decisions about health care access, and automated financial collections.
Each RAII assessment looks at six "dimensions" of the AI use, each corresponding to a key risk area: "Explainability and Interpretability, Data and System Operations, Accountability, Consumer Protection, Bias and Fairness, and Robustness.”
Each certification involves 89 to 100 questions overall, which RAII has aimed to align with regulations and expert opinion. Assessors score maturity on each question from 0 to 5, and those scores are aggregated into an overall score for each dimension.
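The aggregation step can be illustrated with a short sketch. RAII has not published its actual weighting scheme, so the simple per-dimension averaging below, along with the function and dimension names used, is purely hypothetical:

```python
# Hypothetical sketch of per-dimension maturity scoring. Assumes simple
# averaging of 0-5 question scores; RAII's actual weighting is not public.
from collections import defaultdict

def dimension_scores(answers):
    """answers: list of (dimension, score) pairs, each score from 0 to 5."""
    by_dimension = defaultdict(list)
    for dimension, score in answers:
        if not 0 <= score <= 5:
            raise ValueError(f"score out of range: {score}")
        by_dimension[dimension].append(score)
    # Average the maturity scores within each dimension.
    return {dim: sum(s) / len(s) for dim, s in by_dimension.items()}

example = [
    ("Bias and Fairness", 3), ("Bias and Fairness", 4),
    ("Accountability", 5), ("Accountability", 2),
]
# dimension_scores(example) yields 3.5 for both dimensions here.
```

A real assessment would presumably weight questions and dimensions differently per use case, as the interview below notes.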
Companies can use the SLA as:
- a self-evaluation tool to guide compliance efforts;
- an attestation to their partners; and/or
- an independent and accredited audit for certification.
The Standards Council of Canada (SCC) has overseen the automated lending pilot, which involves Ernst & Young (EY) using RAII’s SLA to assess the automated lending of ATB Financial, a commercial bank. The SCC appears ready to approve RAII’s conformity assessment structure by summer 2023 as a standard set of requirements for responsible AI in lending systems.
Once approved, EY and other accredited independent conformity assessors (meeting the ISO 17065 standard) then could license RAII’s program to certify financial companies’ use of AI in lending.
In addition to the SLA, RAII has developed two other assessments, which review companies' AI use in detail but do not lead to certification. The Organizational Maturity Assessment (OMA) assesses an organization’s AI practices across five dimensions of maturity: “Policy and Governance, Strategy and Leadership, People and Training, AI System Documentation, and Procurement Practices.” The Supplier Maturity Assessment (SMA) evaluates AI vendors using the same five aspects of organizational maturity.
See our two-part series on the practicalities of Responsible AI: “AI Governance Gets Real: Tips From a Chat Platform on Building a Program,” (Feb. 1, 2023), and “AI Governance Gets Real: Core Compliance Strategies” (Feb. 23, 2023).
RAI Institute: A Membership Organization
CSLR: What is RAII’s relationship with its members?
Lefaivre Škopac: We are an independent nonprofit that is member-driven. We are not currently funded by grants or government, so it is our industry members that support our work.
People come to the RAI Institute because we sit at the intersection of industry, academia, civil society, policy making and regulating. Having a convening, aggregating voice from an independent perspective brings a lot of value, especially by including an industry perspective. We have members from all areas of the AI ecosystem.
CSLR: Why do member companies join?
Lefaivre Škopac: With the awareness and mainstreaming of AI, more and more organizations want to publicly demonstrate their commitment to responsible AI through membership with us or by communicating to stakeholders about their responsible AI efforts. They want to satisfy the consumer demand of “let me know what your principles are, demonstrate to me that you are doing this.” So a certification mark is valuable to people at companies as they consider where to make their investments.
It’s a noisy space out there. There are all these different frameworks, people writing open letters, regulatory changes happening all the time. We offer practical guidance for organizations on how to implement and grow their responsible AI maturity. We have policy analysts that spend a great deal of time making sure that we synthesize the best available information for our members to identify the leading edge of what’s happening in this very, very busy ecosystem, and to read the tea leaves for what they need to do as a result.
Also, we have thought leaders coming to the table. The AWS, BCG, Mastercard, IBMs of the world know that our conformity assessment is a leading mechanism in the ecosystem for assurance and trust in AI. And they also want to lend their expertise and support to what we are building because they see it as an important part of demonstrating what responsible AI looks like practically.
CSLR: How many staff members does RAII have?
Lefaivre Škopac: We have five full-time staff and several fellows and interns. And then we have a very large advisory community that we work with. [RAII’s website lists 35 advisors and board members.]
CSLR: What questions do companies ask the RAII when they first reach out?
Lefaivre Škopac: Many organizations ask how they can get involved to support our mission.
Other top questions are, “How do I know what I should be implementing? Can you help me evaluate and make sure I’m doing all the right things? Tell me how I can use your conformity assessment as part of the assurance implementation that I’m doing. How do I align my organization and different stakeholders within my organization?”
Another main question is around procurement. “How do I make sure that the solutions and providers that I’m bringing into my organization from third parties are evaluated and safe?”
We also get asked, “How do I stay on the leading edge of this? How do I know what’s going on?” [See our two-part series on managing legal issues arising from use of ChatGPT and Generative AI: “E.U. and U.S. Privacy Law Considerations” (Mar. 15, 2023), and “Industry Considerations and Practical Compliance Measures” (Mar. 22, 2023).]
Pilots and Approval Almost Complete
CSLR: What’s the status of RAII’s automated lending certification pilot?
Lefaivre Škopac: The pilot is still ongoing. We’re expecting completion by the end of Q2 2023.
CSLR: Are there other pilots?
Lefaivre Škopac: We have a few pilots in our initial focus areas. We are launching an automated employment decisions pilot soon, and there will be others. Our members and the community tell us what certifications are the most pertinent, which helps us define our roadmap.
Certifying Each Project, Not the Whole Organization
CSLR: How similar are the certifications for automated lending, employment decisions, and skin-disease detection?
Lefaivre Škopac: The core is transferable and applicable across the use cases. We designed it to be agile.
But there is quite a bit of variation between the systems or use cases, which is why we convene different groups of subject matter experts. The evaluation requirements, the weighting and the documentation requirements change for each industry.
CSLR: Why did you call your certification program a “System-Level Assessment”?
Lefaivre Škopac: With AI, the context of its use is very important – and this is why we talk about the use case or AI system as opposed to the organization. We don’t currently offer an organizational certification. Other parties are developing standards like this, so we chose to focus on the level of key use cases or AI systems, like automated lending and employment decisions.
We certify the company’s AI system, which we define to include its data, models and context. Think of the RAI Institute like the Green Building Council looking at specific buildings and not the real estate company. They get gold, silver, bronze, platinum. This is one model we looked at when developing our certification program.
It’s key to differentiate because an AI lending system in the U.S., for example, involves systemic issues around race and lending decisions. Each use has its distinct risks and potential harms, and therefore needs different controls and mitigation measures. [See our three-part series on new AI rules: “NYC First to Mandate Audit” (Jun. 15, 2022); “States Require Notice and Records, Feds Urge Monitoring and Vetting” (Jun. 22, 2022); and “Five Compliance Takeaways” (Jul. 13, 2022).]
Three Ways for Companies to Use the Program
CSLR: What are the different ways an organization can use the AI system certification framework?
Lefaivre Škopac: A company can use the self-serve tool to assess where its system stands. That would be most applicable for a low-risk system, where it could serve as a checklist and internal assurance tool.
Those who want robust recommendations and an external view can work with us as members, or with one of our partners, to deliver an evaluation with roadmap recommendations and opportunities for remediation.
The third way provides the formal certification mark delivered by accredited certification or conformity assessment bodies. Selecting among these approaches is a level-of-risk decision and potentially a public-trust decision for the organization.
CSLR: What feedback have you heard from entities that use the rubric for evaluation rather than certification?
Lefaivre Škopac: We do this with most of our members. It’s included in some levels of our membership package.
People really like that it is done in a workshop style. We take them through the assessment and then the output is a rigorous recommendations report on strengths, weaknesses, gaps and steps they need to take.
If they want to go for certification, I recommend they get a readiness assessment from a credible, authoritative third party about which areas they can improve. People are often coming to us because Responsible AI can be a full-time job for an entire team in a company. Being able to let them convene with experts outside has been a big part of how we’ve offered value to our members. [See “Takeaways From the New Push for a Federal AI Law” (Oct. 26, 2022).]
Certification Developed in Workshops and on Real Uses
CSLR: How did you arrive at the workshop approach for delivering the assessment?
Lefaivre Škopac: We found that delivering the assessment, at least for the first time, via a workshop style is the best way to help with the education, the understanding and overall increasing of the maturity of the organization.
A workshop helps when groups of stakeholders don’t necessarily talk to each other. The IT and tech team, the compliance team, the internal audit team, the legal team, the marketing team all have different perspectives and nomenclature around AI. We’ve been told the workshop has been a game changer for establishing a common understanding of what responsible AI implementation means across the organization.
CSLR: How many of these workshops has the RAII done so far?
Lefaivre Škopac: I don’t have an exact number. Dozens. Working with our industry members, testing this on real use cases and getting their feedback is a big part of the development; it gives us a mechanism for testing, refining and learning. Our members are such a critical part of our community and the AI community.
The refining and evolving will never stop because regulations and technology continue to move rapidly. Best practices will change, generative AI is changing and so everything will continue to evolve, be refined, and tested. [See our AI Compliance Playbook series: “Traditional Risk Controls for Cutting-Edge Algorithms” (Apr. 14, 2021); “Seven Questions to Ask Before Regulators or Reporters Do” (Apr. 21, 2021); “Understanding Algorithm Audits” (Apr. 28, 2021); and “Adapting the Three Lines Framework for AI Innovations” (Jun. 2, 2021).]
Assessments for Organizations’ and Suppliers’ Maturity
CSLR: What are the assessments for organizations and suppliers?
Lefaivre Škopac: The OMA and SMA are operational and running, and we have lots of use cases and case studies around how they’ve helped organizations. Our organizational maturity assessment, the OMA, helps develop a policy and governance roadmap. Without foundational infrastructure at the organizational level, it’s going to be really tough to make sure you are building the right systems, processes and governance for strong or fair AI.
The SMA is like the OMA but takes into account the nuances between a buyer of AI versus an AI tool provider or supplier.
We are looking at how we can align both these assessments to ISO 42001.
CSLR: What feedback have you heard about the supplier evaluation?
Lefaivre Škopac: Large organizations are concerned they don’t have a mechanism to evaluate third-party suppliers or vendors because they want to do the right thing and mitigate any risk. So they’re looking to us to help augment their procurement practices, to evaluate novel technologies like AI, whether in HR or some other area.
Then suppliers are coming to us because they want to submit bids and want an independent verification, almost like a SOC 2 [cybersecurity certification], to say “I have confidence in the system I’m providing” but without needing to share lots of private information. Suppliers spend a ton of time trying to explain their AI products and solutions to procurement professionals, and a certification mark for their systems offers both sides a common language for assurance.
RAII Now Testing Certifications for Generative AI
CSLR: Have you adjusted your certifications because of recent revelations about generative AI’s abilities?
Lefaivre Škopac: We’ve started standing up beta generative AI working groups to use the same methodology as our conformity assessments and certifications to see how these can be used for generative AI.
CSLR: On which uses are these groups focusing?
Lefaivre Škopac: One is for health care and one is for synthetic media, meaning any type of media content that has been fully or partially generated using AI. Art or blogs that are generated by using ChatGPT are forms of synthetic media.
CSLR: How many members are giving you input on assessing generative AI?
Lefaivre Škopac: These evaluations are still in beta and development, so it’s too early a stage to share that information.
The Impact From Mainstreaming AI
CSLR: How much has the swell of attention affected your and the Institute’s work on certifications?
Lefaivre Škopac: I joined the team about a year and a half ago. Previously, I had worked for an AI company’s cross-functional team that stood up the responsible AI practice. I also helped our enterprise clients understand that they were part of responsible AI development when implementing tools and products that we had built for them.
Until the end of last year, some organizations still needed a huge amount of convincing that responsible AI was a priority, that it required investment and that they needed to be proactive.
The mainstreaming of this technology because of the generative AI push has been huge. Since the beginning of the year, we’ve not been able to keep up with the level of interest. CEOs and boards are going to their executives asking, “How is our organization managing this? How are we considering this in the context of ESG and aligning with our corporate principles?”
Also, this is the first time my mom and dad have asked me about my job.
CSLR: As a nonprofit, can RAII quickly scale up when the demand for its guidance sharply increases?
Lefaivre Škopac: People are really starting to understand the need to invest in and stand up rigorous, responsible AI governance and practices, although not everyone understands how much investment this work takes. But our member organizations are fabulous. They are the reason we exist at the leading edge in the effort to bring rigor to responsible AI. We are always looking for members. The more funding and support we can get, the more we’ll be able to tackle this.
CSLR: The biggest AI companies recently pushed ahead on releasing AI despite its errors, and some industry insiders have now demanded a development pause. Has RAII’s approach changed?
Lefaivre Škopac: We tend not to have knee-jerk reactions. With the open letter [from the Future of Life Institute], we took a week to post on our blog [which urged more transparency and guardrails rather than a pause]. We always want to talk to our advisors and members. It’s very important to us to take a measured, thoughtful and analytical approach to issues like that and how they impact our work.
More and more, the public will become aware that AI is at the core of decisioning systems that impact them and of the tools they use day to day. With generative AI and the copyright, privacy and legal issues around it, more people will demand some type of assurance and public trust symbol around it. So, companies are seeing the writing on the wall. The year ahead is going to be a fascinating one, where responsible AI will no longer be a nice-to-have. It will be a must-have.