AITC Submits Comment Letter Regarding NAIC Draft AI Bulletin
Commissioner Kathleen Birrane
Chair, Innovation, Cybersecurity, and Technology (H) Committee
National Association of Insurance Commissioners
1100 Walnut Street, Suite 1500
Kansas City, MO 64105
Re: NAIC Model Bulletin: Use of Algorithms, Predictive Models, and Artificial Intelligence
Systems By Insurers
Commissioner Birrane:
The American InsurTech Council (AITC) is an independent advocacy organization dedicated to
advancing the public interest through the development of ethical, technology-driven innovation in
insurance. We appreciate the opportunity to comment on the NAIC Model Bulletin: Use of
Algorithms, Predictive Models, and Artificial Intelligence Systems By Insurers (Model Bulletin).
We acknowledge the leadership and hard work that you and other members of the Innovation,
Cybersecurity, and Technology (H) Committee committed to completing the first exposure of the
Model Bulletin in record time. This was no small task given the diverse perspectives and wide
range of highly complex issues. As noted in our public comments at the NAIC Summer Annual
Meeting, the Model Bulletin is a positive first step in developing a comprehensive regulatory
framework for insurers using AI, predictive analytics, and other innovative technology.
The Model Bulletin also stands as a powerful statement that the system of state regulation is more
than up to the task of developing appropriate regulatory guidance regarding insurer use of AI and
related technology that benefits insurance consumers, encourages competition and innovation, and
strengthens the U.S. insurance market. The Model Bulletin establishes that insurers will not be at
a disadvantage to banks, other financial services organizations, or global competitors that are also
moving quickly to develop business use cases for AI. The Bulletin also provides a necessary
framework for ensuring appropriate consumer protections.
The risk-based approach embedded in the Model Bulletin reflects the right regulatory approach.
The Model Bulletin builds on previously stated NAIC core principles regarding AI, establishes
regulator expectations, identifies applicable legal standards, and, significantly, sets out the process
that regulators will follow to monitor company activity. For the most part, the Model Bulletin
accomplishes those objectives while ensuring that insurers and other licensees can develop and
implement a governance and risk management framework that reflects the unique risk(s)
associated with their use of AI. This flexibility is essential not only in the near term but also in
the future as the technology continues to evolve and business use cases for insurer use of AI
expand.
Regarding the applicable legal standards, the Committee’s decision to rely on existing legal
standards and principles for insurance regulation that have guided the states and company
activity for many decades is highly significant and deserves to be acknowledged:
• It demonstrates their strength, durability, and ability to adapt. Insurers embracing new
technology is hardly new. For instance, the transition from paper files to computerized
files took place many decades ago, and state regulators adapted to those changes without
difficulty. Indeed, regulators have themselves embraced new technology to improve their own
performance and efficiency.
• The current standards are time-tested. While practices for applying those standards to
insurers’ use of AI will take time to develop, the ability of regulators and insurers alike to tie
those practices back to well-recognized standards that have proven effective over decades will
be essential.
• It stands as an important reminder that while AI may be new, its use does not alter core
principles embedded in insurance.
We also support the decision to use states’ existing legislative and regulatory market regulation
authority to govern insurers’ use of AI. However, given the uneven interpretation and
application of many of those standards that already occurs across states in the traditional
market regulation context, we have concerns about uniform treatment and about applying those
standards to rapidly evolving technology like AI. Specifically, we are concerned that a
particular AI process or business use case may be deemed appropriate in one state and an
unfair trade practice in another. Lack of uniformity will discourage the use of innovative tools.
We would encourage this
Committee and the NAIC to identify achieving meaningful uniformity in this area as a high
priority.
As the Committee works to improve the Model Bulletin before it is finalized, we offer the
following comments and observations.
1. Develop More Precise AI Definitions & Descriptions. We have several concerns with
including descriptions in the Model Bulletin, most significantly the lack of clear commonly
accepted definitions of key terms. A technical literature survey, for example, reveals no
accepted definition of “AI algorithm.” While such a lack of precision may be acceptable in an
academic context, including definitions in the Model Bulletin will confer upon them legal
significance that may be unwarranted, may quickly be rendered obsolete, and may become the
basis for unnecessary disputes and confusion. Likewise, we are concerned that actuarial
methodologies that have been in use for many decades, but that would not be considered “AI”
as that term is commonly understood, could inadvertently be swept into the Model Bulletin. A
preferred approach is to avoid future debates over terms while focusing on substantive issues
involving AI and how it is being applied in a specific business use case.
2. Include a Confidential Self-Audit of AI Processes. We strongly encourage the addition of
a confidential self-audit of AI processes and decisions that will provide a framework for robust
self-examination (including third-party providers) and, where necessary, remedial action.
3. Address Concerns Regarding Third Party AI Vendor Oversight. This is an area where
continued discussion and consideration is needed. Efforts to regulate third-party AI systems
through insurers and other licensees raise several important issues, including contractual
responsibility, access to highly proprietary intellectual property, and more. An approach that
relies less on prescriptive requirements and instead provides insurers with the flexibility
to manage their own risks associated with third-party AI systems would produce
more effective results while remaining consistent with the Model Bulletin’s overall risk-based
approach.
4. Maintain Reasonable Documentation Requirements. Prescriptive documentation should
not be more burdensome than what is expected for actuarial modeling. This balance ensures
compliance without hindering innovation.
5. Expand Focus Beyond Traditional Machine Learning. The Model Bulletin should also
consider emerging AI technologies, such as generative AI, which may not have the same issues
as traditional machine learning (e.g., drift). This approach will ensure that regulatory guidance
remains relevant and effective.
6. Consider Utilizing a Pilot Project Approach. Many issues identified by AITC and others
can be resolved through time and a better understanding of how AI is utilized in practice,
including companies’ risk management practices. We would recommend consideration be
given to a project similar to the one used with cybersecurity in which three companies of
varying sizes volunteered to participate in a pilot examination focusing on governance
framework, management of third-party vendors, and data issues. As with the cybersecurity
project, results would be anonymized, and no adverse regulatory action would be taken. This
process may help the NAIC and participating companies identify best practices and yield new
regulatory approaches that better meet NAIC goals.
7. Embrace Forward-Thinking and Comprehensive Regulation. The Model Bulletin should
anticipate the widespread adoption of AI across various software applications, including HR, finance,
and IT Help Desk functions. By considering the rapid integration of AI into everyday tools
(e.g., Microsoft products, Google search algorithms), regulators can develop guidelines that
address the broader AI landscape. Clear boundaries must be established to prevent
overregulation and to focus regulatory attention on areas of genuine concern.
8. Encourage Cross-Sector Collaboration. Regulators should consider taking a team approach
to regulation and collaborate with large tech companies like Microsoft, AI experts, industry
leaders, and stakeholders to develop a comprehensive understanding of AI technologies as they
develop regulatory guidance. This collaboration will help create informed, effective
regulations that address potential risks without stifling innovation.
Thank you again for the opportunity to share our comments.
Respectfully Submitted,
Scott R. Harrison
Co-Founder, American InsurTech Council