Artificial intelligence is already being used in investment firms. So the compliance risk isn’t whether AI is being used. It’s whether AI is properly controlled and supervised.
In a recent discussion, Graham Roggli of Xantrion and Tito Pombra of Advisor Compliance Consulting outlined practical steps firms can take to adopt AI safely while meeting regulatory expectations.
The Reality of AI Use: It’s Already Happening
The speakers stressed that AI is already being used in day-to-day work across most organizations, not just investment firms, for tasks like drafting emails, summarizing research, preparing meeting notes, and supporting operational tasks. Staff often rely on these tools, and adoption isn’t always through formally approved or company-subscribed services.
The Risk of Banning AI
Prohibiting AI use doesn’t eliminate risk. It pushes activity into unmanaged personal accounts, where the firm loses visibility, control, and auditability. The real risk is “unsecured and unsupervised AI.”
Key Risks of Unsupervised AI Use
Reliance on personal accounts for firm work creates several major risks:
- Slippery slope of data entry: Inevitably, staff will put organizational data into unapproved tools, even if policies prohibit it.
- Data leakage: Personal accounts (e.g., personal ChatGPT or Claude) may use the firm’s data for reinforcement learning or post-training, potentially using it to respond to another person’s query.
- Governance and oversight concerns: Using personal accounts undermines the firm’s ability to retain records, review usage, and demonstrate oversight, a significant concern for books-and-records compliance.
Firms must also conduct vendor due diligence on any approved tool to understand how data is handled, where information is stored, and whether prompts and outputs are retained.
Regulator and Governance Focus Areas
From a regulatory perspective, AI isn’t treated differently from any other system that affects clients or firm operations. Firms should be prepared to answer four questions:
- Who approved the tool and why?
- What data is permitted to be used?
- How is output supervised?
- Is any of this documented, and are staff following the documentation?
It is also critical to disclose the use of AI to clients, vendors, and affiliated entities. For example, if AI is used to create marketing content, that content should carry an appropriate disclosure.
Practical Steps for Secure AI Deployment
AI deployment should be approached as a governance initiative, not a technology experiment. Before expanding use cases, firms must establish baseline security controls, clarify ownership, and ensure the platform operates within existing compliance frameworks.
Secure the Platform Itself
To secure AI platforms, firms must prioritize:
- Enterprise-grade services: Use services whose providers explicitly do not train on the firm’s data.
- Authentication hardening: Implement single sign-on (SSO) or multi-factor authentication (MFA).
- Retention configuration: Configure notetaker apps and other AI platforms to store notes and transcripts in the appropriate location for the required period, so AI-generated records satisfy books-and-records retention obligations.
Define Approved Tools and Governance Criteria
Firm documentation should define which AI tools are approved. Rather than chasing the “most impressive” AI, select tools that align with:
- Existing systems
- Defined use cases
- Technology stack
- Governance requirements
- Supervisory controls
Tool selection is a governance decision, not a feature comparison exercise.
Understand the Tool’s Data Access Model
A critical consideration is the data access model:
- Ambient data access models: Used by enterprise tools like Copilot and Gemini, built into platforms like Microsoft 365 or Google Workspace, where the AI can access all files, folders, and emails the user has access to.
- File-based access models: Used by tools like ChatGPT, Claude, and Perplexity, which typically require the user to upload or give access to a specific file.
This difference is vital when clients have requested that AI never touch their data because ambient tools access all of a user’s available data.
Risk Tiering and a 90-Day Roadmap
Firms should implement risk tiering for AI use cases to define the necessary level of scrutiny and supervision.
- Low-risk: Drafting an internal email or summarizing an internal meeting (limited downside; errors are easily caught).
- Medium-risk: Drafting a client-facing email or sending client meeting notes (requires an additional layer of human scrutiny).
- High-risk: Investment advice, regulatory interpretation, or automated client responses.
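As an illustration only, the tiering above can be encoded as a simple lookup so that any use case not explicitly inventoried defaults to the strictest treatment. The use-case names, tier assignments, and supervision rules below are hypothetical placeholders, not a prescribed implementation; each firm's compliance team should set its own.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # internal drafts and summaries; errors easily caught
    MEDIUM = "medium"  # client-facing output; human review before release
    HIGH = "high"      # advice or regulatory interpretation; tightly controlled

# Hypothetical inventory mapping approved use cases to tiers.
USE_CASE_TIERS = {
    "draft_internal_email": RiskTier.LOW,
    "summarize_internal_meeting": RiskTier.LOW,
    "draft_client_email": RiskTier.MEDIUM,
    "send_client_meeting_notes": RiskTier.MEDIUM,
    "investment_advice": RiskTier.HIGH,
    "regulatory_interpretation": RiskTier.HIGH,
}

def supervision_required(use_case: str) -> str:
    """Return the supervision rule for a use case.

    Unknown or un-inventoried use cases deliberately default to HIGH,
    so new activity is controlled until compliance reviews it.
    """
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    if tier is RiskTier.LOW:
        return "spot-check in periodic supervisory review"
    if tier is RiskTier.MEDIUM:
        return "human review and approval before release"
    return "blocked pending explicit compliance sign-off"
```

The key design choice is the default: an unrecognized use case is treated as high-risk rather than allowed through, mirroring the governance principle that new AI activity should be approved before it expands.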
Smaller organizations with limited resources can follow a 90-day implementation roadmap:
| Phase | Duration | Actions |
|---|---|---|
| Establish Control | First 30 days | Assign an owner, inventory current AI use, and select one tool. Create a short policy covering approved tools and “responsible AI items,” such as risk tiering. |
| Controlled Pilot | Next 30 days | Pilot low-risk workflows, train staff on the policy and tool use, and perform initial supervisory review. |
| Structured Expansion | Last 30 days | Expand use cases, build an internal prompt library, identify “killer use cases,” and reassess risk and supervision effectiveness. |
Delaying Governance Creates Compliance Exposure
AI governance is not a future project. It is a current supervision requirement.
If your team is already experimenting with AI, the fastest way to reduce exposure is to formalize ownership and guardrails before usage expands. That means putting a small group in charge of decisions, defining what tools and data are allowed, documenting supervision and retention expectations, and setting a simple risk-tiering model so low-risk use can move forward while higher-risk use cases stay controlled.
To make this easy, we created a practical AI Steering Committee Template designed for firms that need to move quickly. It includes recommended roles, a lightweight charter, decision rights, an initial meeting agenda, a starter policy outline, and a vendor due diligence checklist you can use immediately.
Download the AI Steering Committee Template and use it to launch your first governance meeting within the next two weeks. The goal is not perfection. The goal is visibility, control, and supervision before regulators or clients ask for proof.