Key Takeaways
- FIS and Anthropic have co‑developed a Financial Crimes AI Agent that accelerates anti‑money laundering (AML) investigations, completing them in minutes.
- The agent automatically gathers evidence from a bank’s core systems, evaluates activity against known typologies, and surfaces the highest‑risk cases for investigators.
- Initial deployment will occur at BMO and Amalgamated Bank, with broader availability planned for the second half of the year.
- Leveraging the experience from this collaboration, FIS intends to build and scale additional purpose‑built AI agents for credit decisioning, deposit retention, customer onboarding, and fraud prevention, all offered through a single governed platform.
- FIS CEO Stephanie Ferris describes the launch as the start of an “agent‑first” era, emphasizing that banks need AI that acts, not just assists.
- Anthropic’s Head of Financial Services Jonathan Pelosi highlights the embedded Applied AI team’s role in ensuring every agent decision is traceable to source data and remains under investigator oversight.
- Prior PYMNTS reporting noted that AI is already woven throughout FIS, boosting operations, client service, risk management, and product development.
Overview of the Partnership
FIS, a global leader in financial technology, announced on Monday, May 4 that it has teamed up with Anthropic’s Applied AI team to create the Financial Crimes AI Agent. The collaboration involved forward‑deployed engineers (FDEs) from Anthropic working side‑by‑side with FIS specialists. According to the press release, the goal was to produce an AI solution that could move beyond mere assistance and actively perform complex investigative tasks in the realm of anti‑money laundering.
How the Financial Crimes AI Agent Works
The agent is designed to streamline AML investigations. As the release states, it “automatically assembles evidence across a bank’s core systems, evaluates activity against known typologies, and surfaces the highest‑risk cases so that investigators can review them.” In practice, that means pulling together evidence dispersed across a bank’s systems, scoring customer activity against established money‑laundering typologies, and queuing the riskiest cases for human review — compressing investigations that traditionally took days or weeks into a matter of minutes.
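The release describes a three‑step flow: gather evidence, score it against typologies, surface the top cases. As an illustrative sketch only — every name, data structure, and scoring rule below is hypothetical, not FIS's actual design — that flow might look like:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the described pipeline:
# gather evidence -> score against typologies -> surface highest-risk cases.

@dataclass
class Case:
    customer_id: str
    evidence: dict = field(default_factory=dict)
    risk_score: float = 0.0

def gather_evidence(customer_id, core_systems):
    """Assemble this customer's records from each core system."""
    return {name: system.get(customer_id, [])
            for name, system in core_systems.items()}

def score_against_typologies(evidence, typologies):
    """Sum the weight of every typology whose predicate matches the evidence."""
    return sum(weight for predicate, weight in typologies if predicate(evidence))

def surface_highest_risk(customer_ids, core_systems, typologies, top_n=10):
    """Rank cases by risk; the top ones go to human investigators for review."""
    cases = []
    for cid in customer_ids:
        ev = gather_evidence(cid, core_systems)
        cases.append(Case(cid, ev, score_against_typologies(ev, typologies)))
    return sorted(cases, key=lambda c: c.risk_score, reverse=True)[:top_n]
```

Note the last step: the sketch only ranks and surfaces cases — consistent with the release's framing, the decision itself remains with the investigator.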
Deployment Timeline and Early Adopters
BMO and Amalgamated Bank have been named as the first institutions to deploy the Financial Crimes AI Agent, with broader release to additional financial institution clients slated for the second half of the year. By starting with two banks, FIS and Anthropic can gather real‑world performance data, refine the agent, and validate compliance across differing regulatory environments ahead of the wider launch.
Future Roadmap: Expanding the Agent Portfolio
Building on the knowledge gained from co‑developing the Financial Crimes AI Agent, FIS plans to independently create and scale additional AI agents using Anthropic’s Claude models and its own unified data and regulatory infrastructure. The product roadmap explicitly mentions purpose‑built agents for:
- Credit decisioning – automating underwriting while maintaining explainability.
- Deposit retention – identifying at‑risk accounts and recommending personalized interventions.
- Customer onboarding – streamlining KYC/AML checks and reducing friction for new clients.
- Fraud prevention – detecting anomalous transaction patterns in real time.
All of these agents will be made available through a single governed platform, ensuring consistent oversight, data governance, and model accountability across the suite.
Leadership Perspectives on an “Agent‑First” Future
Stephanie Ferris, CEO and President of FIS, characterized the launch as the start of a new era in banking: “Every bank in the world wants AI that acts, not just assists.” Ferris added that the future belongs to a trusted provider that manages data, governs agents, and stands between customers and the AI making decisions about their money. This vision marks a shift from AI as a supportive tool to AI as an autonomous actor within regulated workflows, provided robust governance safeguards remain in place.
Jonathan Pelosi, Anthropic’s Head of Financial Services, echoed the importance of traceability and investigator control. He noted, “We embedded our Applied AI team inside FIS to build the Financial Crimes AI Agent together, so every conclusion the agent reaches links back to its source data, and every decision stays with the investigator.” This statement highlights the collaborative approach taken to ensure that the AI’s outputs are auditable and that human experts retain ultimate authority over investigative outcomes.
Broader AI Integration at FIS
The development of the Financial Crimes AI Agent is not an isolated effort. PYMNTS reported in November that AI is already embedded throughout FIS, enhancing operations, client service, risk management, and product development. During an earnings call at that time, Ferris remarked, “We anticipated that AI would transform financial services, but the pace and depth of adoption have exceeded our expectations.” This comment reflects the organization’s recognition that AI’s impact is accelerating faster than many forecasts predicted, prompting FIS to double down on AI‑driven innovation.
Implications for the Banking Industry
The introduction of an agent that can conduct AML investigations in minutes has several potential ramifications for banks. First, it could significantly reduce the labor‑intensive manual review process, freeing analysts to focus on higher‑value strategic tasks. Second, by surfacing the riskiest cases promptly, the agent may improve detection rates and reduce false negatives, strengthening banks’ compliance posture. Third, the transparent link between AI conclusions and source data addresses a key regulatory concern: explainability. Regulators increasingly demand that automated decisions be understandable and auditable, and the agent’s design appears to meet this expectation.
Finally, the “agent‑first” paradigm proposed by Ferris suggests a future where banks rely on a curated set of AI agents, each governed by a central platform that ensures data integrity, model performance, and regulatory compliance. This approach could democratize access to advanced AI capabilities, allowing smaller institutions to benefit from the same sophisticated tools that larger banks develop in‑house, albeit through a vetted third‑party provider.
Conclusion
The partnership between FIS and Anthropic marks a tangible step toward operationalizing AI in high‑stakes financial crime prevention. The Financial Crimes AI Agent promises to cut investigation times from days to minutes while preserving investigative oversight through traceable, source‑linked conclusions. With initial deployments at BMO and Amalgamated Bank and a broader rollout slated for later this year, the agent serves as a proof‑point for FIS’s ambitious roadmap of purpose‑built AI agents across credit, deposits, onboarding, and fraud. As Ferris and Pelosi emphasize, the industry’s direction is shifting toward AI that acts decisively, underpinned by robust governance—a shift that could redefine how banks manage risk, serve customers, and navigate an increasingly complex regulatory landscape.