
I’m not sure if attendance officially hit a record, but it certainly felt like it. Every restaurant and bar in the hotel was booked for private events, and you couldn’t even stop in for a quick snack or drink between meetings.
Money20/20 felt like a place to connect. We hosted four events, and the golf classic was a standout that will likely become a Money20/20 tradition for us.
Money20/20 is often the best snapshot of where the industry is heading. So it wasn’t a big surprise that many of the conversations and sessions centered on artificial intelligence.
This year’s agenda had a convenient filter feature: strip out the awards, networking, breaks, and announcements, and 87 of the remaining 266 sessions fell into the AI category. There were demos, podcasts, presentations, pitches, roundtables, and conversations… in every format, we were discussing AI.
Even in more traditional sessions, such as fraud, regulations, compliance, privacy, data governance, customer experience, and global payments, AI dominated the discussion. One area that kept coming up was use cases for AI agents in banking and finance.
A short brainstorm about AI Agents for credit unions and regional banks.
At this point, we’re all familiar with chatbots and have at least dabbled with generative AI.
The fundamental difference is that an AI agent is connected directly to the bank's core transactional data. With that integration in place, these agents can interpret member behaviors, anticipate needs, and execute tasks.
In other words, we’re moving from an AI that can simply think (or process information) to an AI that can act on your behalf.
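To make that concrete, here is a rough sketch of what “acting on your behalf” could look like. Everything in it is hypothetical: the tool names, the planning step, and the canned data are placeholders for whatever your core system and model provider actually expose. The point is the shape of the loop: observe transactional data, decide, then do something.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical "tools" the agent can call. In a real deployment these would
# wrap the core banking system's APIs; here they return canned data so the
# sketch runs on its own.
def get_member_transactions(member_id: str) -> list[dict]:
    return [
        {"amount": -42.10, "merchant": "Grocery Mart", "category": "groceries"},
        {"amount": -1800.00, "merchant": "Wire Transfer", "category": "transfer"},
    ]

def flag_for_review(member_id: str, reason: str) -> str:
    return f"Case opened for {member_id}: {reason}"

@dataclass
class AgentStep:
    thought: str               # what the agent decided (the "thinking" part)
    action: Callable | None    # what it actually does (the "acting" part)
    action_args: dict

def plan(member_id: str, transactions: list[dict]) -> AgentStep:
    """Stand-in for the model's planning step: look at behavior, pick an action."""
    large = [t for t in transactions if abs(t["amount"]) > 1000]
    if large:
        return AgentStep(
            thought=f"{member_id} has an unusually large transfer; escalate.",
            action=flag_for_review,
            action_args={"member_id": member_id, "reason": "large outbound transfer"},
        )
    return AgentStep(thought="Nothing unusual.", action=None, action_args={})

# The loop: observe transactional data, plan, then act on the member's behalf.
step = plan("member-123", get_member_transactions("member-123"))
print(step.thought)
if step.action:
    print(step.action(**step.action_args))
```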
I’m thinking a lot about AI agents for simulations in strategy and planning. In one study, Stanford researchers created a small digital town called Smallville to investigate whether AI could simulate believable human behavior. Each resident was a generative agent powered by a large language model with memory, reflection, and planning abilities. The agents could recall what had happened, form opinions, devise plans, and take action.
Over two simulated days, the 25 agents lived ordinary lives: they woke up, made breakfast, headed to work, and struck up conversations with their neighbors.
The researchers found that when agents were given the ability to remember and reflect on themselves, their behavior became more coherent and lifelike.
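The recipe behind that finding (observe, remember, reflect, plan) is simple enough to sketch. The toy class below is my illustration of the loop, not the Stanford code; the language-model call that would normally produce reflections and plans is stubbed out so the snippet runs on its own.

```python
import datetime

class GenerativeAgent:
    """Toy version of the memory / reflection / planning loop from the
    generative-agents study. The LLM call is stubbed out for illustration."""

    def __init__(self, name: str):
        self.name = name
        self.memory: list[tuple[datetime.datetime, str]] = []  # time-stamped memory stream

    def observe(self, event: str) -> None:
        # Every observation is appended to the memory stream.
        self.memory.append((datetime.datetime.now(), event))

    def reflect(self) -> str:
        # Periodically the agent condenses recent memories into a higher-level
        # insight. A real agent would ask a language model; here we just join them.
        recent = [text for _, text in self.memory[-5:]]
        return f"{self.name} reflects: " + "; ".join(recent)

    def plan(self) -> str:
        # Plans are produced from reflections and fed back in as new memories,
        # which is what made behavior coherent over time in the study.
        insight = self.reflect()
        next_step = f"Based on ({insight}), {self.name} plans the next activity."
        self.observe(next_step)
        return next_step

agent = GenerativeAgent("Isabella")
agent.observe("Decided to host a Valentine's Day party.")
agent.observe("Told a neighbor about the party.")
print(agent.plan())
```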
On Valentine’s Day, one AI agent decided to host a party. Word spread through the simulated town. Other agents discussed it, made plans, and even coordinated logistics. In short, they behaved as if they were a real community responding to a shared event.
The study suggests AI simulations could help people test strategies or anticipate how groups might respond to new policies, designs, or messages before they’re deployed.
For anyone thinking about strategy and planning, it points to a future where AI can serve as a kind of “what-if machine.”
And there are so many what-ifs:
- What if an AI agent could audit your data before the auditor does? You could surface anomalies and generate clean, audit-ready reports with fewer hours of prep.
- What if an AI agent trained on your institution’s behavioral patterns could monitor transactions in real time, flag suspicious activity, and escalate only what matters? (A miniature sketch of this one follows the list.)
- What if marketing could test campaigns before sending a single email? By mining real-time member data, an AI agent could simulate campaign outcomes.
- What if product strategy wasn’t just reactive? AI agents can scan usage data, external trends, and demographic shifts, and show you which services are underutilized, which pricing strategies are effective, and where the next competitive threat is likely to emerge.
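To ground the transaction-monitoring what-if from the list above, here is a miniature version. The thresholds, the per-member history, and the escalation step are all invented for illustration; a real agent would learn the institution’s own behavioral patterns rather than hard-code a z-score.

```python
import statistics

# Hypothetical per-member spending history (dollar amounts per transaction).
history = {"member-123": [52.0, 61.5, 48.0, 55.2, 49.9, 60.1, 57.3]}

def escalate(member_id: str, amount: float, z: float) -> None:
    # In practice this would open a case or ping a fraud analyst; here we print.
    print(f"ESCALATE {member_id}: ${amount:.2f} is {z:.1f} std devs above baseline")

def monitor(member_id: str, amount: float, z_threshold: float = 3.0) -> None:
    """Flag a transaction only if it sits far outside the member's own baseline."""
    baseline = history[member_id]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z = (amount - mean) / stdev
    if z > z_threshold:
        escalate(member_id, amount, z)      # only what matters gets escalated
    else:
        history[member_id].append(amount)   # normal activity updates the baseline

monitor("member-123", 58.40)    # routine: absorbed into the baseline
monitor("member-123", 1450.00)  # anomalous: escalated for review
```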
AI can prevent fraud and improve customer experience
It’s not just about new tools like AI agents. Artificial intelligence is quietly transforming how banks approach strategy and service.
During the conference, one of the major announcements was Starling Bank’s Scam Intelligence. The new tool lets members upload screenshots of marketplace listings, text messages, or social posts they’re unsure about.
Starling’s system, powered by Google Cloud’s multimodal Gemini AI, analyzes those images and messages in seconds. It looks for classic scam indicators: urgency language, fake seller profiles, reused images, inconsistent prices, or links that don’t match the domain.
The AI then responds with clear guidance: “This looks suspicious — here’s why,” or “No known risk indicators found.” Instead of reacting to disputed transactions, this empowers members to spot scams at the decision point.
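I don’t know how Starling built this under the hood, but the general pattern (a screenshot plus a carefully framed prompt sent to a multimodal model) is easy to illustrate. The snippet below uses Google’s google-generativeai Python SDK; the model name, prompt, and file path are placeholders, and this is my sketch of the pattern, not Starling’s code.

```python
# Illustrative only: a member's screenshot plus a prompt sent to a multimodal model.
# Not Starling's implementation; model name, prompt, and file path are placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")

PROMPT = (
    "You are a scam-detection assistant for a bank. Examine this marketplace "
    "listing screenshot for classic scam indicators: urgency language, reused "
    "images, inconsistent prices, or links that don't match the seller's domain. "
    "Reply with either 'This looks suspicious - here is why: ...' or "
    "'No known risk indicators found.'"
)

screenshot = Image.open("listing_screenshot.png")        # the member's upload
response = model.generate_content([PROMPT, screenshot])  # text part + image part
print(response.text)                                     # guidance back to the member
```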
In testing, Starling reported a 300 percent increase in canceled scam payments and the detection of roughly 90 percent of known scam indicators.
Whether the topic was agentic AI or fraud prevention, the conversations kept circling back to trust, human-AI collaboration, and why none of it is possible without quality data and infrastructure. Using artificial intelligence was in part “what can we do?” and just as much “how do we do it?” A lot of the discussion went into what makes all of this possible in the first place, which comes down to proper infrastructure and data quality.
This is particularly relevant in banking, where legacy systems built in different eras create siloed ecosystems with their own languages and protocols. It’s one thing to want smarter AI; it’s another to provide it with the unified, governed data it needs to function effectively.
Innovation only works when the foundation is sound.
When I reflect on everything discussed at Money20/20 (agentic AI, data quality, trust), it all comes back to this: AI is only as useful as the data beneath it.
That’s why our work is anchored in Snowflake. We’re helping banks create a single place where they can finally see their data. And iDENTIFY AI helps teams ask plain-language questions of the bank’s vast store of information, so they can quickly surface the knowledge they need to do their work.
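To show the general shape of “plain-language questions over governed data” (and only the general shape; this is not how iDENTIFY AI is implemented), here is a sketch: a stand-in translation step turns a question into SQL, and Snowflake’s Python connector runs it. The table, columns, and connection parameters are all invented.

```python
# Sketch of the "plain-language question -> SQL -> answer" pattern.
# Table/column names and connection parameters are placeholders.
import snowflake.connector

def question_to_sql(question: str) -> str:
    # Stand-in for a language-model call that translates the question into
    # governed SQL. Hard-coded here so the sketch is self-contained.
    if "dormant" in question.lower():
        return """
            SELECT member_id, last_transaction_date
            FROM analytics.members
            WHERE last_transaction_date < DATEADD(month, -12, CURRENT_DATE)
        """
    raise ValueError("No translation available for this question")

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD",
    warehouse="ANALYTICS_WH", database="BANK_DB",
)
try:
    sql = question_to_sql("Which members have gone dormant in the last year?")
    with conn.cursor() as cur:
        cur.execute(sql)
        for member_id, last_seen in cur.fetchall():
            print(member_id, last_seen)
finally:
    conn.close()
```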
Why this has me thinking about Snowflake’s new partnership with Palantir.
In most banks today, the same dataset is replicated across multiple systems (risk, finance, AML/KYC, and marketing), resulting in increased storage and integration costs, as well as additional governance work.
A zero‑copy approach means data is accessed in place rather than shuttled and replicated across platforms.
The immediate payoffs are lower duplication costs, faster time‑to‑insight, and simpler control over who can see what.
What “zero‑copy” means in the Snowflake–Palantir context.
The Snowflake–Palantir partnership allows Palantir Foundry and AIP and the Snowflake AI Data Cloud to read and write the same enterprise data without creating new copies, a capability they describe as “bidirectional, zero‑copy interoperability.”
For banks, this translates to one governed dataset, two complementary systems (a data platform and AI/operational apps), and fewer reconciliation headaches. It also makes our team even more excited about the capabilities and security embedded in Snowflake AI.
“Zero‑copy” is a systems design goal, not a guarantee. Even in adjacent domains such as in‑memory analytics, researchers have shown that whether you truly avoid copies depends on the architecture and operating‑system support; simply adopting a “zero‑copy” technology doesn’t eliminate all copies without the right integration approach. The Snowflake–Palantir design addresses this at the data‑platform layer through Apache Iceberg tables and catalog integration.
This matters for supervisory expectations regarding control, auditability, and data residency because governance policies remain anchored to the primary store, while external tools operate through that store’s controls.
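For a feel of what “access in place” looks like from a developer’s seat, here is an illustrative sketch using PyIceberg to read a catalog-registered Iceberg table. The catalog settings and table name are placeholders, and this isn’t Palantir’s or Snowflake’s integration code; it’s just the open-table-format pattern that zero-copy interoperability builds on.

```python
# Illustrative sketch of "access in place": a second engine reads the same
# Iceberg table through a shared catalog instead of receiving a copied extract.
# Catalog settings and table names are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "shared_catalog",
    **{
        "uri": "https://example-catalog.example.com",  # placeholder REST catalog
        "credential": "CLIENT_ID:CLIENT_SECRET",       # placeholder credential
    },
)

# The table lives once, governed in the primary store; other tools read it here.
table = catalog.load_table("bank_db.transactions")

# Scans pull only the rows and columns requested; no full replica lands anywhere.
df = table.scan(
    row_filter="amount > 10000",
    selected_fields=("member_id", "amount", "posted_at"),
).to_pandas()

print(df.head())
```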
Let’s Chat About Your Data Infrastructure.
If there is anything we love to talk about, it’s data infrastructure.
Schedule some time with our team, and we can talk through what it looks like to partner with us and move your bank from siloed systems to a single, trusted foundation for AI.