
I round up the most relevant AI-in-finance news - the deals being done, who’s rolling out what, and what’s actually working on the front lines.
Firms are starting to talk about how they actually use AI...
…Charlesbank runs a three-tool stack. Brightstar built dozens of custom agents in-house. Ethos created an entire operating system that can produce a complete company analysis in 15 minutes.
At Fortune's Emerging CFO conference, finance leaders gave an honest assessment of where AI stands. The verdict: promising, but messy. Implementations taking 6-12 months. Skills gaps everywhere. ChatGPT "not great at math."
Meanwhile, the banks that started investing years ago are seeing real returns. JPMorgan's AI benefits are heading towards $2 billion. And the infrastructure financing keeps piling up, with OpenAI's partners now sitting on $96 billion in debt.
In This Week’s Issue:
News Digest:
Real AI use cases are materializing in PE
CFOs admit AI is transforming finance, but only when strategy leads
The AI capital feedback loop keeps growing
From The Trenches:
The security question you're actually asking
Other Cool Stuff I've Read or Seen:
Model ML raises $75M, BigBear.ai buys AskSage, Anthropic closes gap on OpenAI, Meta/Google chip deal, Cisco acquires NeuralFabric
News Digest
Real AI Use Cases Are Materializing in PE

The conversation around AI in private equity is opening up. Three firms spoke to Axios this week about what they're actually doing. The use cases are consistent: CIM review, market mapping, memo drafting. The approaches vary.
Charlesbank Capital Partners ($22B+ AUM) has taken a hybrid approach. "The majority of our AI use goes through three tools," CIO Ozzie Genc told Axios, naming ChatGPT Enterprise, Microsoft Copilot, and a finance-specific platform. The general copilots handle drafting and reasoning. The finance-specific tools plug into CRM, modeling, and deal-flow processing. "The goal is to sharpen the logic, not hand over capital allocation to a machine."
Brightstar Capital Partners ($5B+ AUM) went a different direction. The firm uses AI frameworks like ChatGPT to build custom in-house agents, according to CEO Andrew Weinberg. Their head of AI and automation has led development on dozens of internal agents, streamlining CIM review, market mapping, and memo drafting.
Ethos Capital ($3.1B AUM) built its own platform from scratch. Co-founder Fadi Chehadé told Axios they created Petra, a "private equity transformation research agent" that works as an operating system connecting email, Slack, internal data, and external sources. "It is foundational, not tangential. Everybody in our firm uses it every day." Petra can produce a complete company analysis in about 15 minutes. Work that used to take weeks.
On the banking side, BMO offers a blueprint for scaling AI while keeping risk in check. The firm only funds AI projects expected to add at least one basis point to ROE. One system reads bankers' inboxes and automatically updates buyer logs. Another consolidates leveraged-finance term-grid data so teams can compare structures instantly. JPMorgan's AI benefits are now heading towards $2 billion. Evident's latest AI Index shows 32 of 50 major banks disclose AI use cases with measurable financial impact.
The details:
Charlesbank ($22B+ AUM): Three-tool stack: ChatGPT Enterprise + Microsoft Copilot + finance-specific platform
Brightstar ($5B+ AUM): Built dozens of custom in-house agents for CIM review, market mapping, memo drafting
Ethos ($3.1B AUM): Built Petra, a proprietary "operating system" for deal workflow. Complete company analysis in 15 minutes.
BMO: Only funds AI projects expected to add at least 1 basis point to ROE
Evident AI Index: 32 of 50 major banks now disclose AI use cases with measurable financial impact
Why it matters: Different approaches are starting to materialize, but no single playbook has emerged. Every firm is unique, yet the use cases are crystallizing. Everyone's circling around the same value props: CIM review, market mapping, memo drafting. They're just taking different routes to get there. Perhaps surprisingly, banks are ahead of private markets on measurable AI adoption. I don't expect that to last.
My take: Use cases are really starting to materialize. If you're reading this and panicking, don’t worry - you're not behind. These are the trailblazers, as evidenced by two of the three firms taking things in-house. Expect this combination to emerge as the standard though: frontier model, Microsoft Copilot, and an industry-specific platform. The fact that firms are building in-house isn't surprising given every firm does things differently. That said, I firmly believe customization and personalization are going to be the way to really unlock value going forward (and are what we've doubled down on at DealSage).
CFOs Admit AI Is Transforming Finance. But Only When Strategy Leads

At Fortune's Emerging CFO conference this month, finance leaders gave an honest assessment of where AI actually stands. The verdict: promising, but messy.
James Glover, Deloitte's finance transformation AI leader, captured the key insight: companies deploying AI one use case at a time without a broader plan are struggling to capture meaningful enterprise value. Implementations for agentic AI platforms often take 6-12 months. And the skills gap is real. "You actually have to train your people to use it, otherwise they're going to treat it like a Google search."
Webflow's CFO described early experiments using ChatGPT as a junior analyst. The result? "It's not ready for that yet. It's not great at math." His team also struggled with automating variance analysis due to inconsistent results. But after iterating and testing, they achieved greater reliability and impact. The message: "If you fail the first time, that's okay. Don't give up. Keep going."
Wells Fargo EVP Kunal Madhok put it bluntly at last month's Evident AI Symposium: "A lot of studies talk about plug-and-play POCs. That does not generate ROI. The big unlock is rethinking how you do things with the tools."
The details:
Deloitte: Companies deploying AI one use case at a time are struggling to capture enterprise value
Webflow CFO: Tested ChatGPT as junior analyst. Verdict: "It's not great at math."
Wells Fargo: "Plug-and-play POCs" don't generate ROI. You have to rethink how you do things.
INRIX: AI tools improved ARR forecasting accuracy to 95% with less manual input
Why it matters: The tools are ready. The question is whether your organization is. Most firms are still treating AI like a Google search instead of rethinking workflows from the ground up.
My take: The Deloitte quote is the key one. This isn't something you dabble in. You need to know what you're trying to achieve and then commit to it. Start running pilots. You will learn just by doing that. The process of running these pilots will help you understand what it is you're actually trying to do, and then you can devise a better plan. Most firms don't know what they're trying to achieve and so either aren't doing anything or are trying a shotgun approach which will lead to a thousand new vendor agreements. Be more thoughtful and methodical, but get started.
The AI Capital Feedback Loop Keeps Growing

Yes, another AI infrastructure story. This has dominated headlines for three weeks now. But the numbers keep getting bigger.
The companies supplying data centers, chips, and processing power to OpenAI have accumulated $96 billion in debt to fund their operations. SoftBank, Oracle, and CoreWeave have borrowed $30 billion. Blue Owl Capital and Crusoe have taken $28 billion. Another $38 billion is under negotiation.
The big five hyperscalers have accumulated $121 billion in new debt this year alone for AI operations. That's 4x their average debt level over the past five years.
What we're watching is a feedback loop. VC and PE pour capital into AI companies. Those companies need infrastructure. Infrastructure providers raise debt to build capacity. Valuations rise. More capital flows in. The cycle accelerates. Goldman Sachs analysts found that hyperscalers are using special purpose vehicles to keep the appearance of debt off balance sheets.
OpenAI has pledged $1.4 trillion to secure compute power over the next eight years. It expects $20 billion in revenue this year. HSBC estimates that even if OpenAI exceeds $200 billion in revenue by 2030, it will still need $207 billion in additional funding.
The details:
OpenAI partners' accumulated debt: $96B and climbing
Big five hyperscalers (Amazon, Google, Meta, Microsoft, Oracle): $121B in new debt this year for AI ops
That's 4x their average debt level over the past five years
SPVs being used to keep debt off balance sheets
HSBC: Even at $200B revenue by 2030, OpenAI still needs $207B more in funding
Why it matters: The feedback loop keeps accelerating. The question is whether revenue growth will ever match the capital deployed.
My take: This is the third straight week we've covered AI infrastructure financing. There's a reason. We're watching, in real time, the creation of a new asset class. The Blue Owl/Meta deal two weeks ago. The Oracle/Stargate financing. Now the $96B debt pile. Private credit and PE are now the financing engines behind elite tech infrastructure. It's interesting to see this convergence: PE with the infrastructure, VC with the applications. A pairing that's gone largely unheralded. The question isn't whether this continues. It's who ends up holding the bag if revenue growth doesn't materialize (but I’m pretty sure it will).
From The Trenches
The Security Question You're Actually Asking

Every AI conversation with deal teams eventually hits the same wall: "But what about security?"
One investor I spoke with recently said the quiet part out loud. He doesn't care about "AI ethics." He cares about whether the guy bidding against him can see his CIM.
"My biggest question is the privacy piece, right? Like, how safe is our data? Especially in this hyper competitive world where you're all competing for the same deals."
Most security decks talk about SOC2, encryption-at-rest, and model training policies. And whilst those all matter, what buyers really mean is:
Will this tool leak my deal to another customer?
Are my uploads being trained into a model my competitors use?
My answer to this question is always the same: What do you think Google and Microsoft have been doing with your data for the past 20 years? If you've ever sent a document over email, you've exposed it more than using a well-architected AI tool.
That said, a Stanford study published last month found that all six major AI chatbots train on user data by default unless you opt out. Some keep the data indefinitely. Some allow human reviewers to read your transcripts for training purposes. For multiproduct companies like Google, Meta, Microsoft, and Amazon, your chat data gets merged with everything else they know about you: search queries, purchases, social media.
The fear is that specific deal details will show up. That's a legitimate concern if you're using consumer chat products with default settings.
But if you want to avoid models training on your data, you have to use temporary chat features. Problem is, you lose the benefit of memory. That's a real trade-off.
Hope is not lost though. The better approach is to use APIs. Store the memory in the application, not with the LLM provider. Enterprise API contracts typically include explicit no-training clauses, 30-day (or less) data retention, and audit trails. That's what we do.
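The app-side memory pattern can be sketched in a few lines. This is an illustrative sketch, not our implementation: `MemoryStore` and `call_llm` are hypothetical names, and `call_llm` is a stub standing in for whatever enterprise API (with a no-training clause) you actually use. The point is structural: the application owns the conversation history and sends only a bounded window per request, so nothing persists with the provider between calls.

```python
# App-side memory: the application, not the LLM provider, owns history.

class MemoryStore:
    """Keeps per-deal conversation history inside the application."""

    def __init__(self):
        self._history = {}  # deal_id -> list of {"role", "content"} messages

    def append(self, deal_id, role, content):
        self._history.setdefault(deal_id, []).append(
            {"role": role, "content": content}
        )

    def context(self, deal_id, max_messages=20):
        # Only a bounded window ever leaves the application per request.
        return self._history.get(deal_id, [])[-max_messages:]


def call_llm(messages):
    # Placeholder for an enterprise API call governed by a contractual
    # no-training clause and a short (e.g. 30-day) retention window.
    return f"[analysis based on {len(messages)} messages]"


def ask(store, deal_id, question):
    store.append(deal_id, "user", question)
    answer = call_llm(store.context(deal_id))
    store.append(deal_id, "assistant", answer)
    return answer
```

The provider sees each request in isolation; continuity lives in the store, which sits behind the firm's own access controls and retention policy rather than the vendor's.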
The sophisticated buyers aren't asking "Is AI safe?" They're asking: "Is this any less safe than emailing a CIM? And are you ever, ever giving my edge to someone else?"
If you can answer that cleanly, you've cleared the real security hurdle.
Other Cool Stuff I’ve Read or Seen This Week:
BigBear.ai acquires AskSage for government AI push (Nov 29) - Generative AI platform used by 16,000+ government teams. High-margin recurring revenue. The boring government contracts are where the real AI money is.
Anthropic closes gap on OpenAI with $9B revenue run rate (Nov 28) - Large accounts up 7x YoY. Expects to break even by 2028, ahead of OpenAI (projecting $74B in operating losses that year). One of these companies is building a business. The other is building a story.
Meta in talks to spend billions on Google TPU chips (Nov 25) - Meta considering Google chips in its data centers by 2027. Nvidia fell 4%, AMD dropped 6%. Turns out "CUDA moat" isn't the wall Jensen thought it was.
Model ML raises $75M for AI workflow automation (Nov 24) - One of the largest fintech Series A rounds in history. Automates client-ready Word, PowerPoint, and Excel outputs. Big 4 client says AI modules freed up "over 90% capacity during review and prep stages." The junior banker's weekend just got shorter.
Cisco acquires NeuralFabric for enterprise SLM push (Nov 21) - Seattle startup helps companies build domain-specific small language models on their own data. Deal already closed. "Enterprises don't need another generic chatbot trained on the entire internet." Finally, someone said it.
Acquisition Intelligence is a weekly newsletter on AI in M&A for finance professionals, private equity investors, investment bankers, corp dev teams, and deal-makers.
For questions, feedback, or to share what you're seeing in the market, reply to this email.
P.S. I'm Harry, co-founder of DealSage. We're building an AI-native M&A intelligence platform to help deal professionals turn their institutional knowledge into better decisions. If you're curious what we're up to, check out dealsage.io or just reply here.
