Most bankers get less than 30% of what they actually need from AI because their prompts are vague, generic, and structurally broken. Let's fix that.
Paste the prompt you currently use. We'll score it ruthlessly and show you exactly what you're missing.
Tip: Press Ctrl + Enter to grade
Try these bad examples:
Select a workflow and paste your prompt to see how it scores against professional standards.
We use Scale AI's rubric methodology to evaluate 12 critical dimensions.
Prompts built using Scale AI's rubric methodology. Each template is self-contained, specific, and optimized for M&A workflows.
Time saved translates directly to money saved, or billable hours recovered.
At £40-55/hour analyst cost, that's £600-1,100 saved per deal (roughly 15-20 analyst hours recovered)
One-time payment. Lifetime access. Quarterly updates included.
Same task. Different prompts. Dramatically different outputs.
A generic 500-word essay about software industry trends with buzzwords like "digital transformation" and "synergies." No specific metrics, no structure, nothing you can actually use. You'll spend 2 hours rewriting it.
A structured 2-page teaser with specific metrics, proper sections, professional formatting, and placeholders exactly where you need them. 15 minutes of polish and it's ready for MD review.
A generic explanation of what an LBO is, followed by "typical" assumptions that don't match your deal. No tables, no scenarios, no sources & uses. You end up building from scratch anyway.
Complete assumption set with all 5 sections, three scenarios, proper tables you can paste into Excel, and sensitivities. Ready for model input, not explanation.
A list of 20 generic items like "financial statements" and "contracts" with no prioritization, no organization, and missing half the categories you need. Partner asks "where's the working capital analysis request?" and you look incompetent.
150+ organized request items across 7 workstreams, prioritized, with suggested folder structure and industry-specific additions. Partner is impressed you thought of things they didn't.
The difference isn't AI capability. It's prompt engineering.
Grade Your Prompts
How 50+ M&A professionals actually use AI. What works, what doesn't, and the prompts that save the most time.
No spam. Unsubscribe anytime. We send maybe 2 emails per month.
"I was skepticalβI thought my prompts were fine. Then I ran them through the grader and got 18%. The improved versions actually produce usable first drafts."
"The CIM prompts alone saved me 6+ hours on my last deal. And the output actually sounded like it was written by a banker, not a chatbot."
"Bought for the team after one analyst showed me what he was producing. ROI in the first week. Every junior should have this."
Generic prompts are written by people who've never closed a deal. Our prompts are built by M&A practitioners using Scale AI's professional grading methodology. Every prompt specifies: exact deliverable format, required data sources, industry-specific metrics, success criteria, and error tolerances. The difference is getting a usable first draft vs. getting a Wikipedia summary.
Yes. Our prompts are model-agnostic and work with ChatGPT (GPT-4), Claude (Opus/Sonnet), Gemini, and any other major LLM. We recommend Claude for longer analytical work and GPT-4 for faster iterations. Each prompt template includes a recommendation for which model works best.
The Analyst tier is for individual use. The Team tier covers up to 5 users. The Firm tier is unlimited. If you're caught sharing a single-user license, we'll just ask you to upgrade; no drama, we get it.
AI is not magic. Even with perfect prompts, you need to: (1) provide the actual data/context, (2) review and iterate, (3) apply your professional judgment. Our prompts maximize the probability of good output; they don't guarantee perfection. That said, if you're consistently getting bad results, email us and we'll help troubleshoot.
Because polite feedback doesn't fix bad prompts. If we tell you "your prompt is pretty good!" when it's actually mediocre, we're wasting your time. The grader uses Scale AI's professional rubric methodology, the same framework used to evaluate AI outputs at frontier labs. Harsh but accurate beats nice but useless.
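Under the hood, a rubric grader is a weighted checklist: score each dimension, weight it, and sum to a percentage. A toy Python sketch of that idea (the dimensions and weights below are illustrative, not Scale AI's actual rubric):

```python
# Toy rubric scorer: each dimension gets a 0-1 score; weights aggregate
# the scores into a 0-100 grade. Dimensions and weights are illustrative,
# not Scale AI's actual rubric.

RUBRIC = {  # dimension: weight (weights sum to 1.0)
    "specificity": 0.25,
    "output_format_defined": 0.20,
    "context_provided": 0.20,
    "success_criteria": 0.20,
    "constraints_stated": 0.15,
}

def grade(scores: dict[str, float]) -> float:
    """scores maps each dimension to a 0-1 judgment; returns 0-100."""
    return 100 * sum(weight * scores.get(dim, 0.0)
                     for dim, weight in RUBRIC.items())

vague_prompt = {"specificity": 0.2, "context_provided": 0.5,
                "constraints_stated": 0.3}
print(grade(vague_prompt))  # ~19.5, the kind of score that stings
```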
Yes. If you use the prompts on an actual deal and they don't save you time, email us within 30 days with specifics and we'll refund you. No questions asked. We've never had a refund request from someone who actually used the product.
Quarterly. AI models evolve, market practices change, and we're constantly learning from user feedback. All updates are included in your purchaseβno additional fees.
Professional prompts work together. Here's how to chain them for complete deal workflows.
From engagement to close: 8-prompt chain
Chain Logic: Buyer Universe output feeds into Teaser targeting. Teaser metrics inform CIM depth. Process Letter uses CIM highlights. Each prompt builds on previous context.
⏱️ Total time saved: 40-60 hours per deal
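Mechanically, each chain is the same pattern: run a prompt, append its output to the deal context, feed the enriched context into the next prompt. A minimal Python sketch of that loop (the templates and the `call_llm` stub are illustrative placeholders, not the actual prompt pack):

```python
# Minimal sketch of a prompt chain: each step's output is appended to the
# running context for the next step. call_llm() is a stub; swap in your
# actual client (OpenAI, Anthropic, etc.). Templates are illustrative.

TEMPLATES = {
    "buyer_universe": "Identify strategic and financial buyers for: {context}",
    "teaser": "Draft a 2-page teaser using: {context}",
    "cim": "Outline a CIM expanding the teaser sections in: {context}",
    "process_letter": "Draft a process letter from the highlights in: {context}",
}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in your model client here")

def run_chain(deal_context: str, steps: list[str]) -> dict[str, str]:
    outputs, context = {}, deal_context
    for step in steps:
        result = call_llm(TEMPLATES[step].format(context=context))
        outputs[step] = result
        context += f"\n\n--- {step} output ---\n{result}"  # feed forward
    return outputs

# run_chain("deal summary goes here", list(TEMPLATES))
```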
From screening to IC: 10-prompt chain
Chain Logic: Screening identifies targets. Thesis shapes DD focus areas. QoE findings feed LBO assumptions. MIP design uses LBO return expectations. All synthesize into IC memo.
⏱️ Total time saved: 50-80 hours per deal
From close to exit: 7-prompt chain
Chain Logic: Integration plan sets 100-day priorities. Synergy tracking feeds portfolio reviews. Add-on screening supports buy-and-build. Exit analysis informs LP updates on realized returns.
⏱️ Total time saved: 30-50 hours per quarter
Calculate your potential time and cost savings.
Calculation basis: Average of 40 hours saved per deal across all prompts. Actual savings vary by prompt complexity and usage frequency. Conservative estimates used.
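The math behind the calculator is simple: hours saved per deal × analyst cost per hour × deals per year. A quick sketch using the same 40-hour assumption (plug in your own rate and deal count):

```python
# Back-of-envelope savings: hours saved per deal x cost per hour x deals
# per year. The 40-hour default mirrors the calculator's assumption above.

def annual_savings(deals_per_year: int,
                   hours_saved_per_deal: float = 40.0,
                   cost_per_hour: float = 55.0) -> float:
    return deals_per_year * hours_saved_per_deal * cost_per_hour

print(annual_savings(6))                      # 6 deals/yr at £55/hr -> 13200.0
print(annual_savings(6, cost_per_hour=40.0))  # lower-bound rate -> 9600.0
```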
Not all models are created equal. Here's what works for what.
| Use Case | Claude 3.5 Sonnet | GPT-4 Turbo | Gemini 1.5 Pro | o1-preview |
|---|---|---|---|---|
| Long Documents (CIMs, IC Memos, Research) | ★★★ Best | ★★ Good | ★★★ Best | ★ Slow |
| Financial Analysis (LBO, DCF, QoE) | ★★★ Best | ★★ Good | ★★ Good | ★★★ Best |
| Quick Iterations (Edits, Refinements) | ★★ Good | ★★★ Best | ★★ Good | ★ Too Slow |
| Code & Tables (Excel formulas, Data) | ★★ Good | ★★★ Best | ★★ Good | ★★★ Best |
| Following Instructions (Format, Structure) | ★★★ Best | ★★ Good | ★★ Good | ★★★ Best |
| Cost Efficiency ($ / output quality) | ★★ $$$ | ★★★ $$ | ★★★ $ | ★ $$$$ |
Start with Claude 3.5 Sonnet for most M&A work: best balance of quality, speed, and instruction following. Switch to o1-preview for complex financial modeling where accuracy is critical and time isn't.
Use GPT-4 for rapid iterations and refinements once you have a good first draft. The speed advantage compounds when you're making multiple small edits.
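If you want to encode that guidance directly in a workflow, a simple routing table does it. A sketch based on the ratings above (the model names are illustrative labels, not exact API identifiers):

```python
# Route each use case to the model the comparison table rates highest.
# Values are illustrative labels, not guaranteed API model identifiers.

MODEL_BY_WORKFLOW = {
    "long_documents": "claude-3.5-sonnet",      # CIMs, IC memos, research
    "financial_analysis": "o1-preview",         # LBO, DCF, QoE: accuracy first
    "quick_iterations": "gpt-4-turbo",          # edits and refinements
    "code_and_tables": "gpt-4-turbo",           # Excel formulas, data
    "instruction_following": "claude-3.5-sonnet",
}

def pick_model(workflow: str, default: str = "claude-3.5-sonnet") -> str:
    return MODEL_BY_WORKFLOW.get(workflow, default)

print(pick_model("financial_analysis"))  # -> o1-preview
```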
How M&A professionals actually use these prompts on live deals.
€85M Enterprise Value | Software Sector
"The buyer universe prompt alone saved us 15 hours of research. We identified 3 strategic buyers we would have missed, one of which ended up being the eventual acquirer."
– VP, Boutique M&A Advisory
€150M Enterprise Value | Industrial Services
"We were in a competitive process with limited exclusivity. The prompts let us turn around a full IC memo in 48 hours. Partner said it was the most comprehensive memo he'd seen at that stage."
– Associate, Mid-Market PE Fund
€220M Enterprise Value | Consumer Goods Division
"Carve-outs are brutal. The standalone cost analysis prompt forced us to think through every cost category systematically. Found €8M of stranded costs the seller hadn't accounted for; that went straight into our price negotiation."
– Director, Global Investment Bank
"I was skeptical. Another AI prompt collection? But the specificity is insane. The IC memo prompt literally has fields for every section my firm requires. Saved me 6 hours on my last deal."
"The grader destroyed my ego but improved my prompts 10x. I thought I was good at thisβturns out I was getting 25% of the potential value. Now consistently scoring 80+."
"Shared with my team of 4. We've standardized on these prompts for all first drafts. Quality consistency went up, review time went down. MD noticed the improvement."
"The industry customization is the killer feature. I cover healthcareβthe prompts know about IQVIA, reimbursement dynamics, clinical trial pipelines. Not generic garbage."
"Finally someone who understands what we actually need. I've bought 3 'AI for finance' courses that were useless. This is the first product that actually maps to real deliverables."
"Worth it for the LBO assumptions prompt alone. I've rebuilt that from scratch 50+ times in my career. Now I get a structured starting point in 30 seconds."