Approach

How I think about research

Research streams

Most projects fall into a few buckets that feed each other: company work, macro context, and process.

  • Single-name deep dives: unit economics, competitive dynamics, and valuation versus peers.
  • Macro + factor work: regimes, rates, style rotations, and sector/region spreads.
  • ETF and index analysis: building simple ways to express views without stock picking.
  • Methods notes: Monte Carlo setups, sanity checks, and process frameworks.
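To give a flavor of the methods notes, here is a minimal Monte Carlo setup with a built-in sanity check. All parameters are illustrative placeholders, not figures from any published piece:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed so the run is reproducible

# Illustrative assumptions -- placeholders, not figures from the papers.
years = 10
n_paths = 10_000
mean_return = 0.07   # assumed annual mean return
volatility = 0.16    # assumed annual volatility

# Simulate terminal wealth of $1 under normally distributed annual returns.
annual = rng.normal(mean_return, volatility, size=(n_paths, years))
terminal = np.prod(1 + annual, axis=1)

# Sanity check: the simulated mean should sit near the analytic (1 + mu)^T.
print(f"mean terminal wealth:   {terminal.mean():.2f} (analytic ~{1.07**years:.2f})")
print(f"5th / 95th percentile:  {np.percentile(terminal, 5):.2f} / {np.percentile(terminal, 95):.2f}")
```

The seed and the analytic cross-check are the point: anyone rerunning the notebook gets the same paths and the same comparison against the closed-form answer.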

Principles

Start with the business: customers, cash flows, and competitive moat. Then layer macro and sentiment on top, rather than letting the narrative lead.

I try to keep a clear distinction between "great company" and "great stock", think in probabilities, and make sure each conclusion has a traceable path back to data, assumptions, and scenarios.

Tools are mostly Python and Excel. The goal is not fancy models but transparent ones that make it easy to see where things break.

In practice

From the work

These aren't abstract principles. Here's what they actually look like in the research.

Great company, wrong price
Valuation is not the same as quality

"Beyond the Buzz: Why Palantir's Valuation May Not Add Up" grants that Palantir has real contracts, sticky government customers, and a growing commercial segment. But the stock was trading above 500x trailing earnings, far ahead of SaaS peers sitting at a 99x median. The analysis started from that gap and worked backward through comps, scenario analysis, and cash flow math to ask: how much of this price is in the fundamentals, and how much is the narrative?
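The reverse-engineering step can be sketched as a toy calculation. Using the perpetuity-growth shorthand P/E ≈ 1 / (r − g) with full payout, you can ask what long-run growth a multiple implies; the discount rate below is an assumption for illustration, not the deck's actual model:

```python
# Toy reverse valuation: what growth does a multiple imply?
# Perpetuity-growth shorthand (P/E ~ 1 / (r - g), full payout);
# round-number inputs for illustration, not the deck's model.

def implied_growth(pe: float, discount_rate: float) -> float:
    """Perpetual growth a simple perpetuity model needs to justify a P/E."""
    return discount_rate - 1.0 / pe

r = 0.09  # assumed discount rate
for pe in (15, 99, 500):  # global median, SaaS peer median, Palantir-like
    g = implied_growth(pe, r)
    print(f"P/E {pe:>3}x -> implied perpetual growth {g:6.2%}")
```

The toy version already makes the shape of the problem visible: at 500x, the implied growth sits almost on top of the discount rate, which is another way of saying nearly all of the price is a bet on the future rather than the current business.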

Always ask "versus what"
Context changes the answer

"US vs Global ETF Valuation Spreads" started with a simple question: is the premium earned? At 28x P/E versus a global median around 15x, the spread looks large. But when you break it down, more than a quarter of VOO sits in five mega-cap names, and "Pricing the AI Narrative" found the same thing from the other direction: nearly all S&P 500 outperformance since 2023 traces to a narrow AI-linked basket, not broad earnings growth. Neither piece would land the same way without the other.
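The concentration check itself is a one-liner. A sketch with made-up weights (VOO's actual holdings change daily; these are placeholders, not current data):

```python
import pandas as pd

# Placeholder weights -- illustrative only, not current VOO holdings.
weights = pd.Series({
    "AAPL": 0.070, "MSFT": 0.065, "NVDA": 0.060,
    "AMZN": 0.038, "GOOGL": 0.037, "other_495": 0.730,
})

# Share of the index sitting in the five largest single names.
top5 = weights.drop("other_495").nlargest(5).sum()
print(f"top-5 weight: {top5:.1%}")  # the 'more than a quarter' check
```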

Stress-test the conclusion
Find the cracks before someone else does

"The Unraveling Guarantee: Has the U.S. Lost Its Exorbitant Privilege?" made a bear case on U.S. fiscal sustainability, then spent several pages steelmanning the other side: dollar reserve status, the domestic savings pool, Fed credibility, and the historical record of countries carrying high debt loads for extended periods. The conclusion held, but working through the counterarguments was the point. A thesis that doesn't survive its own stress test probably isn't ready to defend.

Methodology

Show your work

Research that can't be replicated isn't research. It's an opinion with a chart.

Tools & data sources

Everything is off-the-shelf. No proprietary data, no black boxes.

  • Python (pandas, numpy, matplotlib, scipy) for quantitative analysis and visualization.
  • Excel for financial modeling, comps tables, and scenario builds.
  • Data from Yahoo Finance, FRED, CBO, Pitchbook, and primary filings, all cited in the work.
  • Source files, spreadsheets, and notebooks on GitHub.

Reproducibility

If someone with the same data can't get to the same conclusion, the analysis isn't done. Every chart in these papers cites its source. Every multiple has a date stamp. Every peer group has a written rationale for why those names are in it and not others.

The goal is models you can argue with. Not because every reader should rebuild the spreadsheet, but because writing down every assumption forces you to find the ones that don't hold. That's the whole point of the exercise.

Scenario analysis is part of this. A base case without a stress test is just advocacy with good formatting. The Palantir deck runs three valuation scenarios. The debt paper models three rate paths through 2035. The point is to show where the conclusion breaks, not just where it holds.
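The scenario structure is simple enough to sketch. A toy version of the rate-path exercise, with invented paths and an invented break-even line rather than the paper's actual inputs:

```python
# Toy scenario grid: interest cost under three rate paths.
# All numbers invented for illustration; the paper's model differs.

debt_gdp = 1.0   # assumed debt held at ~100% of GDP
paths = {        # assumed average effective rates over the horizon
    "low":  0.025,
    "base": 0.035,
    "high": 0.050,
}
threshold = 0.04  # illustrative line where interest crowds out other spending

for name, rate in paths.items():
    interest_gdp = debt_gdp * rate
    flag = "breaks" if interest_gdp > threshold else "holds"
    print(f"{name:>4}: interest ~{interest_gdp:.1%} of GDP -> {flag}")
```

Writing the grid this way forces the break-even line into the open: the conclusion isn't "rates are bad," it's "above this specific path, the arithmetic stops working."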