Annotations (12)
“I think one thing that is going to be really important and already was historically, but I think carries a lot more weight going forward, is just the importance of documentation of thinking and decisions and research. Essentially these models, the usefulness of them rises exponentially with the amount of context they're given.”— David Plon
Operations & Execution · Strategy & Decision Making · Technology & Engineering
DUR_ENDURING
Documentation compounds as AI context
“What I think historically has been hard, but now is much simpler, is being able to cast a wider net of pulling in relevant data points in a much sparser set of data. So for instance, let's say you own Expedia. One of the key investment factors for Expedia is always going to be what's happening in the hotel ecosystem and how volume is trending, how pricing's trending or what OTAs are doing with their distribution strategy.”— David Plon
Operations & Execution · Strategy & Decision Making
DUR_ENDURING
Sparse signal extraction across value chain
“The mental model that I think still works really well is imagine you are writing an email to somebody maybe overseas who's going to work overnight and is going to be doing a task for you. Assume they're smart, but maybe lack a lot of context on you. What information would you want them to have to be able to do a good job? Then whatever you end up writing is probably a pretty good starting point for an effective prompt.”— David Plon
Operations & Execution · Psychology & Behavior · Technology & Engineering
DUR_ENDURING
Prompt like delegating to remote analyst
“I have like a set of bullet points that I try to encapsulate, here's how I think about being an analyst and some of the things you might learn through your experience. One of those is I always remind the model, look, you're going to largely be reading commentary from management teams. They are always biased positively. It's important you take a skeptical eye to anything they're saying.”— David Plon
Psychology & Behavior · Technology & Engineering · Operations & Execution
DUR_ENDURING
Transfer skepticism in prompts explicitly
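A minimal sketch of what encoding that "analyst experience" as a reusable system prompt might look like, assuming a generic chat-style interface. The bullet wording, function names, and message structure are illustrative, not Plon's actual setup.

```python
# Hypothetical sketch: transferring analyst skepticism into a reusable system prompt.
# The specific bullets paraphrase the discussion; names and structure are illustrative.

ANALYST_SYSTEM_PROMPT = """\
You are acting as an experienced equity research analyst.
Ground rules learned from experience:
- You will largely be reading commentary from management teams. Management is
  always biased positively; take a skeptical eye to anything they say.
- Distinguish hard, specific guidance from soft or qualitative guidance.
- Flag claims that cannot be verified from the materials provided.
"""

def build_messages(task: str, context: str) -> list[dict]:
    """Assemble a chat-style message list usable with any chat-completion client."""
    return [
        {"role": "system", "content": ANALYST_SYSTEM_PROMPT},
        {"role": "user", "content": f"Background:\n{context}\n\nTask:\n{task}"},
    ]

if __name__ == "__main__":
    msgs = build_messages(
        task="Summarize management's tone on hotel pricing this quarter.",
        context="Q3 earnings call transcript excerpts (pasted below)...",
    )
    for m in msgs:
        print(m["role"].upper(), "->", m["content"][:60], "...")
```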
“One analysis that now is very trivial, but took a lot of time historically was I would go back through the last 3, 4 years and lay out every piece of guidance that the management team had given, both obviously the hard specific guidance, but also anything qualitative or soft, like, oh, we expect revenue to accelerate sometime in the second half of the year. And I think building up a picture of management's guidance style and credibility is really important.”— David Plon
Leadership & Management · Psychology & Behavior · Operations & Execution
DUR_ENDURING
Track guidance history for credibility
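A minimal sketch of the guidance catalog described above, assuming each item is logged (by hand or via an LLM extraction pass) from 3-4 years of transcripts. The field names, example entries, and the hit-rate heuristic are illustrative assumptions.

```python
# Hypothetical sketch: cataloging several years of management guidance, both hard
# (specific numbers) and soft (qualitative), to build a credibility picture.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GuidanceItem:
    period: str                  # e.g. "FY2023 Q1"
    metric: str                  # e.g. "revenue growth"
    kind: str                    # "hard" (specific number) or "soft" (qualitative)
    stated: str                  # guidance as given, verbatim or paraphrased
    guided_value: Optional[float] = None   # midpoint for hard guidance
    actual_value: Optional[float] = None   # realized result, filled in later

def hit_rate(items: list[GuidanceItem]) -> float:
    """Share of hard guidance items where the actual result met or beat the guide."""
    hard = [i for i in items if i.kind == "hard"
            and i.guided_value is not None and i.actual_value is not None]
    if not hard:
        return float("nan")
    return sum(i.actual_value >= i.guided_value for i in hard) / len(hard)

catalog = [
    GuidanceItem("FY2023 Q1", "revenue growth", "hard", "8-10% y/y", 9.0, 9.5),
    GuidanceItem("FY2023 Q2", "revenue growth", "soft",
                 "we expect revenue to accelerate in the second half of the year"),
]
print(f"hard-guidance hit rate: {hit_rate(catalog):.0%}")
```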
“The other thing that has been really helpful is just expanding what gets moved up earlier in the pipeline in terms of types of analyses. So for instance, I mentioned earlier CEO compensation. One of the things that I would do if I was really getting into a name is go through the last 5 proxies and try to map out what are the metrics that the CEO is being comped on, and how have those changed and the weighting of those changed.”— David Plon
Operations & Execution · Strategy & Decision Making · Leadership & Management
DUR_ENDURING
Move deep analysis earlier in funnel
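A minimal sketch of the proxy-mapping exercise, assuming comp metrics and weightings have been pulled from five proxy statements. The metric names and weights are placeholders, not data from any real filing.

```python
# Hypothetical sketch: mapping CEO comp metrics and weightings across five proxy
# years and diffing them to see how emphasis shifted. All values are placeholders.

comp_metrics: dict[int, dict[str, float]] = {
    2020: {"revenue growth": 0.40, "adjusted EBITDA": 0.40, "TSR": 0.20},
    2021: {"revenue growth": 0.35, "adjusted EBITDA": 0.40, "TSR": 0.25},
    2022: {"revenue growth": 0.30, "adjusted EBITDA": 0.40, "TSR": 0.30},
    2023: {"free cash flow": 0.35, "adjusted EBITDA": 0.35, "TSR": 0.30},
    2024: {"free cash flow": 0.45, "adjusted EBITDA": 0.25, "TSR": 0.30},
}

def weighting_changes(metrics_by_year: dict[int, dict[str, float]]) -> None:
    """Print how each metric's weight changed between consecutive proxies."""
    years = sorted(metrics_by_year)
    for prev, curr in zip(years, years[1:]):
        for m in sorted(set(metrics_by_year[prev]) | set(metrics_by_year[curr])):
            before = metrics_by_year[prev].get(m, 0.0)
            after = metrics_by_year[curr].get(m, 0.0)
            if before != after:
                print(f"{prev}->{curr}: {m}: {before:.0%} -> {after:.0%}")

weighting_changes(comp_metrics)
```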
“When I write prompts today, I usually outline a specific task and why the task is happening in the same way where if you give an analyst, build this cost curve, they say, okay. If you say, hey, build this cost curve because I think it might be shifting and that could imply something about future changes in the pricing, that's helpful. So I usually provide some background context. I provide a task. I might specify an output. I might say, hey, I want your output to look like this.”— David Plon
Operations & Execution · Technology & Engineering
DUR_CONTEXTUAL
Five-part prompt structure template
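A minimal sketch of that prompt structure, assuming the five parts are background context, the task, why the task matters, an output specification, and an optional example of the desired output. The function signature and the example values are illustrative, not a prescribed template.

```python
# Hypothetical sketch of a structured prompt builder: background, task, rationale,
# output specification, and an optional example of the desired output.

def build_prompt(background: str, task: str, rationale: str,
                 output_spec: str, example_output: str | None = None) -> str:
    sections = [
        f"Background:\n{background}",
        f"Task:\n{task}",
        f"Why this matters:\n{rationale}",
        f"Output format:\n{output_spec}",
    ]
    if example_output:
        sections.append(f"Example of the output I want:\n{example_output}")
    return "\n\n".join(sections)

prompt = build_prompt(
    background="We own a position in an online travel agency; unit economics are under review.",
    task="Build the industry cost curve from the attached filings.",
    rationale="The cost curve may be shifting, which could imply future pricing changes.",
    output_spec="A table with one row per producer: capacity, unit cost, source.",
)
print(prompt)
```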
“Where I think AI can be really helpful in that process is getting you enough information to know whether an idea should be killed. I used to do this accounting at the end of every year when I was an investor. How many ideas did I really look at? A remarkably low number that made it through to the deep research stage.”— David Plon
Strategy & Decision Making · Operations & Execution
DUR_ENDURING
AI accelerates idea triage to kill faster
“What I've seen the most adoption and firms that are taking the most advantage of this is finding the right balance between having firm-wide initiatives that don't force people to change their behavior while letting individuals experiment and figure out where AI is going to be additive and reducing friction without negatively impacting conviction.”— David Plon
Leadership & Management · Strategy & Decision Making · Operations & Execution
DUR_ENDURING
Augment workflows, don't mandate change
“There are three main categories where AI can be useful. There's certainly idea generation, so finding the ideas that best fit my mental models. I knew there were probably dozens out there that were totally in my sweet spot, but at any given time, it was very hard to know if I was working on one of those.”— David Plon
Operations & Execution · Strategy & Decision Making · Technology & Engineering
DUR_ENDURING
Three research phases AI can accelerate
“The difference with an LLM is that the response is, for all intents and purposes, instant. The real insight, I think, comes from doing exactly what you just described, which is iterate on this. The cost of sending a single query is trivial. Start simple, start adding complexity as it's helpful. It's really like a two-way dance to end up in a spot where the AI really is acting like a useful analyst and you don't need to wait overnight to get the content to then see how I give feedback.”— David Plon
Operations & Execution · Technology & Engineering
DUR_ENDURING
Instant feedback enables rapid iteration
“I was one of these guys that could never really outsource model building. I always had to build a model myself in order to feel conviction in recommending a position based on that company. There were certainly aspects that limited me in terms of how productive I could be in finding and researching the best ideas. I would bucket it all into the category of there's just a ton of information out there that I could potentially consume.”— David Plon
Psychology & Behavior · Operations & Execution
DUR_ENDURING
Friction in research builds conviction
Frameworks (2)
Three-Stage Research Acceleration
AI-Enhanced Investment Research Funnel
A framework for applying AI to the three core stages of investment research: idea generation (pattern matching to mental models), context building (efficient table-stakes understanding), and ongoing monitoring (ecosystem signal extraction). Each stage addresses a different information processing bottleneck that constrains research productivity.
Components
- Idea Generation: Mental Model Pattern Matching
- Context Building: Accelerated Triage and Table Stakes
- Position Monitoring: Sparse Signal Extraction Across Value Chains
Prerequisites
- Clear articulation of investment mental models
- Access to comprehensive data sources
- Willingness to iterate on criteria definitions
Success Indicators
- Increased idea flow matching firm's sweet spot
- Faster time to kill or advance ideas
- Fewer surprises from ecosystem developments
Failure Modes
- Over-reliance on AI output without human judgment
- Poorly defined screening criteria
- Treating triage output as deep research
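A minimal sketch of the triage stage of this funnel, front-loading explicit kill criteria so ideas are rejected cheaply before the deep-research stage. The criteria, field names, and thresholds are placeholder assumptions.

```python
# Hypothetical sketch: front-loading kill criteria so ideas that don't fit are
# rejected before deep research. Criteria and fact keys are placeholders.
from dataclasses import dataclass, field

@dataclass
class Idea:
    ticker: str
    facts: dict = field(default_factory=dict)   # filled in by quick AI-assisted triage

KILL_CRITERIA = [
    ("does not fit the firm's stated mental models",
     lambda f: not f.get("fits_mental_models", False)),
    ("CEO compensation metrics misaligned with the thesis",
     lambda f: not f.get("comp_metrics_aligned", False)),
]

def triage(idea: Idea) -> list[str]:
    """Return the kill reasons that fire; an empty list means advance to deep research."""
    return [reason for reason, failed in KILL_CRITERIA if failed(idea.facts)]

idea = Idea("EXPE", facts={"fits_mental_models": True, "comp_metrics_aligned": True})
print(triage(idea) or "advance to deep research")
```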
Management Credibility Assessment
Historical Guidance Pattern Analysis
A systematic approach to evaluating management team credibility by tracking 3-4 years of both hard and soft guidance, mapping revision patterns, and identifying behavioral tendencies. This analysis surfaces subtle patterns like kitchen-sinking versus serial sandbagging that aren't visible from quarterly beat/miss data alone.
Components
- Catalog All Guidance
- Map Revision Patterns
- Identify Credibility Patterns
Prerequisites
- Access to transcript history
- 3+ years of company history
Success Indicators
- Clear pattern emerges
- Model adjustments reflect credibility profile
- Fewer surprises from guidance changes
Failure Modes
- Management team change invalidates historical pattern
- Credibility pattern not factored into model
- Overfitting to recent behavior
Mental Models (9)
Friction as Conviction Builder
Decision Making
The principle that effortful work in research and analysis isn't pure inefficiency; it serves the function of building conviction through deep engagement with material.
In Practice: David Plon explaining why he couldn't outsource model building
Demonstrated by Leg-dp-001
Kill Criteria Front-Loading
Decision Making
The practice of surfacing disqualifying information as early as possible in the research funnel to avoid wasting time on ideas that will ultimately fail to meet investment criteria.
In Practice: Discussion of using AI to surface deal-breaking information faster
Demonstrated by Leg-dp-001
Sparse Signal Extraction
Systems Thinking
The capability to filter for thesis-relevant data points in a high-noise, low-signal environment.
In Practice: Example of monitoring Expedia by extracting only OTA-relevant signals from the broader hotel ecosystem
Demonstrated by Leg-dp-001
Instant Feedback Iteration
Systems Thinking
The compounding advantage of near-instantaneous feedback loops that enable rapid iteration.
In Practice: Contrasting AI's instant response with overnight delegation, enabling real-time iteration on prompts and output
Demonstrated by Leg-dp-001
Credibility Cycles
Psychology
The pattern by which individuals or organizations build and spend reputation capital over time.
In Practice: Tracking management guidance patterns over years to assess credibility
Demonstrated by Leg-dp-001
Training Bias Transfer
Psychology
The need to explicitly counteract learned biases when delegating to entities trained or socialized with those biases.
In Practice: Reminding AI models that management commentary is biased positively
Demonstrated by Leg-dp-001
First-Mover Advantage in Tool Adoption
Strategic Thinking
In rapidly evolving technology landscapes, early adoption and experimentation create compounding advantages.
In Practice: Discussion of how firms that document research decisions now will have irreplaceable context for future AI tools
Demonstrated by Leg-dp-001
Adoption Through Augmentation
Strategic Thinking
The principle that technology adoption is most successful when tools augment existing workflows rather than force behavior change.
In Practice: Explanation of why firm-wide AI adoption works best when tools provide value without mandating changes to individual workflows
Demonstrated by Leg-dp-001
Context Compounding
Mathematics
The exponential increase in value of accumulated context over time when that context can be leveraged by increasingly capable models.
In Practice: The rising value of research documentation as AI context windows expand and model capabilities improve
Demonstrated by Leg-dp-001
Connective Tissue (2)
Delegation to remote overnight worker
The mental model of writing an AI prompt as if delegating a task to a smart but context-lacking analyst working remotely overnight. This frames prompt engineering not as technical skill but as a delegation and communication problem. Just as you would provide background, task definition, output expectations, and constraints to a human analyst, you structure prompts the same way. The parallel illuminates that effective AI use is fundamentally about information architecture and clear instruction, not specialized technical knowledge.
David Plon explaining his approach to prompt writing by analogizing to delegating work to an overseas analyst
Sports analytics teams tracking data before analytics mattered
The parallel between sports teams that maintained detailed tracking systems decades before analytics became valuable, and investment firms documenting research processes today in anticipation of AI leverage. Teams like the Oakland A's (popularized in Moneyball) had been collecting granular player data since the 1990s, which only became actionable when analytical tools and organizational buy-in emerged in the 2000s. Similarly, firms that document investment memos, thesis evolution, and decision rationales today are building a corpus that will become exponentially more valuable as AI context windows expand and agentic reasoning improves. The teams without historical data found it impossible to backfill decades of decisions and thinking.
Discussion of how documentation practices today will compound in value as AI capabilities advance
Glossary (3)
agentic
DOMAIN_JARGON
Having agency; capable of autonomous action and decision-making toward a goal
“As their agentic reasoning gets more heavily utilized to do longer running tasks”
scaffolding
DOMAIN_JARGON
Supportive structure that constrains or guides behavior
“There's different levels of scaffolding and constriction around what the model can do”
harnesses
DOMAIN_JARGON
Control systems or interfaces that channel capability toward specific uses
“Both in terms of how the models are trained as well as the harnesses around them”
Concepts (3)
value chain analysis
CL_STRATEGY
Mapping of all activities from raw material to end customer, identifying where value is created
context window
CL_TECHNICAL
The amount of text an AI model can take into account at one time when generating a response
AGI (Artificial General Intelligence)
CL_TECHNICAL
Hypothetical AI that matches or exceeds human cognitive abilities across all domains
Synthesis