The Long Game 169: AI Investment Thesis, Peter Attia, Earth AI, Science-Based Lifting
Why Smart People Believe Stupid Things, Building What's Fundable, How to be Creative, and Much More!
Hi, it’s Mehdi Yacoubi, co-founder at Mirage Metrics, OrderFlow, and Mirage Exploration.
This is The Long Game, a weekly newsletter about technology, operations, AI, building a company, health, wellness, and the decisions that compound over years. More than 5,000 people read it every week.
If you want it in your inbox, you can subscribe here:
In this episode, we explore:
Peter Attia: grifter or longevity visionary?
Be the guy
Smart people believing stupid things
AI investment thesis
Earth AI
Let’s dive in!
Health
Peter Attia: Grifter or Longevity Visionary?
A lot of people are asking whether Peter Attia is a grifter or whether he’s actually pushing useful ideas about longevity. The truth is more boring and more human. Attia is rigorous, smart, and serious about the science. He’s mainly pushing exercise, muscle, strength, and metabolic health. These are solid fundamentals. But he also sells the products he believes in, which can lead to some questionable situations, especially when you don’t share his underlying beliefs.
Because Attia has become so committed to increasing his personal protein intake, he has invested in a protein bar company (David) and a venison jerky company (Maui Nui).
Does this make him a grifter or a charlatan?
I would still say no, because he clearly believes strongly in what he is pushing, to the point where he is constantly munching on venison jerky sticks, consuming up to 10 per day. Will he make money off the sales of David protein bars? Yes, but I’m convinced this was not his motivation for becoming involved with the company. I’m sure he got tired of venison jerky and was seeking more varied (and healthier?) sources of easy-to-access protein.
But I think an important point is that the pressure to constantly create content makes anyone drift into talking about things that may not matter as much. I’ve been in the longevity world for years. I’ve tried most of the practices. I’ve measured everything. I pushed hard on nutrition, zones, sleep, biomarkers, rapamycin debates, extreme protein targets, everything. And over time, something became obvious to me: the more obsessed I became with optimizing my health, the worse I felt.
There’s a point where vigilance turns into self-surveillance. When your attention is always pointed inward, small sensations turn into symptoms. Innocent aches become problems. You start triggering mind-body loops that didn’t exist before. The longevity crowd almost never talks about this psychological component. It’s the blind spot of the entire “optimize everything” movement.
That’s why I’ve shifted my own philosophy. I still think Attia has valuable lessons. I might disagree with him on a few things (like protein intake), but overall, he’s rigorous. He explains complexity well. He updates his views. He’s not a scammer.
But I’ve learned to take what’s useful and ignore the rest. Today, I focus on the basics. I strength train because I love it. I try to get better at something I enjoy instead of chasing every new protocol. I take a few supplements that I know actually help. And I avoid turning my entire life into an experiment.
The paradox is simple: the more you think about your health, the more fragile you can become. The less you obsess, the healthier you often feel (granted you have a good baseline understanding of your health).
The real “longevity hack” is not to become a full-time manager of your own biology, but to build a life where your health is supported almost automatically: moving often, training hard at something you enjoy, eating well enough, sleeping enough, and not drowning yourself in worry.
Attia’s core message on exercise, muscle, and cardiorespiratory fitness is correct. It’s worth paying attention to. But there’s a deeper layer that doesn’t get enough airtime. Longevity is also psychological. It’s about not turning yourself into a lifelong patient of your own thoughts.
That’s the part the biohacking world keeps missing, and the part I think we need to bring back into the conversation.
Pair with: I think more of the longevity space should focus on radical science and innovations like what my friend Kai Micah Mills is building, rather than on the healthspan narrative. We won’t live to 150 or 200 without radical new science. Supplements and blueprint protocols will certainly not get us there.
Wellness
Be the Guy
I loved this post.
Be the guy.
If you want to live an extraordinary life, you have to be the guy.
Be the guy to make everything happen. Plan the nights out, book the trips. Get the first round of drinks. Send shots to the girls’ table. Crack jokes. Put everyone in a good mood. Lead by example. Lift all tides.
Life is what you make it & it goes best for the people who play the leading role in making it special.
Pair with this (monthly reminder):
“The world is a very malleable place. If you know what you want, and you go for it with maximum energy and drive and passion, the world will often reconfigure itself around you much more quickly and easily than you would think.”
Better Thinking
Why Smart People Believe Stupid Things
This is an excellent piece on why so many smart people can believe stupid things.
Smart people can fall into strong biases. They use their intelligence to defend beliefs that fit their identity or group, and this makes their errors more elaborate and harder to see. Studies show that the highest-scoring individuals on reasoning tests are often the most biased on political questions.
Human thinking is shaped by social goals like belonging and status. This leads people to build complex justifications for beliefs that feel right to them. Elite institutions strengthen this pattern by training people to argue persuasively rather than seek truth.
Learning about logic or biases often isn’t enough, because people apply those tools selectively. The traits that help most are curiosity and humility.
Curiosity pushes you to explore gaps in your understanding. Humility makes it possible to revise your views. Rational thinking depends more on these qualities than on raw intelligence.
The correlation between intelligence and ideological bias is robust, having been found in many other studies, such as Taber & Lodge (2006), Stanovich et al. (2012), and Joslyn & Haider-Markel (2014). These studies found stronger biases in clever people on both sides of the aisle, and since such biases are mutually contradictory, they can’t be a result of greater understanding. So what is it about intelligent people that makes them so prone to bias? To understand, we must consider what intelligence actually is.
In AI research there’s a concept called the “orthogonality thesis.” This is the idea that an intelligent agent can’t just be intelligent; it must be intelligent at something, because intelligence is nothing more than the effectiveness with which an agent pursues a goal. Rationality is intelligence in pursuit of objective truth, but intelligence can be used to pursue any number of other goals. And since the means by which the goal is selected is distinct from the means by which the goal is pursued, the intelligence with which the agent pursues its goal is no guarantee that the goal itself is intelligent.
Another interesting part is the claim that elite institutions create people who are excellent at arguing but not at finding truth. In those environments, complex but flimsy ideas can spread because they signal status. Once they reach politics and media, they shape culture more through influence than accuracy.
For centuries, elite academic institutions like Oxford and Harvard have been training their students to win arguments but not to discern truth, and in so doing, they’ve created a class of people highly skilled at motivated reasoning. The master-debaters that emerge from these institutions go on to become tomorrow’s elites—politicians, entertainers, and intellectuals.
Master-debaters are naturally drawn to areas where arguing well is more important than being correct—law, politics, media, and academia—and in these industries of pure theory, sheltered from reality, they use their powerful rhetorical skills to convince each other of FIBs; the more counterintuitive, the better. Naturally, their most virulent arguments soon escape the lab, spreading from individuals to departments to institutions to societies.
Pair with: Luxury Beliefs are Like Possessions
People adopt certain beliefs because it gives them a feeling of belonging, and does not impose any serious costs. You can scream “defund the police” all day with the knowledge that you will not be personally responsible for whatever happens with policing policies. But by displaying that belief, you can elevate your social status among the people whose opinions you care about at no instrumental cost to yourself.
AI Updates
AI Investment Thesis
This post on Yishan’s AI investment thesis went viral, and it covers very important topics for anyone in the AI space.
My AI investment thesis is that every AI application startup is likely to be crushed by rapid expansion of the foundational model providers.
App functionality will be added to the foundational models’ offerings, because the big players aren’t slow incumbents (it is wrong to apply the analogy of “fast startup, slow incumbent” here), they are just big. Far more so than with any other prior new technology, there is a massive and fast-moving wave that obsoletes every new app almost as fast as it can be invented. There is almost no time to build a company and scale it.
There are two ways AI application startup founders can make money:
- Make a flash-in-the-pan app that generates a ton of cash and bank the cash (my estimate is that you have about 12-18 months cashflow generation)
- Make a good enough app that you get acquired by one of the big players for sufficient equity
The situation is highly unstable - we don’t know if it’s going to crash or go to the moon but both scenarios make it very unlikely that any AI application startup will independently become a generational supercompany (baseline odds are low to begin with).
The best odds are finding an application niche in a highly specialized field with extremely unique and specific data barriers, ideally ones relating to real atoms (hardware or world-related) data and not software/finance.
Great, this is blowing up so I will offer some additional follow-up:
This is NOT your typical prediction of “the incumbents are agile” or the old “what if Google clones your startup” midwit investor question.
The entire novelty of this thesis is that unlike in the past, specific elements of the AI industry are likely to make it so that application companies cannot outrun the wave of obsolescence, which will rush along far, far more quickly than prior technology waves.
The foundational technology has not stabilized in any way whatsoever, and applications require a sufficiently stable foundation for some extended period of time in order to create value and then a system for monetizing that value (i.e. “a business”). The wholesale rate of change in the nature of the foundation is the reason why I think almost all application startups will not survive to achieve any significant scale, not because the current large players are special.
Most companies don’t survive sea changes in the business-technological environment. But these sea changes happen slowly enough that one can build businesses in between. PC, desktop internet, mobile internet, etc, all took many years to play out, and were spaced out enough for application companies to grow, mature, and become incumbents themselves. As a baseline, most startups don’t survive during a rapid period of change either. The small minority of incumbents who survive need extreme agility and enough of a stable footing in the last epoch (i.e. a revenue base that doesn’t dissolve too quickly) to fund their evolution.
Moreover, it’s usually new startups that drive the disruption that challenges incumbents. This is not the case with AI. In this case, the largest players are the ones continually causing the sea change. The environment is so continuously roiled that there is no stable foundation for application startups to become established before the next wave overtakes them. I’m not talking about incumbents outcompeting them, I’m talking about the landscape changing to make them obsolete.
From a practical investment lens, the way to apply this thesis to an AI application startup is to ask: are the fundamental assumptions underpinning this startup’s existence going to be the same in five years? Or will they be unpredictably different? The key here is predictability - if the future will be radically different but you can predict it with confidence, you can pre-position your business. But that’s not the case right now in AI. You can’t skate to where the puck is going if all you know for sure is that 20 people are going to slap the puck in some crazy direction at extremely high velocity.
Sea changes are now happening on a 9-12 month cycle. Very few startups can turn into a mature business in that timeframe - and by mature, I mean having all the boring stuff like sales relationships and brand recognition. Yes, your engineers can make the change, but human hiring cycles and team solidification and market relations are incompressible (e.g. if you hire 100 people in a month, your organization will implode).
Thus, application companies never quite make it to a full business threshold before the sea change happens out from under them. When I say the incumbents will take the application space, I mean that they’re the only ones who can provide enough internal stability and resources to survive the sea changes they themselves will be driving, NOT that they’re going to provide a superior product. They’re just the ones who won’t starve.
And here’s the other side:
The counter dynamic to the AI model doing everything is that, at least in enterprise, bridging the AI models’ capabilities to the customer’s environment still requires a tremendous amount of long tail work.
The gap between an AI agent working for 90% or 95% of the solution and 100% is usually about 10X more work than most realize.
Getting access to the enterprise data, connecting to the enterprise workflows, delivering the change management that employees need to adopt the technology, handling the regulatory and compliance requirements of that industry, and so on all require some degree of highly dedicated focus in a domain.
There’s a strong analogy to vertical SaaS here actually. One would have thought that horizontal technologies could solve all problems in SaaS. But in fact there are endless very large companies that just hyper focus on a single domain, because that level of specialization is valued by the enterprise.
We will likely see the same play out with AI Agents in the enterprise as well. And in fact these domains will be far larger than traditional software categories because the TAM isn’t software, it’s work to be done.
Very fun debate, but I’m taking the other side.
From what I see on the ground, both are true in some ways. Actually deploying AI agents in company operations requires a lot of unsexy work, often needing forward-deployed engineers to make things work.
I also see the foundational models being able to do more and more tasks without specific knowledge about a company or industry being added.
What do you think?
Pair with: OpenAI product strategy and AI investment thesis
Startup Stuff
Building What’s Fundable
An interesting read showing how YC shifted from inspiring founders to solve real problems to chasing whatever ideas the VC crowd finds fashionable.
As tech became easier to navigate, YC stopped being an on-ramp and became a factory optimized for “what gets funded.” This shows a broader problem in venture: consensus thinking dominates, and contrarian, mission-driven ideas get crowded out.
Founders now follow a hyper-legible, prewritten path instead of thinking independently: Stanford/MIT grads who spend 1-2 years at Google, Meta, Microsoft…
The fix is to build from belief, not from trends. Mission-driven founders who choose meaningful quests are the only real counterforce to a system that rewards sameness.
The most important takeaway here is that I don’t believe this is YC’s fault. Instead of laying the sins of an entire industry at the feet of one participant, I would argue, instead, that they’re adhering to the logical economic incentives that are being shaped by a much bigger force: The Consensus Capital Machine.
Pair with these quotes about Oracle’s early days:
When Oracle was formed in 1977, venture capitalists wouldn’t spend a dime investing in software… When they heard the investment was about software, they wouldn’t even see me… Oracle started without a dime of venture capital. I put in $1,200 and the other two guys put in $400 each, and with that $2,000 we started Oracle.
Ellison, along with Bob Miner and Ed Oates, started Oracle with just $2,000 in 1977… They never raised money from venture capital, and Ellison was largely allergic to raising equity.
What I Read
Galaxy Brain Resistance
This is a very interesting one by Vitalik. A bit related to the piece I shared above about smart people believing stupid things.
Galaxy brain resistance is about whether a way of thinking can be twisted to defend anything you already wanted to believe. If a reasoning style can justify everything, it explains nothing.
A lot of the arguments people use today fall into this trap. Folks decide the conclusion first, then build a story around it: inevitabilism, overly grand long-term claims, moral panic dressed up as principle, or financial hype pretending to be social progress. These aren’t real arguments. They’re excuses.
You see the same pattern in crypto, politics, tech, and AI safety. “I’m doing more from the inside”, or “give me power so I can help later”, can be used to defend anything at all.
The way to avoid this is to have solid principles you don’t bend and to avoid incentives that push you into convenient justifications. These are the only things that keep smart people from talking themselves into anything.
“If your arguments can justify anything, then your arguments imply nothing.”
Pair with: Too Smart
The Ideal Level of Wealth
Interesting read:
As you can see, the amount of wealth needed to live a “good life” is much lower when we continue working (versus never having to work again). Of course, Coast FIRE is riskier than financial independence since you still need to earn an income to support your lifestyle indefinitely. However, Coast FIRE also provides more flexibility and is more realistic for the typical person. Most people don’t want to sit around doing nothing all day. Don’t get me wrong, it’s fun and relaxing for a week or two, but I’ve written about the problems it can lead to.
Whether your goal is Coast FIRE or full financial independence, the ideal level of wealth in the U.S. is in the low-to-mid range of Level 4 ($1M-$10M), or $2M-$5M. I know this is a lot of money and many people will never reach it, but that’s why it’s an ideal. It’s something to strive for. It’s enough where you don’t have to worry about money anymore, but not so much that it becomes a burden or warps your identity.
And, yes, it can warp your identity. Great wealth can influence who you trust, what motivates you, your stress levels, and even how you raise your children. As Felix Dennis wrote in How to Get Rich:
Still, let me repeat it one more time. Becoming rich does not guarantee happiness. In fact, it is almost certain to impose the opposite condition—if not from the stresses and strains of protecting wealth, then from the guilt that inevitably accompanies its arrival.
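To make the Coast FIRE arithmetic from the excerpt concrete, here’s a minimal sketch. The 4% withdrawal rate, 5% real growth, and $80k of annual spending are my own illustrative assumptions, not figures from the post:

```python
# Toy Coast FIRE vs. full financial independence arithmetic.
# The 4% withdrawal rate, 5% real growth, and $80k spending are
# illustrative assumptions, not figures from the quoted post.
annual_spending = 80_000       # target lifestyle, in today's dollars
swr = 0.04                     # assumed safe withdrawal rate
real_growth = 0.05             # assumed real return while you keep working
years_until_retirement = 20

# Full FI: the portfolio must fund your spending starting today.
fi_number = annual_spending / swr

# Coast FIRE: the portfolio only needs to grow into the FI number,
# because income from work covers your spending in the meantime.
coast_number = fi_number / (1 + real_growth) ** years_until_retirement

print(f"Full FI number:    ${fi_number:,.0f}")     # $2,000,000
print(f"Coast FIRE number: ${coast_number:,.0f}")  # ~$754,000
```

The gap between the two numbers is exactly why “keep working at something you enjoy” lowers the bar so much: two decades of compounding do most of the lifting.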
Pair with: The ladders of wealth creation and How to Get Rich by Felix Dennis.
How to be creative (without taking drugs)
An important read in the era of productivity maximization at all costs.
The mistake people make is treating creativity like productivity. They try to work harder and expect creativity to appear. Instead, sprinkle in new inputs and watch new outputs appear.
Two good techniques:
1. Scroll for anti-social proof - Go to YouTube or Substack, scroll through the explore page, and click only on content that has under 5,000 views. You’ll find niche ideas that haven’t yet mimetically spread. 90% will be a waste of time. Like a successful venture capitalist’s portfolio, the 10% of hits cover the 90% of misses multiple times over.
2. Avoid content made after 2016 - Something happened in 2016. The internet became less weird, less creative. Whatever the cause, pre-2016 content has a distinct flavour of strangeness that has vanished. My favourite hack for this: Find a book or essay you love. Open up ChatGPT deep research. Ask for 50 similar books or essays, all created before 2016.
Pair with: “Jootsing”: The Key to Creativity
In Defence of Men
The essay is thought-provoking. It argues that modern culture often attacks masculinity while ignoring one of its core strengths: the deep male need for competence. For many men, feeling skilled and useful is central to identity. This comes from long evolutionary patterns in which men gained status through ability and achievement.
Being a “success object” helps men, but it also makes them vulnerable. When their competence slips because of age, job loss, or failure, their sense of self can collapse. This is a major reason middle-aged men face high rates of depression and suicide.
The author’s point is that masculinity has problems, but it also has real value. Men need space to acknowledge the good in their nature and not feel ashamed of wanting to be good at something and needed by others.
A fundamental difference between masculinity and femininity is the convention that manhood is not a birthright, but a status that must be earned. The anthropological record writhes with accounts of premodern groups in which boys have to pass gruesome and frightening tests to be considered a man… The researchers concluded, ‘whereas womanhood is viewed as a developmental certainty that is permanent once achieved, manhood is seen as more of a social accomplishment that can be lost and therefore must be defended.’
Pair with: Why men don’t age like wine
The Mind of Napoleon: A Selection from His Written and Spoken Words
A real insight into the mind and thoughts of Napoleon.
“The love of glory is like the bridge that Satan built across Chaos to pass from Hell to Paradise: glory links the past with the future across a bottomless abyss. Nothing to my son, except my name!”
Brain Food
Earth AI
As we’re working hard on Mirage Exploration (our mining exploration project in Morocco), I’ve been studying Earth AI a lot. It’s impressive how good their hit rate is and how many discoveries they’ve made over the last few years.
You might not find this very relevant, but I’ve been geeking out about it lately.
Here’s how they’re different from the rest:
They ingest far more data than normal explorers:
- Full Australian open-file archives since the 1970s: drill logs, assays, maps, geophysics, company reports.
- National geophysics: magnetics, gravity, radiometrics.
- Remote sensing and elevation data.
- Historical geochemistry at large scale.
- Their own drilling and field data.
They clean and unify all of it:
- OCR, digitization, georeferencing, fixing coordinate systems, normalizing lithology and assay formats.
- They end up with ~200–400 million usable geological training examples.
They train continent-scale ML models (see the sketch after this list):
- Multi-modal: geophysics + geochem + geology + satellite.
- The model learns mineral system “context” instead of simple anomalies.
- It can detect extremely weak signals (example: a 0.002% Mo soil anomaly).
They rank targets at the scale of all of Australia:
- They generate a probability map of mineral systems across the entire continent.
- They consistently pick areas that previous explorers ignored.
They control the full loop: target → hypothesis → drill:
- Geologists convert AI targets into specific geological theories.
- Each drillhole is a hypothesis test.
- Results feed back into the model and into human interpretation.
They built their own low-cost drilling hardware:
- Modular rigs with minimal site prep and onboard waste handling.
- ~$86 per meter vs ~$300 for standard drilling.
- Faster mobilization, more holes tested per dollar.
I find this mix of hardware + software fascinating.
Their hit rate is orders of magnitude higher:
- They report ~75% of drill sites returning economic-grade intercepts vs the ~0.5% industry norm.
- Faster cycle: roughly 4x faster exploration timelines.
Their business model forces the technology to work:
- They stake ground or enter alliances, drill with their own rigs, and earn royalties only when they hit real ore.
- No consulting incentives, no selling “AI tools” without proof.
Earth AI wins because they combine:
- the largest unified geological dataset in Australia
- a true multi-modal ML model trained at continental scale
- very cheap, fast in-house drilling
- a tight scientific loop between AI, geology, and drilling
- a business model tied directly to discovery quality
This lets them find new deposits in places where multiple exploration companies have already failed.
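To make the pipeline above more concrete, here’s the toy sketch promised in the list: stack co-registered data layers into per-cell features, train a classifier on cells with known drill outcomes, then score the whole grid into a probability map and rank drill targets. Everything here (layer names, the model choice, the synthetic data) is my own minimal illustration, not Earth AI’s actual stack:

```python
# A minimal, illustrative sketch of multi-modal prospectivity mapping.
# NOT Earth AI's actual system: layer names, the model choice, and all
# data here are synthetic assumptions for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Pretend we have co-registered raster layers over a 100x100 grid.
H, W = 100, 100
layers = {
    "magnetics":    rng.normal(size=(H, W)),
    "gravity":      rng.normal(size=(H, W)),
    "radiometrics": rng.normal(size=(H, W)),
    "soil_mo_ppm":  rng.lognormal(sigma=1.0, size=(H, W)),  # geochemistry
}

# Stack layers into one feature vector per grid cell: shape (H*W, n_layers).
X = np.stack([layer.ravel() for layer in layers.values()], axis=1)

# Labels come from historical drilling: a few hundred cells with a known
# outcome (1 = mineralized intercept, 0 = barren hole). Synthetic here.
labeled = rng.choice(H * W, size=200, replace=False)
signal = X[labeled, 3] + 0.5 * X[labeled, 0] + rng.normal(scale=0.5, size=200)
y = signal > np.median(signal)  # guarantees both classes in the toy data

model = GradientBoostingClassifier().fit(X[labeled], y)

# Score every cell -> a continent-style prospectivity probability map.
prob_map = model.predict_proba(X)[:, 1].reshape(H, W)

# Rank cells and propose the top targets; each drillhole then becomes a
# hypothesis test whose result gets appended to the labels for retraining.
top5 = np.argsort(prob_map, axis=None)[::-1][:5]
rows, cols = np.unravel_index(top5, (H, W))
print("Top 5 drill targets (row, col):", list(zip(rows.tolist(), cols.tolist())))
```

The real version of this loop is what makes the hit rate compound: every hole drilled, hit or miss, becomes a new labeled example.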
We’re learning from the best in the world so we can apply these practices in Morocco.
Pair with: It’s Time to Mine
What I’m Watching
Is Maine the Culinary Capitol of New England Food? | DIRT Maine
The Fake Side of Science-Based Fitness
I hope this era of ‘science-based’ obsession dies soon. It has transformed lifting for the worse.
The Tool of the Week
Leather Jacket
I have not bought one yet, but I’ve been really into leather jackets lately. I find this one very nice and not overly expensive, unlike many others.
Quote I’m Pondering
“You cannot observe people through an ideology. Your ideology observes for you.”
— Philip Roth
EndNote
Thanks for reading,
If you like The Long Game, please share it or forward this email to someone who might enjoy it!
You can also “like” this newsletter by clicking the ❤️ just below, which helps me get visibility on Substack.
Also, let me know what you think by leaving a comment!
Until next time,


