The Not-so-Lazy Holiday Reading List
Laziness as a competitive advantage
It’s that time of the year again! Everyone’s making predictions about what’s going to happen in 2026!
Predictions are easy to generate and costless to abandon when the person making them bears no responsibility for being wrong. And chatbots can do this better than any of us - producing fluent answers on demand.
What’s harder - and increasingly rare - is the work of reflection: slowing down, taking stock, and making sense of change.
And to make it worth your while, here’s a curated list of Holiday Readings to help you reflect more and reflect better in a world of relentless execution.
Laziness as a competitive advantage
Laziness gets a bad reputation because we confuse it with disengagement.
But a certain kind of laziness - the refusal to stay busy for its own sake - is its own form of competitive advantage.
In a world flooded with cheap execution, instant answers, and confident predictions, effort is cheap and motion is constant. What’s scarce is the discipline to stop. To not respond immediately. To not generate another take just because you can.
Strategic advantage today comes less from doing more work and more from doing less of the wrong work.
The ability to delay action until the question is clear, and to conserve attention for what actually matters, is the key to leverage.
When everyone else is sprinting, the person willing to sit still can see the terrain.
So here’s to a few days of calm and quiet. Time with family - but more importantly, reflective time with yourself.
Our first three picks below focus on this idea.
Let’s dig in…
For one, stop chasing output
This is a good first read if you’re trying to unlearn the reflex to do more just because you can.
The piece makes three clear arguments:
First, when execution becomes cheap, whether it’s code, content, or prototypes, output stops being a signal of value. Václav Havel’s restraint in a suddenly free media environment shows why speaking less can matter more.
Second, advantage shifts from productivity to discernment: knowing what not to build, echoing examples like Muji and Miyazaki, who won by imposing limits rather than chasing scale.
Third, taste and coherence, not speed, become the real differentiators.
It’s a grounding read for year-end reflection, especially if you want to enter the new year thinking more carefully about where attention, judgment, and restraint actually pay off.
Stop digging faster…
Our second read is well-suited to our current moment in the AI gold rush.
In a gold rush, shovels help you dig faster - do what you already do, but more cheaply.
But when speed is cheap, what becomes scarce, and therefore valuable, is knowing where to dig with clarity.
Treasure maps help you see what you should be doing instead, and redesign your workflows, organization, and business model accordingly.
Good maps eventually spark your curiosity.
The thing about maps is you don’t need to see everything. You need to see what matters and what can be acted on.
In the early 20th century, London’s underground transit map was a tangled mess. It was geographically accurate as a map should be. But commuters had no clear mental model of how to move through the system. The map was technically correct, but practically useless.
That was until Harry Beck came along.
Beck, an electrical draftsman, treated the subway like a circuit diagram. Instead of mapping stations to their true geographic coordinates, he focused on usability. Straight lines. Even spacing. Logical connections.
The map wasn’t entirely true, but it helped you see what mattered.
Beck’s design has since influenced metro maps across the world.
That’s how a Treasure Map approach helps in a world of cheap execution. It elevates what matters and excludes what doesn’t.
Don't sell shovels, sell treasure maps
In 1849, a swarm of men poured into California, lured by rumours of rivers filled with gold.
And finally, here’s to a good year ahead of asking good questions…
This essay helps you step back and reset how you think. It makes three key points.
First, as AI makes answers faster and cheaper, the real bottleneck shifts to attention and judgment.
Second, real progress comes from reframing problems, not from producing more polished answers within old frames.
Third, in a world where conditions keep changing, the ability to stay curious and keep asking better questions becomes a durable advantage.
When answers get cheap, good questions are the new scarcity
In 19th-century Paris, the Académie des Beaux-Arts defined what counted as legitimate art.
The spirit of the Holidays as a competitive advantage
This quiet period at the end of the year creates a brief but powerful advantage.
When answers are cheap and attention is scattered, the ability to step back, notice patterns, and reframe questions becomes a form of competitive edge.
Reflection isn’t passive. It’s how orientation is rebuilt before the next move.
Use these few days wisely. And see how you could incorporate such moments through the rest of the year ahead. In a world of cheap execution, the ability to step back and make sense of change will matter more than the ability to run faster and respond to change.
And with that… Happy Holidays!
If you like these ideas, get/gift a copy of Reshuffle
These ideas are based on my book Reshuffle.
If you’ve made it this far and want to dig into more recommendations, here’s the larger end-of-year reading list:
True, but utterly useless
This piece - now read more than 100,000 times - is the antidote to year-end prediction culture and LinkedIn-slogan thinking.
The piece starts with the Maginot Line, an engineering masterpiece built for a form of warfare that had already changed, and uses it to make three sharp points about how people misread AI.
First: the biggest mistake is framing AI as automation vs. augmentation. That keeps you optimizing tasks while the system of work is being redesigned. Then there’s the containerization example that I use in Reshuffle: the dockworker didn’t lose to a better dockworker; the whole port’s logic changed.
Second: productivity doesn’t automatically translate into advantage. When tools spread, execution becomes commodity input, and value often flows to whoever controls coordination, distribution, and the scarcest complements, not to the person doing the faster work.
Third: jobs and workflows aren’t stable objects. They’re temporary solutions to coordination problems. When AI shifts decision rights and execution sequences, roles get unbundled and rebundled, like basketball positions after analytics, or typists after word processors, so competing to use AI better can be the wrong race entirely.
If you stop chasing slogans and start asking how the system is changing, you can choose your next move with far more agency than the meme suggests.
The many fallacies of 'AI won't take your job, but someone using AI will'
In the years between the two World Wars, France built The Maginot Line - a line of fortifications stretching along its eastern border.
Humans as luxury goods!
The AI skeptic says: “AI won’t replace the human touch.”
It sounds wise. Reassuring. It lets us relax into the belief that markets may change, tools may evolve, but people will always value people.
It is comforting. It’s also mostly nonsense.
Not because it’s false in a literal sense - humans will, obviously, remain human - but because it mistakes sentiment for economics and intrinsic value for market value. It answers the wrong question with great confidence.
Markets don’t reward what’s meaningful. They reward what’s scarce and measurable under the system that allocates value.
AI doesn’t need to replace the human touch to devalue it. It only needs to make some forms of previously human-performed work abundant, standardized, and interchangeable.
At that point, deeply human work can remain intrinsically valuable while becoming economically cheap.
When answers are abundant, the leverage shifts to people who can
(1) frame better questions,
(2) filter meaning from noise, and
(3) make calls under uncertainty while owning consequences.
Humans as 'luxury goods' in the age of AI
A bottle of wine sells for $80 in stores. The restaurant charges $400.
The fugu guide to jobs in a world of AI
The AI skeptic says focus on what the machine can’t do.
The safe advice is to double down on “irreplaceable” tasks and wait out the transition.
It sounds prudent. Many nod along.
The trouble is that this way of thinking optimizes for the wrong enemy.
AI rarely destroys work by replacing tasks outright. It does something subtler - and more disruptive.
It changes the constraint in the system.
When execution becomes cheap and reliable, the value of doing the work collapses, even if the work remains human. Roles don’t disappear because machines outperform us; they lose relevance because the new system no longer pays for what used to matter in the old system.
This article offers a cleaner lens.
Instead of asking what AI can’t do, it asks what AI breaks: which coordination gaps widen, which risks concentrate, and which judgments suddenly become system-critical.
Like the licensed fugu chef, the winners are those positioned exactly where the new system breaks.
The fugu guide to jobs in a world of AI
In Japan, a licensed fugu chef occupies a unique position in the food economy.
Making sense of hype!
We couldn’t wrap up 2025 without a gentle nod to the topic of hype.
Much as we may hate it, hype is critical to capitalism today - it solves a very specific coordination problem.
Modern systems need many independent actors to move together before the payoff is visible. Capital, talent, regulators, complementors, customers - all have to act in concert under deep uncertainty. Traditional institutions used to do this work through standards, subsidies, and long-term planning. Increasingly, they can’t.
Hype fills that gap.
It creates a shared belief about a future state, early enough that people are willing to commit resources before the system actually works. It changes incentives not by force, but by reframing payoffs. Hype creates a focal point in a coordination game where no one actor can move first safely.
AI and the strategic value of hype
In medieval warfare, survival was determined by how well you could keep enemies out. Castles were surrounded by moats, designed to slow down invaders.
Yet, hype can make you consistently miss the point
AI proponents and critics land at opposite extremes - AI will either unlock incredible prosperity or eat up all the jobs.
People miss the point about AI because they argue about outcomes instead of architecture.
Both sides of the debate - doom and optimism - are trapped in the same lazy frame. They ask whether AI destroys jobs or boosts productivity, whether the pie shrinks or grows.
But AI doesn’t just change tasks; it reshapes the rules of the game. When systems are rebuilt - workflows, organizations, platforms, ecosystems - the pie can grow even as bargaining power concentrates. Growth and inequality rise in concert, by design. The mechanisms that expand output also determine who holds the knife when the pie is sliced.
The debate misses this because task-based thinking is cognitively easy and morally satisfying. It lets pessimists warn about job loss and optimists promise shared prosperity without confronting the uncomfortable middle: someone must control coordination, and whoever does captures disproportionate value.
How to intellectually debate AI while completely missing the point
The conversation around AI tends to polarize quickly.
In 2025, you can’t say hype without saying ‘agentic’
Agentic AI is framed as the next, more powerful wave of automation - an extension of RPA without the rule-based rigidity and with broader task coverage.
Success is measured in hours saved, headcount reduced, and processes automated.
The problem, though, is that workflows are treated as fixed, with agents dropped in to execute steps faster. Governance is bolted on after the fact, as compliance, audit, or monitoring.
This mindset assumes the system itself is stable and only needs efficiency upgrades. It rewards short-term gains and familiar metrics.
What really matters, though:
Agentic AI is not an efficiency technology but a coordination technology.
Its power lies in collapsing, eliminating, or radically reshaping workflows rather than automating them step by step.
Value comes from redesigning how decisions are made, how agents interact, and how exceptions are governed across the system.
Instead of thinking about speeding up your workflows, ask the following two questions:
Which constraints does agentic AI remove, and which new constraints does it create?
Stop asking what tasks get automated; start asking how the system’s bottlenecks move.
Where should governance, decision rights, and accountability be located once agents act in parallel?
Stop optimizing execution speed; start designing the coordination architecture that determines who captures value.
The problem with agentic AI in 2025
In the early nineteenth century, canals represented the height of industrial progress. They connected inland towns with ports, allowing coal, grain, and other bulk goods to move at far lower cost than by wagon.
The true opportunity lies in architecting AI-first
Most firms mistake tool adoption for architectural transformation.
AI-first means rebuilding your system around AI’s architectural properties - new atomic units, constraints, and coordination logic - rather than layering AI onto existing workflows.
It’s not about faster execution, but about changing how value is created, governed, and captured.
The difference shows up along three axes. Miss any one of them, and you’re bolting AI onto a legacy axle.
1. Atomic Unit Shift
Every real transformation starts by redefining the smallest unit of value the system works with.
Venice moved from chests of coins to ledger entries.
Figma moved from files to elements.
If your unit of work hasn’t changed, your larger system won’t change.
2. System Shift
Once the atom changes, workflows, org charts, budgets, and decision rights must be rebuilt.
Walmart restructured retail around the new data created by barcode adoption.
This is where most incumbents stop, because system change breaks power structures, reporting lines, and revenue logic.
3. Competitive Shift
Architectural shifts always change what it means to win.
Competition moves away from surface performance and speed toward control over coordination points.
Moats cease to matter when the axis of competition shifts, and incumbents usually keep defending the old one.
You think you are AI-first, but you probably aren't
My book Reshuffle is available in Hardcover, Paperback, Audio, and Kindle.