25 Comments
The Innovation Show

Brilliant analogy Sangeet 💪💪💪

Sangeet Paul Choudary

Thanks! I'm glad it worked. Some of the most risky analogies (previous ones include the sommelier analogy and the fugu analogy) have often been the most memorable for readers.

Future of Marketing

Beautiful, beautiful post. Thanks, Sangeet 🙏

Sangeet Paul Choudary

Thanks

Erin Kenneally

Spot on insights. Note the same holds for the ‘risk innovation’ side of that coin:

https://open.substack.com/pub/erinkenneally/p/ai-risk-insurance-ransomware-redux?r=18syap&utm_medium=ios

Paul Clegg

Amazing article Sangeet, thanks for sharing

Sangeet Paul Choudary

Thanks Paul!

Shail K

Loved this line, “Those who clung to the mindset of canals missed the real value of railroads.”

Ties directly to - https://www.linkedin.com/posts/shailkhiyara_sundayspark-activity-7380606443357597696-p0Ze?utm_source=share&utm_medium=member_ios&rcm=ACoAAAAI0qsBpxphE2KG_1m9XtczLVREYS_DN5U

Howard Yu

Love this framing, Sangeet.

The idea that railroads made governance the performance driver makes a lot of sense.

I see the same thing in companies too. Teams celebrate silo productivity, deploy agents into claims, finance, or marketing, and then wonder why the system still jams. Execution gets faster. Coordination does not get smarter.

The real work is to treat governance as a product. Who sets the guardrails for agents, what data contracts bind functions, how exceptions escalate, which decision rights move to the edge.

Leaders often say they have no time to think long-term. That is the wrong excuse. You create time by removing constraints. It is a coordination problem that needs the most attention.

Thank you again for sharing such important insights!

Shyamal Parikh

Hi Sangeet,

I really liked this article and am thinking along these lines as someone who built a PM tool like Asana.

However, given how horizontal the use case is, I find it hard to narrow down on a workflow and then think about it through the agentic lens.

It feels like one would have to first narrow to a vertical, losing all other revenue sources, and then redesign the workflow from scratch.

Is there a better route?

Sangeet Paul Choudary

I agree - the more horizontal the use case, the more challenging this is. For something like Asana or Notion, the real challenge is that every workflow running on the tool is different based on the domain. This is precisely where I feel agentic workflows will have to be more domain-led rather than completely horizontal. Horizontal players will be stuck at task automation, enabling new competition from vertical agentic players who understand the workflow more deeply but, within the limitations of SaaS, weren't quite superior enough to their horizontal counterparts.

Derek Aranda

Great article, and it calls to mind the book The Art of Action, which looks at how organizations translate strategy into execution. It emphasizes the concept of friction, where bureaucracy impedes that process. As you describe agentic AI as a coordination opportunity, it made me think the really successful implementations will address that friction, enabling organizations to coordinate activity more effectively around their objectives while responding to, anticipating, and adapting to their competitive marketplace and environment, anchored in that objective. That will be a true game changer. Not just ordering faster but knowing when to order or whether to order at all.

Dhawal

Thanks for this amazing article, Sangeet. Systems thinking is the way to go. Just completed the Reshuffle book. Loved it.

Ditihalo Mmusi

Very insightful. Thank you for sharing 🙏🏽

Sam Keen

This was a great read to start my day, thanks Sangeet

Colin Brown

Love this! Canals and railways - great storytelling.

Jon Eivind T. Strømme

When reading this post I started thinking about the classic HBR article "Reengineering Work: Don't Automate, Obliterate" (https://hbr.org/1990/07/reengineering-work-dont-automate-obliterate).

The only reason for companies to use RPA was that the underlying applications were so old and cumbersome, but they didn't want to invest in replacing them. So they built RPA on top (to every developer's frustration). With AI, companies can solve their jobs-to-be-done (JTBD) by developing, replacing, or improving these applications, and the need for RPA evaporates.

Anthropic just released the Claude Agent SDK ("a supervisor agent that builds and manages agents"). In the release they said that the agents often did a better job if they were not dictated by a process but could get some autonomy to solve the task. The context, the orchestration management, and the governance are paramount.

Mohit Joshi

Thanks, Sangeet, for this wonderful post. While I really love the framework you are presenting here, I am still struggling a bit with how to implement it in my work as a product manager. For example, if I have to create a new product or improve an existing one, I'll usually start with a customer problem and set a goal around resolving it. This may then result in me choosing to automate something that will improve the experience. But I am not clear on how best to apply a frame where a new technology can disrupt an entire system. How do I think about the second-order impacts of coordination changes from a new system and use them as the starting point for an innovative solution? Or should I just focus on the customer problem and let the new system emerge as a solution organically? Maybe it is a bit abstract as a thinking tool for me, and I'd appreciate it if you could point me to anything that can help.

Sangeet Paul Choudary

I think you've already started on the right path by starting with the customer. Understand their workflow today. Understand the assumptions on which that workflow exists (the constraints). Now ask yourself how AI impacts those constraints. If it does, ask yourself what the new assumptions are and hence what the new workflow looks like.

Simon Torrance

I like it a lot. I think the faster horses vs motor car analogy is another powerful one, potentially even stronger given how different agentic AI is from what exists today.

Sangeet Paul Choudary

Thanks Simon. I considered that before choosing this, partly because horses vs motor car is overused in tech, but more importantly because it still speaks to an automation frame (transportation with a new energy source) and not to a coordination frame, which is my key point here. The coordination effects of cars played out later, in how city design changed and highway networks were built, but Henry Ford wasn't talking about that in his faster-horse statement.

Just a perspective. I'm sure that when articulated well, it would have its own point to make which would be just as powerful.

G. Retriever

Two related points. First, this logic requires agentic AI to actually WORK, which it currently does not. Second, the horse in the painting should be on the towpath, not in the canal.

Sangeet Paul Choudary

Two related points.

First, I'm not going to get baited down that rabbit hole. I've seen enough to know what works and why even that which works is dismissed because of the excessive hype around that which does not.

Second, who cares - in a market, the "good enough" and economically accessible, even if inaccurate, substitutes for the accurate but economically inaccessible. In this case, I don't have any interest in working with a designer just to get an accurate picture. The inaccurate one helps me make the point just as well.

G. Retriever

The point, which I thought was obvious: if small errors (a horse in the wrong place) compound over every step, AI has to begin implementation in very short hops, with discrete tasks and laborious human supervision. If it ever gets to the point where it can do many steps while avoiding error compounding (or even achieving error correction, the holy grail), then you could begin more complex implementation.
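The compounding point can be made concrete with a back-of-the-envelope sketch (my numbers, not the commenter's), assuming each step fails independently with the same probability:

```python
# Hypothetical illustration: how per-step error rates compound across a
# multi-step agentic workflow when errors are independent and uncorrected.
def chance_of_flawless_run(per_step_accuracy: float, steps: int) -> float:
    """Probability that every one of `steps` steps succeeds."""
    return per_step_accuracy ** steps

for steps in (1, 5, 20, 50):
    p = chance_of_flawless_run(0.99, steps)
    print(f"{steps:>2} steps at 99% per-step accuracy -> {p:.1%} flawless runs")
```

Even 99% per-step accuracy leaves only about a 60% chance of a flawless 50-step run, which is why short hops with supervision (or genuine error correction) matter.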

Sangeet Paul Choudary

Yes, the point was obvious. It's the blanket dismissal of "actually WORK" that I didn't want to engage further with.

There is no dearth of automation being sold under the latest hyped name of agentic AI. The point of this post is to sensitize people to the fact that even if it does work, it completely misses the point.
