It will be the wave of the future. There will be lawsuits due to faulty code or “hidden” undocumented features that show up after release. It’s nothing new. EEPROMs have allowed firmware updates in microprocessor-based products for years. Even automobiles now get their code updated. Like all innovative technologies, it’s disruptive. That’s progress.
Yes, thank you for the insightful comment as always 🤝🏼
AI isn't solving any problems. It's creating them.
And there's going to be a whole lot of damage done by the time people realize that.
Thanks for engaging! What problems have opened up for you as a result of AI technology?
AI hallucinations. When AI can't summarize a Supreme Court opinion correctly, when it generates mostly garbage search results, when it fundamentally misstates a statute, people who trust such results are going to get very badly hurt.
There is a ginormous personal injury lawsuit that will eventually happen because of AI. When people start paying the price for trusting AI, they will have no choice but to abandon the technology.
And that lawsuit will happen.
It's a tool, and it currently isn't 100% valid. That's the caveat you accept when you use the technology. If you accept all the information at face value, then you have a problem. I hope you don't blindly accept information from a website without checking the citations either. Same principle.
Let's be clear on terms: "checking" means "redoing the task."
AI adds to the workload. If one is using it as you concede here, it must add to the workload.
And the underlying premise of AI is that you CAN trust it--that one does not need the caveat you mention. There will come a time when someone will rely on AI, will not apply that caveat, and will not only suffer grievous harm, but will inflict grievous harm on someone else.
The big lawsuit will not be filed by someone who relied on AI, but by someone who relied on someone who relied on AI. That lawsuit will not go well for AI.
What you're describing is negligence on the part of a human being and has nothing to do with the technology.
If you don't check your work then that's on you 🤷🏻♀️
There has already been a case where a lawyer relied on ChatGPT to research case law, and it fabricated citations.
The lawyer was sanctioned because it's his job to check his own work.
What you're describing would be like trying to sue Google for hosting a website.
It's up to you to ascertain the veracity of the information you are using
You are in error.
What I am describing is how AI is being sold to the public.
What I am describing is how AI is being deceptively sold to the public.
When people say AI is going to take the place of the copywriter, the editor, the legal and medical professional, they are saying that AI is 100% trustworthy.
You are admitting that AI is not 100% trustworthy. That means AI cannot take the place of the copywriter, the editor, or the legal or medical professional.
When AI is being presented honestly to the public I will not have a need to object to AI being presented dishonestly to the public.
As AI is being presented dishonestly to the public there is a very pressing need to object to that dishonesty.
I can only make out about 20% of the terms that you are using. 🤔✍🏼🙋🏻♂️🤦🏼♀️🙇🏻🤷🏻♂️ Old guys reading about AI, crypto, VC, or NFTs doesn't translate well, amigo. 😏 Thank you for this post anyway. New tricks for this ol' dawg... 💱🤖🌉
Let me know how I can simplify in future to further address my TAM 😂
Thanks for reading
The Windsurf and Cursor stories underline something that’s been true for decades: platforms become infrastructure, but UX wins adoption. When your AI tool solves a job clearly, repeatedly, and delightfully, people build habits around it. That’s how “just a wrapper” becomes the new default.
I’m especially glad you emphasized how vertical AI apps are expanding net new demand, not just eating market share. In my corner of the luxury travel and real estate space, we’re watching the same thing unfold: tools that used to be considered “for developers only” are being repackaged for travel agents, property advisors, and concierge teams. And these aren’t gimmicks. They’re reshaping workflows, reducing friction, and letting lean teams serve high-end clients with a level of personalization that used to require entire back offices.
One question I’m sitting with: as more vertical apps emerge, how long before we see a middleware layer specifically designed to manage cross-app AI interaction (e.g. routing outputs from legal tools into CRM or project management systems automatically)? That might be the next big land grab: something like “Zapier for AI verticals,” but enterprise-grade.
There’s a lot of hype around models, but you’re right: the real winners will be the ones who make people say, “I can’t work without this.”
I think it's a good take. A lot of these verticalized markets will be winner-take-most.
I’ve read a lot of your articles and posts, and it’s clear you really know your stuff when it comes to AI. Looks like there’s a lot I could learn from you. Would it be alright if I reached out to you privately?
Thanks for reading, what a great comment! I so agree with you about UX design; it's an underappreciated value add.
That's so interesting about your industry, glad it's making its way there to make your life easier
As workflows become more agentic, what you describe will get easier (e.g. you have a meeting, an AI transcriber understands the action items you aligned on in the meeting and inputs them into Asana for you).
Thanks again for engaging in such a wonderful way!
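For what it's worth, that meeting-to-Asana loop could be sketched in a few lines. This is a toy illustration only: the "ACTION:" extraction rule and the `create_task` function are hypothetical placeholders, not a real transcription model or the actual Asana API.

```python
# Hypothetical sketch of an agentic meeting workflow:
# transcript -> extract action items -> push them to a task tracker.
# Extraction rule and create_task() are illustrative stand-ins.

def extract_action_items(transcript: str) -> list[str]:
    """Naively treat lines starting with 'ACTION:' as agreed action items."""
    items = []
    for line in transcript.splitlines():
        line = line.strip()
        if line.upper().startswith("ACTION:"):
            items.append(line.split(":", 1)[1].strip())
    return items

def create_task(item: str, tasks: list[str]) -> None:
    """Stand-in for a task-tracker API call (e.g. a POST to Asana)."""
    tasks.append(item)

transcript = """Discussed Q3 launch.
ACTION: Draft the launch brief by Friday.
ACTION: Book the demo environment.
Wrapped up at noon."""

tasks: list[str] = []
for item in extract_action_items(transcript):
    create_task(item, tasks)

print(tasks)
```

In a real agentic setup, the keyword rule would be replaced by an LLM reading the transcript, and `create_task` by an authenticated API call; the pipeline shape stays the same.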
It’s fair to say that “agent workflow” is the right term. We’re already seeing early signs of this, with travel CRMs integrating features like automatic aggregation of call logs or instant itinerary adjustments based on customer sentiment. But much of this functionality is still in the patchwork stage. I think the real breakthrough will be when these tools can be interconnected without human intervention, such as sentiment in a WhatsApp chat triggering a price adjustment in Salesforce, or flagging a property update in a shared Notion board.
It feels like whoever can build this glue layer without making it look rigid or bloated will quietly own the entire stack.
This is just my personal opinion, so please forgive me if I’m wrong.
Imho you're missing part of the bigger picture: how foundation models will suck in all the use cases that are built as wrappers on top, if the thing lacks defensibility. The key issue is how you build so that the models don't eventually eat you for lunch through a simple feature release. I view GPTs as intelligence-seeking predators, and one has to be very strategic to not become their next prey.
Interesting thoughts. Yes, addressing a TAM faster and better than a model or bigger company can pivot to it will be a big part of building a successful company in the age of AI.