We are entering an era where “there’s an AI for that” will be a common refrain across industries, and where the most important question isn’t “whose model is slightly better on benchmarks”, but “who delivered the solution that people love to use”.
On May 6, OpenAI agreed to acquire Windsurf, an AI-powered coding assistant formerly known as Codeium, at a $3 billion valuation.
At the same time, Cursor (by Anysphere), another AI coding tool, saw its valuation surge into decacorn territory, reportedly raising capital at about a $10 billion valuation after turning down acquisition offers from OpenAI and others.
As foundational models commoditize and the business falls to the hyperscalers who can afford to build out the compute, the value will begin to accrue at the application layer.
Windsurf’s acquisition and Cursor’s sky-high valuation together form the first clear case of a vertical AI application breaking out, a hint that the next big phase of AI will be dominated not by foundational models alone, but by the apps built on top of them.
A New AI Stack Emerges
For the players in the foundational model game, building large-scale neural networks and the infrastructure to run them is still paramount.
But as those foundational models become widespread and increasingly commoditized, attention is shifting to the application layer. This application layer consists of AI-powered tools and products tailored to specific use cases and user workflows.
Windsurf and Cursor exemplify this trend. These tools don’t involve anything new under the sun. Their breakthrough lies in how they apply AI to a particular job-to-be-done (writing and understanding code) with superior user experiences.
In fact, their success suggests that simply having advanced AI models is not enough. Packaging those models into a product that fits users’ needs is the real sea change.
Consider Anthropic’s Claude. Claude Sonnet has long been the best coding assistant in the foundational model game, but Anthropic has been unable to compete in the consumer market because of OpenAI’s superior UX and product experience.
The hyperscalers will be able to own the foundational model market, but the moat for the application layer will be an intuitive UX and superior experience.
How We Got Here
Windsurf (Codeium) and Cursor have quickly become pioneers of vertical AI in the developer tool space.
Windsurf, founded in 2021 by Varun Mohan and Douglas Chen, built an AI-native code editor that integrates seamlessly with developers’ workflows. By 2025, Windsurf had reached an annual recurring revenue (ARR) of about $100 million, up from $40 million at the start of the year.
Its ability to plug into enterprises’ legacy systems made it especially attractive for corporate software teams. Seeing this traction, OpenAI moved to acquire Windsurf for $3B, the largest deal in OpenAI’s history, to counter Anthropic’s edge in AI-assisted (“vibe”) coding.
Cursor, on the other hand, has been scaling even faster. Built by Anysphere, Cursor offers developers an AI pair-programmer and coding helper that has caught fire in the market.
According to TechCrunch, Cursor’s revenue has been doubling every two months, reaching roughly $300 million ARR. Cursor now boasts over a million daily users.
OpenAI initially approached Cursor about an acquisition but was turned down; Anysphere is instead in talks to raise new funding at about a $10 billion valuation.
The Commoditization of Foundation Models
OpenAI’s early lead with GPT-4 is now challenged by Google’s Gemini and xAI’s Grok, which are putting pricing pressure on access to base models.
Meanwhile, open-source models (Meta’s LLaMA 2, DeepSeek, and Mistral) continue to chase the closed source frontier models, open sourcing technology that the closed source companies planned to monetize at as much as $200 per month.
A year ago, an app like Perplexity relied on OpenAI’s GPT-3.5 and a third-party search API. By early 2024 Perplexity had built its own search index and started using open models (Mistral 7B, Llama 70B), moving “beyond” being just an OpenAI wrapper.
In short, the core AI technology is becoming widely available, either through open-source projects, or cloud APIs from a variety of providers.
As a result, the differentiation is shifting upward. If any capable model can be plugged in on the back-end, then the front-end experience and domain-specific integration become the defining advantage.
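The claim that “any capable model can be plugged in on the back-end” can be sketched as a thin abstraction layer. Everything below is illustrative (the provider names, the `CannedModel` stand-in, and the `code_review` workflow are invented); the point is that the domain-specific workflow is written against an interface, not a vendor.

```python
from dataclasses import dataclass
from typing import Protocol

class ChatModel(Protocol):
    """Any back-end model that can answer a prompt."""
    def complete(self, prompt: str) -> str: ...

@dataclass
class CannedModel:
    """Stand-in for a hosted or open-source model API."""
    name: str
    def complete(self, prompt: str) -> str:
        return f"[{self.name}] answer to: {prompt}"

def code_review(model: ChatModel, diff: str) -> str:
    """The application layer: a domain-specific workflow that works
    with whichever model is plugged in behind it."""
    prompt = f"Review this diff for bugs:\n{diff}"
    return model.complete(prompt)

# Swapping the back-end changes nothing in the workflow code.
print(code_review(CannedModel("provider-a"), "x = 1"))
print(code_review(CannedModel("open-model-b"), "x = 1"))
```

Because the application only depends on the `ChatModel` interface, switching from one provider (or to an open-source model) is a one-line change, which is exactly why the defensible part moves to the front-end experience.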
Foundational AI (models and infrastructure) is set to become a commodity utility: as powerful and essential as electricity, but not where unique, defensible products are made.
Much like the early internet era saw basic web infrastructure standardize (HTTP, browsers, etc.) and the real innovation happen in web applications, we’re now seeing the AI plumbing stabilize.
This doesn’t mean foundation models are valueless. They are incredibly valuable, but largely in enabling the next layer.
To put it in context, Amazon’s AWS segment posted a record $26.3 billion in revenue in Q2 2024, up 19% year over year, even though S3 has dropped nearly 97% in price since its launch in 2006.
Cloud is a high CapEx, highly commoditized business, and yet still makes record profits. Foundational AI will follow the same trend.
But for most users and businesses, what will matter most is how AI is applied to solve their specific problems.
The value is found in migrating from the general (a model that can do anything in theory) to the specific (a product that actually does the one thing you need really well).
UX and Product-Market Fit
In emerging AI verticals, we can expect a “winner (or buyer)-take-most” dynamic to unfold. The first application that truly nails the use case with a delightful UX can rapidly capture a huge share of users in that domain.
We’ve seen this with coding assistants. Once developers find a tool that dramatically boosts productivity, they are likely to stick with it and even standardize their workflow around it, even if a slightly better product comes out.
This leads to outsized network effects and scale advantages for the leading app. For example, Cursor’s explosive growth to hundreds of millions in ARR and over a million daily users shows how quickly a best-in-class AI app can gobble up market share.
In each vertical, be it coding, design, legal review, customer support, etc., the AI application with the best product-market fit will attract the lion’s share of customers, who will have little reason to split their attention among dozens of “me-too” tools.
“Buyer-take-most” also implies that the acquirers or investors backing these leading apps can capture outsized value.
In other words, once an AI application demonstrates superior UX and traction, everyone wants a piece of it. End-users flock to it, and platform companies or investors are willing to spend big to get on board.
This dynamic will likely intensify across verticals, as AI applications become the new battleground for delivering AI to end-users.
From Wrapper to Winning
Just a year ago, Perplexity AI was often dismissed as “yet another UI wrapper” around ChatGPT. Perplexity launched as a conversational search engine that answered user questions by querying the web and GPT-3.5, displaying answers with source citations.
Skeptics saw it as merely piggybacking on OpenAI’s model and Bing’s search results. A thin layer with no moat.
But fast-forward to today, and Perplexity has proven that unique UX and relentless improvement can turn a “wrapper” into a successful application-layer AI product.
What did Perplexity do differently? It innovated on user experience. Perplexity provides structured, cited answers using retrieval-augmented generation (RAG), offers follow-up question suggestions, and delivers a smooth conversational interface that feels more like an interactive research assistant than a generic chatbot. This focus on information-seeking paid off. Users who needed quick, credible answers found Perplexity incredibly useful.
Second, the team didn’t stand still with their back-end. As noted, Perplexity built its own search index and crawlers, and even started using its own mix of AI models (including open-source ones). This gave them more control over speed, cost, and customization, leading to a more distinctive product over time.
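As a rough illustration of the RAG pattern behind such cited answers: retrieve the most relevant passages, then ground the answer prompt in numbered sources. The corpus, keyword scoring, and formatting here are toy assumptions; a production system would use a real search index and an LLM call.

```python
import re

# Toy document store; a real "answer engine" would query a search index.
CORPUS = {
    "doc1": "Windsurf is an AI-native code editor, formerly Codeium.",
    "doc2": "Cursor is an AI pair-programmer built by Anysphere.",
    "doc3": "Perplexity answers questions with cited web sources.",
}

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = tokens(query)
    ranked = sorted(
        CORPUS,
        key=lambda d: len(q & tokens(CORPUS[d])),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Inline the retrieved passages as numbered citations, the way
    an answer engine grounds its response in sources."""
    hits = retrieve(query)
    context = "\n".join(f"[{i + 1}] {CORPUS[d]}" for i, d in enumerate(hits))
    return f"Answer using only these sources:\n{context}\n\nQ: {query}"

print(build_prompt("What is Cursor?"))
```

The structured, cited output is a product decision layered on top of commodity models, which is the broader point: the retrieval and citation UX, not the model, is what differentiates the app.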
The result is that Perplexity now dominates its niche. It commands over 60% of all traffic in the AI research/query tool category, effectively making it the leading “answer engine” among AI-powered search tools.
In fact, as of late 2024, Perplexity was handling more than 100 million search queries per week from users around the globe. Such traction has translated into investor confidence. Perplexity went from a $1B valuation in early 2024 to about $9B by the end of that year, and was reportedly in talks to raise funds at a staggering $15–18 billion valuation in 2025. Not bad for a startup once derided as a simple wrapper!
The Perplexity story illustrates that the application layer can add real value. By creating a novel user experience and focusing on a clear job-to-be-done (high-quality Q&A and search), an AI app can differentiate itself even if the underlying models are available elsewhere. It’s a powerful validation that unique UX + AI = traction.
Perhaps most importantly, Perplexity has expanded the idea of what an AI application can be. It’s not just about copying ChatGPT’s capabilities. It’s about rethinking the interface and integration.
Perplexity carved out a role as a “copilot for knowledge exploration,” which is a distinctly different product-market fit than a generic chatbot. As other entrepreneurs and companies consider building on large language models, Perplexity serves as a case study. Those who were quick to label such apps “just wrappers” are now eating their words, as true product differentiation becomes apparent.
Usage-Based Models Aligned to Value
The rise of application-layer AI tools also brings a shift in monetization strategies. Traditional software (SaaS) often relied on per-seat licenses or flat monthly subscriptions.
AI applications, by contrast, tend to monetize through usage-based pricing tied to specific jobs-to-be-done or jobs completed. In other words, users pay in proportion to the value they get, often measured by how much they use the AI to accomplish tasks.
This makes sense. AI services have variable cost (each query or generation has a compute cost), and they often replace or augment work that has a measurable output.
We’re already seeing a proliferation of new pricing models in AI products. More than 54% of AI products are moving beyond traditional seat-based subscriptions, embracing usage-based or hybrid pricing models.
For example, some dev tools charge based on the number of code generation requests or the volume of code produced, rather than a fixed license. There are variants where pricing is per output (e.g. tasks completed) or even outcome-based (charging for successful resolutions of a task).
An AI coding assistant might bundle a certain number of “AI-generated code completions” in a plan, while an AI legal tool could charge per contract draft reviewed.
This usage-linked model aligns the incentives. Customers pay when the AI actually does work for them. It’s essentially pricing by the unit of value delivered. We see early examples across domains.
An AI customer support platform might remove per-agent fees and charge by the number of tickets resolved. A sales AI tool might price per qualified lead generated. Importantly, this shift is enabled by modern billing tech that can meter usage in real time.
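A minimal sketch of what such usage metering could look like. The event names and rates below are invented for illustration; the shape, though, is the general one: record billable events, then invoice in proportion to the work the AI actually performed.

```python
from collections import Counter

RATES = {  # hypothetical price per unit of value delivered
    "code_completion": 0.002,   # per accepted completion
    "ticket_resolved": 0.50,    # per support ticket closed
    "contract_review": 3.00,    # per contract draft reviewed
}

class UsageMeter:
    """Meters billable AI work instead of charging a flat seat fee."""

    def __init__(self) -> None:
        self.events: Counter[str] = Counter()

    def record(self, event: str, n: int = 1) -> None:
        if event not in RATES:
            raise ValueError(f"unknown billable event: {event}")
        self.events[event] += n

    def invoice(self) -> float:
        """Bill in arrears, in proportion to work actually done."""
        return round(sum(RATES[e] * n for e, n in self.events.items()), 2)

meter = UsageMeter()
meter.record("code_completion", 1500)  # heavy IDE use this month
meter.record("ticket_resolved", 40)
print(meter.invoice())  # 1500 * 0.002 + 40 * 0.50 = 23.0
```

Note how the customer’s bill scales with usage, which is the incentive alignment the text describes: a light month costs little, and a month where the AI resolves thousands of tickets costs (and is worth) proportionally more.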
Investors and product leaders have noted that usage-based and “outcome-based” pricing are quickly becoming the norm for AI startups, outpacing traditional SaaS models.
Over time, this usage-based monetization will likely prove highly lucrative for successful AI applications. By tying pricing to actual value delivered, these apps can capture a fraction of the economic benefit they generate for users.
Usage-based models can scale as the customer scales their use. As AI becomes more integral to a workflow, the spending can grow accordingly. This also lowers the barrier for new users (“pay as you go” is less daunting than a big upfront commitment) which can accelerate adoption, a key factor when expanding to non-expert “normie” users.
Normies Welcome
One of the most exciting implications of the application-layer boom is the dramatic expansion of the Total Addressable Market (TAM) for AI solutions. Thus far, a lot of AI usage has been concentrated among tech enthusiasts, early adopters, and specific professional niches.
But as AI applications become more user-friendly and targeted to specific jobs, general users (“normies”) are increasingly adopting AI-powered tools, and this could explode the TAM.
We got a glimpse of this with ChatGPT’s viral adoption. It reached 100 million users in just 2 months after launch, making it one of the fastest-growing consumer applications ever.
That kind of mainstream curiosity and willingness to use AI hints at how large the user base can be when the product is intuitive. Now, think beyond a chatbot.
Imagine AI tools embedded in everyday applications that millions or billions of people use, from word processors to web browsers, or as standalone apps for common tasks.
The addressable market is essentially every knowledge worker, every student, every consumer with a smartphone. In other words, almost everyone in the world. AI won’t be confined to specialists. It will be as ubiquitous as the internet or smartphones, powering countless micro-use-cases throughout the day.
Crucially, the TAM isn’t static. It will grow as AI unlocks new use cases and users. AI applications can lower barriers and costs so much that people will do things they previously skipped or couldn’t afford.
For example, many individuals and small businesses rarely consulted lawyers due to cost. A cheap AI legal assistant could handle thousands of new legal inquiries that simply never happened before.
Likewise, someone who might not have hired a personal tutor might readily use an AI tutoring app, increasing the total hours spent on learning.
The counterfactual here isn’t replacing tutors or paralegals; it’s that these people would never have sought those services to begin with.
By making expert-level assistance available at scale, AI apps will create new demand rather than just capturing existing demand.
In economic terms, we’re likely to see significant market expansion effects. Each vertical AI application that achieves product-market fit can potentially reach an audience far beyond the traditional industry players.
The pie is getting bigger. AI-powered coding tools might eventually be used by not just software engineers, but also by novice “citizen developers” or even professionals in other fields automating parts of their job.
AI writing tools aren’t just for journalists, they’re for anyone drafting an email, resume, or blog post. This broad adoption by everyday users will dramatically raise the ceiling for AI startups’ growth.
Tech investors should recognize that we’re not just shuffling around existing enterprise software dollars; we’re creating entirely new markets and value streams with AI applications.
Healthcare, Education, Law and Beyond
The software development domain may be the first clear success story for vertical AI applications, but it will not be the last.
Healthcare, education, and law are three massive industries on the cusp of transformation by specialized AI tools, and they highlight the vast opportunity in the application layer.
Healthcare (Clinical Documentation & Transcription): Doctors often spend hours on paperwork and documentation, an area ripe for AI assistance. Already, we see strong traction for clinical note-generating AI.
For instance, Microsoft’s DAX Copilot (an ambient clinical documentation tool) is being used by over 400 healthcare organizations to automatically record patient visits and draft medical notes. Early results show it saves physicians an average of 5+ minutes per patient encounter, and the majority of doctors say it improves note quality as well.
On the startup side, companies like Abridge, which uses generative AI to transcribe and summarize doctor-patient conversations, have emerged. Abridge’s system, trained on over 1.5 million medical encounters, can produce a structured clinical note in real time and integrate it into the hospital’s EHR. As of early 2024, around 5,000 doctors were already using Abridge’s tool.
This is just the beginning; there are hundreds of thousands of physicians in the US alone, and millions globally, who could benefit from such AI assistants.
Beyond notes, think of AI helping with diagnostic suggestions, treatment plan generation, or medical coding and billing. Each is a huge application in its own right. The healthcare AI app market could easily become a multi-tens-of-billions market, and importantly, lead to better patient outcomes by freeing up doctors’ time.
Education (Personalized Tutoring): Education is poised for an AI revolution centered on personalization. The dream of one-on-one tutoring for every student has always been out of reach, until now. Sal Khan, founder of Khan Academy, predicts that AI will provide every student with a virtual personal tutor at an affordable cost.
We’re already seeing early glimpses: Khan Academy’s own pilot AI tutor (Khanmigo, built on GPT-4) can help guide students through math problems step-by-step, ask Socratic questions, and give feedback on essays.
Other platforms are integrating AI tutors for language learning (e.g., Duolingo’s AI chat companion) and test prep. The potential here is enormous: hundreds of millions of students around the world could have access to a 24/7 AI tutor that adapts to their learning style and pace.
This could dramatically improve learning outcomes by providing instant help and tailoring instruction to each learner, something even the best education systems struggle to do at scale.
Beyond K-12 and college, consider adult learning and job reskilling: personalized AI coaches could help people learn new skills throughout their lives. An optimistic view is that AI tutors will make education more equitable and effective globally, unlocking human potential.
For investors and builders, the education vertical represents a huge TAM with a clear “job-to-be-done” for AI (tutoring, grading, content generation, etc.), and whoever cracks the code on engagement and efficacy stands to gain a “buyer-take-most” position in that market.
Legal (Research and Drafting): Law is another field where expertise is expensive and throughput is limited. AI, if applied well, can supercharge legal research, contract review, and document drafting.
We’re already seeing adoption in top-tier law firms. Harvey, an AI legal assistant platform (backed by the OpenAI Startup Fund), has been adopted by over 30% of AmLaw 100 law firms as of early 2025.
Harvey raised more than $500M in funding, reflecting the high expectations for AI in the legal arena. It assists lawyers in tasks like scanning case law, generating first drafts of memos or contracts, and answering practice-specific questions, all through a simple chat or document interface.
Similarly, other tools like Casetext’s CoCounsel (acquired by Thomson Reuters) or startup Spellbook are targeting contract review and due diligence, using AI to flag risky clauses or suggest revisions.
The legal profession, known for its cautiousness, is warming up to these AI copilots because they promise to save time and reduce drudge work (like sifting through dozens of documents for relevant precedent).
Over time, as these tools prove their reliability, they could become as standard as legal research databases are today. The opportunity isn’t limited to BigLaw attorneys; solo practitioners and even individuals could leverage AI for affordable legal help (imagine an AI that helps you draft a will or dispute a parking ticket). Thus the legal vertical for AI apps ranges from enterprise software for law firms to consumer-facing legal advice tools, altogether a multibillion-dollar opportunity.
These verticals are just exemplars. The truth is nearly every industry and profession has its “killer apps” waiting to be reinvented by AI. From finance (AI financial advisors, fraud detection assistants) to marketing (AI content creators and strategy planners) to manufacturing (AI-driven design tools, maintenance bots), the application layer is a wide-open field.
Each vertical will have its nuances and incumbents, but the playbook unfolding with Windsurf, Cursor, and their peers will likely repeat. Identify a high-value workflow, build an AI-powered solution finely tuned to that workflow, out-execute on UX and integration, and rapidly iterate to achieve a product-market fit that leaves competitors in the dust.
In summary, the acquisition of Windsurf and the rise of Cursor are not just isolated successes. They are harbingers of AI’s next phase. They show that the real value in AI will be captured by those who deliver AI seamlessly into users’ hands.
The foundational layer of AI is becoming a level playing field (or at least, one with many players), while the application layer is becoming the arena of intense competition and innovation. We can expect a wave of “vertical AI” companies in every field, each aiming to be the definitive AI-powered solution for a specific problem. Some will fail to find traction, but those that get it right could define the next generation of market leaders in their industries.
Rather than AI being bottled up in research labs or limited to Big Tech’s walled gardens, the power of generative AI is being unleashed via practical applications that genuinely help people.
This is a future where AI doesn’t just live in a sci-fi abstraction. It shows up as a helpful agent in your doctor’s exam room, as a tutor guiding your child through algebra, as a coding buddy in your IDE, or as a tireless analyst sifting through legal documents.
It’s true that the underlying models are becoming commoditized, but that’s the natural progression of any transformative technology. The same happened with the internet protocols and with computing hardware.
Commoditization is a sign of maturity and widespread adoption. What matters next is how we build on top of that solid foundation. The application layer will be where human creativity meets AI capability, producing tools that feel almost magical in how they can augment our work and lives.
In the coming years, expect to see AI applications that we haven’t even imagined yet, solving problems we didn’t realize AI could solve. But one thing is certain: the value is moving up the stack.
We are entering an era where “there’s an AI for that” will be a common refrain across industries, and where the most important question isn’t “whose model is slightly better”, but “who delivered the solution that people love to use”.