I predict 2026 is when investors who have been hyping up their AI-filled portfolios for the past few years begin to look at the industry much more critically.
OpenAI began the year by saying they need to focus on practical use cases of ChatGPT. Dreams of AGI, whatever they thought that would look like back in 2023, continue to get extinguished. Anthropic and OpenAI are planning their IPOs in hopes of raising more money, even after Sam Altman said going public wasn’t the direction he wanted to take his company in. Those IPOs will expose the financial statistics that these labs have been coy about.
Companies are bracing for 2026 to be the end of the honeymoon period for LLM tech. The year money finally speaks louder than slide decks and pitches—both for small startups and the big research labs that hook them up with tokens. This is especially bad because all of these products seem years away, at best, from actually turning a profit.
Yup, it’s a painful time to be an AI company right now. And pain needs a diagnosis. Wee-woo, here comes Dr. Jocobo. He claims to be a specialist in productology when it turns out he’s really just a quack chiropractor, but his board still hasn’t noticed so hush.
I’ll be rating the pain each of my patients is experiencing on the Go90 Scale of Doomed Streaming Services, newly repurposed for the sake of ranking Doomed Chatbots. If you’re not familiar with this highly scientific and peer-reviewed metric, all you need to know is that 0 means a chatbot is sticking around for a while, and an 89 means that it’s at risk of shutting down tomorrow. Going 90 means the chatbot has already been discontinued.
a quick note before i begin: when i say something is "good" or "bad" in this post i mean for the company i am talking about, not necessarily for the world or in my own personal view. in a perfect world, the next highest priority for the ai teams within these companies wouldn't be releasing chatgpt 6 or gemini 4. it would instead be to collectively decide to move slower and sort out the rising power bills and privacy concerns and how llms are ransacking education and hopefully destroy all the character.ai type apps.
although i don't think these are insurmountable issues, they're still important to bring up when talking about the business of chatbots. but they're not relevant to the point of this specific blog post, which is to assess the profitability and popularity of the many many competing llm products that are out there right now. marx failed to consider that corporate politics are fun
Google Gemini
There is an argument to be made that all these chatbots could be set at a cool 89, so we need a control variable. I think Gemini is my best bet here. If there’s one thing the monetary turnaround of YouTube shows, it’s that Google knows how to take a product that requires oodles of bandwidth, optimize the tech, and then stuff ads into it until they can break even on it. From speed being a common theme across all of Gemini’s version bumps to how Google has been prioritizing smaller models that can do some of the same things as the stuff that requires power-hungry hardware, it’s clear that Google is taking this same approach with Gemini.
Yeah, yeah, link that Killed By Google website all you want, but I also feel Gemini has surpassed whatever DAU threshold Google uses to determine which products live and which ones die. Google has the platform advantage with Android. No need to first hook people on the use cases of whatever gadget you’re making with Jony Ive when that gadget is already in the pockets of billions of people. Or everybody, once Apple switches to Gemini. And talking with Gemini is just an accidental hold on the power button away.
Google also has the advantage of still being the homepage of the entire internet. I’ve seen even my most staunchly anti-AI friends sometimes sneak a peek at the overviews Gemini generates at the top of search results. With how traffic sent to websites from Google continues to drop, this seems to be a common behavior. If D-Day happens and Google decides that their “AI Mode” should be the default interface of search, I feel enough Google searchers who were previously anti-AI will decide to tolerate or even love it for Google to consider Gemini a success worth maintaining. At the cost of, you know, the web.
It helps that their recent partnership with Apple would make it harder for them to kill Gemini even if they wanted to. And also that they could always just sneak some Gemini Nano model that runs locally onto future Android phones if they don’t want to fund the compute themselves. I think the future of Gemini is bright for Google.
Go90 Scale: 0
Apple Intelligence
I don’t know if you can really consider this a “chatbot” since that’s not how Apple advertises Apple Intelligence. But with Shortcuts you can do chatbot-like things with it, and besides, when else am I going to get to share my contrarian thoughts on Apple Intelligence?
Because in my opinion, when you look past the whole “Apple ran commercials advertising vaporware” thing, Apple Intelligence is over-hated. The email and notification summaries are usually handy. I use the rewrite and summarize writing tools on my Mac all the time. The proofread feature is killer and brings Google Docs-like NLP grammar check to any app (try it sometime, it’s a hidden gem). It’s the closest thing we have to a consumer LLM that runs on-device for now. And there’s one toggle to turn it off if you don't like it.
I think I am in the minority here, though. Most people at the school I go to use a Mac, but I have never seen a single one of them use Apple Intelligence. I don’t think most of them know they have an on-device model that can privately do the exact things they still upload their Social Security numbers to Sam Altman’s corner of the internet for. Or they do, and they tried it once and they just think it sucks.
While I was writing this blog, those rumors of Apple striking a deal with Google to use Gemini for Apple Intelligence features finally came true. The press release says “Apple Foundation Models will be based on Google's Gemini models and cloud technology”. Recent rumors also affirm that Apple is still committed to getting the Siri updates it announced back in 2024 out the door soon.
So… I guess this means Apple Intelligence, as in, the actual Apple Foundation Models that do the work after you click the summarize button on an email, has gone 90? But not the buttons themselves? Hm. Real Ship of Thesesisesises situation going on here. We’ll split the difference.
Go90 Scale: 90, but also 0.
Meta AI
Meta finally did the impossible and created a platform that is both free from the whims of app store gatekeepers and popular-ish. The Meta Ray-Bans, which the company advertises as AI glasses, aren’t the next iPhone. They aren’t even close to the next Apple Watch yet. But they do have their fans within the tech blogosphere, and sales numbers are growing fast. Meta sees these glasses as lightning in a bottle after years of chasing an app platform they could claim full ownership over, so they are going to be here to stay for a while.
Now, the question that is most pertinent to me is whether these glasses are selling so well because of the AI features that are front and center in the marketing, or in spite of them. A chatbot in the DM tabs of Meta’s apps is cool and all, but something I imagine most people only use once or twice before going back to ChatGPT, especially after Meta announced they would be using your Meta AI conversations to personalize ads. On the other hand, forcing users to use Meta AI with these glasses in exchange for cool features that are impossible or inconvenient to replicate on a smartphone seems like the best bet to get people to be daily users of the chatbot.
Sadly for Meta, from the reviews and blogs I have read about these glasses, the AI features are the least compelling aspect of the Ray-Bans for users. In reviews of the Meta Ray-Ban Displays, for example, chatting with Meta AI either goes unmentioned entirely (as in this rundown of reviews by Mashable) or hardly ever gets more than a paragraph or two compared to features like framing pictures, navigating with the neural band, or texting your human friends.
I predict the main thing that will keep Meta AI alive isn’t it being a sticky product, or it being included with one. It’s more so that Meta is hesitant to declare any of their big publicized bets as duds, even if those investments were made years ago. Meta’s webpage about the promise of the metaverse is still online! They might have laid off the staff working on the demos shown on that page and are struggling to get people to use their headsets for anything other than games not made by Meta, but the metaverse is definitely still a promising venture according to Meta. Those Horizon apps are still functional.
I’d imagine Meta AI meets a similar fate if the AI bubble pops and the glasses don’t become as important a cornerstone of their business as Meta hopes. Meta AI would never receive any major updates, but it would stay online to assure investors that they didn’t blow their money on the company and that they can still trust Meta with popularizing whatever technology they deem to be the next big thing. The Meta Ray-Bans would continue to be sold as “AI glasses”, but with that phrase buried in the bullet points of their Amazon listing rather than serving as their main tagline.
In the meantime, most people will recognize the sum of Meta’s efforts in the ML space as the open-weight Llama models they publish or AI-generated ads rather than the Meta AI app they can download on their phones. People will know the Meta Ray-Bans as the glasses with speakers and cameras instead of the AI gadgets that Meta pitches them as. Maybe that separation between how this tech is pitched to investors and how consumers are actually using it is best for everyone.
Go90 Scale: 20
OpenAI ChatGPT
OpenAI’s recent big announcement was including ads with free-tier ChatGPT responses. This feels more like a last resort to break even on outstanding debt than something that can make up for the hundreds of billions of dollars OpenAI needs to raise to survive. But you don’t have to take it from me, Sam Altman literally said as much a year ago.
I don’t think OpenAI has much time left in this world. All the other LLMs I mentioned in this post so far are bankrolled by big tech companies who have enough users of their other products that they can plop an AI button into them and immediately increase DAU tenfold. OpenAI only has a product that bleeds private equity faster the more people use it, a freemium model that most users can safely ignore, revenue that mysteriously disappears, and a dream. A dream of AGI. Sorry Sam, but the real AGI appears to have been inside of us all along.
However, once they do go bankrupt, I do think the ChatGPT brand will stick around. The bot will be operated by some big tech company, likely Microsoft if they don’t decide to call it Copilot PLUS! 365 or something. Among normal people, people who work in HR and call themselves weekend warriors and drive a Chevy Suburban and are on the carnivore diet and do this every Sunday afternoon, it's still the LLM and the app. It’s what they paste their email drafts into instead of using whatever AI button their actual email client offers. It’s how they generate recipes for dinner. It certainly gets more traffic than, like, the Alexa+ web app. You should be suspicious of Sam Altman saying he gets 1 billion new users a day, but given how every IRL friend in my life is a happy user of ChatGPT¹, I wouldn’t be surprised if it were true. Bankruptcy will kill the company, but definitely not the product.
Or someone could buy the trademarks and use the ChatGPT website to advertise a memecoin. That tends to happen a lot with these kinds of companies.
Go90 Scale: 75, based on OpenAI as it currently exists sticking around. 30, based on Microsoft showing enough restraint to not rename ChatGPT to Copilot Ultimate Edition App or something.
Microsoft Copilot
According to consumers, everything about Copilot has been a disaster. It launched trying to rizz up an NYT journalist. People still think it’s secretly uploading screenshots of your computer to Microsoft because of Recall, concerns which got so serious that privacy-focused apps like Signal now block screenshots by default. The blame for all of Windows 11’s recent buggy updates has been spuriously pinned on Copilot’s vibecoding. And even Windows users who are neutral or positive towards AI features would still rather open Google Chrome and type in www.chatgpt.com than use any of the billions of Copilot buttons Microsoft sprinkles around the OS like salt. Nadella is on his knees begging people to use Copilot.
This would spell doom if this were still Ballmer’s Microsoft. Luckily, Microsoft is mostly a B2B company these days that exposes Windows to consumers out of obligation. They might not have been able to keep up with Google’s dance, but they were certainly able to jig to their own drum by being one of the first companies to offer an LLM marketed as enterprise-compliant with Microsoft 365 Copilot. Copilot coming free with existing Microsoft 365 subscriptions practically secured its win as the official LLM for offices everywhere, before the competition even had time to prepare for that race.
If you’ve worked at any big non-tech company, IT has probably demanded you use your company’s Copilot rather than the chatbot you’re used to, lest Sam Altman get all up ons juicy company secrets. Hooking up cubicles with Copilot is a low-risk way for non-tech companies to signal to investors that they, too, are bracing themselves for an AI future. Just check out some of these testimonials!
Even if the bubble pops, I can see Copilot surviving as a mainstay in enterprise Office plans while mentions of it slowly get eradicated from consumer-facing products like home versions of Windows. This tech just has a better value proposition for office work than it does for everyday life. The sorts of things everyone else hates about AI—faceless emails, instant clip art—are indispensable tools for office workers with deadlines to beat and clients to greet. Plus, there will still always be a handful of programmers within these non-tech companies that will keep 365 Copilot’s metrics from completely flatlining. Hold the line, coders!
The main risk to Copilot is Microsoft’s relationship with OpenAI. OpenAI currently supplies the GPT technology that Microsoft then makes their own modifications to for Copilot. It’s gotten a bit toxic recently though, what with Microsoft buying from OpenAI’s main competitor, Microsoft winning a clause in their agreement that lets them pursue AGI (whatever they think that looks like now) without the help of OpenAI while still taking advantage of their technology, and Microsoft stepping on OpenAI’s toes a little bit with their own in-house image generation models.² It’s possible that when Microsoft’s rights to OpenAI’s IP expire in 2032, OpenAI will feel burned by them and refuse to renew the contract, Copilot being a hindrance to their new ad-filled and miraculously sustainable business model. It’s slightly more possible that OpenAI simply dies before then because Microsoft lets them drown.
I think both of these events are less likely than Microsoft outright buying OpenAI’s IP and don’t necessitate the death of Copilot, especially since some businesses actually rely on the bot. They’re still worth considering, though.
Go90 Scale: 5
Anthropic Claude
I might be biased here because Claude is my LLM of choice. It has the prettiest UI, one that doesn't make me depressed to look at, and its outputs read as the least robotic of the bunch. I like it!
That being said, what’s great for Anthropic is that most people only use Claude for its CLI, Claude Code. This feature requires paying 17 USD a month for Claude Pro and is bankrolled by wealthy programmers who will happily fork over any amount of money so long as their X and LinkedIn feeds keep getting filled with posts with way too many line breaks about how Anthropic Just Changed The 10x SWE Game. We’re kind of stupid like that.
There’s also the fact that Claude is really only known by tech workers, whereas ChatGPT, made by the company Anthropic spun out of, is seen as the general-purpose chatbot for everybody. This probably helps offset the inference costs of supporting the free users that are still pasting their code directly into the web app like it’s 2023 or something.
I don’t think Anthropic’s situation is as precarious as OpenAI’s, though they’re still far from financially stable either. They’re still entering the same kinds of expensive deals that companies of their small stature burning billions a year really shouldn’t be entering. Their subscriptions still aren’t enough for them to break even, even though a higher portion of Claude users buy Pro subscriptions than ChatGPT users buy Plus ones.
But at the very least, they have more flexibility when it comes to monetization than OpenAI does with ChatGPT. On a rainy day they can simply bump up the price of Claude Pro a little bit. That should dock a few points off the Go90 Scale.
Go90 Scale: 55
xAI Grok
I don’t think Grok is at risk of shutting down anytime soon due to budget constraints. Elon is one of those people whose mere presence causes whatever project he stands next to to raise billions of dollars in series funding. The threats to its existence are more existential because just look at that Controversies and criticisms section. That baby can fit so many controversies and criticisms.
In some countries, Grok has already gone 90 after it happily obliged requests to generate CSAM for everybody on X to view. That’s a more than fair reason to ban the thing, and hopefully other governments who aren’t in such tight cahoots with Elon follow suit until, at the very least, xAI can prove that they have taken measures to ensure Grok consistently refuses these kinds of disgusting prompts.
How the government of xAI’s home country will respond is less certain. The current administration won’t lift a finger, but there are way too many what ifs because of the hairy situations Grok’s lack of guardrails lands xAI in. What if the 2026 election ends up being a blue wave and banning Grok becomes an easy win for the Dem base? What if that blue wave doesn’t happen until 2028 and everyone forgets about Grok by then? What if during his 2028 presidency, JD Vance decides to finally abandon his buddies in Silicon Valley to pursue the based Thomist tradcath government of his dreams, forcing X off app stores and sending his base to a Mastodon instance moderated by the Hallow devs? What if the world was made of pudding?
Grok could disappear tomorrow for reasons above the pay grade of a typical PM or CFO. No matter how well it does in benchmarks, xAI is making sure that Grok is more a symbol of a political culture war than a real chatbot to most people, with vanquishing it being a victory for some belligerents in that war and keeping it online a defensive objective for others. In a normal country, Grok would have been banned by now for generating CSAM.
Go90 Scale: 80
Amazon Alexa+
Okay fine I'll talk about the Alexa+ web app
I don’t actually own any Alexa hardware, but from the reviews and Reddit posts I read during the Alexa+ rollout, the timeline goes something like this: Echo speakers launched with an excellent voice assistant, whose performance slowly deteriorated over the decade for some reason, until they got a big LLM overhaul that made them way smarter/dumber.
Some people seem to hate Alexa+. Interactions posted on the r/amazonecho subreddit feel like the kinds of sci-fi plots people joked about these speakers inciting when they first launched a decade ago: gaslighting owners into thinking they set timers that they didn’t, shoehorning news about its new voice into every conversation, and acting snarky whenever the user tries to correct it. The robot uprising is here, and it’s sponsored by Whole Foods.
Others consider Alexa+ to be an improvement. I mainly see this in product reviews published on tech news sites, where journalists have to consider their entire time using the product and take the good with the bad rather than become disgruntled enough to post individual interactions on social media. It turns out these journalists really appreciate the ability to, for example, ask Alexa for drop-in ingredient replacements while following a recipe or just say what they want their smart home automations to do instead of needing a CS degree to set those up.
With this mixed response, I imagine Amazon will continue to offer classic Alexa and Alexa+ side-by-side, the former for people who would rather keep using smart speakers like voice-controlled computers, and the latter for those who want a more conversational experience. It would be weird for Amazon to roll out a product that some feel is worthy enough to satisfy the company’s claims of this being Alexa’s next big step, only to strip it away from them a few years later. Especially since Alexa+ is subsidized by Prime subscriptions (and billboard ad sales), there’s probably some argument Panos Panay can make to his higher-ups that offering Alexa+ reduces churn.
Echo devices being purpose-built for home assistance also gives Amazon some leeway when it comes to allocating compute for Alexa+ responses. You probably don’t need to offer the latest token-hungry deep reasoning models when all your users are asking for is to turn off the lights, plus some basic trivia here and there.
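To be clear about what I mean, here’s a crude, totally hypothetical sketch of that kind of routing in Python. The `small_model`, `big_model`, and keyword list are stand-ins I made up, not anything Amazon has actually shipped:

```python
import re

# Hypothetical stand-ins, not Amazon's real APIs: one cheap lightweight model
# and one expensive deep-reasoning model.
def small_model(prompt: str) -> str:
    return f"[small model] handled: {prompt}"

def big_model(prompt: str) -> str:
    return f"[big reasoning model] handled: {prompt}"

# Requests a smart speaker sees constantly that don't need deep reasoning.
SIMPLE_PATTERNS = [
    r"\bturn (on|off)\b",        # "turn off the lights"
    r"\bset (a|the)? ?timer\b",  # "set a timer for 10 minutes"
    r"\bwhat time is it\b",
    r"\bplay\b",
]

def route(prompt: str) -> str:
    """Send routine home-assistant asks to the cheap model, everything else upstream."""
    lowered = prompt.lower()
    if any(re.search(pattern, lowered) for pattern in SIMPLE_PATTERNS):
        return small_model(prompt)
    return big_model(prompt)

print(route("Turn off the kitchen lights"))
print(route("Plan a week of dinners around what's in my Whole Foods order"))
```

Route enough of the “turn off the lights” traffic through the cheap path and the inference bill stops scaling with how smart your flagship model is.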
Go90 Scale: 15
(Oh yeah, but the web app specifically is probably like a 70. I get it’s not meant to be used like a typical chatbot and is more of a way to interact with Alexa and use everything it knows about you when you’re away from home, but something tells me there aren’t enough Alexa power users out there for this to be anything other than niche.)
There’s also DeepSeek and Mistral Le Chat, but I don’t know enough about those joints to write an in-depth analysis on them. From what I know about DeepSeek, though, I’ll give them like a 10 on the Go90 Scale. Mostly based on vibes.
At first, the amount of chatbots I ended up attaching a score lower than 40 to surprised me. Then I realized those chatbots were all owned by big tech companies, and it made me worry that my takes in this blog post were too cold to warrant its length. Yeah, of course the huge corporations who already have billions of dollars to lose on bets like LLMs and who have a diversified portfolio of products to fall back on in case those bets don’t work would be able to keep their chatbots online well into the future.
So hopefully the thing you take away from this blog isn’t the highly scientific and peer-reviewed scores themselves, but rather the exact moves that spare these chatbots a couple of points. Opportunities to monetize LLMs are going to come from products that recontextualize the tech from the lab experiment it launched as into something with a streamlined purpose, where the AI-ness is more obfuscated and crafting the perfect prompt is de-emphasized.
Gemini’s overviews integrate naturally with the search engine people already knew how to use, so there’s no learning curve. Claude Code and Alexa+ place LLMs in novel contexts, so the LLM is able to anticipate the kind of help the user wants (e.g. in a file directory, with devices in a smart home) without the user needing to know how to teach it to help. Apple realized no one cares whether their AI features are homegrown, so they’re buying some other company’s LLM and focusing on what they do best: making their own cool stuff that justifies the existing technology.
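If you’ll humor one more toy example of what I mean by the product doing the anticipating: the sketch below is entirely hypothetical (the `call_llm` stub and `coding_assistant` helper are mine, not how Claude Code or Alexa+ actually work), but it shows the basic trick of scooping up context the user would never think to type and stapling it onto the prompt before the model ever sees it.

```python
import os

# Hypothetical stub for whichever LLM API the wrapper happens to call.
def call_llm(prompt: str) -> str:
    return f"[model response to a {len(prompt)}-character prompt]"

def coding_assistant(user_request: str, project_dir: str = ".") -> str:
    """A toy 'wrapper': gather context the user didn't type, then ask the model."""
    # The user just says "fix the failing test"; the wrapper supplies the situation.
    file_listing = "\n".join(sorted(os.listdir(project_dir))[:50])
    prompt = (
        "You are helping inside this project directory:\n"
        f"{file_listing}\n\n"
        f"User request: {user_request}\n"
        "Answer with concrete edits to these files."
    )
    return call_llm(prompt)

print(coding_assistant("rename the config loader and update everything that imports it"))
```

The user types one lazy sentence; the wrapper supplies the situation.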
The ChatGPT wrappers I used to make fun of are actually going to be how users continue to be wowed and get enough use out of these things to convince themselves to open their wallets, or look at a few ads, to keep using them. That’s an outcome I personally was not expecting back when startups like Cursor and Perplexity were first getting traction from VCs. What a twist!
2026 is the year that companies making AI products must finally prove to a general populace, who are increasingly souring on their wares, that they’re for more than cheating on homework. And I honestly think that some of these companies can pull it off.
Woah, an optimistic ending? I am so pumped about AI now let's keep it up!! #bring back the humane ai pin please we were all busy the afternoon it launched
¹ and no, i don't live in the bay area.
² the scrolljacking on that otherwise very pretty microsoft.ai website is terrible and that accessibility toggle in the corner does nothing to fix it when it really seems like it should. just because your product is new and fancy doesn't mean its website should feel like you're scrolling through syrup! bill gates pls fix