The AI bust seems to be upon us
If you have artificial “intelligence” company stocks in your portfolio, you presumably had a disheartening, even frightening, week. Like many people, I don’t see reality getting any better over time for this grotesquely overhyped industry/mega-scam. But don’t just take my word for it (or for anything else, for that matter). One of the biggest names in AI has basically raised the white flag, though the company’s mouthpieces will never admit it and may very well not even realize that’s what they’re doing, because they’re so far gone in their own fantasies of world domination.
Faced with this dilemma—where do you get a trillion dollars quick?—OpenAI is getting ready to run hat in hand to the taxpayer for subsidies, like every great Ayn Randian self-created entrepreneur, pulling themselves up by their bootstraps. At a recent Wall Street Journal tech conference, OpenAI Chief Financial Officer Sarah Friar suggested that a government loan guarantee might be necessary to fund the enormous investments needed to keep the company at the cutting edge…
Though Friar later walked back her suggestion, saying that she was advocating for structural support for AI in general, not just her company, it is likely true that some kind of huge subsidy or another is probably the only way that OpenAI’s preposterous business model—it is “worth” a supposed $500 billion—can be sustained.
(The American Prospect)
The hubris of too many of these pathologically greedy Big Tech assholes is just sickening. And I’ve gotta mention that it would be a really good thing if the Democratic presidential candidate in 2028 is NOT a Big Tech butt-kisser like Gavin Newsom. But I digress. Back to some highly relevant info:
The idea up in the C-suite is almost certainly that automation will be able to fill in those gaps, even though there’s little to suggest that it will actually play out that way. According to a study done by the Center for AI Safety, AI agents were only able to complete about 3% of the work assigned to them that humans can do reliably. Given that, it’s little surprise that a recent report published by research and advisory firm Forrester found that more than half of all employers who cut workers and tried to replace them with AI regret the decision.
(Gizmodo)
So don’t buy the claims that spiking job losses are mostly due to AI, if you want any advice from me. (Such claims already have a name: “AI-washing.”) The Trump economy has certainly started to nosedive, as was inevitable, though he and his whimpering acolytes sure managed to screw things up in what is probably record time.
Back to AI:
Second – despite the improvement seen in the BBC-to-BBC comparison, the multi-market research shows that errors remain at high levels and that they are systemic, spanning all languages, assistants and organizations involved. Overall, 45% of responses contained at least one significant issue of any type…
So, there has been progress in some areas, but there is much more to do. Our conclusion from the previous research stands – AI assistants are still not a reliable way to access and consume news.
(BBC)
I for one am not about to bet on that progress happening, and not just when it comes to AI’s dreadful “news” generation. The technology is fundamentally limited, and it will stay that way until machines are created that are complex enough to produce anything akin to emergent consciousness. We’re a long, long way from that.
And on the receiving end:
Menlo engaged Morning Consult to survey 5,031 Americans. Here are the takeaways:
– There are actually a ton of ordinary US consumers who have used a chatbot at least once in the past six months! 20% use it daily!
– This is because the chatbots are free and convenient.
– Consumers use chatbots for casual things. They’re toys.
– Only 3% of chatbot users pay anything. Though that number’s a bit weird.
(Pivot to AI)
I’m now more of an AI skeptic than ever. It will take a lot to move me away from that. Way, way more than I’m seeing.
Comment from Joe Musich: I have played with OpenAI a bit, but with art. However, there is a lot of prep to do if you are looking for something usable by your own imagination’s creative standard. The machine must be fed with pretty damn near how you want the outcome. A percentage can be fed into the creating for how much you want AI to vary from what you have done; a very low percentage can do some nice cleaning up without any kind of major change. The text prompts you have to feed the beast after giving it an image can do damage if one is not careful. The moneymaking for OpenAI seems to be the rental fees; the way that works is buying usage. There are ways to make your payment stretch a very long way, but to do that you have to spend time planning the creation to the point of being easily able to visualize it. If I were not 79, my time might be better spent taking art/cartooning classes and forgetting AI. That being said, if the bean pushers are not taking advice from the hands-on people doing any kind of endeavor, AI will run away with the endeavor’s capital very easily. I may be in a minority, but I sure would not want AI to create medicine on its own. It has to be watched carefully every step of the way. The tech bros pushing AI are clearly only concerned about lining their pockets.
Thanks for your feedback. If we like what you have to say, it may appear in a future post of reader reactions.