AI’s dark side is becoming more evident
Some recent reality checks to counter the out-of-control babble of the (increasingly desperate) AI hypemongers.
Against the recommendations of experts and professionals, millions of people use AI chatbots for emotional support and advice. These are often people who are in vulnerable times in their lives, perhaps unable to pay out of pocket for the type of help and support they need. The self-help industry is apparently aware of this trend, has seen this demographic of people in search of help, and collectively said, “You know, we can probably make some cash off these people.”
The Wall Street Journal reports that there is a trend of gurus creating chatbots that replicate their style and voice, allowing people to “talk” to an AI-powered recreation of them to get “personalized” advice in the style of their life coach of choice. The bots are loaded up with all of the experts’ books, lectures, and interviews, and can spit out answers to questions in the style of the author. All you have to do is pay a monthly subscription to access it all.
(Gizmodo)
I’ve noted before that the one area where I don’t have any problem with artificial “intelligence” is when it’s being used by responsible scientists. This has me rethinking that. Perhaps, though, things will work themselves out over time.
As artificial intelligence tools such as ChatGPT gain footholds across companies and universities, a familiar refrain is hard to escape: AI won’t replace you, but someone using AI might.
A paper published today in Nature suggests this divide is already creating winners and laggards in the natural sciences. In the largest analysis of its kind so far, researchers find that scientists embracing any type of AI—going all the way back to early machine learning methods—consistently make the biggest professional strides. AI adopters have published three times more papers, received five times more citations, and reached leadership roles faster than their AI-free peers.
But science as a whole is paying the price, the study suggests. Not only is AI-driven work prone to circling the same crowded problems, but it also leads to a less interconnected scientific literature, with fewer studies engaging with and building on one another.
(Science)
This is rather comprehensive. And enlightening.
As artificial intelligence seeps more into people’s lives, making it ethical as well as functional is one research team’s goal. The group of academics, who hail from different institutions under the banner of the Responsible AI Collaborative, have been indexing stories of AI’s harmful outcomes since 2018. Back then, team leader and fellow at Harvard’s Berkman Klein Center Sean McGregor says, “we were peaking in terms of AI optimism without a balancing recognition of tradeoffs.” The researchers’ creation, the AI Incident Database (AIID), would help AI practitioners identify and address the technology’s weak points.
McGregor and his team compiled the first data points solely based on news stories. In 2020, they made the AI Incident Database public, and today anyone can submit an entry for consideration. Most incidents are anchored by a story in the press about, say, students creating deepfake pornography of classmates, or a wrongful arrest based on erroneous facial recognition. As a result, the database is not an exhaustive archive of all AI’s problems, but rather a compilation of its newsworthy issues. It captures emerging risks and especially significant issues in AI adoption.
(Bulletin of the Atomic Scientists)


