AI ain’t ready
Not ready to do more good than bad, anyway, that’s for sure:
Every so often these days, a study comes out proclaiming that AI is better at diagnosing health problems than a human doctor. These studies are enticing because the healthcare system in America is woefully broken and everyone is searching for solutions. AI presents a potential opportunity to make doctors more efficient by doing a lot of administrative busywork for them and by doing so, giving them time to see more patients and therefore drive down the ultimate cost of care. There is also the possibility that real-time translation would help non-English speakers gain improved access. For tech companies, the opportunity to serve the healthcare industry could be quite lucrative.
In practice, however, it seems that we are not close to replacing doctors with artificial intelligence, or even really augmenting them. The Washington Post spoke with multiple experts including physicians to see how early tests of AI are going, and the results were not reassuring…
The problem with tech optimists pushing AI into fields like healthcare is that it is not the same as making consumer software. We already know that Microsoft’s Copilot 365 assistant has bugs, but a small mistake in your PowerPoint presentation is not a big deal. Making mistakes in healthcare can kill people. Daneshjou told the Post she red-teamed ChatGPT with 80 others, including both computer scientists and physicians, posing medical questions to the chatbot, and found it offered dangerous responses twenty percent of the time. “Twenty percent problematic responses is not, to me, good enough for actual daily use in the health care system,” she said.
(Gizmodo)
Even some platforms that largely exist to pimp AI are seeing fit to acknowledge discomfiting realities. This one lists and describes examples:
However, like any technology, AI isn’t perfect. Mistakes and unexpected behaviors can occur: from being biased to making things up, there are numerous instances where we’ve seen AI going wrong.
(Evidently AI)
Here’s a really bad one:
There is widespread concern about the ways artificial intelligence could impact the future. Yet somehow that concern seems detached from the fact that AI presents an immediate threat to millions of people in the present. Since last year, whistleblowers and investigative reporters have provided some insight into how AI is being used in the context of the ongoing genocidal campaign perpetrated by the Israeli military against Palestinians in Gaza, a campaign which Israel’s own leaders have repeatedly described as aimed at the annihilation of a people. Artificial intelligence and those selling it to Israel (including U.S. tech companies) are helping.
The thought leaders and CEOs who warned us all last year about the “existential threat” posed by an imaginary “artificial general intelligence” are notably silent on the use of AI, as it currently exists, to kill and maim.
(In These Times)
And of course there need to be more opportunities for endless grifting by our tech overlords.
OpenAI has called for increased US investment and supportive regulations to ensure leadership in AI development and prevent China from gaining dominance in the sector. Its ‘Economic Blueprint’ outlines the need for strategic policies around AI resources, including chips, data, and energy…
CEO Sam Altman, who contributed $1 million to President-elect Donald Trump’s inaugural fund, seeks stronger ties with the incoming administration, which includes former PayPal executive David Sacks as AI and crypto czar. The company will host an event in Washington DC this month to promote its proposals.
(Digwatch)
And a somewhat positive item:
Nearly all the big AI news this year was about how fast the technology is progressing, the harms it’s causing, and speculation about how soon it will grow past the point where humans can control it. But 2024 also saw governments make significant inroads into regulating algorithmic systems. Here is a breakdown of the most important AI legislation and regulatory efforts from the past year at the state, federal, and international levels.
(Gizmodo)
Starting in less than a week there will be no chance at meaningful AI regulation at the federal level for a while. But there’s no reason for states that are able to do so not to go for it. Bottom line, we need to regulate the hell out of AI, and make Big Tech spend its own money to improve it. If they can.