For Immediate Release

FIR #466: Still Hallucinating After All These Years

Synopsis

Not only are AI chatbots still hallucinating; by some accounts, the problem is getting worse. Moreover, despite abundant coverage of the tendency of LLMs to make things up, people are still not fact-checking, leading to some embarrassing consequences. Even the legal team at Anthropic (the company behind the Claude frontier LLM) got caught.

Also in this episode:

- Google has a new tool just for making AI videos with sound: what could possibly go wrong?
- Lack of strategic leadership and failure to communicate about AI's ethical use are two findings from a new Global Alliance report.
- People still matter: some overly exuberant CEOs are walking back their AI-first proclamations.
- Google AI Overviews lead to a dramatic reduction in click-throughs.
- Google is teaching American adults how to be adults. Should they be finding your content?
- In his tech report, Dan York looks at some services shutting down and others starting up.