Christopher Foster-McBride’s Post

The ‘AI Risk guy’, Co-Founder @Digital Human Assistants | Founder @AI for the Soul | Co-Founder @tokes compare | Founder @Medical Coding and Documentation GPT, also healthcare and public services

📚 "LLMs Will Always Hallucinate, and We Need to Live With This" by Sourav Banerjee and the team.

🧠 This is a foundational paper - if you're an AI / LLM practitioner or champion (or naysayer), it is worth reading - many of you will know this already, but the evidence is vital.

🔍 Summary: As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically (hence my work on the AI Trust / Verisimilitude Paradox).

🤖 This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems.

🎭 The researchers demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs. It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms.

🧮 As I have said before, we are playing a game of error minimization, so we need to understand risk and risk mitigation.

🎯 There is still utility in LLMs, but they need to be handled and managed with care.

⚠️ We can save you time and money, and help you safely navigate the 'Age of AI'.

#AIRiskGuy Digital Human Assistants Paul Edginton Ricky Sydney

https://lnkd.in/gRPDET6x
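To make the "error minimization, not elimination" point concrete, here is a toy back-of-the-envelope sketch in Python. It assumes a fixed, independent per-token error probability - a simplification I'm introducing for illustration, not the paper's formal argument - and shows how even a tiny residual error rate compounds over long generations:

```python
# Toy illustration (my simplification, not the paper's proof): if each
# generated token independently has probability p of being wrong, the
# chance that a long output contains at least one error compounds fast.
def prob_at_least_one_error(p_error_per_token: float, n_tokens: int) -> float:
    """Probability that at least one of n generated tokens is wrong,
    assuming independent per-token errors."""
    return 1.0 - (1.0 - p_error_per_token) ** n_tokens

# Even a 0.1% per-token error rate makes errors very likely over
# thousands of tokens - you can shrink p, but never reach zero risk.
for p in (0.001, 0.0001):
    for n in (100, 1000, 10000):
        print(f"p={p}, n={n}: P(>=1 error) = {prob_at_least_one_error(p, n):.4f}")
```

The takeaway matches the post: improvements drive the per-token error rate down, but any nonzero rate still leaves a real, growing chance of hallucination in long outputs - hence risk management rather than risk elimination.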

Paul Edginton

CEO Advantage Podcast - Company Director, Board Chair, Innovator

5mo

Carmel Crouch this is what we were talking about on Saturday. It's not a case of "bad product" but a case of "no one is perfect" - or in this case, no thing…
