Christopher Foster-McBride’s Post

The ‘AI Risk guy’, Co-Founder @Digital Human Assistants | Founder @AI for the Soul | Co-Founder @tokes compare | Founder @Medical Coding and Documentation GPT, also healthcare and public services

Understanding Hallucinations and Fabrications in LLMs 🤖💭

Inspired by 'LLMs Will Always Hallucinate, and We Need to Live With This' by Sourav Banerjee et al.

Large Language Models (LLMs) are powerful tools, but they sometimes produce errors known as hallucinations or fabrications. These errors are not just occasional; they are inherent features of these systems. That is why we need to take the time to train people to use LLMs effectively, navigate their limitations, and raise awareness. 💡

Hallucinations and Fabrications: let's break down the difference.

Hallucinations occur when LLMs provide inaccurate or incomplete information, usually based on existing knowledge but applied incorrectly. Examples include:

1. Factual Incorrectness: Providing incorrect facts, e.g. stating a blood sugar level is 150 mg/dL when it is actually 120 mg/dL. ⚠️

2. Misinterpretation:
- Corpus Misinterpretation: Misunderstanding the context or meaning within the training data, leading to incorrect conclusions. ⚠️
- Prompt Misinterpretation: Misinterpreting a user's question due to ambiguity, such as confusing "lead" (the chemical element) with "lead" (leadership). ⚠️

3. Needle in a Haystack: When LLMs struggle to find the correct details:
- Missed Key Data Points: Leaving out crucial information, like mentioning only one cause of World War I. ⚠️
- Partial Incorrectness: Mixing correct and incorrect facts, e.g. stating Neil Armstrong walked on the moon in 1959 instead of 1969. ⚠️

Fabrications are different: they involve creating entirely false information with no basis in the training data. Examples include inventing a non-existent scientific study or making up a fake quote from a historical figure (this happens a lot!). ⚠️

Summary: Hallucinations and fabrications in LLMs can lead to misinformation, legal and ethical risks, erosion of trust, and amplification of biases.

Reach out to Digital Human Assistants today: we can save you from wasting time and money, and keep your business safe. 🔒🤝 https://lnkd.in/gRPDET6x

Paul Edginton Ricky Sydney
