I’ve been thinking a lot about AI agents lately and all the hype surrounding them this year—there’s no denying their potential, but I can’t help feeling skeptical about just how much we should trust them. AI has been nothing short of a game changer. Over the last few years, we’ve seen tools built on large language models (LLMs) transform productivity—whether it’s generating content, analyzing data, or even writing code. As impressive as this is, I find myself questioning how much we can—and should—rely on agentic AI, especially when it comes to tasks that require action, like pushing code to production or making financial investments.

In my role as a BI Engineer, I use AI tools every day to help with everything from writing scripts to analyzing datasets. They’re great for speeding up the process and offering new perspectives, but here’s the thing: I can’t imagine fully trusting them to implement anything on their own. These tools are helpful, sure—but what if the code they push has a bug? Or worse, what if it introduces a flaw that no one catches until it’s too late?

The risks aren’t just technical. Imagine an AI placing the wrong trade in a high-stakes financial environment or applying the wrong logic in a critical system. Mistakes like these could cost companies millions—or worse, harm people directly. And the big question is: who’s accountable when that happens? AI systems don’t “own” their decisions, so responsibility ultimately falls back on humans. But if humans weren’t involved in the process, how do you assign blame?

That’s why, for me, human oversight is non-negotiable. AI can assist with routine tasks, but we still need someone to review the output and make sure it’s actually correct. AI doesn’t understand business context, edge cases, or the nuance of ethical decision-making. It follows patterns and data—and we all know that data can be flawed.

This doesn’t mean agentic AI is a bad idea. Far from it—it has the potential to supercharge productivity. But there’s a fine line between automation and autonomy, and right now, we need to be cautious about crossing it. Building in checks, reviews, and accountability structures is critical if we’re going to trust these tools with bigger responsibilities.

So, where do we go from here? Should we push ahead and let AI take more control, or slow things down and make sure we have guardrails in place? It’s a conversation worth having—and one we need to figure out before agentic AI becomes the norm.
The ethical considerations surrounding agentic AI are crucial and timely. Just look at the recent debate around autonomous weapons systems: it highlights the need for careful consideration of who is responsible when AI makes decisions with real-world consequences. How can we ensure that AI development prioritizes human well-being and societal benefit?
Right to the point. Even in my case of data analysis, the work still requires optimization and human review. Data on its own isn’t standalone support.
Valid point.