The risk of overreliance on AI
A study from MIT warns about the dangers of relying too much on AI and LLMs. We delve into the argument, and how AI can help--and hinder--society.
Michele Li-Fay
6/27/2025 · 5 min read
When we exhibited at the SMEXPO in June 2025, we noticed that around 10% of exhibitors were either AI companies or businesses working with AI. We were also approached by at least a dozen AI developers pitching their latest chatbots or AI capabilities, hoping we would integrate them into our services. It was clear that artificial intelligence is not just the hot topic of the moment; it is fast becoming the dominant topic of the decade.
As a digital consultancy, you might expect us to be celebrating the development of artificial intelligence and large language models (LLMs) such as ChatGPT. After all, the experts keep telling us that AI is a must for every business, and the big tech companies, whether it's Samsung or Apple, Meta or Microsoft, are all rolling out their own AI functionality.
But we are bucking the trend when it comes to this topic. We are fans, but we are cautious fans. We believe, as society, we need to be cautious about how we implement and deploy artificial intelligence. Because if we're not careful, it could very well erode human intelligence, shut down the human brain, and hinder human and societal development.
We are not AI haters
Off the bat, we would like to point out that we are not AI haters. We are not averse to the development of artificial intelligence; in fact, we are big advocates of considerate deployment. We have used AI and LLMs plenty in our own lives, both personally and professionally: ChatGPT to help solve coding issues, DeepSeek to help with content ideas.
But the key to the discussion is considerate deployment. Just because you can, doesn't mean you should. Overusing these tools can affect how your brain retains and processes information, which, in the long run, can erode your ability to think critically, logically and laterally.
And just as we were discussing this with friends and family, MIT published a study on how ChatGPT can erode critical thinking, framing our concerns succinctly.
The MIT Study
It must be said upfront that the MIT study has some caveats worth considering (as admitted by the lead author, Nataliya Kosmyna): the sample size is small, and the results have not yet been peer-reviewed. Nonetheless, it provides good food for thought on a topic where the conversation is growing and evolving every single day.
The study divided 54 subjects, aged between 18 and 39, into three groups, each tasked with writing several SAT essays:
Group 1 used ChatGPT (LLM Group)
Group 2 used Google search only (Search Engine Group)
Group 3 used neither (Brain-Only Group)
Groups 1 and 3 were then given an additional task of writing the same essays, but with the tools reversed: the LLM Group used no tools, while the Brain-Only Group got to use ChatGPT.
The results were shocking, but--at least to us--not surprising in the least. The LLM Group "consistently underperformed at neural, linguistic, and behavioral levels", while the Brain-Only Group demonstrated better memory recall and higher brain activity. The LLM Group struggled to recount the content of their essays, even though they had just "written" them, showing that the information was not properly processed and retained by the subjects.
The purpose of the study, according to Kosmyna, is to spark debate about the introduction of AI into education, especially for younger pupils, since many education chiefs around the world have advocated introducing AI into the curriculum early. While AI and LLMs are valuable tools for enhancing learning, the paper argues that we need to be careful about how we implement and deploy these resources, and alert to whether they hinder rather than help not just education but the development of the human brain itself.
Our real-life AI example
As we've mentioned, we do use AI and LLMs in our everyday lives, and recently we used ChatGPT to help with a client project. The client wanted a custom-coded application, and, while we have experience with coding, we are not web developers, so we needed help troubleshooting some of the issues.
We got to a point in the development where we wanted to integrate two pieces of software so they could speak to each other. As they were built by different developers, there was no immediate, obvious way to create the connection. So, naturally, we turned to ChatGPT to ask if the integration was possible.
ChatGPT said yes, and proceeded to provide detailed instructions on how to implement the integration. It would require a third-party hosted solution and an incredible amount of code. As we read through it, we thought to ourselves, "this looks overcomplicated and convoluted", so we approached it with caution. In our experience, integrations should involve as few hosts as possible; ideally, one system speaks directly to another. The more systems in the mix, the bigger the chance of a communication breakdown.
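To illustrate what we mean by a direct integration, here's a minimal sketch in Python. The endpoints, tokens and field names are all hypothetical, as we can't share the client's actual software: one system pushes an event to a small webhook handler, which forwards it straight to the other system's API, with no third-party host in the middle.

```python
# A minimal sketch of a direct, two-system integration:
# "System A" pushes events to this webhook, and the handler forwards
# them straight to "System B"'s API. No third-party middleware.
# All endpoints, tokens and field names below are hypothetical.

from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

SYSTEM_B_API = "https://system-b.example.com/api/v1/records"  # hypothetical
SYSTEM_B_TOKEN = "replace-with-a-real-api-key"                # hypothetical

@app.route("/webhook/system-a", methods=["POST"])
def handle_system_a_event():
    event = request.get_json(force=True)

    # Map System A's payload onto the shape System B expects
    # (field names are made up for illustration).
    record = {
        "external_id": event.get("id"),
        "title": event.get("name"),
        "payload": event.get("data", {}),
    }

    # One hop, one host: straight from this handler to System B.
    resp = requests.post(
        SYSTEM_B_API,
        json=record,
        headers={"Authorization": f"Bearer {SYSTEM_B_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

    return jsonify({"forwarded": True}), 200

if __name__ == "__main__":
    app.run(port=5000)
```

Contrast that with the route ChatGPT first proposed: the same data would have made an extra hop through a separately hosted service, adding one more place for communication to break down.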
After a few minutes, we decided to do some research away from AI and LLMs, and set about understanding the software better. We noticed functionality that suggested there could actually be a direct solution, rather than a cumbersome integration through a third party. So we asked ChatGPT if our newly discovered solution would work.
ChatGPT said yes.
Well, why didn't ChatGPT suggest this in the first place?!
The reality is that ChatGPT, and LLMs and AI models in general, are only as good as the information they are given. If false data makes its way into the training data, or the software has been updated since the model was trained, the solutions ChatGPT provides can be wrong. At best, you get a slight error; at worst, you spend unnecessary money or pass on completely wrong information.
What about your business?
Nowadays, it's difficult to navigate life as a small business owner without someone shouting about AI in your face, whether it's in news articles, at trade shows, on TV, or in LinkedIn and social media posts. You may feel that if you aren't implementing AI in some shape or form, you are falling behind the times.
Our approach at Mpowering Solutions is possibly quite different from that of other digital consultancies and agencies out there. We believe you should implement or use AI only when it is right for you. Some businesses will absolutely need AI to automate processes and thrive. But if you're a small business that can handle your existing queries or workload on your own, or with your current setup, don't be pressured or bullied into implementing AI. AI is here to help us, not to be us. If you feel you don't need AI, chances are you're right.
Want Digital Advice?
Still unsure if AI is the right approach for you? Get in touch and ask about our Consultancy services, where we use our digital knowledge and know-how to help you navigate the confusing world of digital.

