When writing with AI-assisted tools, please make sure the human is actually in the loop
Living in the age of academic AI foolery.
The other week I came across a LinkedIn post exposing the blatant use of large language models (LLMs) like ChatGPT in academic publications, clearly with minimal or no human oversight.
There are plenty of examples you can find online (just search LinkedIn and X). Not only is this embarrassing, but it also raises the question of how such papers passed peer review in the first place.
By now, most of us should know that LLMs can hallucinate or confabulate; that is, their outputs can fabricate details or produce absolute nonsense unrelated to the input prompt. That said, in my humble opinion, it's still acceptable to use AI tools like LLMs to assist with writing articles, but only under human supervision.
But let’s not take the mickey like the examples above… A human-in-the-loop approach will work only IF a human is actually in the loop.