
Humans Shouldn’t Have to Prove They Are Humans: The AI Detection Debacle

Scammy AI detection tools are robbing some people of their money … and some of their jobs.

For many writers who write not just for the money it brings but for the fulfillment it offers, AI detection tools have hit like a nightmare.

While large language models (LLMs) like GPT and Gemini are fun tools to play around with, they have their limitations (yes, even the paid versions!). Most serious writers had to abandon ChatGPT because of its lack of originality and its repetitive, patterned prose. Agencies and publishing houses issued clear guidelines to steer clear of AI-generated content, so that human ideation would not come to rely on AI-powered output, which is an amalgamation of already existing thoughts and ideas.

But here comes a new task at hand: writers who are shunning GenAI tools in an effort to preserve originality are now being forced to prove their writing isn’t AI-generated. And no matter what they do or how they modify their work, tools like ZeroGPT, Turnitin, Copyleaks and Quillbot keep wrongly flagging the content as AI-generated.

And isn’t the irony obvious? A tool that learns from humans will (apparently) behave like a human. GenAI is getting stronger every day, which simply means it is becoming more human-like. So how does a real human become more human (or less?) to avoid being detected as AI? Isn’t that the last thing humanity should have to prove? As if the captchas, puzzles and math equations weren’t enough, writers are now having to introduce casual errors, grammatical awkwardness and even unnecessary quirks in the hope of escaping the wrath of AI detection tools.

Reddit forums are full of people, from freelancers to professional writers to students, complaining about the ridiculous inaccuracy of these tools’ ‘accuracy’ claims.

And so, when a handcrafted piece is flagged as AI-generated, it shows how hurriedly such tools were programmed simply to capitalize on an anticipated market demand.

How Does an AI Detection Tool Even Work?

Just like LLMs, AI detection tools have been trained on large sets of human-generated and AI-generated content. They are said to look for two things: predictability (how surprising each sentence is given the one before it, often measured as perplexity) and structural variation (differences in sentence structures, lengths and patterns, often called burstiness). The assumption is that AI-generated content tends to be more predictable and less bursty than human-generated content.
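To make those two signals concrete, here is a minimal, illustrative sketch in Python of the kind of surface statistics such a tool might compute. The function names and the word-frequency shortcut are invented for illustration only; real detectors rely on trained language models to estimate perplexity, not crude counts.

```python
import re
import statistics
from collections import Counter

def burstiness(text: str) -> float:
    """Variation in sentence length: standard deviation over mean of words per sentence.
    Under this heuristic, higher values suggest more 'bursty', human-like variation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

def repetition_score(text: str) -> float:
    """Crude predictability proxy: share of the text made up of its 10 most frequent words.
    A real detector would use a language model's perplexity instead of word counts."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    top = sum(count for _, count in counts.most_common(10))
    return top / len(words)

if __name__ == "__main__":
    sample = ("The cat sat. The cat sat on the mat. The cat sat on the mat again. "
              "Then, without warning, everything about that quiet afternoon changed.")
    print(f"burstiness: {burstiness(sample):.2f}")
    print(f"repetition: {repetition_score(sample):.2f}")
```

Even this toy version hints at the problem: a carefully edited human piece with even, polished sentences can score as ‘uniform’ just as easily as machine output can.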

One integral yet concerning part of AI detection tools is syntactic analysis, whereby the tool examines grammatical structure and attributes the text to AI if it finds uniform usage throughout. For this reason, content that is academic, literary or simply well-structured and well-written can be wrongly flagged as AI-written. In one recent bout of trolling, AI detection tools flagged passages from religious books like the Bible and the Geeta as AI-generated.
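The syntactic side can be sketched just as crudely. The heuristic below is an invented proxy, not how any commercial tool actually parses grammar, but it shows why evenly structured prose scores as ‘uniform’.

```python
import re
from collections import Counter

def syntactic_uniformity(text: str) -> float:
    """Crude proxy for uniform grammatical structure: the share of sentences that share
    the most common rough 'shape' (opening word + number of commas). Real tools parse
    grammar properly; this heuristic only illustrates the idea."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    shapes = [(s.split()[0].lower(), s.count(",")) for s in sentences]
    _, count = Counter(shapes).most_common(1)[0]
    return count / len(sentences)

if __name__ == "__main__":
    uniform = "The model writes clearly. The model stays on topic. The model avoids risk."
    varied = "Honestly? I rewrote it twice. Nothing worked, so I slept on it and started over."
    print(syntactic_uniformity(uniform))  # 1.0: every sentence opens the same way
    print(syntactic_uniformity(varied))   # about 0.33: more varied structure
```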

Why These Tools are Immature and Cannot Be Trusted

AI detection tools are in their infancy, and whether they will ever be able to outpace AI content creation tools is a big question. Far more time, energy, and effort is being spent on helping AI become the best human assistant there ever was than on detecting its output. The launch of AI detection tools happened far too fast, simply to capitalize on a potential new market. The advent of generative AI brought with it a major concern for educators who wanted to limit academic misconduct. The global AI in education market was valued at USD 1.82 billion in 2021 and is expected to expand at a compound annual growth rate (CAGR) of 36.0% from 2022 to 2030.

AI detection tools jumped on the bandwagon, marketing themselves to educators and publishers as accurate tools for telling AI-generated content from human writing. This was more of a predatory approach to monetization than a research-led approach for consumer benefit. Just this month, Bloomberg reported how an AI detection tool falsely accused Moira Olmsted, a university student, of submitting an AI-generated essay, leading to a zero on her paper. Such harrowing incidents are demoralizing, often with devastating consequences; in Olmsted’s words, a punch in the gut. People have slowly started realizing how useless such tools are in their present state, and that AI vs humans could be a battle not worth fighting.

What Does This Mean for Writers and Content Creators?

Writers trained to follow the rules of grammar, flow and coherence stand to suffer if their pieces look too uniform, consistent, segmented and industry-relevant. A fair amount of unpredictability has to be introduced to ‘humanize’ one’s content, and even then, you never know whether a tool will still flag it as AI-written. The onus now lies on writers to inform stakeholders that these tools do not really work.

The challenge is also to beat AI at a game that was ours, and that can happen only if we supersede the benchmarks AI has already met. Gone are the days of hygiene content that simply does the job. Human writing is expected to be more nuanced, engaging and relatable, and that can happen only if it stems from experience, creativity and the courage to experiment. Writers will have to recalibrate their own ideas and thoughts and keep assessing the originality of their story-creation process. Leveraging GenAI for brainstorming and fine-tuning could, in fact, yield great results, but only when AI is treated as an assistant and not a master.

A Future of AI Acceptance and Co-existence

The purpose of AI detection stands defeated in a world where humans and AI co-exist and where it is the co-created piece of work that gains glory. Just as we don’t mind whether a calculation is done by a human or by a calculator, we will eventually stop minding where the structure of a piece of content comes from, as long as it is thought-provoking and contributes to the universal mind.
