
Welcome to Our Blog

MA Editorial is a dynamic content writing and editing service offering proofreading and editing for academic writing, literary analysis, and blog content, as well as content and grant writing services. Read our blog for advice on editing and content writing, or get in touch directly.


Should LLMs like ChatGPT Be Used for Editing?

Updated: Dec 27, 2025

Despite the temptation to do so, it is best to refrain from using ChatGPT and other LLMs to edit any professional writing. Using ChatGPT and other LLM software to edit has become quite popular, in both academia and content marketing. LLM software is, of course, even more popular for writing content, but more and more professional editors are now using it to edit as well.


So the question stands: "Should you use ChatGPT or any other LLM-based AI to do your academic writing?" I say no. As a professional writer and editor, I see no need to use AI. This is not because I am against technology or progress; it is simply because LLM-based AI is not up to the task of writing or editing professional-level work, whether in academics or content marketing.


In this article, I discuss why relying on traditional proofreading software makes more sense. For a discussion on why AI proofreading software cannot replace professional human editors, check out this article: AI Proofreading | Can It Replace Proofreading Professionals? 

Female robot typing on a computer while sitting on the floor.

What is generative AI, and how does it work?

Generative artificial intelligence (AI) refers to large language models (or LLMs) that have been "trained to follow an instruction in a prompt and provide a detailed response." What is meant by training? Training AI refers to a process in which a program is fed vast amounts of information and data so that it can find meaningful patterns that humans can relate to or make meaning out of.


ChatGPT and other LLM tools are trained primarily on data available on Google and other parts of the World Wide Web. This makes them something of a hivemind. Just think of the millions of answers generated by the millions of questions asked in a Google search. ChatGPT has been trained to access, assess, and synthesize all that information to provide you with the "most correct answer."


The user interface of an LLM is that of a chatbot. You type in your question or request, otherwise known as a prompt, and press Enter. In the case of editing or proofreading, you would have to copy and paste your essay or content into the LLM. Some LLMs also allow you to upload your document as a Word or PDF file.


This makes it fundamentally different from traditional proofreading software. With traditional proofreading software such as Grammarly and PerfectIt, your document is examined for errors. The errors are flagged, and you are given the choice of accepting or rejecting each suggested correction. This is similar to Track Changes in Microsoft Word, which marks where corrections are made and gives you the option of reversing them.
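To make the contrast concrete, here is a toy sketch of that flag-and-review workflow in Python. This is not how Grammarly, PerfectIt, or Track Changes are actually implemented; it simply illustrates the principle that corrections are flagged individually and nothing changes until the author accepts it.

```python
import difflib

def flag_corrections(original, corrected):
    """Compare the author's draft with a corrected version and return
    a list of flagged changes for the author to accept or reject."""
    words_a, words_b = original.split(), corrected.split()
    matcher = difflib.SequenceMatcher(a=words_a, b=words_b)
    flags = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            flags.append({
                "original": " ".join(words_a[i1:i2]),
                "suggested": " ".join(words_b[j1:j2]),
            })
    return flags

def apply_review(original, corrected, decisions):
    """Rebuild the text, applying each flagged suggestion only if the
    author accepted it (True), like accepting a tracked change."""
    words_a, words_b = original.split(), corrected.split()
    matcher = difflib.SequenceMatcher(a=words_a, b=words_b)
    out, flag_idx = [], 0
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            out.extend(words_a[i1:i2])
        else:
            out.extend(words_b[j1:j2] if decisions[flag_idx] else words_a[i1:i2])
            flag_idx += 1
    return " ".join(out)

draft = "The datas was collected by the team"
suggestion = "The data was collected by the team"
flags = flag_corrections(draft, suggestion)   # one flag: "datas" -> "data"
final = apply_review(draft, suggestion, [True])  # author accepts the fix
```

The key design point is that the author stays in the loop: every change is visible and reversible, whereas an LLM returns a rewritten text with no record of what it changed.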


Modern proofreading software also works with apps like Google Docs or any word processing software with a window that allows you to write. Proofreading software, no matter how modern or updated, is far from perfect. It is prone to making errors, which makes it necessary to have a human review all suggestions and corrections before the document is ready for submission or publication. So what makes LLMs so unreliable?


Why LLMs are unreliable

LLMs tend to hallucinate, and that is not simply a bug; it's a feature. Language models are not designed to be databases of factual information. Instead, they are meant to simulate the way in which humans use language. This means they are built to structure sentences, connect words, and follow the basic rules of grammar.


This ability comes from their training: the models have been exposed to vast amounts of text, and as a result, they have learned patterns and structures from that text. For factual accuracy, however, these models can only succeed when probability happens to match the truth. If there is a gap in the information available, they will fill it in with whatever is most likely, whether or not it is true.
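A toy example makes the point. The sketch below is not a real language model; it is a crude word-counting stand-in for next-token prediction, using an invented three-line "corpus." Because the false claim appears more often in the training text than the true one, the most probable continuation wins:

```python
from collections import Counter

# A tiny made-up "training corpus". The false claim appears more
# often than the true one, as misconceptions often do on the web.
corpus = [
    "the capital of australia is sydney",    # false, but common
    "the capital of australia is sydney",    # false, but common
    "the capital of australia is canberra",  # true, but less frequent here
]

# Count which word follows the prompt in the training text --
# a crude stand-in for learned next-token probabilities.
prompt = "the capital of australia is"
continuations = Counter(
    line[len(prompt):].strip() for line in corpus if line.startswith(prompt)
)

# The model outputs the most probable continuation, true or not.
prediction = continuations.most_common(1)[0][0]  # "sydney"
```

Probability wins; truth loses. Real LLMs are vastly more sophisticated, but the underlying principle of choosing the statistically likely continuation is the same.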


If you are a student, I would recommend making use of traditional proofreading software such as PerfectIt, QuillBot, and Grammarly. This software is traditional in the sense that it shows you your errors and gives you the chance to accept or reject its corrections. That is a function you don't get from LLMs, even though their tendency to hallucinate makes it all the more necessary. You can also learn more about the human-powered editing services we provide here.


In short, I recommend not using LLMs to proofread your work, whether for academic or content marketing purposes. In addition to being prone to errors and hallucinations, LLMs suffer from a faulty tone: they simply don't sound human, and that's obvious to everyone. An experienced professor reading your essay would easily recognize that fact and question the integrity of your essay. A potential customer reading the copy in your ad or blog will likewise recognize it and question the quality of whatever you're selling.

Cite this MA Editorial article

Antoine, M. (2025, December 26). Should LLMs like ChatGPT Be Used for Editing? MA Editorial. https://www.ma-editorial.com/post/using-llms-for-editing

