Fenestration Review

Editorial: Will AI replace human writing?

Not for a very long time to come. Here’s why.

April 23, 2024  By Patrick Flannery



When you read something written by a human there are layers of assumption involved that we aren’t even conscious of. The most fundamental of these is that the person who wrote the text understood what they were writing and intended the specific message conveyed in the text to be received by the reader. AI completely lacks both these attributes. It is simply putting words and letters together in response to a set of algorithms that will generate an intelligible response. All deeper context is absent.

A simpler way to put this is to say that a human writer has some basis to know whether what they have written is correct or not. Why? Because they are human and all their thought rests on a lifetime of experience. Anything they produce is consciously or subconsciously checked – even before it is written – against that experience. Absurdities jump out immediately and are probably not generated in the first place. AI is pretty good at not generating absurdities (though much worse at it than humans) simply because it is drawing exclusively from things humans have said and written…and humans don’t usually publish absurdities except for fun.

But go even one layer of analysis deeper into the truth or falsity of statements and AI completely loses its way. That’s because, to a surprising degree, what we acknowledge as true or false depends on something called “perspective.” Where does the statement fit into our larger picture of reality? And does it address the topic we are intending to address and deliver the message we are intending to deliver? Nothing an AI produces has any reference to perspective because the AI has none. It doesn’t know what the topic is; what the distinction is between reality and fantasy; why it is writing something in the first place; who might be reading it and why; what the message is; or what the intended effect of the message is on the reader.

But who cares as long as the information is not factually inaccurate and is intelligible, right? 


To illustrate why the lack of assumed perspective is a problem, let’s look at an example statement that an AI could produce.

“The shape of the Earth is a controversial topic.”

An AI can say this with a straight face because there are whole groups of kooks on the internet who like to argue with varying degrees of seriousness that the Earth is actually flat and all the science showing it is round is a hoax. Given those surface facts, nothing in the statement is wrong.

But a human knows what’s going on here. The world is full of weirdos who say silly stuff. The shape of the Earth is not controversial among anyone whose opinion matters. When you read something written by a human, you can assume they bring this perspective to what they’ve written. That affects how you read and receive it. Presenting AI-generated copy as if it were written by a person is therefore, in my opinion, fundamentally dishonest.

As AI continues to get better at producing original text, we have been having many conversations here at Annex Business Media about whether it’s a threat or a challenge, and whether and how to use it in our content. For the reasons above, I’ve settled on a policy that there won’t be AI-generated content in the Fenestration Review channel unless it’s identified as such. It’s important for you to be assured that everything you read here has human perspective behind it…no matter how weird.

