Alan Jacobs


human voices


My friend Rick Gibson makes a fascinating argument here. You need to read the whole thing, but a brief summary would go like this: No matter how vast the corpus of text on which chatbots currently draw, in order to be successful in the future they will need to have an ever-expanding and ever-developing corpus. They won’t be useful to us unless human beings keep adding high-quality information, high-quality ideas, high-quality text that they can draw upon.

I’m trying to decide whether this is right — or rather, I know that it’s at least partially right but I’m trying to decide how right it is. I can conceive of certain circumstances in which it would not be true. For instance, programmers often use chatbots to write code for them, and one of the things that they often say is that the code written by bots is verbose and ugly. It gets the job done, sort of, but in a bloated and inelegant way. But it’s easy to imagine that the bloated and inelegant code written by bots will eventually become the norm. Programmers who become habituated to getting their code written or at least drafted by chatbots will never develop a sense of what concise and elegant code is; and when they don’t have that sense they won’t value it and therefore won’t miss it when it’s absent. Elegance in code could just cease to be a thing. 

I guess what I’m asking is whether in programming — and in certain other areas, for instance business correspondence — what the bots provide could reshape our sense of what counts as good enough.

I’m thinking here about something I wrote about a few years ago, also at the Hog Blog:  

Why can computers sometimes pass a Turing Test? Erik Larson, in his book The Myth of Artificial Intelligence, points out that in one test a few years ago people were told that the computer was human but not a native English speaker — which didn’t fool everyone who interacted with it but fooled enough people to make some of us worried. Why were the deceived deceived? I suggest that there are two likely answers, neither of which excludes the other.

The first was offered some years ago by Big Tech critic Jaron Lanier in his book You Are Not a Gadget. Lanier writes that the Turing Test doesn’t just test machines — it also tests us. It “cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?” That is, many of us have interacted with apparently thoughtful machines often enough — for instance, when on the telephone and trying, often fruitlessly, to get to a customer service representative — that we have gradually lowered our standards for intelligence. And surely this erosion of standards is furthered by situations in which, even when by some miracle we do get to speak to another human being, we find that they merely read from a script in a way not demonstrably different from the behavior of a bot. Lanier says flatly that “the exercise of treating machine intelligence as real requires people to reduce their mooring to reality.”

What if that’s what the pervasive presence of AI does — reduce our mooring to reality sufficiently that we cease to notice the difference between text written by chatbots (however out of date the corpus on which they draw) and actual human language? I think about the phenomenon of young singers singing like Autotune because that’s what they think singing sounds like. Standards of quality, like standards of beauty, are mutable.