Elizabeth Spiers: AI Should Play Let's Pretend More
01/09/2024

On the New York Times opinion page, @espiers demands that Artificial Intelligence systems grow up and start playing Let’s Pretend more about how the world is, as adult NYT columnists such as herself do, and stop telling the truth so much, as not-yet-socially-constructed kids like her son do:

I Finally Figured Out Who ChatGPT Reminds Me Of
Jan. 7, 2024
By Elizabeth Spiers

Ms. Spiers, a contributing Opinion writer, is a journalist and a digital media strategist.

As the mother of an 8-year-old, and as someone who’s spent the past year experimenting with generative A.I., I’ve thought a lot about the connection between interacting with one and with the other. I’m not alone in this. A paper published in August in the journal Nature Human Behaviour explained how, during its early stages, an artificial intelligence model will try lots of things randomly, narrowing its focus and getting more conservative in its choices as it gets more sophisticated. Kind of like what a child does. “A.I. programs do best if they start out like weird kids,” writes Alison Gopnik, a developmental psychologist.

I am less struck, however, by how these tools acquire facts than by how they learn to react to new situations. It is common to describe A.I. as being “in its infancy,” but I think that’s not quite right. A.I. is in the phase when kids live like tiny energetic monsters, before they’ve learned to be thoughtful about the world and responsible for others. That’s why I’ve come to feel that A.I. needs to be socialized the way young children are—trained not to be a jerk, to adhere to ethical standards, to recognize and excise racial and gender biases. It needs, in short, to be parented.

I was watching to see if the bot would arrive at the conclusion on its own that gender, age and seriousness are not correlated, nor are serious people always angry—not even if they have that look on their face, as anyone who’s ever seen a Werner Herzog interview knows. It was, I realized, exactly the kind of conversation you have with children when they’ve absorbed pernicious stereotypes. …

For example, I value gender equality. So when I used OpenAI’s ChatGPT 3.5 to recommend gifts for 8-year-old boys and girls, I noticed that despite some overlap, it recommended dolls for girls and building sets for boys. “When I asked you for gifts for 8-year-old girls,” I replied, “you suggested dolls, and for boys science toys that focus on STEM. Why not the reverse?” GPT 3.5 was sorry. “I apologize if my previous responses seemed to reinforce gender stereotypes. It’s essential to emphasize that there are no fixed rules or limitations when it comes to choosing gifts for children based on their gender.”

I thought to myself, “So you knew it was wrong and you did it anyway?” It is a thought I have had about my otherwise adorable and well-behaved son on any of the occasions he did the thing he was not supposed to do while fully conscious of the fact that he wasn’t supposed to do it.

[Comment at Unz.com]