This article gets at something that’s been bugging me recently. After all the images coming out of DALL-E Mini, I saw a headline on Google News along the lines of, “Will DALL-E put artists out of a job?”
But even if the output of an AI/machine-learning tool, whether for images, text, code, or anything else, is authentically human-like, that’s all it is: human-like, not human.
From the article: “Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has gotten so good.
Fixating on the idea that AI is already sentient, or very close to it, distracts from the very real and very present opportunities that AI technologies offer, as well as from the biases and risks involved in their blind use.
Humans can see faces in anything, so it’s not much of a surprise that we can be fooled into seeing sentience in a chatbot. I don’t know whether I should be surprised that someone who believes this works at Google, but it’s disheartening to say the least.