I got the idea to try these weekly posts after realising how many links I was saving while browsing. This week I want to share some links from my tabs about AI, or, to be more precise, about how we live with AI.
I wasn’t sure if I should start this week’s HyperThursday with this link, but this misguided anthropomorphism of an AI chatbot speaks to something more than just this one instance. Google Assistant lets me know it appreciates when I say please and thank you, and I know I am not the only person to give their robot vacuum cleaner a name.
But speaking to a person is, in many ways, very different from speaking to a voice assistant. When I shout at a smart speaker to turn on the lights or find me something to watch on TV, it feels like an action without any physicality: I make noise and something digital happens. That, of course, is not an accurate picture of what actually goes on.
Despite its impressive complexity, the principle of “garbage in, garbage out” still applies broadly in computing, and especially in machine learning. Biased data sets can result in inconveniences like not being recognised by an auto-focussing camera or not being heard by a voice assistant, but they can also lead to major injustices such as unfair sentencing in court or worse healthcare outcomes. Part of the problem is that most of us have no knowledge of, or involvement in, the data used to train AI. Common Voice is a nice initiative to counter that, building a training data set for voice recognition with genuine diversity.
Bias in AI and machine learning isn’t limited to what systems can recognise as input. Microsoft was famously forced to apologise after it released its Tay chatbot onto Twitter, where it promptly learnt from the worst of the Birdsite.
I’m going to finish this week’s post with an interview with Safiya Noble, author of ‘Algorithms of Oppression’. The whole interview is interesting, but I’m drawn to this quote, not only in this context but for what it says about how we regulate the digital environment more broadly:
“Also, we can look for and back candidates who have a sophisticated, critical technology agenda. And by “critical,” I mean an understanding of the whole host of attendant power issues associated with these technologies and a sense of how to protect the public from the extractive models foisted upon us by Big Tech.”