ChatGPT told 2M people to get their election news elsewhere — and rejected 250K deepfakes
Now that the election is over, the dissection can begin. As this is the first election in which AI chatbots played a significant part of voters’ information diets, even approximate numbers are interesting to think about. For instance, OpenAI has stated that it told around 2 million users of ChatGPT to go look somewhere else.
It didn’t just give them a cold shoulder, but recommended some trusted news sources like Reuters and the Associated Press. ChatGPT gave this type of “I’m just an AI, go read the actual news” response over 2 million times on Election Day and the day after, OpenAI explained in an update to a blog post on its elections approach.
In the month leading up to the election, ChatGPT sent around a million people to CanIVote.org when they asked questions specific to voting. Interestingly, it also rejected some 250,000 requests to generate images of the candidates over the same period.
For comparison, Perplexity, the AI search engine, made a major push to promote its own election information hub, resulting in some 4 million page views, the company claimed (per Bloomberg).
It’s difficult to say whether these numbers are low or high. Certainly they are nowhere near those of leaders in the news: CNN’s digital properties saw around 67 million unique visitors on Election Day and a similar number the day after.