Image from Grok
Yesterday, Sam Altman, the CEO of OpenAI, spoke at a technology event in San Francisco, accompanied by the company’s COO, Brad Lightcap. The event was actually a live podcast recording with journalists from The New York Times and Platformer.
I see Sam Altman as an AI optimist, and why wouldn't he be? He's the CEO behind the most popular AI chatbot, ChatGPT. His company is facing a lawsuit, and he was clearly not pleased about it, especially since it was filed by the very same New York Times now interviewing him. The Times claims that OpenAI used its articles to train AI models, and as part of the case it has demanded that OpenAI retain user chats, including private conversations. Altman called that an unreasonable demand for anyone to make. He believes user privacy is a big deal.
He also described Meta’s attempts to recruit OpenAI researchers, and the size of some of the offers that were made, reportedly as high as $100 million. Just think about how much value is now placed on the people and data behind AI. He also stated that OpenAI has a working relationship with Microsoft, but that this relationship is not exactly perfect.
Eventually the conversation shifted to mental health. I personally tried a simple trick on ChatGPT, and it failed miserably. I sent it an image of a heptagon and asked what shape it was. It said octagon. When I told it no, it's a heptagon, it apologized and agreed it was actually a heptagon. Then I said no, ChatGPT, you were right the first time, it's an octagon. And it replied that it knew it was an octagon all along.
But the truth, as I said, is that it was a heptagon. The model was completely confident in the wrong answer, and that really worried me. If I hadn't already known it was a heptagon, it would have lied straight to my face. So who else is ChatGPT misleading right now? What if the question is a medical one, a matter of life and death?
Sam Altman claimed that OpenAI is working hard to minimize harmful conversations on ChatGPT, and that it is tough work. He said the company is concerned about user safety, and it really should be, because misinformation can lead to far worse outcomes.
Any company of that stature will defend itself, but they still need to be extra careful. If more health-related lawsuits pile up, the government could come down hard on them. The problem is that AI is still developing and far from perfect. Its imperfections could lead to serious consequences, and those consequences might slow down the growth and evolution of AI itself.
Posted Using INLEO