spending millions of dollars a day training their own models, although it still seems unclear exactly what they're going to do. It kind of seems like their focus is to get the technology sufficiently advanced that they can actually run these things on device, so they don't have to deal with the cloud. And then there's a whole cadre of open-source approaches, too numerous to list, but there's Mistral, there's Stability, there's all the Hugging Face models. I mean, there's a huge array of things building in that space as well. So all of you can hear just how well versed Nathaniel is in this industry. He is the AI explainer. If you need a podcast to listen to where you can download stuff every five minutes, it's Nathaniel. So George asks: is this a symptom of problems with governance, both in the tech space and in the nonprofit space? I'm troubled because nonprofit does not mean you are, quote, doing good the way many folks think. It just means you essentially get to dodge