RE: LeoThread 2025-08-17 05:22

in LeoFinance · 2 months ago

on the timing. So what we would summarize is this: if you take Dan's definition of AI, which I'll summarize as computers that do things that are human-like, and you extrapolate from the modest things they can do today, like managing suggestions, helping with your research, translating things and so forth, the collective industry believes that we will end up with human-like intelligence in computers, which is not the same thing as human intelligence, in a reasonably short period of time. That has enormous implications if it's achieved. This idea was popularized maybe ten years ago under the term "the singularity," the idea being that at some point in our future, hopefully when we're all still here, AI will be able to accelerate itself. And so we speculate about what happens when you have AGI. I'll give you the punchline: when you think about systems that are AGI-powerful, remember that they think deeply and they see everything, more than a human brain can see. You can (8/45)