RE: LeoThread 2025-05-01 19:47

Think about how limited a CEO's knowledge is today. How much does Sundar Pichai really know about what's happening across Google's vast empire? He gets filtered reports and dashboards, attends key meetings, and reads strategic summaries. But he can't possibly absorb the full context of every product launch, every customer interaction, every technical decision made across hundreds of teams. His mental model of Google is necessarily incomplete.

Now imagine mega-Sundar – the central AI that will direct our future AI firm. Just as Tesla's Full Self-Driving model can learn from the driving records of millions of drivers, mega-Sundar might learn from everything seen by the distilled Sundars – every customer conversation, every engineering decision, every market response.
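
To make the "distilled" framing concrete: knowledge distillation is an existing technique in which a smaller student model is trained to match a larger teacher's softened output distribution. A minimal sketch of that standard objective follows; the temperature value and function name here are illustrative, not something specified in the post.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then push the student's
    # distribution toward the teacher's via KL divergence (Hinton-style).
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
```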

Unlike Tesla’s FSD, this doesn’t have to be a naive process of gradient updating and averaging. Mega-Sundar will absorb knowledge far more efficiently – through explicit summaries, shared latent representations, or even surgical modification of the weights to encode specific insights.
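
For contrast, here is roughly what the "naive" baseline looks like with today's techniques: element-wise averaging of specialized checkpoints (in the spirit of federated averaging or model soups), plus a slightly more targeted variant that folds a scaled task-specific delta back into a base model (task-arithmetic style). The "surgical modification of the weights" imagined above has no established recipe; these sketches only illustrate what it would improve on, and the function names are made up for illustration.

```python
import torch

def average_checkpoints(state_dicts):
    # Naive merge: element-wise mean of parameters across specialized copies.
    # Assumes every checkpoint shares the same architecture and keys.
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

def add_task_delta(base, finetuned, alpha=0.3):
    # Slightly more targeted: fold a scaled task-specific delta
    # (fine-tuned weights minus base weights) back into the base model.
    return {key: base[key] + alpha * (finetuned[key] - base[key]) for key in base}
```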

The boundary between different AI instances starts to blur. Mega-Sundar will constantly be spawning specialized distilled copies and reabsorbing what they’ve learned on their own. Models will communicate directly through latent representations, similar to how the hundreds of different layers in a neural network like GPT-4 already interact. So, approximately no miscommunication, ever again. The relationship between mega-Sundar and its specialized copies will mirror what we're already seeing with techniques like speculative decoding – where a smaller model makes initial predictions that a larger model verifies and refines.
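
Speculative decoding is a real, published technique, and the analogy holds: a cheap draft model proposes several tokens, and the large model checks the whole run in one forward pass. Below is a deliberately simplified greedy sketch, not the published algorithm (which uses rejection sampling to preserve the target model's output distribution exactly); it assumes HuggingFace-style causal LMs whose forward pass returns `.logits`, and a batch size of 1.

```python
import torch

@torch.no_grad()
def speculative_decode(draft_model, target_model, ids, n_draft=4, max_new=64):
    # ids: [1, seq_len] prompt token ids. Greedy toy variant, batch size 1.
    start_len = ids.shape[1]
    while ids.shape[1] - start_len < max_new:
        base_len = ids.shape[1]
        # 1. The small draft model proposes n_draft tokens autoregressively.
        draft_ids = ids
        for _ in range(n_draft):
            next_tok = draft_model(draft_ids).logits[:, -1, :].argmax(-1, keepdim=True)
            draft_ids = torch.cat([draft_ids, next_tok], dim=1)
        proposed = draft_ids[:, base_len:]                               # [1, n_draft]
        # 2. The large target model scores the whole drafted run in one pass.
        target_logits = target_model(draft_ids).logits
        target_preds = target_logits[:, base_len - 1:-1, :].argmax(-1)  # [1, n_draft]
        # 3. Keep the longest prefix where the two models agree, then take
        #    one corrected token from the target model at the first mismatch.
        agree = (proposed == target_preds).long().cumprod(dim=1)
        n_accept = int(agree.sum())
        ids = torch.cat(
            [ids, proposed[:, :n_accept], target_preds[:, n_accept:n_accept + 1]],
            dim=1,
        )
    return ids
```

The speed-up comes from the target model verifying several tokens per forward pass instead of generating them one at a time; in the full algorithm, acceptance is probabilistic so the output distribution matches the target model's exactly.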

Merging will be a step change in how organizations can accumulate and apply knowledge. Humanity's great advantage has been social learning – our ability to pass knowledge across generations and build upon it. But human social learning has a terrible handicap: biological brains don't allow information to be copy-pasted. So you need to spend years (and in many cases decades) teaching people what they need to know in order to do their job. Look at how top achievers in field after field are getting older and older, maybe because it takes longer to reach the frontier of accumulated knowledge. Or consider how clustering talent in cities and top firms produces such outsized benefits, simply because it enables slightly better knowledge flow between smart people.

Future AI firms will accelerate this cultural evolution through two key advantages: massive population size and perfect knowledge transfer. With millions of AGIs, automated firms get so many more opportunities to produce innovations and improvements, whether from lucky mistakes, deliberate experiments, de-novo inventions, or some combination.

As Joseph Henrich explains in The WEIRDest People in the World,

cumulative cultural evolution—including innovation—is fundamentally a social and cultural process that turns societies into collective brains. Human societies vary in their innovativeness due in large part to the differences in the fluidity with which information diffuses through a population of engaged minds and across generations.