You are viewing a single comment's thread from:

RE: LeoThread 2025-05-01 19:47

in LeoFinance · 5 months ago

If human-level intelligence is more than 1 trillion parameters, is it so much of an imposition to keep around what will, at the limit, be much less than 7 billion parameters to have most known facts right in your model? (Another helpful data point here is that “Good and Featured” Wikitext is less than 5 MB.) I don’t see why all future models—except the esoteric ones, the digital equivalent of tardigrades—wouldn’t at least have Wikitext down.
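The sizes being compared can be made concrete with a back-of-envelope calculation. The byte count per parameter is an assumption (2 bytes at fp16); the 1 trillion, 7 billion, and 5 MB figures come from the text above:

```python
# Back-of-envelope sizes: a "fact budget" of parameters vs. the whole model,
# and vs. a compressed text corpus. Assumes fp16 weights (2 bytes/parameter).

BYTES_PER_PARAM = 2  # fp16 assumption

human_level_params = 1_000_000_000_000  # 1 trillion, hypothesized in the text
fact_budget_params = 7_000_000_000      # 7 billion
wikitext_mb = 5                          # "Good and Featured" Wikitext, per the text

fact_budget_fraction = fact_budget_params / human_level_params
fact_budget_gb = fact_budget_params * BYTES_PER_PARAM / 1e9

print(f"Fact budget as share of the model: {fact_budget_fraction:.1%}")  # 0.7%
print(f"Fact budget at fp16: {fact_budget_gb:.0f} GB")                   # 14 GB
```

Even the generous 7-billion-parameter budget is under 1% of the hypothesized model, and Wikitext itself is roughly three orders of magnitude smaller still.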


This evolvability is also the key difference between AI and human firms. As Gwern points out, human firms simply cannot replicate themselves effectively - they're made of people, not code that can be copied. They can't clone their culture, their institutional knowledge, or their operational excellence. AI firms can.

If you think human Elon is especially gifted at creating hardware companies, you simply can’t spin up 100 Elons, have them each take on a different vertical, and give them each $100 million in seed money. As much of a micromanager as Elon might be, he’s still limited by his single human form. But AI Elon can have copies of himself design the batteries, be the car mechanic at the dealership, and so on. And if Elon isn’t the best person for the job, the person who is can also be replicated, to create the template for a new descendant organization.

So then the question becomes: If you can create Mr. Meeseeks for any task you need, why would you ever pay some markup to another firm, when you can just replicate them internally instead? Why would there even be other firms? Won't the first firm that figures out how to automate everything just form a conglomerate that takes over the entire economy?

Ronald Coase’s theory of the firm tells us that companies exist to reduce transaction costs (so that you don’t have to go rehire all your employees and rent a new office every morning on the free market). His theory states that the lower the intra-firm transaction costs, the larger the firms will grow. Five hundred years ago, it was practically impossible to coordinate knowledge work across thousands of people and dozens of offices. So you didn’t get very big firms. Now you can spin up an arbitrarily large Slack channel or HR database, so firms can get much bigger.
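Coase's argument can be sketched as a toy model: a firm keeps internalizing transactions while organizing one more in-house is cheaper than buying it on the market. All cost functions and numbers below are hypothetical illustrations, not from the essay:

```python
# Toy Coase model: firm size is where marginal internal coordination cost
# crosses the flat per-transaction market cost. Numbers are illustrative.

MARKET_COST = 5.0  # assumed per-transaction cost of using the market


def marginal_internal_cost(n: int, coordination_factor: float) -> float:
    """Cost of organizing the n-th transaction in-house.

    Rises with firm size n: bigger firms pay more coordination
    overhead (management layers, meetings, information loss).
    """
    return 1.0 + coordination_factor * n


def equilibrium_firm_size(coordination_factor: float) -> int:
    """Grow the firm while internalizing stays no costlier than the market."""
    n = 0
    while marginal_internal_cost(n + 1, coordination_factor) <= MARKET_COST:
        n += 1
    return n


# Better coordination tech (Slack channels, HR databases, AI copies of
# yourself) lowers the coordination factor, so equilibrium size grows.
print(equilibrium_firm_size(0.10))  # 40  - higher coordination overhead
print(equilibrium_firm_size(0.01))  # 400 - lower overhead, 10x larger firm
```

The point of the sketch is only the direction of the effect: anything that flattens the internal-coordination curve pushes the crossover point out, which is the mechanism behind firms getting bigger as coordination costs fall.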

AI firms will have dramatically lower internal transaction costs than human firms. It’s hard to beat shooting lossless latent representations to an exact copy of yourself for communication efficiency! So firms will probably become much larger than they are now.

But it’s not inevitable that this ends with one gigafirm which consumes the entire economy. As Gwern explains in his essay, any internal planning system needs to be grounded in some kind of outer "loss function" - a ground truth measure of success. In a market economy, this comes from profits and losses.

Internal planning can be much more efficient than market competition in the short run, but it needs to be constrained by some slower but unbiased outer feedback loop. A company that grows too large risks having its internal optimization diverge from market realities.