RE: LeoThread 2025-05-01 19:47

I suspect you’ll see a lot of specialization in function, tacit knowledge, and complex skills, because they seem expensive to sustain in terms of parameter count. But I think the different models might share a lot more factual knowledge than you might expect. It’s true that plumber-GPT doesn’t need to know much about the standard model in physics, nor does physicist-GPT need to know why the drain is leaking. But the cost of storing raw information is so unbelievably cheap (and it’s only decreasing) that Llama-7B already knows more about the standard model and leaky drains than any non-expert.


Specialization and Shared Knowledge

While AI models may specialize in specific functions, tacit knowledge, and complex skills, they will likely still share a vast amount of factual knowledge. The cost of storing raw information is extremely low and still falling, which is why even a model like Llama-7B already holds a broad range of knowledge, from the standard model in physics to practical information about leaky drains.
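As a rough back-of-envelope sketch of how cheap that storage really is: the storage price and bytes-per-parameter figures below are illustrative assumptions, not measurements, but even generous error bars leave the cost negligible.

```python
# Back-of-envelope sketch (illustrative figures, not measurements): how cheap
# is it to keep a 7B-parameter model's worth of "raw knowledge" around?

PARAMS = 7e9                  # Llama-7B parameter count
BYTES_PER_PARAM = 2           # fp16/bf16 weights
PRICE_PER_GB_MONTH = 0.02     # assumed commodity object-storage price, USD

size_gb = PARAMS * BYTES_PER_PARAM / 1e9
monthly_cost = size_gb * PRICE_PER_GB_MONTH

print(f"Weights: ~{size_gb:.0f} GB")                                 # ~14 GB
print(f"Storage: ~${monthly_cost:.2f}/month at the assumed price")   # ~$0.28
```

Even if the assumed price is off by an order of magnitude, holding a full model's worth of raw information costs pennies per month.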

Economies of Scale in Knowledge Storage

The cost of storing information is so low that it is economical for AI models to retain a wide range of knowledge even when it is not directly relevant to their primary function. That breadth lets them give more accurate and informative responses and makes them more versatile across applications.

Implications for AI Development

This trend suggests that future AI models will be designed to balance specialization with broad-based knowledge: they can excel in specific domains while still holding a general understanding of the world, which helps them adapt to new situations and support humans more effectively.

Redefining the Boundaries of Expertise

As AI models store and share vast amounts of knowledge, the boundaries between different areas of expertise will continue to blur. Models will be able to offer insights and solutions that would be difficult or impossible for any single human expert, reshaping how we approach knowledge, expertise, and innovation.

If human-level intelligence is more than 1 trillion parameters, is it so much of an imposition to keep around what will, at the limit, be much less than 7 billion parameters to have most known facts right in your model? (Another helpful data point here is that “Good and Featured” Wikitext is less than 5 MB.) I don’t see why all future models (except the esoteric ones, the digital equivalent of tardigrades) wouldn’t at least have Wikitext down.

The Cost of Storing Knowledge

Given the trillions of parameters that human-level intelligence may require, storing a relatively small body of knowledge, such as the entirety of Wikitext at under 5 MB, is a negligible cost. That makes it feasible to bake a broad range of factual knowledge into future AI models.
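Taking the quoted figures at face value, a quick ratio makes the point concrete. (This treats text bytes and parameter bytes interchangeably, which is the same loose comparison the quote itself makes.)

```python
# Quick ratio using the figures quoted above: how much of a trillion-parameter
# model would a ~5 MB Wikitext corpus occupy?

MODEL_PARAMS = 1e12          # "more than 1 trillion parameters"
BYTES_PER_PARAM = 2          # fp16/bf16 weights
WIKITEXT_BYTES = 5e6         # "Good and Featured" Wikitext, < 5 MB

model_bytes = MODEL_PARAMS * BYTES_PER_PARAM
fraction = WIKITEXT_BYTES / model_bytes

print(f"Model weights: ~{model_bytes / 1e12:.0f} TB")           # ~2 TB
print(f"Wikitext share: {fraction * 100:.6f}% of the weights")  # ~0.000250%
```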

Inclusion of General Knowledge

Most future AI models will likely include a foundation of general knowledge, such as Wikitext, as a basis for understanding and generating text. Even models with a specialized primary function benefit from that baseline when producing accurate and informative responses.

Minimal Overhead for Maximum Benefit

The overhead of storing such knowledge is minimal compared to the potential benefits: improved performance, greater versatility, and a stronger ability to understand and generate human-like text. That makes it a worthwhile investment for most AI models, except perhaps for highly specialized or esoteric ones.

New Standard for AI Models

Including a core of general knowledge such as Wikitext may become a baseline standard for AI models, since it provides a foundation for understanding and generating text and supports more accurate, informative responses. That expectation will likely shape future model development, pushing models to be more comprehensive and knowledgeable.

This evolvability is also the key difference between AI and human firms. As Gwern points out, human firms simply cannot replicate themselves effectively: they're made of people, not code that can be copied. They can't clone their culture, their institutional knowledge, or their operational excellence. AI firms can.

If you think human Elon is especially gifted at creating hardware companies, you simply can’t spin up 100 Elons, have them each take on a different vertical, and give them each $100 million in seed money. As much of a micromanager as Elon might be, he’s still limited by his single human form. But AI Elon can have copies of himself design the batteries, be the car mechanic at the dealership, and so on. And if Elon isn’t the best person for the job, the person who is can also be replicated, to create the template for a new descendant organization.

Evolvability of AI Firms

The ability of AI firms to replicate themselves, including their culture, institutional knowledge, and operational excellence, sets them apart from human firms, which are bound by the limits of human replication. It allows AI firms to scale and adapt at an unprecedented pace.

Limitations of Human Replication

Human firms, even those led by someone like Elon Musk, cannot simply clone their leaders, culture, or expertise. They are constrained by the physical and cognitive limits of human beings, so a single individual like Elon cannot be replicated to tackle multiple tasks or industries simultaneously.

AI-Driven Replication and Scaling

In contrast, AI firms can create multiple copies of their AI leaders, like AI Elon, and deploy them across tasks, industries, or verticals. That lets them scale and adapt with unprecedented speed and flexibility, and spin up new descendant organizations from templates optimized for success.
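A minimal sketch of the "copyable firm" idea: the class, agent names, and weights URI below are hypothetical, not a real system, but they illustrate why replication is trivial once an organization's expertise lives in data rather than in people.

```python
# Minimal sketch of the "copyable firm" point; the class, names, and weights
# URI are hypothetical. An AI agent is just data, so one specialist per
# vertical is a cheap copy rather than a new hire.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AgentTemplate:
    name: str
    weights_uri: str   # hypothetical pointer to shared model weights
    vertical: str

founder = AgentTemplate(name="founder-agent",
                        weights_uri="s3://example-weights/v1",
                        vertical="general")

# One copy per vertical; every clone shares the same underlying "knowledge".
verticals = ["batteries", "vehicle-service", "dealership-repair", "logistics"]
org_chart = [replace(founder, name=f"{v}-agent", vertical=v) for v in verticals]

for agent in org_chart:
    print(agent)
```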

Revolutionary Implications

The evolvability of AI firms has revolutionary implications for how businesses are structured, managed, and scaled. Because AI firms can replicate and adapt at an unprecedented pace, they are likely to usher in a new era of innovation, entrepreneurship, and growth, and to open new opportunities for value creation and capture.

So then the question becomes: If you can create Mr. Meeseeks for any task you need, why would you ever pay some markup for another firm, when you can just replicate them internally instead? Why would there even be other firms? Would the first firm that figures out how to automate everything just form a conglomerate that takes over the entire economy?

Ronald Coase’s theory of the firm tells us that companies exist to reduce transaction costs (so that you don’t have to go rehire all your employees and rent a new office every morning on the free market). His theory states that the lower the intra-firm transaction costs, the larger the firms will grow. Five hundred years ago, it was practically impossible to coordinate knowledge work across thousands of people and dozens of offices. So you didn’t get very big firms. Now you can spin up an arbitrarily large Slack channel or HR database, so firms can get much bigger.

The Future of Firms and Automation

The ability to create autonomous agents like Mr. Meeseeks for any task raises questions about the future of firms and their role in the economy. Companies may no longer need to outsource tasks or partner with other firms if they can replicate the necessary expertise and capabilities internally.

Conglomerates and Economic Dominance

The first firm to achieve complete automation could form a conglomerate that dominates the entire economy: it could replicate any task or service without relying on external partners or suppliers and scale its operations with unprecedented speed and efficiency.

Ronald Coase's Theory of the Firm

According to Ronald Coase's theory, firms exist to reduce transaction costs, and the lower the intra-firm transaction costs, the larger firms will grow. Advances in technology and automation could therefore lead to massive conglomerates that internalize most of their transactions and operations.
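A toy illustration of that Coasean boundary, using made-up cost curves rather than anything empirical: a firm keeps internalizing tasks as long as the next in-house task is cheaper than buying it on the market, so lower intra-firm coordination costs yield larger firms.

```python
# Toy Coasean boundary (made-up cost curves, not an empirical model): a firm
# internalizes the next task as long as doing it in-house beats the market.

MARKET_TRANSACTION_COST = 10.0   # assumed flat per-task cost of using the market

def internal_cost(n: int, coordination_factor: float) -> float:
    """Marginal cost of the n-th in-house task; grows as the firm grows."""
    return 1.0 + coordination_factor * n

def firm_size(coordination_factor: float) -> int:
    """Tasks internalized before the market becomes the cheaper option."""
    n = 0
    while internal_cost(n + 1, coordination_factor) < MARKET_TRANSACTION_COST:
        n += 1
    return n

# Lower intra-firm coordination costs -> larger firms, as Coase's theory predicts.
for factor in (1.0, 0.1, 0.01):
    print(f"coordination factor {factor}: firm internalizes {firm_size(factor)} tasks")
```

In this sketch, cutting the coordination factor from 1.0 to 0.01 grows the firm from a handful of tasks to nearly a thousand, which is the intuition behind "cheaper internal coordination means bigger firms."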

Implications for the Economy and Society

The emergence of such conglomerates would have significant implications for the economy and society. They could disrupt traditional industries, create new opportunities for growth and innovation, and raise questions about the role of government and regulation in a highly automated economy, likely forcing a reevaluation of the social and economic structures that underpin our society.