Tau - The Human-Machine Symbiotic Super Computer

Posted in Tauchain, 8 months ago (edited)

Many will remember my posts from the early days of Tau. I have not posted about Tau, or on Hive, in at least two years. According to senior developer Andrei, we are now on the brink of something we have waited seven-plus years to see. Andrei has inspired me to take a closer look at Tau again, and the post below contains my thoughts so far.

  1. Tau is the embodiment of the work of individuals such as J. C. R. Licklider, David Chalmers, Andy Clark, John von Neumann, and many others. The first three names on this list are, in my opinion, the philosophically significant ones, so later on I will dive deeper into what is meant by the "extended mind" in relation to Tau.
  2. Tau allows Alice to speak to Bob in a new way. In fact, if Tau works as intended, Alice and Bob can speak code into existence through it. In my opinion this alone will unleash a new era of computing, but that isn't all.
  3. If Tau can scale discussion, it can scale computation. If we think of each human brain as a human computer, in the spirit of John von Neumann's last book, then what happens if we connect these brains using our machines in a way that lets them combine knowledge and compute in parallel? With the help of machines we now have extended minds operating in parallel, which is even better than connecting brains directly.

Extended Mind Thesis

The extended mind thesis is simple to explain. Think of the great pyramids of Egypt. Think of stone tablets. Think of the US Constitution. Our ability to write gives us an external memory outside of the brain. The brain is no longer the limiting factor, because we have both language and an increasingly better medium through which to express ourselves.

The problem prior to Tau is that we can only express ourselves using natural language. Natural language does not really allow for scaling or for decidability. For this reason we cannot arrive at a true-or-false conclusion at the end of a discussion. We cannot reach the kind of agreement that machines can use. Tau allows us to reach agreements in a manner machines can use with ease (as Andrei puts it).
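To make the decidability point concrete: statements in a formal logic can be settled mechanically in a way natural-language claims cannot. Here is a minimal, illustrative sketch (not Tau's actual logic, which uses a far richer language) that decides whether a propositional statement is true under every possible assignment, by brute-force truth-table search:

```python
from itertools import product

def is_tautology(formula, variables):
    """Decide, by exhaustive truth-table search, whether a
    propositional formula is true under every assignment.
    Decidable: the loop always terminates with True or False."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# "(a and b) implies a" -- true under every assignment.
print(is_tautology(lambda v: (not (v["a"] and v["b"])) or v["a"], ["a", "b"]))  # True

# "a and not a" -- a contradiction, never true.
print(is_tautology(lambda v: v["a"] and not v["a"], ["a"]))  # False
```

The point is that the machine always reaches a definite True or False verdict; there is no analogous terminating procedure for an argument stated in ordinary English.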

In theory, each agent (user in the Tau network) would have an extended mind. This would be their full computational capability, not just the limit of their brain: however many terabytes or petabytes of memory they have, however many extended CPUs they have. And Agoras is how each agent in the Tau network would upgrade its extended mind.

The Man-Machine Symbiotic Super Computer

J. C. R. Licklider wrote about symbiosis in his paper "Man-Computer Symbiosis". In my humble opinion, figuring out symbiosis is extremely important for the alignment problem. Tau will attempt to achieve symbiosis through the logical, or "good old fashioned AI", approach rather than the large language model (GPT) style approach. One reason is that statistical pattern matching, or auto-complete, methods of doing AI work in such a way that human beings don't know exactly what the AI is doing or how it achieves what it achieves. It is important for AI safety to have as much transparency as possible into the, for lack of a better phrase, "thought processes" of the AI.

Tau scales discussion and potentially computation as well. The scaling of discussion is well understood at this point: you can have a million agents in a network express opinions and, over time, converge toward shared agreements, which solidify into what computer scientists would think of as a sort of formal specification. In general, if you can scale discussion into specification, then you can also scale the knowledge which goes into that discussion.

For example, if you have a million agents in a network using Tau, then you have the expertise of millions going into these shared agreements. In other words, laws, rules, and code will not only be provably correct; their quality can scale over time too. The limiting factor is how much total computation and knowledge the Tau network has at any given time.
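The idea of many agents converging on shared agreements can be modeled in a deliberately naive way: treat each agent's opinions as a set of formal statements and take the agreement to be whatever every agent accepts. This is only a toy sketch with hypothetical statement names; Tau's actual agreement process works over a logical language, not string labels:

```python
def shared_agreement(opinions):
    """A naive model of 'converging toward shared agreements':
    the agreement is the set of statements every agent accepts."""
    sets = [set(o) for o in opinions]
    return set.intersection(*sets) if sets else set()

# Hypothetical statements for illustration only.
agents = [
    {"open_membership", "fees_burned", "block_size_limited"},
    {"block_size_limited", "open_membership"},
    {"open_membership", "block_size_limited", "pow_mining"},
]
print(sorted(shared_agreement(agents)))
# ['block_size_limited', 'open_membership']
```

Even this toy version shows the appeal: the output is itself a machine-readable object, so an agreement among a million agents could feed directly into a specification rather than sitting in meeting notes.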

The Agoras token tracks all computational resources, or as some call it, "computronium". In general, the knowledge in your brain is currently intangible, but if you can communicate it in the right way, it becomes tangible once a machine can precisely capture it. Knowledge representation languages allow machines to do exactly that. Code or software can then leverage the knowledge of 1,000 lawyers, for example, or of the top physicists, or of the greatest business minds, to produce an app in such a way that every mind involved is rewarded for its contributions.

So can we think of Tau as a super computer? Is there a way to measure a network such as this? We can measure the machines quite easily: machines have CPUs whose performance we know how to benchmark. The human beings in this network, who compute in different ways, are not so easily measurable. But we can expect some humans to be better "human computers" than others, and my opinion is that the humans who best figure out human-machine symbiosis will become the bigger nodes in the Tau network and will receive the most Agoras tokens as a result.

Will Tau achieve its aims?

We simply cannot know yet whether Tau will achieve its aims. We can know that, mathematically, something should work in theory, and we can now say with a strong degree of certainty that Tau works in theory. What we don't know yet, and can't know until testnet, is whether Tau will be implemented in a manner that makes it useful in a practical way.

ChatGPT went viral because they figured out a way to implement a necessary technology such that it dramatically raised the productivity of all who use it. If Tau similarly delivers a dramatic rise in productivity, and if the decentralized network part of it works well, then in my opinion it will surely go viral for the same reasons ChatGPT did.

I still have some reservations. Is Tau going to be fast enough to scale? Proof of Execution seems dramatically more efficient than Proof of Work or Proof of Stake proper, but this is something we need to see in operation before we can know. This is where the implementation matters more than the theory. If they get Proof of Execution right, and it is faster than, for example, Ethereum or Bitcoin, that bodes well in my opinion.

The success of Agoras is also an open question. For Tau to scale in a meaningful way, the Agoras token has to find a price, it has to be tradable somehow, and it has to be accessible on other blockchains, so that people on, for example, Ethereum can trade the Agoras token from their chain and connect to Tau in some way.

In theory, Agoras represents knowledge and computation in a real sense, and would therefore have more utility than Ether, more than Bitcoin, more than any token we have ever seen. Imagine if ChatGPT released a token or NFTs representing access levels to future GPTs like GPT-5, GPT-6, and so on. Those NFTs would sell out fast. With Agoras you have a token which could be thought of as a kind of access ticket in some cases, or as something else in others, but the utility of the token would depend entirely on the software built on Tau, and nobody can predict at this time what that will be.

Not much more to say for now. I look forward to testnet and mainnet.

References

  1. Extended mind thesis, Wikipedia: https://en.wikipedia.org/wiki/Extended_mind_thesis
  2. J. C. R. Licklider, "Man-Computer Symbiosis": https://groups.csail.mit.edu/medg/people/psz/Licklider.html
  3. The Computer and the Brain, Wikipedia: https://en.wikipedia.org/wiki/The_Computer_and_the_Brain

So nice to hear from you again! I read some of your old Tau posts recently and was wondering where you were these days. Anyway, welcome back and I'll be reading this post again tomorrow when I'm more awake. :)

Welcome back, Dana!
For years I have also followed Tau (though not as closely as you), and they always seem to be on the brink of something, moving from one theoretical breakthrough to the next. I am still sceptical. It also seems the core of the concept has shifted several times: where it was initially about decision making and finding a computer language for depicting opinions and arguments, it is now an "AI safe" software development tool. As if Andrei had to squeeze in the buzzword AI somewhere, since today there can't be any software without AI.
At least one can now easily trade the AGRS tokens (via Uniswap).