- It might or might not create a filter bubble, depending on the criteria the participants use. In my opinion, people should not be forced to invest resources against their values; they have the right to invest their resources in alignment with them. Agoras is a resource exchange network in which knowledge is one of the resources, but computational resources are traded as well.
- I don't think it would minimize flows of information. It would certainly filter information if the participants want that. In other words, noise is information a participant doesn't want because it isn't useful with regard to that participant's values. For example, if you value animal rights, Tau could use deduction to conclude that you'd only want to see vegan-friendly products. Why would you want noise about the flavor of steak when you're trying to promote animal welfare?
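The animal-rights example above can be sketched as a tiny rule-based filter. This is an illustrative assumption only: the value names, product attributes, and rule table here are invented for the sketch and do not reflect Tau's actual knowledge representation or deduction engine.

```python
# Minimal sketch of value-based filtering by deduction.
# RULES, the value names, and the product attributes are
# hypothetical; Tau's real reasoning works over a logical
# knowledge base, not a Python dict.

# Each rule maps a held value to a requirement a product must satisfy.
RULES = {
    "animal_rights": lambda product: product.get("vegan", False),
}

def relevant(product, values):
    """A product is relevant (i.e. not noise) iff it satisfies the
    rule of every value the participant holds that we know about."""
    return all(RULES[v](product) for v in values if v in RULES)

catalog = [
    {"name": "tofu burger", "vegan": True},
    {"name": "ribeye steak", "vegan": False},
]

# A participant who values animal rights sees only vegan-friendly items.
feed = [p["name"] for p in catalog if relevant(p, {"animal_rights"})]
print(feed)  # → ['tofu burger']
```

The point of the sketch is that "noise" is defined relative to the participant's values: the steak isn't removed because it's false, but because it's irrelevant to this participant.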
- Values-matched participants engaging in "group thinking" would only be a problem if thinking were done exclusively within the values-matched network. For example, if a person who doesn't share our values contributes something scientifically correct to the shared knowledge base, what difference does it make whether they share your values? The resource (knowledge) is a valuable asset to your values network regardless of where it originated.
- Trust is not something an algorithm on the Internet can entirely solve, and it's not really about creating trust. I see humans as inherently unreliable, and I see trustworthiness as "reliability". So you cannot "create" trust without finding a way to make the humans you interact with more reliable than human nature allows (if that makes any sense). Even legal consequences for Alice and Bob violating each other's trust do not prevent it from happening. Alice and Bob are human, and humans are irrational and unreliable; but the good news is that economic actors don't have to be entirely "human". A business is not a human, yet it is an economic actor. A cyborg could be an augmented human with enough machine-intelligence assistance to become more reliable than they ordinarily could be.
So on trust I need more context, but I don't see it being solved by any technology we have until the day mind reading is portable and contracts can be verified on the blockchain as "deception free".