You are viewing a single comment's thread from:

RE: Ten Amusing Questions Creationists Mistakenly Believe Science Has No Answers For

in #christianity • 8 years ago (edited)

What's wrong with UBI, in a future where most or all labor is performed by robots? UBI would then just be a token/rationing system entitling people to have the robots make them a certain amount of stuff they want per month.


The computation for UBI is based on an index of automation. It's similar to what happens with oil dividends in oil-rich countries, and it shares similar problems with those countries' economies. In the beginning, you would probably see a lot of development in the population. I think the Hawaii experiment is going to be successful in the medium term, and many other places will implement it.

To make a long story short: as technology and rationality increase, the incentives for irrational behavior also increase geometrically, expanding the prisoner's dilemma into a Keynesian beauty contest.
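To make the beauty contest idea concrete, here is a minimal sketch of the classic p-beauty contest game (the standard formalization of Keynes's metaphor, not anything specified in this thread): players try to guess p times the average guess, so each added level of strategic reasoning multiplies the guess by p, driving it geometrically toward zero. The function name and parameters are illustrative assumptions.

```python
def beauty_contest_levels(p=2/3, start=50.0, depth=5):
    """Illustrative sketch: each level of reasoning guesses p times the
    previous level's guess, so guesses shrink geometrically toward 0."""
    guess = start
    levels = [guess]  # level-0: a naive guess at the midpoint
    for _ in range(depth):
        guess = p * guess  # best response to the previous reasoning level
        levels.append(guess)
    return levels

# Each deeper level of "thinking about what others think" lowers the guess.
print(beauty_contest_levels())
```

The geometric shrinkage here is the same shape as the "geometric" growth of perverse incentives claimed above: each additional layer of strategic anticipation compounds multiplicatively.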

This stems from denial or forgetfulness of three problems with technology, as seen from mathematical game theory:

  1. Technological bubbles (expectations surpass the real capabilities of actual technology, exhausting the market)
  2. Technology loss and crash (technology is resource-intensive and can be lost; many people don't realize this happens even today. Elon Musk has a talk about it)
  3. Centralization mobility (companies and governments can shut down access to technology if necessary, e.g. when natural disasters make serving an area too resource-intensive)

There are alternatives or complements to UBI, like self-sufficient automation or economic gamification. UBI on its own is extremely naive, mathematically speaking.

If by 'on its own' you mean in the absence of mature and widespread automation, I would agree.

No, even with fully widespread automation it's naive. If half the population lacks the cognitive skills to even interact with the technology, the same problem arises. The real problem is how you keep people useful in a world that no longer needs them. The answer, according to game theory, is technological handicaps — Anatole Rapoport's approach to automation.

Why do people need to be "useful"?

They don't need to, but if they are not useful, they disappear. Just like when an embryo is aborted because of a disease.

.... :/

That is a view I can't get on board with. I like people, warts and all. I don't think we should be building a civilization that views humans purely in terms of utility. The point of it should be to support human life and happiness.

I completely understand your point. In my case, my expectations are a lot lower. I became a doctor and scientist in hopes of hedging the bet in life's favor, a little, if possible. I don't know what the consequences will be, or whether it will bring happiness or despair.