
It’s been a while since I last commented on one of your posts. This one has really caught my attention, so I should make up for the limited engagement :P. I have read all of the related articles you have linked along the way, but none of them has tackled the argument that has led me to "believe" (I say believe because there is no way to know such a thing anyway) that we do not live in a "matrix".

So basically, all of those arguments are highly logically structured. They do in fact lead, through a brilliantly paved logical tree, to a really strong assertion that we are probably part of a "virtual reality". But this whole logical tree has one major flaw from my perspective.

The prerogative of human nature

I think that most of us will agree on the "fact" that objective reality does exist (whatever it is). Most of us will also agree on the "fact" that this objective reality is bound to be perceived through our subjective realities - and that is the stumbling block of the theory supported by Elon.

The whole theory stands on the premise that the beings who created the virtual reality were "thinking" in the same way humans do. It’s extremely hard for people to imagine "any other way" of thinking. The dilemmas of mankind have been more or less set for quite some time (with a few hard/soft forks along the way), and humans naturally search "for the HIGHER purpose of things". That is a bias we all share to an extent (though people raised in a religious environment, like you, tend to be biased in this way even more).

Simply put, there is no reason to think that different, unimaginable life forms would create matrix-like realities. Humans would... Other beings? No one knows. Therefore I think that a statement like "we are probably/most likely in a matrix-like reality" is a biased opinion.

But then again, this too is just my subjective reality using rules of thumb while trying to perceive the objective world... I may very well have failed at some point in my thought process.

So what makes humans think this way? Is it paired with rising sentience (a necessary component to drive intelligence evolution) or is it an evolutionary accident?

If it's an accident, then you have a really good point. If it's paired with rising sentience, then it's very reasonable to believe that any sufficiently advanced beings would begin developing simulations. At that point, probability takes over... if any beings are running simulations, there is a much higher probability that we are in one of the countless simulations than that we are outside of them. Even with quantum computing, how many simulations can be run in parallel? I don't know the limits there.
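A toy back-of-the-envelope version of that counting argument, in Python (my own illustration; the assumptions, exactly one base reality and equally populated simulations, are mine, not Bostrom's exact formulation):

# With one base reality and n equally populated simulations,
# a randomly chosen observer is inside a simulation with
# probability n / (n + 1).
def p_simulated(n: int) -> float:
    return n / (n + 1)

for n in (1, 100, 1_000_000):
    print(n, "simulations ->", round(p_simulated(n), 6))

Even at n = 100 the odds are already above 99%, so the argument's force rests almost entirely on whether anyone runs such simulations at all.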

We don't have enough data-points on sentient species to know whether this human way of thinking drives sentience or is an evolutionary accident.

Yes gotcha.

I very much believe it is what you call an "evolutionary accident". I probably wouldn’t call it an accident, though, but rather one of the possible evolutionary branches of life forms, if you know what I mean. (The outcome, regardless of the definition, is still the same.)

What would make you think that our sentience is actually rising that much? Sure, we have plenty of new technology, and we have (most probably) bested all of the other life forms on Earth, but the vast majority of the world still lives in the "medieval ages". From my perspective, humanity is still in its prenatal stage (and will probably destroy itself before it ever manages to climb out of that stage). That said, from the perspective of Truth/Reality we can’t possibly know whether we have even matched the "average sentience" of all living species, let alone proclaim that we have reached a state worthy of recognition, or a state that "all sentient beings are bound to seek at some point". Maybe we are well below the average of "sentience", and that is why we are reaching these conclusions?

Overall you added some great points. But then again, the "rising sentience" argument you have shared is still based on the premise that this is the only path life forms can take (based on our very limited understanding of the world and of life itself). We have no idea how other life forms think or act. All we can do is assume that carbon-based life forms (did I say that right? :D No chemist here :D) will always do what humans are inclined to do. As you very well pointed out:

We don't have enough data...

We can only assume, and I was raised in philosophical communities that were sceptical in nature.

Very good points. I wasn't trying to make assumptions about our relative sentience, only to point out the key assumptions that the argument takes for granted.
I don't think that we're in a simulation, but whether certain modes of thought are inherent or accidental definitely influences the probabilities.
Since we have so few data points, we can't possibly conjecture. And Elon Musk didn't even calculate the actual path of his space roadster correctly. Why would we credit his simulation probabilities?

Human or non-human: if we know humans would do it, and we assume humans survive long enough to do it... it stands to reason there’s a good chance they would.

But can we actually do it? I think it would require quantum computing, at the least, and it's one of the least interesting uses of quantum computers.

Great point! It’s almost like there’s something about consciousness which requires purpose, because conscious beings have the ability to end themselves. With that key, it makes sense for the genes to select for those who have belief outside of themselves, to continue procreating and spreading.

I’m reminded of the octopus when thinking about different forms of consciousness. It’s almost an alien life form. How much more different could other, truly alien consciousnesses be?

I do think we have a bias, but as you said, I also think the arguments do make sense. Since humans probably will create simulations of other humans, the existence of other forms of consciousness becomes largely irrelevant, as long as humans survive long enough to pull it off.

Yes, totally agreed. The gene has to "know" that we need some extra pinch of motivation not to end our lives - otherwise we would be doing it en masse.

Interesting! My best friend is a biologist, so I’m gonna ask him about octopuses, cuz I can’t really follow your thought processes with my limited knowledge here :D.

Anyway, you are right that if humans ever reach a state where they’ll be able to create a matrix-like reality, they will do it. And maybe it has already happened :P. It would be a strong argument against the "there are no ancient relics that would indicate that there has been a developed human society before ours" objection. Why would they program ancient relics into the matrix if they just wanted to test us, right :)? That said, I would change Elon’s statement to something like this (if I wanted to fully agree with it):
"When humans reach a state where they can pull off the creation of a matrix-like reality, they will do it, and it has possibly already happened."

Definitely enjoyed the video you shared at the end. Made everything feel a bit more digestible -- Simulation theory is a pretty heavy idea. I don't know if I buy it. Any time I see something that suggests that "right now is likely the most important time in human history", I tend to have some warning bells go off.

That being said -- it's hard to ignore the leaps and bounds that are being achieved in artificial intelligence, machine learning, and the way we interface with computer systems. I feel like it's definitely a more tangible theory than many religions tend to offer, but I don't personally find it that much more compelling -- due to the unfalsifiability (which may say more about our inability to test and prod, rather than the theory itself -- but who knows).

To the question you pose -- I think that on many levels, studying morality can improve anyone's life, and it's a great idea. In terms of studying AI, it's less obvious how it can improve one's life, but in my line of work (civil engineering) I feel like it's almost certainly going to start affecting the work that I and others in my field do in terms of project designs -- and getting ahead of the curve on this one would do leaps and bounds to improve my life. While the whole "deep-fakes" thing sweeping the internet might not be the greatest example to draw from, it's pretty illustrative of the quantity and quality of work that can be achieved through relatively simple machine learning processes -- and I would imagine that we're going to see this technology explode into just about every industry in the next 5 to 10 years.

Thanks for sharing -- definitely a lot to think about, and it got me going.

I love to hear my ramblings get people thinking. :)

The best explanation I can come up with for people's lived experiences is how the subconscious mind (System 1, as Daniel Kahneman calls it) does "work" in the background and may impact how System 2 changes our conscious thoughts and actions, which directly impacts our lived experiences to obtain what we want.

This is the root of it. Soros calls that interplay between action, perception, and reaction "reflexivity".

Essentially what it comes down to is the fact that the placebo effect is real, even though placebos are fake. Your perceptions, right or wrong, will impact your choices and your actions. This is how what you believe about your life becomes your actual life.

Well said. I think some day we’ll understand this better and there will be more attention paid to what inputs we allow and which we filter out.

This might be the moment in history some advanced civilization on the brink of releasing super intelligence is simulating over and over again to make sure they don't screw it up.

If they're simulating us to make sure they don't screw up, they must have programmed us and our world to resemble them and their world as closely as possible. So, for all intents and purposes, they are us in the future. And they've already survived this moment, and progressed to the point of being able to put consciousness into a machine, but they aren't yet ready to allow it to develop into a super AI. In other words, they have put limitations on our mental capacity. Either that, or this is as smart as the AI gets.

How far away is that future in which we've developed the ability to transfer consciousness to machines, yet hold back on allowing it to become smarter than we are?

If it's far off, why simulate this moment in time? Wouldn't it make more sense to simulate a time as close as possible to the current state of that world?

And wouldn't it be immoral to put consciousness into the simulation, knowing that it will cause needless suffering in the event that they screw up the release of the super AI within the simulation?

Just for fun, let's imagine for a moment we are in this simulation and the value function for this simulation has been set to something like "Reward those who figure out how we're going to program morality into the super intelligent systems we're building."

Maybe they're not like us at all. Maybe our world is very different from theirs, and we were created for the sole purpose of figuring out this morality in machines thing.

But we have morality within this machine. It's just that not everyone adheres to it. And not everyone follows the same moral code. So maybe the purpose of the simulation is to explore myriad moral codes until one is found that is readily accepted by any and all consciousness within the machine, thus giving reasonable assurance that it will work in the real world.

Or, maybe the goal is to come up with a system that incentivizes moral and mutually beneficial behavior. So, stuff like Steem. Or EOS.

So maybe Dan Larimer is god.

And maybe EOS stands for End Of Simulation.

End of Simulation. Heheh. Nice.

I think for a system to be effective for prediction, it would have to evolve organically over “time” like other machine learning systems.

As to morality, do we consider the suffering of non-sentient characters we create in our existing video games? Or maybe we would justify the suffering of some virtual humans to save some real ones, from a utilitarian perspective? Reminds me of the book The Age of Em, which was interesting.

Agreed. But I just realized, if you want the system to work in a post-simulation environment, don't you have to include a certain period of time after the release of the super AI as well? Are we living post-AI already? Is it possible the AI came to the conclusion that the most moral thing to do is absolutely nothing? Or is it waiting silently for all of the pieces to fall into place before it suddenly takes over?

I don't play games much, but I was playing this one-on-one combat game at a friend's house a few years ago on his PlayStation. Can't remember the name, but you can create your own characters. So we'd make characters, and then if they were defeated, we'd delete them to simulate "death." Made the game a lot more interesting, because a lot of "work" went into building the characters. But I sure hope there was no actual suffering involved. I would hate to think that characters in games were suffering. That's not the kind of panpsychism I can deal with.

As far as the justification of the suffering of virtual humans to save real ones, I'm not sure if conscious experience is necessary for that or not. If rational materialism is right, it's not necessary. If rational materialism is wrong, and conscious experience isn't just some freak anomaly that has no influence on outcome, then you're throwing true randomness into the mix, meaning the simulation can never truly predict reality.

There's also the problem of a malevolent AI becoming aware that it's just in a simulation, and hiding its intentions accordingly, until it is released into the real world.

Cool argument, it's essentially a converse Roko's Basilisk, right?

I hadn’t thought of it that way, but that’s interesting. I was more thinking about humans running simulations to better understand how they should create A.I. But if the A.I. takes over the simulation, that brings us back to an inverse Roko’s Basilisk.

I would highly recommend reading the Hyperion Cantos (a four-book series) by Dan Simmons. It is, in my opinion, much better than The Matrix, and it deeply explores the themes of AI, morality, spirituality, and religion in a hyper-technological world. It is a work of art, and I'm sure you'll enjoy it.

Sounds great! Just bought the first one on Audible. I was about ready for some fiction. Thanks! I was surprised to see it written so long ago. Very curious to see how it stood up over time.

The author pretty much predicted the inevitable success and worldwide adoption of the internet. Things get really philosophical in the second two books, Endymion and The Rise of Endymion. I'm just about to re-read the series. I hope you enjoy it just as much as I did.

I'm sorry, I'm on team Einstein, I opt for reality being the current environment. That's a choice I make, because it aligns perfectly with my (albeit imperfect) perception.

Second, I think AI won't achieve consciousness anytime soon (which means no problem for me, not for my kids and probably not for my future grandchildren). I truly believe humans are more complex than we realize.

Seems the AI experts disagree, as I mentioned in the first post:

Do you have reason to think the experts are wrong?

As for your perceptions, if you understand Bostrom's argument, sufficiently advanced simulations will, eventually, be indistinguishable from reality. Even with the VR we have now, I think that claim isn't all that far off, assuming we don't change the progress line we're currently on. That means our perceptions could be fooling us. How would we know?

Dear @lukestokes, I was assuming me, my children, and my grandchildren would live to about 90-100 years at most. My point was about consciousness, a very philosophical subject and probably the most ill-understood part of being human. This is from the paper:

It seems the experts and I are actually on the same page. From the chart you show above, all the experts who think we can never grasp human-level intellect well enough to provide machines the tools (algorithmic representations) to simulate it have been filtered out. That is logical, since 'never' is quite difficult to calculate into a median. The point is, those 'nevers' are actually quite a big portion of the respondents' answers, and for good reasons.
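To make the filtering effect concrete, here is a toy example with made-up numbers (not from the actual survey):

# If five experts estimate years-until-human-level-AI as
# 10, 25, 40, never, never, treating "never" as infinity still
# yields a median (40), but dropping the 'nevers' shifts it to 25.
import statistics

answers = [10, 25, 40, float("inf"), float("inf")]   # inf = "never"
finite = [a for a in answers if a != float("inf")]
print(statistics.median(answers))   # 40
print(statistics.median(finite))    # 25

So excluding the 'nevers' systematically shifts the summary toward optimism, which is exactly my point.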

Ah, thanks for clarifying. I often skip right over the word “consciousness” as it’s quite loaded. I see it as some combination of memory, arousal, and awareness. It may not be as special as we’d like to believe, as there are many levels of consciousness throughout the species on this planet.

As to the <20% who say never, you’re right, I shouldn’t skip over those views so quickly. Maybe we won’t ever get there, but having worked with computers since 1996 and having been exposed to neural networks in college, it seems quite plausible to me, so I align with the >80%. From there, creating true-to-life simulations seems inevitable.

There's a theory that we, the average person, are only using 3 percent of our brain. If that were proven true, imagine what a person's potential could be if we were capable of using 100 percent. So far, all I know is that we're still standing on the back of a giant turtle. :)

That has been discredited, as far as I’ve seen. Search it up.

If only EVERYONE would study morality...

Look at how we behave as a community on Steem, we still have a very long road ahead of us.

Currently, I think much of that is ridiculous.

All of that is :P If 'naming and claiming' were true, natural selection would've figured it out by now, and we'd be hardwired to do it. NS figured out much more complicated things, I think the Secret isn't beyond her capabilities; we definitely wouldn't need a book to tell us that it's true.

As for simulations, I guess anything could be the case. A very faithful equivalent of Christianity could be running, in that there could be 2 levels (life and afterlife), and the value could be "don't reward people who do X, to see if they'll still do X despite the lack of reward, and if they keep at it to the end, reward them with a good place in level 2 (afterlife), where they'll be allowed to do the work to their heart's content". Or something.

I wasn't aware of that youtube channel, thanks for sharing it, I've subscribed!

If we study morality, I feel it can improve our lives, since studying morality will teach you the difference between good and bad and help you understand them better. Why won't it improve lives?

Isaac Asimov's "Three Laws of Robotics": A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. This would apply to AI as well. It wouldn't be bad for human beings either, but we are not hard-coded with this.

Those laws were designed to demonstrate how flawed they are. Makes for good science fiction. Those who are seriously working on this problem now don't treat those laws seriously, from what I've seen. Give Bostrom's book Superintelligence a read to understand more about the topic. It's fascinating stuff.

Hmm, that's an interesting idea. I've never heard anyone talk about this "reward system" run by the simulation. I think there's definitely a possibility that we could be in a simulation, and in that case it would only make sense for people to get rewarded by the simulation!

Wow! Going to have to ponder this post for a while....thanks for sharing. Mind=blown

Went back and read your original article. Elon Musk saying something like that about the chance of us living in a simulation is kind of mind-boggling. The thing is, A.I. is for sure coming into our lives to some extent. I don't think it will be a question of how much we "let it" control things, since there will always be different companies working on it without constraints, I guess. Time will tell though.

I hear many successful, intelligent people talking about the very serious concerns surrounding the development of super intelligence and the moral restraints it needs. Could it be they are being rewarded by the simulation?

I think I understand what you are saying - that individuals who study morality and AI are being rewarded with intelligence and success. I think it's an interesting concept to ponder when considering the simulation hypothesis you are talking about.

I'm not sure I understand the causal relationship between the two though. For instance, are the individuals being rewarded with success and intelligence because they are evaluating those systems or are they evaluating those systems because they are intelligent?

Regardless, it's still an interesting concept to play around with. I like the idea of

the value function for this simulation has been set to something like "Reward those who figure out how we're going to program morality into the super intelligent systems we're building."

Yeah, the rewards-for-studying-AI part would be if we went with the idea that the very purpose of this simulation (being the century most likely to create super intelligence) is to figure this stuff out without destroying ourselves in the process.

Heh. From that perspective, maybe nuclear weapons were added to the simulation to give us an existential threat to think about.

OK, I see.
Yeah, I suppose you could look at any global issue in the same way - not just man-made issues but also environmental (global warming), biological (population, viruses), etc.



Great post. With the advent of deep learning, artificial intelligence is going to get more mature. However, a system’s intelligence vs. its morality standards is always a tricky graph to plot.

“Tricky” indeed. I wonder sometimes if an AI will teach us morality since they can process data so much better than we can and they may understand human well being in a quantified way we have trouble with.

AI may adopt the moral values of humans, yes, but what makes you think they will be able to teach us? And even if they could, we humans don’t even learn from each other; learning from AIs? I don’t think so :p

But you have a valid point. We already know that an AI once tried to create its own language, one far more complex than anything a human has ever developed. A time is near when they will also be able to understand human emotions and human values better than humans do.

Stuff like neural lace will probably be the way forward. Direct brain to computer interfaces. We will probably become synthetic at some point.

With deep learning, yes, we can achieve that. A direct brain-to-computer interface is totally possible.
But from a broader perspective, I am guessing that before we ever get to the morality part, we will use AI as a weapon in the defence industry.
Remember that nuclear weapons did not raise any moral/ethical concerns until we had actually used them in war.

I can say this with absolute confidence because I work in the defense industry, and currently we are looking to weaponize AI.
So all we can do is hope that with the dawn of AI, our lives become more stable and peaceful, not otherwise.

I have also been thinking about the simulation theory a lot these past few months.

The fact that Musk believes we are in a simulation, and that BoA released a report assigning a high probability to this, increases the legitimacy of the hypothesis.

I am not sure if studying morality and AI will improve our lives, but either way, they are two important subjects that we need to understand in order to move forward with our technological innovations, whether we are in a simulation or not.

BoA?

Not a fan of banks, myself. Sounds like they were just quoting Musk and Bostrom anyway. Interesting that they would put this out there, but I guess it relates to VR.

For what it's worth people that work at Bank of America immediately corrected me when I called them BOA in a client meeting. They prefer BofA or BAC (Bank of America Corporation) which is also their ticker symbol.

AI is very good, but it is taking away the love among people. Nowadays people love gadgets more than anything. Nobody cares about the person sitting beside them; everyone is busy on their own mobile phones, and AI is going to make this situation even worse. AI is good, but it should be used more wisely, and we should keep some awareness of life outside our screens too.

Ah, but what is love?

That's pretty cool @lukestokes. Love can be explained with the help of a few terms that we ourselves have named, but we can't create love. We aren't able to create a robot that can feel. Please correct me if I am wrong, but I think love is above all: a mother's love, a friend's love, love for family, for nation. Love is far deeper than anybody could ever explain.

I love that video.

(See what I did there? Heheh)

These are just words, but when we distill it down to what’s happening in physical reality, love can be measured as a physical state change in reality. We tell ourselves it’s more than that, but that might just be a comfortable story.

Very right, and the video is great. I would like to add something:
Love for All, All for Love.

Sounds deep, @lukestokes.
You’re getting into the realms of the singularity - something to ask Sophia the Robot, though I suspect you’d get a better response on the topic from her brother Hans.

In the words of the great French philosopher Descartes: 'I think, therefore I am.'

What the “Self” is and what the “collective” is could be up for grabs. Are we really all individuals, or are we all just one energy in different forms? Hence, when we call upon positive vibes or call upon the universe, we are only calling upon ourselves to act upon our desires.

Morality - what an interesting concept, and who keeps moving the goalposts on that one?

If we go back to first principles, can we say with certainty that they are based on solid ground?

Energy, to me, has no intelligent agency, or if it does, it’s on a level we can’t comprehend right now. Many superorganism collectives exist on this planet, and maybe we’re part of something similar, but we sure “feel” like individuals.

So what is it that drives AI further and further, if not the energy source? Look at the law of energy conservation: energy can neither be created nor destroyed, only transformed into another form. As you say, it is most likely beyond our current realm of understanding.

If you switched off every power source in the world for every computer, router, and server, and let all the backup power sources run out, would AI be able to exist?

So without some form of energy, I suspect it wouldn’t be able to generate any intelligence.

As for your simulation, we could all be part of a hive mind, and it may want you to think you are a singular being. Brings back memories of The Matrix.


This quote (paraphrased from memory) from one of my favorite novels comes to mind:

You Christians have been predicting the end of the world for thousands of years, but it keeps not ending.

So far, so good.

Maybe we're in a simulation, maybe we're not. That it hasn't cut off as a failed simulation is good enough for me, but then we wouldn't know if it did.

Maybe the simulation rewards learning about morality, or maybe it just makes us better people and that has its own rewards. Learning about AI improves our understanding of complex systems, and that also has its own rewards.

You Singularists have been predicting the end of the simulation for millions of processing cycles, and it keeps not ending. So far, so good.

42

Heheh.

How do we know the simulation didn't segfault, stay down for 4 "real years," and then reboot to leave us all right where we left off just 3 microseconds ago? It may have rebooted while I was typing this message, for all we know.

These conversations are best discussed in a hot tub with whiskey.

Our simulation got forked in real time so that different outcomes could be tested.

It supports the multiverse / parallel universe theory too :)

I've thought about that more than a few times. Save state, introduce new parameters, resume testing. Maybe each major leap in the laws of physics was actually a parameter change that we picked up on really quickly...

Try not to drop your smartphone in the hot tub when you're refreshing the comments page...

That's an interesting new take on Bostrom's trilemma. You are clearly a deep thinker!

I have been struggling myself to see how we could build systems to identify and incentivise morality not just on this platform, but beyond it. This post has opened my mind to a new angle to consider, so thanks!

Imagine we were all together one big intelligence and one mind. Would we still need morality? Or would we just do what is rational and makes us all survive, and would that be enough? Would we become artificial, or just one? (Recommending John Lennon for the first word of this comment and U2 for the last.) Inspiring post! Upvote for upvote? ...Don't worry! Just kidding!!!

Interesting question. With access to complete data maybe “moral” decisions would be calculated rational decisions everyone agrees to.

Good point. But who would decide if the data really is complete? And which contents of the data are more important than others? Wouldn't it lead to different calculated scenarios, with mankind still having to decide which way to go? Is it more moral to do what's best for the present, for the future in 100 years, or for the future in 1000 years? Or some middle way among all of those?

Just like we’re seeing in law, A.I. would make a case for the solution it proposes while offering understandable evidence using concepts we’re familiar with, like life expectancy, infant mortality, quality of life, well-being, etc. Part of the solution would include explaining it.

So A.I. would give us the arguments for why we should believe in the solutions it provides... I see... I must admit, that would really be fascinating!

Okay, I'm still thinking about that topic... I want to add something: if A.I. could make moral decisions for us, the only question left would be whether it's moral THAT it can make moral decisions for us. And maybe A.I. could even provide the answer to that question? Wow! Thank you for having this great conversation!

Wouldn't that be evidence of the incapacity of the whole of mankind?

As someone whose studies focused more on AI, I would definitely like to believe this is true, hahahaha! Selfishness aside, this is a very interesting subject. The thing about the different disciplines is that, even with the outright differences, I feel the intersection between them is what we should focus on. Much like the similar mythologies among different civilizations, I feel like everyone's talking about the same thing; it's just that everyone's talking way too loud and at the same time. If we just paused long enough to let everyone speak, I feel like a singular thought process could be achieved. Sure, this doesn't apply to all disciplines and philosophies, considering some are devised to directly combat another. But you get what I mean.

I’m not sure a singular process is ideal though. Monocultures in nature (and thought) give rise to systemic risk. I prefer complementary but different perspectives. Cooperation over competition.

Oh yeah, for sure. I fully agree with all your points, especially cooperation above all. But I do think that's what we're going to get to eventually, talking about a monoculture. That's waaaaay down the line though. It's not about what's ideal; rather, it's how society as a whole would cease to exist millennia from now. It's a stretch, but yeah. I do think that's how it all ends.

You said it all here

I think so. As we better understand how to improve the world, we improve our own lived experience in that world.

As we study morality, we get to adjust, deliberately or not, and eventually our experience changes too.

Thanks for this piece sir

Wow, definitely enjoyed the video you shared at the end. @lukestokes

Great food for thought. Wish I had studied something like this at university instead of photography!

The best parts of our education are the ones we seek out on our own.

Just saw this and thought it was perfect:

(image attached)

Hi @lukestokes! It was fun to watch the video. It reminded me of the movie Transcendence with Johnny Depp, and The Matrix in a way, and also the movie Lucy with Morgan Freeman.

It's great to think ahead about what could happen to the world as many more things start to be developed in terms of AI.

Thanks for sharing!

Regards, @gold84

I enjoyed all those movies. :)

Me too! Have a good trip and stay in Acapulco. Have you been to Mexico before?

Regards, @gold84

What about this then?

Not trying to be contrarian; I know it would be nice. But if I'm going to be rigorous, doesn't that mean that if everyone just thought of their abundance the right way, we would immediately use up every resource on the planet and all die horribly?

Even in a simulation, that would be some cruel, twisted crapola: a bunch of people living in the suburbs of Dubuque, Iowa, hacked the frickin' matrix and became super rich, while starving people overpopulate horribly in misery without this key knowledge of how the system Really works.

idk what I'm doing here. I admit the Secret and the church of positive thinking kind of bother me. By which I mean, they bother me.