There’s a very interesting topic that’s been popping up lately in my group chats with friends and family: people relying too much on AI. I think it was fun at first because we were simply curious about what answers we’d get and whether they would align with our own thoughts and perspectives. Maybe it was our way of testing our own knowledge or analysis of a question, just to see if what we know is good enough for the AI to come up with the same answer.

Of course, in general, using AI for day-to-day activities is perfectly fine in my opinion. It helps us get things done more efficiently without stressing over simple tasks. For example, in my case, it’s been a big help for making grocery lists, scheduling chores, organizing my daily tasks, and so on.
However, there are times when we become too reliant on AI, to the point that we even ask it for investment advice or guidance on financial matters. That, for me, is a bit dangerous. Regardless of what I think, I have to admit that I’m guilty of doing it too. But every time I do, I get this lingering feeling that I’m taking a risk, and for good reason.
My major concern is privacy. To get an accurate response from AI, I often have to provide personal information about myself and my finances, essentially doxing myself. And that worries me because I fear that AI could collect my data, sell it somewhere, or expose it to people who might misuse it.
While there are settings that supposedly prevent AI from collecting personal data, I still can’t say for sure whether that’s completely safe or if issues might arise in the future.
I can’t really blame myself or anyone else for using AI for financial advice because, honestly, the answers often sound accurate and convincing, especially from paid or “Pro” versions. Sometimes the responses are so polished that I almost don’t want to doubt them.
But there’s one thing we should always consider when consulting AI about financial decisions: the human factor, emotional intelligence, and gut instinct. AI doesn’t have that, and that’s why I believe humans are still superior.
Human financial advisers also have a level of accountability that AI currently lacks. Even paid AI services include disclaimers saying they could be wrong. In other words, AI doesn’t carry the responsibility for the accuracy of its advice.
There’s another thing that human advisers have that AI doesn’t: the ability to read between the lines.
When you talk to a human financial adviser, the conversation often branches out in different directions to uncover details that help them fully understand your situation. That’s important because they try to grasp the complete picture, something AI still can’t truly do.
Unlike humans, AI usually bases its responses on online information that may not exactly match your specific case. Of course, part of the issue lies with us: sometimes we don’t provide enough context or ask the right question, so the AI makes assumptions and connects our situation to something only somewhat related.
At the end of the day, AI still lacks the ability to analyze or sense the things you’re not saying, the subtleties, emotions, or hidden details that only another human can pick up on.
In the end, I believe AI is an incredible tool that can make our lives easier and more efficient, but it shouldn’t replace human judgment. We should always be mindful of how much we rely on it, especially when it involves personal or financial decisions. While AI can process data faster and more accurately than we ever could, it still lacks the emotional depth and accountability that humans possess. There’s a balance to be found between using technology for convenience and trusting our own instincts and experience. Ultimately, AI should assist us, not define the choices we make.

Well said, bro. Human input is still very much required.
Exactly! Human input still matters a lot... AI can assist, but it can’t replace real judgment. 👍
At the end of the day, the machine is still a machine. Until it gains its own consciousness... well then... that's a whole other problem, if that day ever comes.
Haha, true! Once AI gains consciousness, that’s when things get really interesting… or scary. 😅
I am an old-school guy, so I have been around since the fully analog days when people handled everything themselves. As a tech guy, I was frustrated by how much people relied on word of mouth and how rarely they verified their information with data. Meaning that only a very small number of people 'google' their beliefs. This allowed a lot of myths and ignorance to spread. Myths like having to avoid eating before jumping in the pool, or thinking men actually lack a rib because God used it to build women. 🫨
Jump ahead a few decades, and titles like this make me think about how much humanity has changed. AI can now settle discussions really quickly and with better-quality information. Is it perfect? No, it is not. But it is way better than the alternative.
So I am very pleased to see more and more people using AI to get their facts straight and really educate themselves with data instead of just trusting what they hear.
I also don't see anything particularly wrong with being so dependent on AI. I was in Japan recently and I was super dependent on Google Maps to know which train to take and which exit in the station was best for where I was going. I also see many Uber drivers dependent on their GPS navigation and other things that might seem like too much. But at least it helps them do their jobs, as opposed to 'guessing' the place and having no idea how to get there.
That’s a solid point! Even though in my post I said that it's dangerous to use AI, I'm actually benefiting from it, especially when I research something. I think balance is still key... We can't totally rely on the tech, regardless of how superior it is.
To that point, I was arguing with a friend two days ago about what would happen to Bitcoin if there were an internet blackout, which I think is a good example of relying 100% on tech, since all of our money would be in crypto. I said, well, if there is an internet blackout, stores won't be able to process sales either, as they are attached to a system that records each branch's sales across the country in a centralized database.
Same with stocks, banking, and all the other non-crypto TradFi tools, since your ATM won't work, or even the card readers. And let's not even think about automation. I'd say we already fully rely on electricity, which itself sits on tech. Think about the smart street lights that control traffic flow, not to mention the traffic jams at peak hours in major cities. So my guess is that AI is just one more nail in the coffin of a society that is already 100% reliant on technology.
I recently found myself relying more on AI, and it affected my thinking capacity, so I am withdrawing before I become dependent. Thanks for sharing your thoughts.
I get that! It’s easy to get too comfortable with AI. Good move stepping back... balance really is everything. 💯
AI is not bad—it's quite practical when used well. In the end, the problem isn't AI, but the humans who overuse it. The future will reveal the harm this will cause them. But... well, it's just my personal opinion 😊
Well said... AI isn’t the enemy, it’s how we use it that defines the outcome. Responsible use is key. 😊
Well said! Totally agree, helps a lot, but nothing beats human judgment and instinct 👏
Totally agree! Nothing beats human instinct and emotional intelligence at the end of the day. 👏
People who don't know how to ask the right questions will not find AI useful. People who do know how to ask the right questions will find it helpful, but they will still find the answers too limited in scope, or failing to take into account the nuance of the question or a set of previous conditions and constraints.
AI forgets too many things that drop out of its context window too quickly. Larger contexts help, but an enormous amount of the capacity is wasted by poor prompting and people not thinking through what it is they want to achieve.
Very true... prompting makes all the difference. The quality of the output really depends on how well we frame our questions.
In my case, I'm still trying to come up with good prompts or lines of questioning. Sometimes I notice that the answers seem odd 😂
!PIZZA
!INDEED
!HOPE
!LOLZ
Haha thanks for the fun combo of tokens! Appreciate the support. 😄🍕