You are viewing a single comment's thread from:

RE: Outsourcing Empathy To AI

I am often on the same side as you: not knowing how to express empathy or sympathy, and needing to endlessly clarify my intent with others - to assure them that I don't have any ulterior motives, or to explain what I mean when responding to their suffering.

I am learning to listen better. That means not saying anything at first, and asking whether the person wants a solution (what would I do?) or comfort (what can I do for them?).

I think that it would be insincere to use AI as a crutch for this, but it may well help us identify patterns or flaws in our behaviour. I am still learning about my own patterns, and the only way I do is by having people I trust tell me about them.

To put a layer of abstraction, or a "mask" if you will, of AI between me and a situation weakens me as a human - as the faults in our character and behaviour are what make us human.

There's also something here about the Turing test. I think that if I am interacting with someone who is using AI to do these things, then I'm the one failing - as opposed to the machine convincing me that it's human.

 2 months ago  

I think that if I am interacting with someone who is using AI to do these things, then I'm the one failing

That's basically it, and I just found out these days there are many AI videos that could pass as reality. Even my own mom failed to notice that one was AI.

I think that it would be insincere to use AI as a crutch on this, but it may well help us identify patterns or flaws in our behaviour, I am still learning about my own patterns, and the only way I do, is by having people I trust tell me about them.

It's definitely insincere to use AI, and this is also why it's taken me a couple of days just to get back here haha. I am sure with AI, my reply would've been faster.

But hey, I appreciate the sincerity.