Quick question: When’s the last time you knew for sure you were talking to a real human on a customer service call? Turns out, in more places than you’d think, it’s not a person at all. It’s AI.
That might be fine when you’re asking a dumb question like, “What time do you close?” But what about when you’re dealing with a bank, an insurance company or a police report? Yeah, it matters.
👨🏽‍⚖️ Laws incoming
Utah and California rolled out new rules that say companies have to tell you if you’re talking to a chatbot instead of a real person. And in California, cops have to fess up if they used AI to write up part of a report.
That little disclosure is about trust, transparency and knowing who’s really calling the shots. If a bot’s behind the scenes making decisions that affect your money, your medical care or your legal rights, don’t you want to know? I do.
🕵️ Here’s what to watch for
Next time you hop into a customer service chat, pay attention. If there’s no mention that you’re talking to AI, ask: “Are you a real person?” You’re not being rude. You’re being smart.
AI doesn’t always get it right. It might deny a claim, mess up a bill, make crap up or give you incorrect info. And if it feels like you’re stuck in a loop? You probably are. Push to talk to a human.
Oh, and don’t assume this is only happening at big companies. AI tools are cheap, and everyone from your gym to your doctor’s office could be using them.
Look, I love tech. You know that. But I also think you have a right to know when you’re talking to a machine that’s pretending to be a human.
So tell me what you think: Should we have laws that force every company to disclose when AI is in use? Do you have a funny AI customer service story? Let me know when you rate the newsletter at the end. I read every single note there. Include your email address if you’d like to talk about it on the show.