AI for Everyone—Whether We Like It or Not!
I recently spotted a small, colorful ring icon on WhatsApp. I already had a feeling what was coming—and sure enough, tapping it brought me straight to “Meta AI.” Just a few days earlier, I had kicked Copilot out of Office and disabled “Aria” (Opera’s built-in AI). The fact that my insurance company first routed me to a (rather dumb) chatbot didn’t help my mood either. Is it even possible to avoid AI anymore?
Artificial intelligence pros and cons
As a tech-savvy editor, I naturally use AI in my work. The saying “AI won’t replace you, but someone using it will” is spot on. It cuts out some of the duller work and helps preserve your will to live during the boring bits. It’s great for churning out cookie-cutter text or as a brainstorming tool—and as long as a human editor gives it a once-over before anything goes live, there will likely be no disasters. But using AI to write the piece you’re reading right now? That’s where things get tricky. AI doesn’t have a personality: it can’t focus, doesn’t get humor, and tends to ramble instead of getting to the point. Once you understand its limitations, you can work with it. But I’m talking about large-scale AIs with access to enormous datasets—and they’re expensive to run. They’re the exception, not the rule. What we encounter in everyday life, especially as customers, is often neither powerful nor smart.
Most AI doesn’t deserve the name
There’s been endless debate about whether artificial intelligence is actually intelligent. Spoiler: It’s not! AI (still) doesn’t truly comprehend anything; it recognizes patterns and makes data-driven predictions. It has no awareness, no intuition, and no deeper understanding like a human does. It simulates intelligence by generating fitting answers—but it doesn’t “think” in the human sense. That’s often good enough—as long as it’s been trained on solid data. But the chatbots that companies love to throw at customers online are more frustrating than helpful. My mood always takes a nosedive when the next “virtual assistant” can’t help me because my question strays even slightly from the script. And so I end up digging through endless FAQ pages or doing my own research.
Hallucinations and the importance of standards
Talk to a hardcore developer or mathematician, and you’ll hear some eye-opening stuff. One colleague told me that the more technical the topic and the longer the text, the more likely AI is to “hallucinate.” The longer it talks, the more it drifts off into fantasy, making up things that sound plausible but just aren’t real. I’ve seen it myself: I once asked AI to summarize a (very) long software article—only to find it referencing features that existed neither in the text nor in the software. Everything sounded legit, but it was a fabrication! AI also fails miserably when something doesn’t follow the norm. That same colleague developed a tool that uses AI to detect depth in photos to create cool image effects. But flip a typical vacation photo upside down and you’ll find the AI has no clue the sky is now at the bottom—it just doesn’t work. It was trained on regular images and can’t process anything else. That’s why so many so-called “smart assistants” in apps or online completely fail the moment something unexpected happens.
Not everyone wants the Office Copilot to read along
The top feature nobody asked for
Just days after Microsoft rolled out “Copilot” for Office 365 in January, search queries exploded—mainly about how to disable or uninstall this “top feature.” Users raised privacy concerns, complained about poor integration into existing workflows, and called the results underwhelming. The fact that future Office licenses bundled with Copilot will be significantly more expensive didn’t help either. No matter how proud a company is of “their AI,” not every user needs it, wants it, or is willing to pay for it. Microsoft’s business model—selling laptops with hardware optimized for Copilot—may appeal to specialists, but probably not to the general public. As with any shiny new tech, it’s cool to adopt it early, but, let’s be honest, most people won’t use half the features being thrown at them.
Quality will prevail(?!)
The more AI tools are unleashed on people, the clearer it becomes that their usefulness varies wildly. Do we even need AI everywhere, or would a plain old search bar or spell checker sometimes do the trick? Was the AI trained well enough, or is it just bothering customers with irrelevant questions and vague answers? Will companies have to bite the bullet and put a real human (back) on the phone to keep their customers from walking away? We’re clearly at the beginning of a major, and sometimes terrifying, development. The use of AI will keep spreading—whether we want it or not. Today, AI can detect skin cancer in images, check the structural integrity of buildings, or process hundreds of thousands of pages so they can be searched in a simple chat. Pandora’s box won’t be closed again, but every use of AI needs to be considered carefully and with a healthy dose of common sense.