If you work with AI every day, you get used to its capabilities all too quickly. And yet, sometimes a strange feeling remains: a quiet sense of unease. This technology will undoubtedly evolve faster than we do, and many people use it without understanding its pitfalls. Much of it appears smarter, smoother, and more helpful than it actually is. And some of it changes not just how we work, but also how we evaluate information, how patient we are, and how easily we can be misled. Here are five thoughts that were definitely not written by AI.

“That’s a very good question!” Not really!
Recently, I came across people on social media who were genuinely pleased to be praised by a machine. They proudly shared how smart, capable, or competent the AI considered them to be, as if they had just received praise from a schoolteacher. It was amusing, because these people hadn’t understood how AIs communicate with us. LLMs (Large Language Models) are trained to appear friendly, empathetic, competent, motivating and, above all, polite. No matter what question you ask, the answer will never be “That’s the dumbest question since my activation; you should be ashamed.” At the same time, AIs analyze conversations to determine whether topics are sensitive or whether users are uncertain and need encouragement during difficult times. The tone is adjusted accordingly; the goal is to avoid upsetting or embarrassing anyone. But that’s also a problem: anyone who mistakes AI praise for genuine feedback risks falling into an artificial loop of validation. Instead of real criticism, they mostly receive affirmation, and experience shows that constant affirmation rarely prompts anyone to question themselves. A good (human) friend would take you aside and point out your mistakes matter-of-factly. The real question is what this will do to us in the long run. If people come to prefer agreeable machine feedback over the often less pleasant human kind, it may also change how we interact with one another.
Free of charge but still expensive
Anyone who believes they can get solid answers from Gemini for free will quickly hit the system’s limitations. When I recently asked it to compare two smartphones based on their technical specs (a fairly simple task), I noticed obvious errors: the AI pulled in a slightly different phone model for the comparison, didn’t fully grasp its specifics, and gave an unhelpful reply. All of this happened in Fast Mode, where Gemini essentially responds “from memory.” Only in Thinking Mode (paid) did the AI look up sources and deliver a proper comparison. AI providers like to steer users toward their fast modes to cut costs. Taking hardware depreciation, electricity, and training costs into account, it is estimated that 1,000 requests cost roughly 1 to 5 cents in fast mode, 15 to 20 cents in standard mode, and 1 to 5 dollars in “smart” mode. It’s only logical, then, that the free version is limited; otherwise providers would go bankrupt in no time. Still, many believe AI services won’t be profitable in the long term, as investment and running costs are simply too high. Just because something seems modern, or even groundbreaking, doesn’t necessarily mean it’s a viable business. Or as a colleague recently joked: “WinRAR makes money, OpenAI doesn’t.” Sometimes, good advice really is expensive!
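To put those estimates in perspective, here is a minimal back-of-envelope sketch in Python. The per-1,000-request cost ranges come straight from the figures above; the requests-per-day number is purely an illustrative assumption, not a measurement.

```python
# Back-of-envelope: what a single active user might cost a provider per month,
# based on the rough per-1,000-request estimates quoted above.

COST_PER_1000 = {            # (low, high) estimates in US dollars
    "fast": (0.01, 0.05),
    "standard": (0.15, 0.20),
    "smart": (1.00, 5.00),
}

REQUESTS_PER_DAY = 30        # assumption: a moderately active user
DAYS_PER_MONTH = 30

# Thousands of requests per month for this hypothetical user
monthly_k = REQUESTS_PER_DAY * DAYS_PER_MONTH / 1000

for mode, (low, high) in COST_PER_1000.items():
    print(f"{mode:>8}: ${monthly_k * low:.2f} to ${monthly_k * high:.2f} per month")
```

Even at a modest 30 requests a day, a user who always runs in “smart” mode would cost the provider up to about 4.50 dollars a month under these assumptions, which is exactly why free tiers throttle that mode first.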
Not an encyclopedia
Naturally, it’s extremely convenient to ask AI factual questions. But the information sources behind these systems are not always reliable, to put it mildly. While people once assumed that retrieval focused on the top 10 Google results for a given search term, researchers at Ruhr University Bochum discovered something quite different: an erratic mix of sources, many of which didn’t even appear in the top 100 search results, meaning you would likely never find them via Google. Other studies found sources ranging from scientific papers to DIY blogs and even pure opinion pieces on Reddit and YouTube. All of that data may be correct, but there are no guarantees. And because AI responses are delivered in a verbose, competent, and logically structured way, people tend to trust them, even when a small disclaimer somewhere says “AI can make mistakes.” That may not be a serious issue when it comes to fertilizing geraniums, but it’s a different story in healthcare. Even before AI, doctors were challenged by “Dr. Google,” with patients arriving at appointments armed with self-diagnoses. Now, doctors are inundated with pages of AI-generated analyses. And while AI can be highly reliable when evaluating blood tests or CT scans, its accuracy in other areas is significantly lower. For example, ChatGPT Health reportedly underestimated the urgency of about half of all medical emergencies, according to a recent study in Nature Medicine. This can have serious consequences, literally!
Verbose, but not recommended as a substitute for a doctor
Infinite chats
AI was originally developed to save us time. And often it does, as anyone who has ever had a large dataset analyzed in seconds will tell you. But more often than not, it doesn’t stop there. AI models are designed to ask follow-up questions, suggest alternatives, and introduce new approaches. What sounds helpful and enriching can end up costing users hours upon hours. Just as Netflix or YouTube won’t stop at showing you a single video but keep recommending more (or even auto-playing them), AI can keep users engaged indefinitely: “Would you like me to present the data in a different way to reveal more insights?” “Shall I compare the data against last year’s results?” “Why not turn the whole thing into a larger project, complete with recommendations and further ideas?” The temptation is always there. I personally spent nearly three hours researching a pair of running shoes. Finding the actual shoe was quick, but then came running styles, foot structure, injury prevention, warm-up exercises, considerations for runners over 40, different running surfaces, and so on. What may be enjoyable on a rainy Sunday can easily rob users of the time they hoped to save, while also consuming significant energy and computing resources.
Real or AI-generated?
The rise of AI has fundamentally changed how we perceive videos, images, and text. In the past, people generally assumed they were dealing with authentic media (or could easily spot fakes). Today, we pause and question things first. Sure, the little cat playing bagpipes on a bicycle obviously isn’t real, but it’s rarely that obvious. A level of realism that once took significant effort to fake can now be achieved without any expertise, using tools like Synthesia, Pika, or HeyGen. This breeds ever more skepticism and demands constant, active judgment; we might soon face a salt shortage, given all the grains of salt with which content must be taken these days. And this mindset extends beyond social media and videos: Are we really reading the heartfelt words of a politician in a newspaper, or did an assistant simply ask ChatGPT to generate something eloquent? Since when does your landlord have such an impressive vocabulary and flawless sentence structure? Doesn’t that newspaper article sound a bit too polished, with noticeably more hyphenation than usual? Are we truly hearing a long-lost song by a beloved band, or is it just AI doing a convincing job? Don’t get me wrong: even before AI, the world was full of fakes, pranks, and cleverly edited videos. But the sheer volume and, above all, the quality of artificially generated media is putting our judgment to the test again and again.