Normalization as a service and how it's a preview of something less visible but more significant.
You've undoubtedly noticed it, maybe it's even starting to bother you a little. The bland, yellow-tinted cartoon-style drawings. The diagrams that look polished but say nothing. The LinkedIn carousel images that all seem to come from the same place. AI-generated visuals have a look now - technically competent, aesthetically flat, instantly forgettable.
Something tells me that the same thing is happening in software development. There's absolutely no doubt in my mind that tools like Claude are excellent for productivity. But what happens when some of the best coders in the world stop doing the boring tasks, stop thinking about novel ways to solve simple problems, because an LLM can generate something just as good in a fraction of the time?
I recognize this from my own work - I no longer do certain things, because I now have a better tool. At the same time, I find that I genuinely learn more complex things faster, because when you make an effort, LLMs are great at making somewhat complicated things easier and faster to grasp. It's a trade-off I'm still trying to fully understand.
This normalization is already visible in creative work; in software development it is less visible but just as vocal, carried by tools like Claude or Codex. One thing I haven't seen discussed much, however, is that I believe the same dynamic extends to business decisions, strategy, even how we think about problems. When everyone has access to the same AI assistants, trained on the same data, suggesting the same frameworks - do we end up with more options, or do we all end up acting on the same insights?
One way this normalization becomes vocal is in a new kind of AI posting I've seen on LinkedIn, where the posters tend to fall into two categories: 1) those out of a job proving their AI competence, 2) people with impressive titles trying to justify them in an AI-first world. Both tend to overcomplicate systems like Claude - showing off how many agents they run, or some elaborate setup they've built. There's certainly plenty of complexity to dive into, but, at least to me, these posts somewhat miss the point. AI is sold as a great liberator, democratizing things like coding, yet many seemingly want to make it feel less accessible to their readers. It makes me think of Pike's fourth rule of programming: unnecessary complexity leads to more errors, which rhymes with how I've previously described AI agents as a powerful but narrow technology.
Speaking of LinkedIn - given that you made it this far into my post, I imagine you spend some time there, and I have no doubt you've started to notice a pattern in what the algorithm shows you. Posts that feel interchangeable. Insights that could have come from just about anyone.
AI can certainly be good for creativity or your ability to learn. But I also think that the access it offers and barriers it removes make us lazy and complacent. Whether you are a musician, a painter, a designer, or a coder - some of the novelty comes from the lack of access, from overcoming barriers, from learning, from doing the hard things that others won't - daring to be different, letting your unique experience and gathered skills shine through.
One kind of post I admittedly find some joy in is when people share how they are replacing an expensive subscription with a tool they have now been able to create in-house. I recently read a post where the author (sorry, I didn't note down who!) went into detail about how they vibe coded an app that does the same as a subscription they were paying for. Only, their in-house version was better tailored to their specific needs. What I genuinely liked was the reflection on display: they estimated they'd need somewhere between 35-40 hours to actually code, test, and integrate the app, but that time investment felt worth it to them. Genuine knowledge sharing and reflection on possibilities is something I often find gets lost in the uncertain times of AI, but that I try to contribute to myself, and really enjoy reading from others.
Another place normalization is happening is with Google's Stitch. You describe the UI you want - speak it, even - and it generates it in a Figma-like interface with production-ready code. While the tool doesn't quite live up to the hyperbolic language used in the announcement, some UI/UX designers are jokingly calling it the start of a dark age for their profession. It made me think about something I've previously argued: the next big step in AI isn't better models, but how they are used.
I think tools like this promise a strange and almost dim future, where interfaces are generated through the same model, optimized for the same notion of "what works". The average quality of interfaces rises, but at the expense of the oddities at the edges. The weird, distinctive choices that made certain products memorable start to disappear.
I fear that if we are not careful about how we use these powerful tools we suddenly have access to, those sparks of creativity and joy might disappear. The edges get rounded off and standardized, instead of being something we have to confront.
My thesis is that more creation does not necessarily mean more novelty; we might slowly be ending up with a million variations of the same thing - technically of a higher average quality, but harder to distinguish.
This won't just happen to creative fields; it will slowly bleed into all facets of business and society. Maybe true differentiation now comes from where AI can't easily reach. Whatever remains distinctly human in taste, judgment, and the willingness to be wrong will be the edge - not who has the better prompts.