
The Abstraction Trap: What AI Is Really Selling to Business Users

AI is selling business users an illusion: that the friction of building software was the problem, when actually the friction was the point.

Let me explain where I am coming from before I make that case.

A bit of context

I have been following the AI transition for a while. I was even learning how to build my own LLM with Harrison Kinsley back in 2020 — I actually backed his Kickstarter project on it. I was doing Kaggle competitions (mediocrely) and learning Data Science back in the day. Then, for about three years, with the rise of LLM chatbots, this universe grew so rapidly — and life was so hectic (we had a kid) — that I did not feel I could catch up.

Over the past year, I have slowly been getting back into it: MCP, Skills, testing models, understanding the use cases, talking with “experts” (definitely not OpenAI or Anthropic level, but very knowledgeable people).

I learned, I tried, and I think I have found my setup for now. For my work — mostly a data architect and data engineer role — I use Claude 4.6 and 4.7 (this won’t age well 😀 ). For more agnostic work (life planning, excursions, learning about a subject, reviewing) I use Gemini 3.1. I personally felt that Claude is really good at code understanding, and Gemini is better at reasoning. Funny enough, I came across some papers in the past few weeks that actually back that up.

After trying to generate completely new code, build applications, or ship features through AI, I have landed on three go-to use cases:

  • Code review: checking the logic for apparent flaws before they land.
  • Discovery: when faced with a very large codebase, leveraging AI to understand the structure and how it is built.
  • Summarizing: when faced with large content (code or documentation), using AI to ease into it.

I know people who went much further — building apps, automating tasks, shipping features end-to-end with AI. The things I have observed about these more complex use cases are the following:

  • The code is sometimes (often?) not that good. It functionally works, but it is not structured in a way that makes it easy to scale or debug.
  • The model does not know about undocumented edge cases. So when something breaks, you are left with a very large piece of code to debug, with no real way out — other than feeding more AI into it.
  • The people doing these use cases are often not that strong at coding in the first place. So it is hard for them to understand what is happening under the hood, and they do not know what they are missing. The unknown unknowns, to borrow Rumsfeld’s phrase.

And that last point is why I feel this whole AI field is heading toward a dead end.

What worries me is the gap between the people making decisions about AI usage and the people actually using it: a gap in understanding what benefit we reap from it, and what danger it brings to an organization if we are not careful.

The Abstraction Layer

AI is, in essence, an abstraction layer over real-world software development.

We now have a tool capable of generating code that generates applications. Business decision makers, who have often been limited in these domains, suddenly have a super-powered tool to realize their ideas. This is a good thing. There are many tasks that are time-consuming and worth automating. There are many ideas worth exploring, and software developers’ time has always been the bottleneck.

But here is where the trap is set. Developing an application — or even just a feature — usually involves a lot of considerations that software developers and good Product Managers are very much aware of: code integration with existing design, readability, ease of debugging, fit within the application architecture, security, optimization, and so on. These are not afterthoughts. They are usually discussed, worked on, and designed over weeks of thinking.

Now, thanks to AI, it is instantaneous — or so it seems.

The idea of AI is that it abstracts away the code creation part for the end user. What business users do not understand is that the code itself is built on top of countless abstractions, designed to work well together for future projects, scalability, and integration with existing systems. A one-line df.merge() call hides decades of decisions about hash joins, memory layout, null handling, and index types. Business users see one line; the system underneath is enormous. Choosing the wrong join, or the wrong index, doesn’t break the code — it breaks the system three months later, at scale, and then nobody knows why.
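To make that concrete, here is a minimal sketch (my own toy example, not from any real codebase) of how that one merge line hides a cardinality decision. The data and names are invented for illustration:

```python
import pandas as pd

# Two toy tables: orders reference customers by id.
# Note customer_id 2 appears twice in customers (a dirty-data situation
# that nothing in the one-liner below guards against).
orders = pd.DataFrame({"customer_id": [1, 2, 2, 3], "amount": [10, 20, 5, 7]})
customers = pd.DataFrame({"customer_id": [1, 2, 2], "region": ["EU", "US", "APAC"]})

# The "one line" a business user sees. The default is an inner join,
# so customer 3's order silently disappears, and the duplicate key on
# the customers side silently fans out the rows for customer 2.
merged = orders.merge(customers, on="customer_id")

print(len(orders))   # 4 order rows going in
print(len(merged))   # 5 rows coming out: one order dropped, two duplicated

# pandas can enforce the cardinality you *assumed*; uncommenting this
# raises pandas.errors.MergeError here, surfacing the problem early:
# orders.merge(customers, on="customer_id", validate="many_to_one")
```

Nothing here is broken in the "it crashes" sense. Sum `amount` on `merged` and you get a wrong revenue number, quietly, which is exactly the kind of failure that shows up three months later at scale.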

Some may say this is all about context, prompt engineering, and AI capabilities that are simply not yet fully utilized. In my opinion, the ability to generate code in such a frictionless way — without going through the grind of code development, without the actual thinking of it — is a curse on multiple layers.

One should not confuse speed with haste.

It is true that things are going faster. But the developers who are vibe-coding, who do not fully follow the code they produce, are not actually that much more productive. Yes, you produce more output, more lines of code. But this gut feeling is now backed by data. METR, in a 2025 randomized controlled trial, asked experienced open-source developers to work on real issues in their own large codebases, with and without AI assistance. When developers used AI tools, they took 19% longer than without — AI made them slower. The kicker: after the study, the same developers estimated that AI had sped them up by 20%. They were wrong about their own productivity by nearly 40 percentage points.

Felt faster, was actually slower. That is the illusion in one sentence.

It gets worse when you look at what the code actually looks like. GitClear analyzed 211 million lines of code from 2020 to 2024 and found that refactoring dropped from 25% of changed lines in 2021 to under 10% by 2024, while code duplication grew roughly four-fold. Developers accept AI output without the iterative improvement they would apply to human-written code. AI lacks whole-codebase context. It regenerates similar logic instead of reusing existing functions. And developers don’t cross-reference before accepting, because that would eliminate the time savings. The codebase is growing, but the architecture is rotting.
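A hypothetical sketch of that duplication pattern (the helper and names are mine, purely illustrative): the codebase already contains a utility, but generated code re-derives the same logic inline instead of calling it:

```python
# Existing helper, somewhere in the codebase, with the team's rules baked in.
def normalize_email(raw: str) -> str:
    """Lowercase and strip whitespace so lookups stay consistent."""
    return raw.strip().lower()

# What AI-assisted code tends to add: the same logic re-derived inline.
# The suggestion looks correct in isolation, so it gets accepted, and the
# normalization rule now lives in two places that can drift apart.
def register_user(users: dict, raw_email: str) -> None:
    email = raw_email.strip().lower()  # duplicate of normalize_email
    users[email] = {"active": True}

users = {}
register_user(users, "  Alice@Example.COM ")
print(users)  # {'alice@example.com': {'active': True}}
```

If the team later changes `normalize_email` (say, to strip plus-addressing), the inline copy silently keeps the old behavior. Multiply that by every accepted suggestion and you get GitClear's four-fold duplication growth.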

This is exactly what I meant by “functionally works, but not structured to scale or debug.” It is now measurable.

When we used to write everything by hand, even though we did not produce more lines committed, we were productive in a different way. We were building knowledge: an understanding of the code logic, the business reasons behind decisions, what really matters, which parts of the application are critical, what makes up the domain and how it integrates.

Due to context size limitations on current AI models, it is hard for the model to fully grasp that part — to truly understand the detail layer underneath the abstraction. So I believe that AI has built a false sense of abstraction. By abstracting so much for end users, the details are not just hidden — they are not formed in anyone’s head in the first place.

AI Benefits the Already-Skilled

I really believe AI is a great tool. And yes, the people who used to do somewhat meaningless work (compiling information, templating presentations, tasks that did not actively build knowledge) will have a harder time.

The ones who benefit the most from AI are the ones who are already knowledgeable. They know what they want to build, so they can express it better. They know how it should be written, what to look for in terms of interoperability, integration, and so on.

I want to be honest here: this point is more contested than it sounds. Some studies (Microsoft, Accenture) show that junior developers gain more raw speed from AI than seniors do — it scaffolds boilerplate and answers questions an experienced developer would not need to ask. But raw speed isn’t the whole story. As one analysis put it well: AI gets you 70% of the way, but the last 30% is the hard part. For juniors, 70% feels magical. For seniors, the last 30% is often slower than writing it clean from the start.

That last 30% — production readiness, edge cases, architecture fit, real testing — is exactly where the abstraction breaks down. And it is exactly where you need to know what you are doing.

For the younger generation, I am afraid the use of AI will cut short the time they need to learn and actually understand what they are doing. This is not just a hunch. An MIT Media Lab study on essay writing — yes, essays, not code, but the mechanism is the same — split participants into LLM, search engine, and brain-only groups, and recorded their brain activity over multiple sessions. The LLM group later showed severely impaired ability to even quote from essays they had written themselves. They did not internalize their own output. The researchers called it cognitive debt.

That, to me, is the cognitive version of what is happening with code. The abstraction layer prevents knowledge from forming in the first place.

In a world where everyone can ship code, the real difference will be made by the ones who understand what they are building and what they are aiming for, architecturally and strategically. To make decisions, you need to know where you want to land. Some decisions, when not made properly, become very costly later and require large refactoring. And refactoring is expensive: when no one truly understands what is going on in those lines of code, it requires AI, it requires additional time, and the time you gained by shipping faster is lost paying it back. It also introduces new bugs, at a cost few will be able to gauge. Refactoring is always a hard sell to business users: everything will work the same, just better. That is hard to pitch against a new or much-needed feature.

When I started coding, I did not know anything. I had not done any computer science degree. I had to learn, I had to comprehend what I was doing so I could reproduce and reapply that learning later. I had to crawl, then walk, and now I run on certain things.

For newcomers, I am afraid we will ask them to walk and run directly. But by going too fast, they are saving time today that they will have to pay back later, when the real implications and real stakes show up.

Maybe I am from another era already, and in the future human involvement in business development will be kept to a minimum because it will be deemed not cost-efficient. But the recent developments around the cost of AI are not really pointing that way. For some time now, we have been “six months away” from no longer needing new developers — yet OpenAI and Anthropic still have software developer roles open on their career pages.

The friction was not the problem. The friction was where the thinking happened.

