Over the last few months I've been working a lot with AI. I've tried several approaches; in fact, about 6 to 8 months ago we ran some live tests, but a lot has happened since then, and the way I use artificial intelligence at work has changed massively.
1 - Artificial intelligence changes the way we work
Until recently AI was a joke, let's be honest. When OpenAI's ChatGPT became popular it wasn't even at the level of IDE autocomplete; that improved, and about a year later it became capable of writing entire methods.
Nowadays it can develop entire features, or even entire applications, from a single prompt. The problem is that at first glance the result looks perfect; dig a little, though, and the problems surface:
- It works, yes, but the whole thing lacks cohesion.
- In the long run, that code is a maintenance nightmare.
- Technical debt is generated at a speed never seen before.
The key isn't to let it run wild, but to know how to direct it. AI speeds up how fast the code gets typed, but you still have to be the architect.
2 - Avoid random code and hallucinations
First of all, you need to understand the context and accept reality: if you don't explain clearly how you want a task or a specific change done, the AI will do it however it sees fit.
Or, put more precisely: it will do it in the most common way found on the internet. And that may look nothing like your application, your conventions, or your way of working.
To fight this we have rules files, where we define structure, patterns, examples, ways of working, and so on. The tool, whatever it is (TUI, IDE, or anything else you're using), should read these rules before starting work on the selected task.
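To give a rough idea of what that can look like, here's a minimal sketch of a rules file. The filename and every convention in it are hypothetical; adapt them to whatever your tool actually reads (AGENTS.md, CLAUDE.md, .cursorrules, and so on).

```markdown
# Project rules (hypothetical example)

## Structure
- Backend code lives under src/api/, one module per domain (users, billing, ...).
- Do not create new top-level folders without asking first.

## Patterns
- Services receive their dependencies through the constructor; no global state.
- Every public function gets a unit test next to it.

## Ways of working
- Before touching code, list the files you plan to change and wait for approval.
- Follow the existing naming style; when in doubt, imitate the nearest module.
```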
On day one you might not have any rules defined and the AI does whatever it wants, but as you progress you can add more, especially when you see it doing things you don't want it to do.
From here we can use plan mode, which is basically forcing an intermediate step before touching the code, where the AI lays out the change as a series of steps, with an explanation for each one.
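If your tool doesn't have a dedicated plan mode, you can force the same intermediate step with the prompt itself. This is just one hypothetical phrasing, not tied to any specific product:

```text
Before writing any code, give me a plan for this change:
- the files you will touch, and why
- the steps, in order
- anything you're unsure about or that could break

Do not implement anything until I approve the plan.
```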
But what I personally prefer, in the current state of AI, is to think the plan through myself and then, little by little, tell the AI what it has to do, being very specific about those changes, exactly as I would if I were explaining the situation to a coworker.
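To make "very specific" concrete, here's the kind of difference I mean. All the class and service names below are made up for the example:

```text
Too vague (invites "most common way on the internet" code):
  "Add caching to the product endpoint."

Specific, the way I'd brief a coworker:
  "In ProductController.getById, cache the lookup through our existing
  CacheService with a 5-minute TTL, keyed by product id. Invalidate the
  key in ProductService.update. Don't add new dependencies, and follow
  the pattern already used in UserController."
```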
Of course, it's key to review absolutely everything it writes and understand it completely. If you don't understand something, do NOT commit it. That's where all the problems start, and once they start they're unstoppable.
Review AI code as if it were human code: check that it's coherent, that the tests actually test something, that it's understandable, and so on.
3 - We are still engineers
One thing I've noticed, especially online, is that a lot of people complain that "programming loses its fun", that we're going to forget how to program, that we'll lose our jobs, and other nonsense. I don't share that opinion, but I get where it comes from. For years, being a programmer was confused with being the one who types fastest: the typist, the person who translates requirements into lines of code without contributing anything else. That role is coming to an end, not because those people will vanish, but because the mechanical part is now done by a machine.
When this wave (or rather, tsunami) passes, developers are going to come out well positioned. In fact, many will come out stronger. Precisely because what will be worth more is not writing code, but knowing what code to write, why, where it fits, or what it might break. That's where AI doesn't replace you, it pushes you to level up. And for the person who only came to grind tickets, it's going to hit hard.
I don't see 100% replacement of developers as viable in companies that want to do well, because you risk shipping code that doesn't work or is impossible to maintain. To have reliable software you need people who truly understand the system and are responsible for the outcome. There is less and less room for the profile that only "meets" the bare minimum, because the quality bar rises when producing cheap code is easy. The difference is made by the person with judgment: the one who thinks about limits, trade-offs, security, maintainability... and the one who answers when something fails.
In my environment I've noticed the way we program is changing a lot. Let's be honest, nobody programs 100% of the time. For the vast majority of developers, actual development takes around 70 to 80% of the time, and the higher you go, the less time you have to program. Right now I'm coding maybe 30% of the time, yet in volume I produce almost as much as when I was at 50%. Why? Because many hours of typing are being compressed. And that's another reason why the typist is on the way out: if AI writes the code for you, the value you bring can't be "writing code".
So no, AI is not going to make you a 10x engineer, but it will increase the amount of code you can produce in the same time by roughly 30 to 40% (if you already know what you're doing). But of course: programming is only one part of the job. The rest remains basically the same: understanding the problem, talking to people, choosing the approach, reviewing, maintaining, operating, securing, and making decisions. And that's where two worlds split: the one who knows how to build software... and the one who only knew how to type it out.
Conclusion
I'll sum it all up in one sentence.
AI does not replace the engineer. It forces you to be more of an engineer.
If you don't raise your level of control and intention over the code, the only thing you'll achieve is producing garbage code at high speed. Having final control over the result is still, and will always be, your responsibility.