Is GenAI Really Going to Eliminate Software Developers?
Good software developers and software engineers are more relevant today than ever.
Every day, we read strong claims and bold statements about the death of coding and of software engineering as a practice and a profession. GenAI technology and new paradigms, such as Vibe Coding, have taken the IT industry and community by storm, creating the perception that code can be produced almost autonomously by a seemingly omnipotent armada of AI agents and services.
Indeed, GenAI is transforming all facets of society, and the IT and software industry is no exception. Nevertheless, as Mark Twain famously put it, “The reports of my death are greatly exaggerated.”
Software development and software engineering skills and expertise are more critical than ever.
Many observers and experts repeatedly warn that we are rushing to unsound conclusions. This haste creates two serious risks. First, we increasingly undervalue the criticality and complexity of building reliable, effective, and secure software systems. Second, we weaken the role of software engineering and software development, reducing, if not eliminating, the field’s appeal to new generations of students and professionals and, consequently, our ability as a society to develop a much-needed skilled and professional workforce.
One of the latest warning signals in this direction is a paper by two software engineering and machine learning experts from LinkedIn and Microsoft: Why AI coding agents aren’t production-ready: Brittle context windows, broken refactors, missing operational awareness. The authors raise several issues that deserve close attention; among the most critical are the following:
“Limited domain understanding and service limits.” AI agents are incapable of capturing the complexity of enterprise systems and domain-specific knowledge.
“Hallucinations over repeated actions.” Too often, developers are unable to overcome the problems and mistakes introduced by GenAI hallucinations, which complicates software development rather than simplifying it.
“Lack of enterprise-grade coding practices.” Coding agents tend to underestimate the complexity of coding activities, “reinventing the wheel” and producing redundant code.
“Constant need to babysit. Despite the allure of autonomous coding, the reality of AI agents in enterprise development often demands constant human vigilance.”
One may argue that this is just a limitation of current technology: as GenAI and computing capabilities scale up, these problems will be reduced, if not eliminated. Yet the idea that GenAI can keep improving simply through scaling has been vigorously disputed by many scholars, such as Gary Marcus:
[Scaling] may well be better than anything else we currently have, but the fact that it still doesn’t really work, even after all the immense investments that have been made in it, should give us pause.
These comments resonate with the view we adopted in writing our new book on software engineering. Our position is captured by a sentence in the book’s introduction:
AI can also undoubtedly assist software engineers in the development of any type of software by generating and reviewing portions of code and supporting testing and modeling activities, but it should not and cannot substitute the designer’s expertise in conceptualizing, building, and deploying practical, large-scale software solutions.
Indeed, as demonstrated by multiple experiments published in scientific papers, AI can be a valuable software engineering tool. Even if it does not produce perfect results, with appropriate inputs it can generate good initial drafts that can be reviewed and refined in an iterative dialogue between AI and humans, with humans ultimately responsible for finalizing the results.
Professionals should take into account the following guidelines:
Responsibility for content. Human authors are solely responsible for the content they produce, including code, text, diagrams, figures, and references, even when they use AI. While they may use any software or tools to support their coding, writing, and preparation, they must ensure that all materials are correct, original, and clearly documented. Human authors should be able to explain, justify, and defend every aspect of their work in detail.
Responsibility for ethical and privacy aspects. Some tools—including free or online ones—may store input data for future model training. They should be used cautiously, after reading the terms of use. The expected level of privacy and confidentiality should be maintained in all cases. Given the potential production of inaccurate or misleading (“hallucinated”) outputs, it is the authors’ responsibility to verify the accuracy and integrity of all generated material.
Accountability. If authors use AI tools in their work, they must explicitly state this within the results they deliver by describing:
Which tools were used (e.g., specific chatbots, such as ChatGPT or Gemini).
The inputs provided (e.g., prompts, datasets, source materials, parameters, or constraints).
The outputs obtained.
How those outputs have been verified, refined, and integrated.
This will enable traceability of results and, in the medium term, help improve how humans interact with AI.
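To make these guidelines concrete, here is a purely illustrative sketch of such a disclosure; the tool, dates, and details below are hypothetical, invented for this example rather than prescribed by any standard:
AI-assistance disclosure (hypothetical example):
Tool used: ChatGPT, accessed in March 2025.
Inputs provided: the module’s interface specification and a prompt requesting a first draft of the input-validation logic.
Outputs obtained: roughly 80 lines of draft code and a suggested set of test cases.
Verification and integration: the draft was reviewed line by line, two boundary-condition errors were corrected, unit tests were written independently of the generated code, and the result was reformatted to match the project’s coding conventions before being merged.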