Will AI Devour Software Engineering (SE)?


Source: Irving Wladawsky-Berger, CogWorld think tank member

“Social media provide a steady diet of dire warnings that artificial intelligence (AI) will make software engineering (SE) irrelevant or obsolete,” wrote CMU computer scientists Eunsuk Kang and Mary Shaw in their September 2024 paper, “tl;dr: Chill, y’all: AI Will Not Devour SE.” [tl;dr: a summary of a longer text]. “To the contrary, the engineering discipline of software is rich and robust; it encompasses the full scope of software design, development, deployment, and practical use; and it has regularly assimilated radical new offerings from AI.”

“Current AI innovations such as machine learning, large language models (LLMs) and generative AI will offer new opportunities to extend the models and methods of SE. They may automate some routine development processes, and they will bring new kinds of components and architectures. If we’re fortunate they may force SE to rethink what we mean by correctness and reliability. They will not, however, render SE irrelevant.”

“What’s with the cries of alarm about AI’s threats to SE?” asked Kang and Shaw. Social media is full of “angst about the imminent demise of SE at the hands of AI.” Their paper referenced a few recent articles about AI’s potential threat to SE: “The End of Programming,” which noted that “The end of classical computer science is coming, and most of us are dinosaurs waiting for the meteor to hit”; “ChatGPT Will Replace Programmers Within 10 Years”; and “NVIDIA CEO says the future of coding as a career might already be dead in the water with the imminent prevalence of AI.”

“It seems that either AI systems are so different from ‘regular’ software systems that SE knowledge has become obsolete or irrelevant, or else AI will soon take over programming, and by extension software development.” I found this last comment very telling because, as I read related articles like “The rise — and fall — of the software developer” and “Tech Jobs Have Dried Up—and Aren’t Coming Back Soon,” it became clear that software engineering is not a well-understood discipline: it is often conflated with computer programming and software development, important tasks whose scope is significantly narrower than that of software engineering.

“Software engineering (SE) is a rich, robust discipline that covers software systems from idea through their lifetime,” explained professors Kang and Shaw. SE is “the branch of computer science that creates practical, cost-effective solutions to computing and information processing problems, by applying the best-systematized knowledge available, developing software systems in the service of mankind.” It “encompasses the full scope of software systems from concept to retirement — a full spectrum of issues from understanding what problem the software should solve through overall design, tradeoff resolution, performance, reliability, sustainability, usability, fitness for purpose, programming of components, composition of components, validation, adherence to policy and standards, and evolution.”

Engineering has long been the practice of applying well-established knowledge in natural science, mathematics, and design processes to building physical things such as machinery, vehicles, and materials. The distinctive symbolic and abstract character of software raises special issues about its engineering: software is more constrained by its inherent complexity than by fundamental physical laws, software systems are design-intensive, and manufacturing costs are a minor component of the overall software product costs.

In addition, software engineering suffers from a number of myths and misconceptions, such as the notions that software is created by professional programmers writing code from a formal specification, and that software systems are built by (just) composing program modules. Such a narrow view of software leads to the misconception that “software engineering is simply programming, that all software should have specifications, and that most people creating software are trained professionals. Even though SE has since moved beyond that origin myth, the mindset lingers in the form of emphasis on having at least informal specifications, on continued focus on correctness, and on poor support for vernacular developers.”

“How, then, should SE engage with generative AI?” asked the authors. Let me summarize some of their key points.

SE has a long history of embracing radical new ideas from AI

“AI has long been a source of new programming and software development techniques that initially seem radical or impractical but acquire respectability and are eventually adopted — with their origins in AI forgotten. Many features of modern software development originated in AI and were assimilated into programming languages and techniques. This has been a process of evolution, not revolution.”

AI emerged as an academic discipline in the mid-1950s, in the very early days of the computer era. The field’s founders believed that human intelligence could in principle be precisely expressed as software and executed on increasingly powerful computers. While early software systems were mostly oriented toward computing with numeric values, AI expanded the scope of software to a broader view of computing that included manipulating symbols as well.

Over the next few decades, AI was one of the most exciting areas of computer science. The AI community introduced novel programming languages like Lisp for symbolic processing of linked data structures, and Prolog for logic programming and theorem proving.

AI brought new design approaches to software for dealing with ill-structured problems that were not well defined in advance. Expert systems, for example, were intended to solve complex problems by reasoning over bodies of knowledge represented mainly as if–then rules rather than as conventional procedural code.
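The rule-based style of an expert system can be sketched in a few lines of Python. This is a minimal, illustrative forward-chaining loop; the rules and facts are hypothetical, not taken from any real system:

```python
# Minimal sketch of expert-system reasoning: a knowledge base of
# if-then rules plus a set of known facts, with forward chaining
# (fire any rule whose conditions hold) instead of a fixed procedure.
# The rules and facts below are illustrative only.

RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)      # rule fires, conclusion becomes a fact
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
```

Note that the order of rules does not matter: the loop keeps firing rules until no new facts can be derived, which is what distinguishes this style from procedural code.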

In addition, a number of software innovations that originated in AI have become integrated into software engineering, including garbage collection for automatic memory management, backtracking for constraint satisfaction problems, and search engines.
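Backtracking for constraint satisfaction, one of the AI-born techniques mentioned above, can be shown with a classic toy problem: coloring a graph so that no two adjacent nodes share a color. The graph and colors here are illustrative:

```python
# Minimal sketch of backtracking search for a constraint satisfaction
# problem: assign each node a color so no adjacent nodes match.
# The graph and color set below are illustrative only.

NEIGHBORS = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"], "D": ["C"]}
COLORS = ["red", "green", "blue"]

def color(assignment, nodes):
    if not nodes:
        return assignment                       # all nodes colored: solution found
    node, rest = nodes[0], nodes[1:]
    for c in COLORS:
        # try c only if no already-colored neighbor uses it
        if all(assignment.get(n) != c for n in NEIGHBORS[node]):
            assignment[node] = c
            result = color(assignment, rest)    # recurse on remaining nodes
            if result is not None:
                return result
            del assignment[node]                # backtrack: undo and try next color
    return None                                 # no color works: signal failure upward

print(color({}, list(NEIGHBORS)))
```

The "undo and try the next option" step is the backtracking itself; it is what lets the search recover from dead ends without restarting from scratch.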

SE will evolve to handle generative AI

“AI has the potential to contribute to SE in several ways, provided it can become trustworthy,” wrote Kang and Shaw. These include:

  • Generative AI may raise the level of abstraction of programming, but it won’t eliminate the jobs. GenAI will contribute to programmer productivity by increasing the leverage of each line of code, as has long been the case with programming language innovations.

  • Generative AI may lead to new sorts of software system architectures. “This might take the form of new types of components and connectors, and it might take the form of variants on established components that respect the stochastic nature of generative AI.”

  • GenAI does not threaten higher-level concerns like requirements engineering, design, and reliability. GenAI-based tools have the potential to be an effective aid for context-dependent tasks that require engineering judgment, flexibility, common sense, and a great deal of tacit knowledge.

  • GenAI may improve support for vernacular programmers. “Vernacular developers — people who are not highly trained programmers or software engineers but who create and adapt software for their own goals — vastly outnumber trained programmers.” They are often professionals who develop software to solve problems within their particular fields. While tools for vernacular programmers will improve, they will likely require supervision.

SE needs to re-think its concept of correctness; generative AI may force SE to do this

“Although SE’s fixation on formal correctness has softened somewhat over the years, there is still a widespread cultural mindset that specification and correctness are major objectives, even if full verification is unachievable,” wrote the authors. “This traditional view of correctness is inadequate to serve many conventional software systems.”

“We suggest that ‘fitness for intended purpose’ is a better goal than formal ‘correctness’,” they added. “Traditionally, establishing correctness relies on the existence of a formal specification that unambiguously captures the intended behavior of a piece of software. … ‘Fitness for purpose’ retains the ability to mean traditional correctness for critical systems where formal proof is the best way to establish fitness. However, it also recognizes the legitimacy of more informal ways to show that software is ‘good enough’ for its purpose.”

“Expressing the developer’s intent through natural language prompts will likely emerge as the dominant paradigm for writing software in the future. The code generated by AI tools must eventually be validated against the original intent, which, unfortunately, will rarely be available as a formal specification and will likely be imprecise.”
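One informal way to validate generated code against intent, in the absence of a formal specification, is to record the intent as concrete input/output examples and check the code against them. The sketch below is illustrative: the `generated_slugify` function stands in for hypothetical AI-generated code, and the examples are invented:

```python
# Sketch of "fitness for purpose" checking without a formal spec:
# validate a (hypothetically AI-generated) function against recorded
# examples that capture the developer's intent. All names and data
# below are illustrative only.

def generated_slugify(title):
    """Stand-in for AI-generated code under review."""
    return "-".join(title.lower().split())

INTENT_EXAMPLES = {            # informal spec: input -> expected output
    "Hello World": "hello-world",
    "AI and SE": "ai-and-se",
}

def fit_for_purpose(fn, examples):
    """Return True if fn reproduces every recorded example of intent."""
    return all(fn(inp) == out for inp, out in examples.items())

print(fit_for_purpose(generated_slugify, INTENT_EXAMPLES))
```

Such example-based checks do not prove correctness, but they match the paper’s point: they can show that generated software is “good enough” for its purpose when the original intent was never written down formally.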

“Generative AI is now eagerly inflating our aspirations, but its capability is not yet trustworthy and robust enough to be part of the stable core of SE methods,” wrote Kang and Shaw in conclusion. “AI is already demonstrably useful under careful supervision, and we can expect its utility for routine programming tasks to improve quickly. It may serve as an assistant for higher levels of design, but the tacit knowledge that drives those activities is largely inaccessible to AI training sets, so experienced software engineers will retain the initiative. For the foreseeable future, though, we expect AI outputs to best be treated as suggestions for review.”


Irving Wladawsky-Berger is a Research Affiliate at MIT's Sloan School of Management and at Cybersecurity at MIT Sloan (CAMS) and Fellow of the Initiative on the Digital Economy, of MIT Connection Science, and of the Stanford Digital Economy Lab.