For much of the history of computing, programming was an exercise in precision, logic and algorithmic design. In the earlier decades, when memory was scarce and compilers rudimentary, the intellectual work of writing software was inseparable from crafting efficient algorithms. Syntax mattered, of course, but the heart of the craft was in reasoning about control flow, data structures and performance. Great programmers distinguished themselves by the cleverness of their solutions and the elegance of their designs.
Over the past decade or so, however, the practical work of building software has shifted dramatically. Modern ecosystems are suffused with libraries, frameworks and services that encapsulate huge amounts of domain expertise. Want a web server? There are half a dozen mature frameworks that spin one up with a few lines of code. Need a database migration tool, an authentication stack, a JSON API, a charting UI? All are available as packages maintained by communities. In this world, the programmer’s role often becomes one of assembly: the careful selection and configuration of existing components so that they interoperate. The core challenge is less “how do I sort numbers” or “how do I parse a protocol” than “how do I wire library A to library B to satisfy use case C?”
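To make the "few lines of code" claim concrete, here is a minimal sketch using only Python's standard library; the handler class, the response body and the use of an ephemeral port are illustrative choices, not drawn from any particular framework mentioned above.

```python
# Sketch of "spinning up a web server in a few lines of code".
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello, world"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port, serve on a background thread, fetch once.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
reply = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/").read()
server.shutdown()
print(reply.decode())  # Hello, world
```

A mature framework reduces even this further, which is precisely the point: the hard part is no longer the server, but deciding what it should do.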
This composition-centric phase of software engineering has obvious value. It dramatically reduces the time to prototype and ship. It allows a much broader set of people to create non-trivial applications. But it also alters the nature of expertise. The bottleneck shifts from pure algorithmic reasoning to systems literacy: knowing what exists, what versions play well together, and how to manage complexity at the level of dependencies, deployment and operational behavior. Naming, in this environment, remains important, but it is often dwarfed by other concerns: build pipelines, system reliability, backward compatibility and the like.
The advent of large language models (LLMs) and AI-based coding assistants such as GitHub Copilot, OpenAI Codex, and others is now quietly rebalancing this equation. These tools can generate code from natural language prompts, translate informal descriptions into concrete API calls, and even stitch together multi-component systems with minimal human intervention. The primitive work of remembering syntax and boilerplate is largely offloaded to the model. The developer’s job becomes, in essence, the problem of description: precisely capturing what needs to be done in a form that the AI can interpret reliably.
This shift has two important consequences. First, it brings to the fore what many humanists and computer scientists have long recognized: clarity of expression is a form of cognitive leverage. In mathematical tradition, particularly in Indian classical mathematics and Taoist philosophical approaches to reasoning, emphasis is placed on deep structural understanding and the elegant naming of concepts. Translating an idea into words with precision is not trivial; it is itself a form of thought. In the presence of AI assistants, this act of translation — from human intent to descriptive specification — literally becomes the primary input to the programming process.
Second, this reorientation exposes a gap in the skill sets of many highly technical practitioners. Mathematically gifted individuals are often comfortable with abstract structures, formal reasoning and algorithmic nuance. But natural language is messy, context-dependent, and frequently underspecified. Being able to describe a problem in colloquial language that nonetheless captures all the necessary constraints, edge cases and interactions is not a skill that follows automatically from mathematical fluency. The intelligent use of AI tools thus privileges not just logical rigor but linguistic precision and an ability to externalize tacit knowledge in forms that an AI can meaningfully parse.
This growing emphasis on human-AI collaboration invites us to rethink what it means to “program.” Historically, programming meant writing lines of code. Now it often means writing lines of specification. And specification is, by definition, a linguistic act. It is about choosing the right words, ordering them in a way that eliminates ambiguity, and crafting the context that makes the resulting output correct.
Agent specification files such as AGENTS.md, along with other structured configuration formats, exemplify this transition. They are not algorithmic in the traditional sense. They don't encapsulate control flow or data transformations directly. Instead, they describe roles, responsibilities and interaction patterns between components — a meta-level description of the behavior we want systems to exhibit. Tools like spec-kit take this concept further by generating such descriptive files automatically, and in doing so they reinforce the idea that the act of naming and specifying is itself the central work.
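What such a file looks like can be sketched briefly. The fragment below is purely illustrative: the headings, role and rules are invented for this essay, not drawn from any real project's AGENTS.md.

```markdown
# AGENTS.md (illustrative sketch)

## Role
You are a maintenance assistant for this repository.

## Responsibilities
- Keep public APIs backward compatible unless a task says otherwise.
- Run the test suite before proposing any change.

## Interaction patterns
- Ask for clarification when a requirement is ambiguous.
- Summarize each change as a short, reviewable description.
```

Notice that nothing here is executable; every line is a carefully worded expectation, which is exactly the meta-level description the essay is pointing at.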
There is a historical parallel here with mathematics. In mathematics, especially in advanced fields, the choice of notation and the definitions we use are not neutral. A well-chosen notation can make an intractable proof trivial; a poor one can shroud insight. Some of the most profound advances in mathematics have come from re-framing a problem with the right conceptual language. When category theory reframed many disparate constructions under the umbrella of arrows and objects, it did not invent new theorems so much as reveal deep common structure. Programming with AI may be analogous: the power does not originate in any particular library call but in how the developer frames the intent.
This is why many mathematically brilliant colleagues struggle with the new tools. Their training has optimized for rigorous abstraction and internal consistency, but not necessarily for communicative clarity in everyday language. They may have deep insight into what a system ought to do, but expressing that in a way that an AI can operationalize is an acquired skill. It requires an ability to think in terms of scenarios, user stories, concrete examples and edge cases. It requires an awareness of context that lies outside the formalism of equations and proofs.
In practical terms, this suggests that teaching and practice in software engineering should adapt. We should not simply teach frameworks and algorithms; we should teach how to frame problems, how to decompose requirements into stable, unambiguous descriptions, how to iteratively refine prompts and specifications with AI assistants. We should value linguistic precision as a technical competence on par with algorithmic insight.
If programming is increasingly the act of finding the right words, then mastery of language becomes not an optional adjunct to technical skill but one of its core dimensions. In this sense, the modern programmer increasingly resembles the figure of Cyrano de Bergerac: not the originator of desire or intent, but the one who supplies articulation where it is lacking. A future in which machines generate code from human description is not a future without human programmers; it is a future in which human programmers are precisely those who know what to ask for, and how to say it, even when the final words are spoken in another voice.
