AIDrivenProgrammingParadigm
ThoughtStorms Wiki
Context: ComputerAidedProgramming
Will AI give us new ProgrammingParadigms?
Continuing this thinking. OliSharpe wrote a post on LinkedIn and I responded with this "manifesto" :
Well, I disagree that there's much similarity between AI programming assistance and graphical / visual programming. 😀 Graphical programming has always been a delusional red herring. Whereas "chat as interface" is a very substantial innovation in the UX of coding, and IMHO changes everything. (It's certainly the most fun programming I've ever had in my life.)
But I agree with the broader point.
At the risk of regurgitating my standard cliché yet again, we didn't invent maths because we couldn't speak English. Humans are experts at natural language but we still turn to formal notation when we need to communicate, even with other humans, concisely and unambiguously.
This will continue to be true in software development. Natural language is extremely useful for communicating with computers. But there's no one-way arrow of evolution where natural language, with all its ambiguities and verbosity, 100% replaces the need for concise, formal and unambiguous expression of our requirements.
Quite the opposite, the two types of language are complementary. Software development is going to be done through a mixture of both.
What's exciting, I believe, is that right now, nobody knows the right ways to blend the informality of natural language with specific gestures of precision in formal notations.
And this is a huge, open space for us all to explore and experiment in.
I'm sure everyone's initial guesses are going to be wrong. But it will be hugely productive when we figure it out.
What I recommend, while everyone is chasing vibe coding, is zagging to the zig, and going back to explore and learn about formal notations, advanced type systems, proof-checkers, rules-engines, domain specific languages and other advanced areas of formal notation and formal specification.
Because these will become increasingly important as ways of constraining and managing generative AI.
However, the good news is that we won't use them in the traditional way. Now it will be easier to get into these things. AIs can teach us about them. AIs can help us write them. MCP can help integrate them into our workflow.
In the last six months, the software tooling stack has completely changed. We now have a swarm of MCP servers that can run compilers, linters, static type-checkers, rules engines, etc. And the multi-agent workflow tools that everyone is developing for other applications can also be used to orchestrate all this for the benefit of assisted developers.
So, my thinking is that the LLM, speaking natural language, should become a front-end to the swarm of more formal languages and more formal tools. It can help us navigate those things, identifying which ones are useful and firing them off as and when needed.
My current experiments (which I hope to demonstrate to you soon) are based on the idea that we feed in our precise requirements as snippets of DSLs, for things like UI layouts, data-schemas or state-machines. The AI hands those off to specific compilers to expand into real code, then takes that code and integrates it back into the code-base.
It's still an open question how well it will integrate that code back without introducing new mistakes of its own. I've certainly seen it sometimes try to bypass calling the compiler and just do its own interpretation of the snippet of DSL. But I think/hope this can be resolved with the right system prompts etc.
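To make that concrete, here's a minimal sketch of the hand-off in Python, with hypothetical names throughout: a made-up `ui-dsl-compile` CLI stands in for the real DSL compiler, and the generated code gets spliced into the code-base between marker comments, so the LLM orchestrates the step rather than "interpreting" the DSL itself.

```python
# Sketch only: "ui-dsl-compile" is a hypothetical CLI standing in for whatever
# deterministic DSL compiler the tool wraps.
import subprocess
from pathlib import Path

def compile_snippet(dsl_snippet: str) -> str:
    """Hand the DSL snippet to the (hypothetical) compiler; return the generated code."""
    result = subprocess.run(
        ["ui-dsl-compile", "--target", "js"],
        input=dsl_snippet,
        capture_output=True,
        text=True,
        check=True,  # fail loudly rather than letting anything improvise
    )
    return result.stdout

def splice_into(path: str, generated: str, marker: str = "GENERATED-UI") -> None:
    """Replace everything between BEGIN/END marker comments with the fresh output."""
    begin, end = f"// BEGIN {marker}", f"// END {marker}"
    text = Path(path).read_text()
    head, _, rest = text.partition(begin)
    _, _, tail = rest.partition(end)
    Path(path).write_text(head + begin + "\n" + generated + end + tail)

# e.g. splice_into("app.js", compile_snippet(Path("layout.dsl").read_text()))
```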
Like I say, this is a huge open unexplored area that's really exciting to play in. 😁
In summary: for me, the question isn't really "will LLMs ever be good enough to get the last 20% of the code right by themselves?"
Or whether they'll ever miraculously figure out EXACTLY what we want when given only a rough description of our requirements in ambiguous natural language. (Answer: obviously not.)
The question is "can they be good enough to manage a bunch of dumber, deterministic machines that we still send our requirements to in formal notation?"
And I'm optimistic they can. And that this is going to change everything. 🥳
This is now VibeCoding?
Chat Oriented Coding : https://sourcegraph.com/blog/chat-oriented-programming-in-action
https://solveit.fast.ai/#learn-more
Virtual Machinations: Using Large Language Models as Neural Computers : https://queue.acm.org/detail.cfm?id=3676287
Thoughts on the next programming
I just wrote this on LinkedIn. (https://www.linkedin.com/feed/update/urn:li:share:7334310690427465730/) Putting it here to digest
ChatGPT gets me started : https://chatgpt.com/c/683a4928-e80c-8010-bb3a-3ce5d1e7c688
I'm a programmer. And what programmers love to do is either invent their own programming language, or at least have some fantasies along those lines. We programmers rightfully laugh at ourselves for this. But not too much. Because where do all the new programming languages come from, if not from idealist dreamers having a go?
This includes me, of course. I have years of notes and ideas and half-started sketches for new programming languages. (Although nothing ready to show anyone else yet. 😉 )
But a couple of years ago, I started worrying that the whole thing was pointless.
Because AI.
After all, if AI is now going to replace programming with natural language, aren't new programming languages going to be unnecessary? I didn't think that AI would replace all programming or all programming languages. But probably it would work OK for translating mostly natural language requests into mostly C/Java/JS. So why invent the next programming paradigm?
Well, in the last few months I've pretty much done a 180 on that.
I'm now more bullish on the idea of new programming languages and programming paradigms than I've been for a long time.
I love programming with AI. I do it all the time. Not exactly "vibe coding", if by that you mean asking simple prompts and trusting the AI. I do everything by asking prompts, but I look at the output pretty carefully. I "micromanage". I unit test. And I ask a lot of "why did you do this, that way?" type questions afterwards.
And what I'm starting to feel is this : the AI does a huge amount of work for me. But what it's mainly doing is helping me cope with what programmers think of as "the unnecessary complexity" of writing software.
To give a simple example, one of the biggest gripes I've had for years is why code-bases take so much faff to navigate.
Why, I asked 20+ years ago, can't we add hyperlinks in our editors, so we can quickly add a link in a comment to jump to a related bit of code in another file? Why, I wonder now, can't the IDE dynamically fill a buffer with just the functions relevant to the current problem I'm working on, allow me to edit them there and then, and then slot the functions back into the actual files they came from, afterwards?
Well, I've never seen an IDE that could do that. But Cursor, when I ask it to change something, does indeed let me jump from one point of intervention to another, from file to file, without searching through tiny tree navigation or having to remember where everything is.
It's not AI "coding" that's the value here. It's AI assisted navigation.
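For what it's worth, here's a rough sketch (in Python, names invented) of the "fill a buffer, edit, slot it back" idea from a few paragraphs up: extract one top-level function from a file, and later write the edited version back over exactly the lines it came from.

```python
# Sketch only: pull one top-level function out of a file by name, then slot an
# edited version back into the lines it came from. Decorators are ignored for
# simplicity.
import ast
from pathlib import Path

def extract_function(path: str, name: str) -> tuple[str, int, int]:
    """Return (source, first_line, last_line) for the top-level function `name`."""
    src = Path(path).read_text()
    for node in ast.parse(src).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name == name:
            lines = src.splitlines()
            return "\n".join(lines[node.lineno - 1 : node.end_lineno]), node.lineno, node.end_lineno
    raise ValueError(f"No top-level function {name!r} in {path}")

def slot_back(path: str, first_line: int, last_line: int, edited_source: str) -> None:
    """Write the edited function back over lines first_line..last_line (1-based, inclusive)."""
    lines = Path(path).read_text().splitlines()
    lines[first_line - 1 : last_line] = edited_source.splitlines()
    Path(path).write_text("\n".join(lines) + "\n")
```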
Now when I look at the code I and the AI write together I increasingly think : 95% of this code and the work in the editor is still "boilerplate" and "unnecessary complexity". We could still compress this to a 20th of its size while capturing whatever real novelty I'm describing.
This is a great opportunity for new programming languages and paradigms.
We now have the taste for trying to make software by writing very little.
But we then struggle to keep the AI on track, because ambiguities in the natural language we use to write our prompts allow it to go off in unwanted directions. We spend time pushing the AI back towards what we really wanted. And then, sometimes, it gets itself into a knot as it fails to truly capture what we wanted, or struggles, itself, to overcome some particularly awkward difficulty in rendering an algorithm correctly.
But could a new programming language or paradigm be "snuck in" via this infrastructure?
What would it look like? At first blush (and note I'm freestyling here as I write this; I've thought of some of this before, but some is new) it would look like a bunch of "little" or "domain specific" languages to unambiguously describe certain things. This could be data-schemas. Or UI layouts. Or wire protocols. Etc.
We'd then ask the AI to turn these into appropriate code. (For example, I've asked ChatGPT to turn an Instaparse grammar (more or less BNF-like) into a working parser in Haxe. And it does a pretty good job.)
However, at some point, if you've got a formal little language you want to turn into some formal object code, you don't really want the quirkiness and stochasticity of an LLM to do that for you every time. You want something mechanical and reliable.
So why not write a little compiler for it? And how would you integrate this compiler with the AI? Well, in 2025, the obvious answer is an MCP server. (Model Context Protocol is becoming the standard way for LLMs to use other tools.)
So we'd provide our new development paradigm in the form of a bunch of little compilers for little languages, as MCP tools. Then allow our AI to orchestrate them together. We'd say "here's my data-schema ... please use the tool to make this into Java objects". The LLM would pass the schema to the compiler as a parameter, take the resulting Java code, and slot it into the appropriate parts of the code-base.
This would be even more important for schema changes. The schema change can be represented formally in the little-language. The compiler would make the new Java code. And again, the LLM could direct the editor where to patch the changes in.
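As a toy illustration of such a little compiler as an MCP tool: the sketch below assumes the official Python MCP SDK's FastMCP helper and invents a one-line schema DSL; a real version would obviously handle a much richer language.

```python
# Sketch only: a deterministic "little compiler" for a made-up one-line schema DSL
# ("Person: name str, age int"), exposed as an MCP tool so the LLM can call it
# instead of hand-writing the Java itself. Assumes the official Python MCP SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("schema-compiler")

JAVA_TYPES = {"str": "String", "int": "int", "bool": "boolean", "float": "double"}

@mcp.tool()
def compile_schema(schema: str) -> str:
    """Compile one schema line into a Java class skeleton, mechanically."""
    class_name, _, fields = schema.partition(":")
    decls = []
    for field in fields.split(","):
        field_name, field_type = field.split()
        decls.append(f"    private {JAVA_TYPES[field_type]} {field_name};")
    return f"public class {class_name.strip()} {{\n" + "\n".join(decls) + "\n}\n"

if __name__ == "__main__":
    mcp.run()  # stdio transport; the coding agent calls compile_schema as a tool
```

The point of the sketch is the division of labour: the schema-to-Java step is boringly deterministic, and the LLM's job is only to decide when to call it and where to patch the result in.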
That's just a first sketch, but I think it's a plausible way we might evolve software development : to combine the virtues of the AIs and AI ecosystem, with the need for more formality and reliability.