Pretty good, yet obvious to a master programmer, which I define as someone who programs at the conceptual level and understands that thinking about code deserves far more attention than the mere effort of banging out code to realise a specific desired output.
Because most programmers are NOT master programmers, they just bang out code as a means to an end - code that other people cannot read and from which they cannot immediately retrieve the high-level concepts behind it.
Master programmers instead make sure that at the highest level the code first explicitly encodes the conceptual design, and then at a lower level of abstraction does all the necessary things that let the code actually produce the required program result.
A master programmer never divorces the logical conceptual structure ("the theory" in the terminology of this article) from the implementation code.
Most programmers, however, bang out code in ways that quickly and radically diverge from the conceptual design, and they don't care: as long as the program produces the desired results, the code is "perfect" in their minds - never mind that they themselves won't be able to understand the code they wrote today in six months' time!
A master programmer can go back to a code base that he hasn't seen in 10 years and be productive on it within an hour, because all the core concepts ("the theory") are explicitly encoded in the program source code.
Not impossible that I misread the article, but isn’t one of the points that the theory can’t be explicitly encoded in the code? Again, perhaps I ought to reread it, and I don’t disagree with the general thrust of your comment.
No it's not. We build programs with less insight into theory than a civil engineer who builds bridges.
It's all black box experimental testing with little theory. That's why we have unit tests.
When there's a complete theory about something, you can actually build that something very accurately as a theoretical model and rely on that model as an accurate blueprint for the real thing. No such theory exists for programming. The complex application you designed has no theory behind it at all.
Bridge builders, on the other hand, first build the bridge theoretically before building the actual bridge. There's some testing, but physical engineers don't rely on testing as much as programmers do.
Imagine if we built airplanes the same way we build programs. Let's pretend airplane builders just used their gut feelings like programmers do, and then verified the built planes with a suite of unit tests only.
First off, the cost would be insane, since a failed unit test would likely mean a destroyed plane.
That is why all engineers besides software engineers rely more on theory, while software engineers rely more on the scientific method, which is largely less effective.
> No such theory exists for programming. The complex application you designed has no theory behind it at all.
The theory isn't a theory of programming or software engineering, but of the problem domain. From the paper:
> In terms of Ryle's notion of theory, what has to be built by the programmer is a theory of how certain affairs of the world will be handled by, or supported by, a computer program.
I've worked on plenty of commercial software with a theory of how some particular domain ought to be handled. A business process is almost always the starting point for the theory.
Similarly, software like Make (targets + recipes), Xorg (client-server model) or TiddlyWiki (reusable content + transclusion) have specific theories about how their respective tasks ought to be thought of.
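To make the Make example concrete, here is a minimal, hypothetical sketch (in Python, not Make's actual implementation) of the theory "targets + recipes": a target is stale if it is missing or older than a prerequisite, and rebuilding a target makes everything that depends on it stale in turn. All file names and timestamps below are invented for illustration.

```python
# A minimal sketch of Make's "theory": targets, prerequisites, and the rule
# "rebuild a target if any prerequisite is newer than it". Hypothetical data.

def stale(target, prereqs, mtime):
    """A target is stale if it is missing or older than any prerequisite."""
    return target not in mtime or any(
        mtime.get(p, float("inf")) > mtime[target]
        for p in prereqs.get(target, [])
    )

def build_order(goal, prereqs):
    """Depth-first post-order: prerequisites come before the targets needing them."""
    order, seen = [], set()
    def visit(t):
        if t in seen:
            return
        seen.add(t)
        for p in prereqs.get(t, []):
            visit(p)
        order.append(t)
    visit(goal)
    return order

prereqs = {"app": ["main.o", "util.o"], "main.o": ["main.c"], "util.o": ["util.c"]}
mtime = {"main.c": 5, "util.c": 1, "main.o": 2, "util.o": 3, "app": 4}

clock = max(mtime.values())
plan = []
for t in build_order("app", prereqs):
    if stale(t, prereqs, mtime):
        clock += 1
        mtime[t] = clock  # "rebuild": the target is now newer than its prereqs
        plan.append(t)
print(plan)  # ['main.o', 'app']
```

Note how the cascade falls out of the theory rather than being special-cased: main.c is newer than main.o, so main.o is rebuilt, which in turn makes app stale.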
So can you prove things with this "theory"? Are there theorems? What are the axioms?
More like there is no theory. You just build it off of your gut feeling. Maybe you use big words like "dependency injection" or "domain knowledge" to make it feel like you have a theory, but really you don't.
When you have a theory, aspects of the design are calculated. Diagramming the architecture on a whiteboard is not theory. It's still gut feelings.
The underlying model can be thought of as the axioms. Going back to the Make example, it is trivial to think about axioms that describe the system's base abstraction ("Let T be a set representing targets...."). Generally speaking, the theory backing a system gets used to prove a few things.
1. Given a set of requirements we want to make sure that our model for the system can satisfy those requirements. If you cast your model as a series of logical formulae, then you can certainly decide to frame questions of requirement satisfaction as proofs or derivations.
2. Does a given codebase implement the theory/model we postulated? This is generally a code or design review conversation, but could certainly be the subject of formal verification.
3. Given a new requirement (formula) can we handle it with the existing system?
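Point 1 can be made concrete with a small sketch. Continuing the hypothetical Make model from earlier in the thread, one axiom of the theory is that the prerequisite relation is acyclic (otherwise no build could ever terminate), and that property can be checked mechanically against a given rule set. The rule sets below are invented for illustration.

```python
# Checking an "axiom" of the model: the prerequisite relation must be acyclic.
# Uses a standard DFS three-color cycle check over a hypothetical rule set.

def acyclic(prereqs):
    """True iff no target transitively depends on itself."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(t):
        if color.get(t, WHITE) == GRAY:   # back edge: a cycle exists
            return False
        if color.get(t, WHITE) == BLACK:  # already verified acyclic
            return True
        color[t] = GRAY
        ok = all(visit(p) for p in prereqs.get(t, []))
        color[t] = BLACK
        return ok
    return all(visit(t) for t in prereqs)

good = {"app": ["main.o"], "main.o": ["main.c"]}
bad = {"a": ["b"], "b": ["a"]}  # violates the axiom: the build never terminates
print(acyclic(good), acyclic(bad))  # True False
```

The same pattern scales to point 2 in spirit: a reviewer (or a formal tool) checks whether the code's actual dependency structure still satisfies the properties the theory promised.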
The third option feeds into another of Naur's points throughout the paper, namely that there are limits to how far a theory can evolve before it must be tossed out. If you can think of systems you have maintained that ceased to make any kind of sense (a programmer once described a system we were working on as a Winchester House for this very reason), then you have reached the point where the theory is lost, which the paper equates with program death.
I do find it curious that you refer to design patterns and diagramming. In my mind, design patterns are a way to communicate implementation details to others working on the codebase. Saying a service uses dependency injection is like saying a building uses an A-frame. Useful information to be sure, but no one thinks of "use an A-frame" as an architecture in itself.
Similarly, diagramming is always a method of communicating. The map is not the territory. Some methodologies (I'm looking at you, UML craze) forgot that, but this paper far predates that particular period of folly.
Depends. There's a pretty good theory on building compilers, for instance.
One interesting observation made in the post, on the other hand, is that the source code of a program does not necessarily contain the idea(s) behind the program, or even its intent. I can attest to that - it is indeed often the case that the people involved in writing or updating a program's code have different levels of (mis)understanding of the code itself and, sure enough, of the "theory" behind it. (Unit tests reflect this situation.)
"Depends" overstates it. There are exceptions where it "depends", but the overwhelming majority of programs written are not theory-based at all.
So much so that the exceptions are negligible. The language a compiler compiles is theory-based, but the implementation of that compiler is in large part unit-tested rather than theory-based.