Note the move to falsificationism here: Fortify's Chess adds that there has been a fundamental philosophical shift in how we approach source code analysis. Early researchers were interested in program correctness, he says; the goal was to prove that my program will, under all circumstances, compute what I intend it to compute. Now, he says, the emphasis has shifted to a more tractable form of proof: showing that there are specific undesirable properties my program does not have. Buffer overflows and deadlocks are examples of such properties. (Compare GenerateAndTestInParallel)
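The falsificationist mindset above can be sketched in a few lines: rather than trying to prove a routine correct for every input, generate inputs and hunt for one that exhibits a specific bad property. A toy Python sketch (all names here are hypothetical illustrations; Python has no buffer overflows, so an uncaught KeyError stands in for the defect):

```python
import random

def buggy_lookup(table, key):
    # Intentionally fragile: assumes the key is always present.
    return table[key]

def falsify(prop, inputs):
    """Return an input for which prop fails, or None if none was found."""
    for x in inputs:
        if not prop(x):
            return x
    return None

def no_keyerror(key):
    # The specific property we try to falsify: lookup never raises KeyError.
    table = {i: i * i for i in range(10)}
    try:
        buggy_lookup(table, key)
        return True
    except KeyError:
        return False

random.seed(0)
candidates = [random.randint(0, 20) for _ in range(50)]
counterexample = falsify(no_keyerror, candidates)
# counterexample is a key outside the table's range, demonstrating the defect.
```

Note that a counterexample here disproves one narrow property; it says nothing about full correctness, which is exactly the trade the shift describes.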
How does hacking compare with this kind of brute-force, automatic testing and analysis? Is this the end of the lone programmer? Or can there be elements of this analysis that can help the individual with his own code? Maybe Lint ...
At the heart of every source code analyzer are the rules that describe patterns of error. Analyzers provide a general set of rules and typically enable customers to add new rules codifying knowledge of their own systems and programming practices. The analyzer included with Compuware's DevPartner Studio, for example, can be extended with rules that match patterns in the text of source code. According to product manager Peter Varhol, this technique is often used to enforce rules about coding style.
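A minimal sketch of such a text-pattern rule set, in Python: each rule pairs a name with a regular expression matched line by line against source text. The rule names and the sample C snippet are hypothetical illustrations, not DevPartner's actual rule syntax.

```python
import re

# Hypothetical rules in the spirit of text-pattern analyzers:
RULES = [
    ("avoid-gets", re.compile(r"\bgets\s*\(")),  # classic overflow risk
    ("no-tab-indent", re.compile(r"^\t")),       # style: indent with spaces
    ("todo-left-in", re.compile(r"\bTODO\b")),   # style: no stray TODOs
]

def scan(source):
    """Yield (line_number, rule_name) for every rule violation found."""
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES:
            if pattern.search(line):
                yield (lineno, name)

c_source = '''#include <stdio.h>
int main(void) {
\tchar buf[8];            /* TODO: size this properly */
\tgets(buf);
\treturn 0;
}
'''
violations = list(scan(c_source))
```

Customer-specific knowledge goes in by appending (name, regex) pairs to RULES; the scanning machinery stays generic.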
- example?: http://www.aivosto.com/project/features.html
- Analysis of commits to source-code repositories : http://lemonodor.com/archives/001216.html