I am all about working with multiple representations, combining induction with deduction, merging probability with logic, etc.
When we write software, we desire it to be correct, and demonstrably so ("demonstrably" in the sense of being able to convince someone: perhaps "demoably" is a better term). There are many ways of doing this:
* empirical testing: we run the program and see that it does the right thing on particular inputs.
* agreement: testing, not against the world or against our own belief of what the program should do, but against an "independent" implementation (see the sketch after this list). We could unify these two by defining the concept of an "agent", which could encompass a human test-judger, another program, etc.
* formal proving: more definite than the above two, but we have to rely on the system in which the proof is built.
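To make the "agreement" idea concrete, here is a minimal sketch (my own example, not anything from the text above): a hand-written sort is judged not against a specification, but against an independent implementation, Python's built-in `sorted`, on randomly generated inputs.

```python
# Hypothetical illustration of "agreement": two independent implementations
# act as each other's judge on randomly generated inputs.
import random

def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        # walk left past every element greater than x, then insert
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

for _ in range(1000):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert insertion_sort(xs) == sorted(xs), xs  # the two "agents" must agree
```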
Software development routinely involves all of the above, except that the "proving" is only informal. But (as would be typical of me to say) we do make real deductions when writing or judging a piece of code.
Mainstream software development follows an engineering / instrumentalist epistemology: developers know that the program has bugs, but they don't mind them as long as the program is still useful. As a consequence, they give up on formal proofs and are satisfied if the program passes a particular class of empirical test cases.
The purpose of test cases is often to convince one that the program would work on a broader class of inputs (otherwise all you would be demonstrating is that the program works on a non-interactive demo). This is induction. Perhaps the most important epistemological question for software is: how far and how confidently can one generalize from a set of test cases?
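One partial, practical answer (my example, using the Hypothesis library as an assumed tool) is property-based testing: instead of a handful of fixed demo inputs, we state the property we are willing to generalize, and the framework samples the input class for us. The generalization is still inductive, but at least the class being generalized over is explicit.

```python
# Sketch: the property is stated over *all* lists of integers, and Hypothesis
# checks it on a finite random sample drawn from that class.
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def sorting_property(xs):
    ys = sorted(xs)
    assert len(ys) == len(xs)                       # no elements gained or lost
    assert all(a <= b for a, b in zip(ys, ys[1:]))  # output is ordered

sorting_property()  # runs the assertions on many generated lists
```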
Another question has to do with the dynamic development process: when and how should we test? Common-sense testing methodology tells us to write simple test cases first. This is Occam's razor.
Btw, this is similar to mathematical epistemology: how do we combine experimental mathematics with normal deductive math? How do we combine different pieces of intuitive knowledge in a consistent, logical framework?
--
If you ask a doctor (or any specialist) to write down probabilities about a particular domain that he/she knows about, these numbers will almost certainly be susceptible to a Dutch book. Bayesians consider this to be a bad thing.
I believe that, by playing Dutch book against him/herself, our doctor would arrive at better estimates. I would like to see experiments in which this is done. Actually, I should probably write some of this software myself. The input is a set of initial beliefs (probabilities), and the output is a Dutch-book strategy. This Dutch-book strategy corresponds to an argument against the set of beliefs. It forces our specialist to reevaluate his/her beliefs and choose which one(s) to revise. This is like a probabilistic version of Socratic dialogue.
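Here is a minimal sketch of what that software might look like; this is my own construction under stated assumptions, not an existing system. Beliefs are modeled as posted prices for unit bets on events over a small set of possible worlds, and a linear program (via scipy) searches for a stake vector that wins money in every world, which exists exactly when the prices cannot be extended to a probability distribution (de Finetti's coherence criterion). The event names and numbers in the example are made up.

```python
# Sketch of the proposed tool: input = stated beliefs (prices for unit bets),
# output = a Dutch-book strategy (stake vector), or None if the beliefs are coherent.
import numpy as np
from scipy.optimize import linprog

def find_dutch_book(worlds, events, prices, max_stake=1.0):
    """events[i] is the set of worlds in which event i occurs; prices[i] is the
    specialist's stated probability.  Returns stakes s such that the bettor's
    payoff  sum_i s_i * (1[event i] - prices[i])  is positive in every world."""
    n = len(events)
    X = np.array([[1.0 if w in ev else 0.0 for ev in events] for w in worlds])
    P = np.array(prices)
    # Maximize t subject to: for every world w, sum_i s_i*(X[w,i]-P[i]) >= t,
    # with |s_i| <= max_stake.  Decision variables are [s_1..s_n, t].
    c = np.zeros(n + 1); c[-1] = -1.0                      # minimize -t
    A_ub = np.hstack([-(X - P), np.ones((len(worlds), 1))])
    b_ub = np.zeros(len(worlds))
    bounds = [(-max_stake, max_stake)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    s, t = res.x[:n], res.x[-1]
    return s if t > 1e-9 else None                         # t > 0: sure profit

# Made-up specialist beliefs: P(A)=0.7, P(B)=0.5, P(A and B)=0.1.
# These are incoherent (they force P(A or B) = 1.1), so a book exists.
worlds = ["A&B", "A&~B", "~A&B", "~A&~B"]
events = [{"A&B", "A&~B"}, {"A&B", "~A&B"}, {"A&B"}]
print(find_dutch_book(worlds, events, [0.7, 0.5, 0.1]))
```

The returned stake vector is the "argument": it names a combination of the stated beliefs that can all be bet against at once, and the specialist then decides which price(s) to move; that is the Socratic step.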
--
Do you see the connection between the above two? Please engage me in dialog!