According to the History of Programming Languages website (currently offline), there have been somewhere around 8,500 programming languages developed since the Autocodes of the early 1950s. The Internet Archive has scraped HOPL's taxonomy, as well as a few references and at least the primary year indexes (the year of my birth, for example).
For what it's worth: I never got the impression that the challenge was a "pick-a-winner" competition of systems. If you look at the initial posting, it seems clear to me that the organizers just wanted to get a very general "sense of the field", stimulate discussion, and see some of the approaches people would take on benchmark problems (which are still "toy-sized" relative to industrial PL problems, but whatever).
As a (now-ex) paid PL designer (not a logical-framework expert, much less the author/designer of one) I still don't know how to evaluate the POPLmark results. I don't understand the submissions especially well, I can't give a particularly compelling description of the difference between a HOAS solution and a locally nameless one; I can barely decipher the simplest proofs in each framework, and certainly couldn't make any of my own at this point, about any of the languages I've worked on.
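For readers who, like me, find the distinction slippery: roughly, a HOAS encoding reuses the metalanguage's own binders to represent object-language binders, while a locally nameless encoding uses de Bruijn indices for bound variables and ordinary names for free ones. A minimal sketch in OCaml (the type and function names are my own illustration, not taken from any POPLmark submission):

```ocaml
(* HOAS: object-language lambdas are metalanguage functions,
   so substitution is just function application. *)
type hoas =
  | HVar of string            (* free variables, for inspection *)
  | HLam of (hoas -> hoas)
  | HApp of hoas * hoas

(* One beta step at the root, when a redex is visible. *)
let beta_hoas = function
  | HApp (HLam f, arg) -> f arg
  | t -> t

(* Locally nameless: bound variables are de Bruijn indices,
   free variables are names; alpha-conversion is a non-issue. *)
type ln =
  | BVar of int               (* bound: index counts enclosing lambdas *)
  | FVar of string            (* free: a concrete name *)
  | Lam of ln
  | App of ln * ln

(* "Open" a lambda body: replace index k with term u,
   shifting k as we pass under binders. *)
let rec open_at k u = function
  | BVar i -> if i = k then u else BVar i
  | FVar x -> FVar x
  | Lam body -> Lam (open_at (k + 1) u body)
  | App (a, b) -> App (open_at k u a, open_at k u b)

let beta_ln = function
  | App (Lam body, arg) -> open_at 0 arg body
  | t -> t
```

In the HOAS version the metalanguage handles capture-avoidance for free; the locally nameless version pays for substitution machinery but makes terms first-order data you can compare and traverse.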
As I pointed out earlier in this post, my work as a PL designer has been about making human/computer interfaces, and most of that work honestly falls on the "human" side. The human is the more-fallible, harder-to-predict part with the much, much more limited (cognitive) budget. I can design languages with cool features that I can figure out a machine implementation for much more readily than I can figure out how to make them palatable to a human. To my eyes, the biggest problem with all proof assistants is similar: not that they're not powerful enough as technical artifacts, but that too often, humans can't quite figure out how to use them to their full power.
That is: either I am incompetent (you're welcome to reach that conclusion, it doesn't offend me; in some sense we all are) or else the cognitive load of all the formalisms and their logical-framework mechanizations is probably still too high for the target audience. This is why I find Redex and K a little more appealing: for reasons I can't even articulate I find their models more approachable. Maybe it's just that they give up on the "proof" side and aim for "high-confidence model checking". I don't know.
Personally I think it's telling that most of the submissions to POPLmark still seemed closely associated with the particular academic lineages surrounding each tool's designers. That is: few people outside the very top experts on the planet could operate these tools to the effect required. That suggests to me that none of them are quite approachable enough yet to be used regularly in language design. Whether approachability is the goal or not is a different question, possibly of no interest to the tool designers. I don't know.