Haskell is certainly very much like a grail. Of the changes that could be made without compromising its core nature, only making its syntax less quirky would really substantially improve it.
On the other hand, my patience for theoretical programming languages is low. The functional purity of Haskell makes a number of relatively simple alterations to one's program just astoundingly tedious. For instance, "If this large, complex calculation takes more than ten wall-clock milliseconds, give up and toss me an exception" is all but impossible to express if you didn't design the entire system for it from the start. Worse, if you did design for it from the start, then you had to write the whole thing in a slightly different, decidedly worse part of the language.
It is interesting how the idea of "a good theoretical language" is almost wholly divorced these days from "a language where it is theoretically possible to actually accomplish basic engineering tasks." I think our field's theory has too many parts Turing and too few parts Brooks.
To be fair, Haskell is quite capable of handling this situation. What it handles poorly is the software engineering activity of going from the "enh, I'll just try it" case to the timeout case -- the implementation has to change completely, because once you read from something like system time you are no longer in the pleasant functionally pure world but rather have entered the rather uglier world of The IO Monad (or The Monadstrosity, as I have come to know it).
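For the record, once you have resigned yourself to IO, the timeout itself is a one-liner with `System.Timeout` from base; the pain is everything upstream that has to restructure around it. A minimal sketch (`withBudget` is my name for the wrapper):

```haskell
import System.Timeout (timeout)
import Control.Exception (evaluate)

-- Run a pure computation with a wall-clock budget, in microseconds.
-- Caveat: evaluate only forces to WHNF, so a lazily built result can
-- still escape the budget -- another place purity fights the clock.
withBudget :: Int -> a -> IO (Maybe a)
withBudget micros x = timeout micros (evaluate x)

main :: IO ()
main = do
  r <- withBudget 100000 (sum [1 .. 1000 :: Int])
  print r  -- finishes well inside 100ms, so: Just 500500
```

The catch, as noted above, is that every pure caller of the computation now has to be rewritten to live in IO to call this.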
Small differences in what you want, when they involve things like time, cause large changes in implementation.
Debug-by-print runs into this issue too -- calling putStrLn forces you into the IO Monad, which forces your caller into the IO Monad, which... But then once you've solved your issue, you don't *really* want to be there, so...
Debug.Trace exists for exactly this purpose, and works reasonably well. This is the case because trace is just a wrapper around exactly the evil thing you are trying to avoid: It calls unsafePerformIO. I am slowly reaching the conclusion that unsafePerformIO is the answer to a great many interesting problems. Unfortunately it is bloody impossible to reason about.
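For reference, the escape hatch in question -- `trace` lives in `Debug.Trace` in base, and when each message fires is at the mercy of lazy evaluation, which is exactly why it is hard to reason about:

```haskell
import Debug.Trace (trace)

-- trace :: String -> a -> a
-- Prints its message (to stderr) when, and only when, the wrapped
-- value is actually demanded. No IO in the type signature.
fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = trace ("fib " ++ show n) (fib (n - 1) + fib (n - 2))

main :: IO ()
main = print (fib 10)  -- trace chatter on stderr, then 55
```

Because nothing in the type changes, you can sprinkle it through pure code and delete it afterward without touching any callers -- which is the whole point.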
Actually, the more I think about this, the more it seems emblematic. When I talk about the grimy, dangerous, don't-mess-with-this unsafe parts of C/C++, I am talking about setjmp and custom allocators. When I talk about the dangerous parts of Haskell, I am talking about reading the system time.
Yes. ghci and haskell-mode help some with this, but printf-debugging is awkward in most Haskell programs.
If I don't know the computation model I'll eventually need, I tend to write functions with type (Monad m) => a -> m b. If m turns out to be the trivial monad, well and good. If it has to be IO, well, that'll work too.
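A sketch of that style (the names here are mine): the same function runs in `Identity` today and in `IO` tomorrow, without touching its body:

```haskell
import Data.Functor.Identity (runIdentity)

-- Written against any Monad, so the caller picks the computation model.
halveEvens :: Monad m => [Int] -> m [Int]
halveEvens xs = mapM step (filter even xs)
  where step x = pure (x `div` 2)

main :: IO ()
main = do
  -- Instantiated at Identity: stays pure.
  print (runIdentity (halveEvens [1 .. 10]))
  -- Instantiated at IO: same code, now free to grow side effects later.
  ys <- halveEvens [1 .. 10]
  print ys
```

Both calls print `[1,2,3,4,5]`; the difference is only in what `step` is allowed to become later.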
The biggest conceptual shift in Haskell programming isn't knowing the types of your expressions; capable Python programmers know those just as well. The big shift is needing prior knowledge of the computation model in which you'll work.
Right. I generally expect that in a large software project I am going to be seriously wrong about something fundamental a few times, and so I like a language that lets me refactor. The monadic-vs-nonmonadic distinction really fights that by making it expensive to be wrong. And if every function is going to end up monadic anyway, then it's not clear that I haven't just traded away a lot of the simplicity of expression that made Haskell appealing in the first place.
It would be interesting to rank programming languages by how expensive it is to be wrong about some fundamental design choice. Haskell does pretty well on some of that -- I did a massive refactor of one of my objects from lists-of-lists to sets-of-maps recently and the whole change was under an hour, far less than it would have been in C++ or even Java, pretty much on par with Python. But to be wrong about anything related to time or state seems to be comically expensive.
I spend most of my time being wrong about something.
How odd, given that monads were introduced to Haskell precisely as a way of easing later modifications. I read Wadler as advocating knowing when you'll be mucking about, writing with a loose monadic constraint as a safety net, and reaping the benefits later.
I write Haskell and Python code that often looks pretty similar; when I'm in Haskell I wish for easy "print" statements, but when I'm in Python I wish for foldr and mapM and (.). What keeps me coming back to Haskell is the type-checker's help in finding refactoring bugs. My most common bugs in Python come from attempts to replace a cross-cutting subsection of a program with a new implementation. I always miss a bit, and I find out at runtime. I could practice better test-writing... but when I write in Haskell, the types won't line up, and I'll have compile-time pointers to every line I haven't yet rewritten to the new model. I rely on the type system to verify that I've completed a big change.
Oddly, I've never felt the urge to massively refactor to hook in monadery. If your timeout is buried that deeply in your program, you're pretty screwed anyway---there's too much junk to unwind when the computation doesn't work out.
That said, at least there are now readily available libraries to do stuff like "time out if a computation takes too long." I could even come up with a mostly-safe pure interface to them, as long as you don't want your results to be repeatable on different machines.
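Something like this, presumably -- a sketch, with `timeoutPure` being my name for it: wrap `System.Timeout` behind `unsafePerformIO`. It is referentially transparent only if you squint, since the result depends on machine speed -- exactly the repeatability caveat above:

```haskell
import System.Timeout (timeout)
import System.IO.Unsafe (unsafePerformIO)
import Control.Exception (evaluate)

-- "Pure" timeout: Nothing if forcing x to WHNF blows the budget
-- (in microseconds). Not repeatable across machines or even runs,
-- hence only mostly safe.
timeoutPure :: Int -> a -> Maybe a
timeoutPure micros x = unsafePerformIO (timeout micros (evaluate x))
{-# NOINLINE timeoutPure #-}

main :: IO ()
main = do
  print (timeoutPure 100000 (product [1 .. 10 :: Integer]))
  print (timeoutPure 1 (length [1 ..] :: Int))  -- diverges; gets cut off
```

The NOINLINE pragma matters: without it, GHC may duplicate or float the unsafePerformIO call and the timing behavior gets even less predictable.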
I do miss control over data structure layout and copy-free structure reuse. I don't miss writing allocators (which no one should ever waste their time doing). And now that I'm writing a bunch of C++ for a living, I really hate all the time I spend copying objects. I thought imperative programming was supposed to spare us all of that nonsense (and ironically, Java and C# are more successful in this respect).
Oh, I'll note that I've made several attempts over the years to fiddle with the pure part of the language so it looks monadic as well. They run aground dealing with questions of "How monadic is this bit, anyway?" Well, that and the completely incomprehensible type errors that would result---a lot of time has been spent making the ones you get suck less, but pushing the level of abstraction up to 11 makes things pretty bad.
Ugh, yeah. It is hard to see being happy with solving a problem by making it more monadic. Any type error with more than one monad in it is cause to throw the code away. Except for Maybe and perhaps List of course.
Though I have to say the Haskell world is doing a way better job of type error reporting than the template world in C++, which is pretty shameful when you think of the market sizes involved.
I think the market size for that feature is people who would have the slightest chance of figuring out what even a well-written template error means. If the feature is bad enough, good error messages just aren't worth it, and C++ templates are case zero for that.
The real world involves stuff not going according to theory. I've already gone down this timeouts road many times.
The pure/impure jump makes it expensive to start changes, but the type system makes it cheap to finish changes.