Haskell, guarantees and pusillanimity

Jan 25, 2007 18:10

Many times in the course of my Haskell apprenticeship, I've had conversations of the following form:

Me: I want to do X in Haskell, but the obvious thing doesn't work. How do I do it?
Sensei (usually Duncan or totherme): Oh no, you can't do that! If you could do that, then you could have a transperambic Papadopolous twisted asiatic sine-curve ( Read more... )

computers, beware the geek, rants, haskell


Comments 36

totherme January 25 2007, 21:08:21 UTC

Except, in all the above cases, it screws up. It turns out that you really need dynamic memory a lot of the time...

I think it's worth trying to imagine what it would mean to not screw up. Is there an underlying assumption here, that there exists a holy-grail language, which can precisely and elegantly describe everything in the universe? Is it possible that the languages mentioned work well enough in their own domains, only to require hacks and/or extensions when you try to use them for something unexpected ( ... )

Reply

pozorvlak January 26 2007, 11:19:55 UTC
"I think it's worth trying to imagine what it would mean to not screw up."

That's an interesting question! And to be honest, I don't really know the answer. I guess by "screw up" I meant "the restrictions you enforce condemn the language to an excessively narrow niche, and make things that ought to be easy hard". So yeah, you could argue (as Duncan does, in fact) that Fortran is a great domain language for very fast numeric algorithms that don't need dynamic memory, and that we should only use it as an embedded DSL in some general-purpose language. But that's not how it actually gets used, most of the time.

My comments about regexps are a bit dodgy, in that the only proof I know that Perl regexps have been extended is that you can embed arbitrary Perl code into a regexp :-) Do lookahead/lookbehind assertions fit the framework?

The case I really wanted to talk about is Prolog, where you spend half your time second-guessing the optimizer and the other half of the time writing Yet Another meta-interpreter (from my admittedly limited experience ( ... )

Reply

totherme January 26 2007, 13:29:36 UTC
SQL ( ... )

Reply

pozorvlak January 26 2007, 17:41:20 UTC
"Poor integration with the rest of the programming language" is definitely a problem with SQL. I've told you about the pain we had with schema mismatches between the (constantly-changing) database and the (permanently a few days behind) C++ layer when I was working on [classified]? I did suggest autogenerating the whole object-relational mapping layer, in the way that Ruby on Rails or Maypole do, but this was felt to be too much work :-(

I've been having a look, and can't find the specific problem that I'm thinking of - it was some feature that Mat wanted to add to Moblog (I think it was "Comments posted on (posts that you've commented on) since you last logged in") that we found we simply couldn't do in SQL. He had to set up a cron job to create an intermediate table. Possibly this was just a problem with MySQL (which at the time didn't support nested subqueries), but it was pretty lame.

Reply


stronae January 26 2007, 05:18:24 UTC
I think you're on to something with your theory about wanting guarantees over expressivity. I'm no Haskell expert by any stretch, but it really pains me when one of my advanced students has to dive fully into monad land in order to do something that's otherwise painfully basic in most other languages. (Like, roll a die.) Certainly he's getting a lot out of the type system, in that he'll be able to structure his thoughts more clearly, but lately I'm wondering if it's worth it.

Reply

pozorvlak January 26 2007, 11:21:30 UTC
Yes, I know exactly what you mean, and that's a great example :-)

Reply

totherme January 26 2007, 14:17:22 UTC
I know what you mean - monads look really scary, particularly if you're used to imperative programming. But I don't think it's nearly so bad as you think:

main = print =<< randomRIO (1, 6)

or if you prefer: main = randomRIO (1, 6) >>= print - or even main = do x <- randomRIO (1, 6) ; print x

How is that any worse than main() { x := random(1,6) ; print(x); } ?

The thing is, I think imperative programmers are in monad land all the time. Functional programming offers a way of diving out ;)
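
Just to be completely concrete, here's a minimal runnable version (assuming nothing beyond randomRIO from System.Random):

import System.Random (randomRIO)

-- Roll a six-sided die and print the result. The IO monad is only doing
-- the sequencing here, exactly as in the imperative pseudocode above.
main :: IO ()
main = do
  roll <- randomRIO (1, 6) :: IO Int
  print roll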

Reply

totherme January 26 2007, 14:31:00 UTC
...actually - I was just talking to surfnscubadiver about monads earlier today (he was a coder in the 80s, but not since). The thing that occurred to me during that conversation was:

Understanding your car (the engine, all the accompanying electrics and gubbins that keep it doing what you want, the transmission, etc) is Hard. But you can still drive.

Understanding Monads is Hard. You have to be conversant in the most abstract, weird branch of mathematics I'm aware of. I think using them is deceptively easy - if you just concentrate on using them, and not understanding them. Accept that you can do sequencing (and things that rely on sequencing) inside the monad, but not outside. Trust it the same way you trust the compiler to write better assembly than you can, and concentrate on understanding more interesting things - like the meaning of your programme.

Reply


totherme January 26 2007, 12:16:03 UTC
...having had a night's sleep/think:

I think that when you say "haskellers prefer guarantees over expressiveness", it shows you're thinking roughly the right thing - haskellers certainly like guarantees.

OTOH, I'm not sure that this is the best way to say it to the world at large - because most haskellers would argue that through those guarantees you get more expressiveness out of your programming system than you would with a more naively expressive language. Haskellers don't want the guarantees because they like guarantees - they want them because they make writing cool code so much easier.

It's actually a really tricky thing to communicate, I think. At least - it is to someone who's already set themselves up to think in a very dynamic programming kind of way... It's really easy to teach haskell to non-coders ;)

Reply

pozorvlak January 26 2007, 17:46:10 UTC
I was deliberately using the word "expressiveness" to avoid the connotations of the word "power". Hmmm. For the definition of the word "expressive" I'm thinking of, Perl and Lisp are very expressive languages. Vanilla Pascal is a very unexpressive language. Cobol is a very unexpressive language.

Reply


beelsebob January 26 2007, 13:18:05 UTC
I think really you've hit the "use the right language for the job" barrier. I've been using Haskell for all my code for many years now, and never once hit a need to use any form of fancy type system (hey, my code compiles in nhc, for god's sake)... For my purposes, Haskell provides me with really tight guarantees and enough expressivity. For you it seems it doesn't provide enough expressivity; my suggestion is to use a language that does. In fact, my suggestion is to use the language that provides the most guarantees for your required level of expressivity. This is why I am a Haskell programmer.

Bob

Reply

pozorvlak January 26 2007, 16:53:47 UTC
I dunno... maybe it's because I'm doing mostly mathematical stuff (the integer-parametrized types problem originally arose while I was writing code for manipulating polynomials over finite fields), maybe it's because I'm coming from a highly dynamic language (Perl), or maybe it's just the kind of person I am, but I seem to run up against some problem of this sort almost every time I pick up a Haskell compiler :-) It could also be because I'm still at the stage of playing with the language and hence keep trying to poke it in funny ways to see what it does...

Reply

beelsebob January 26 2007, 19:13:40 UTC
Strange -- I seem to do some fairly similar things; in fact, I wrote some code to manipulate polynomials a few weeks ago and didn't hit any problems at all. On the other hand, I'm quite possibly not doing things as complex as you are. What representation are you using for your polynomials?

I don't know what you need to use IPTs for, but as far as I've seen, IPTs (integer-parameterized types) and DTs (dependent types) tend to be used to make type constraints tighter rather than to make the type system more expressive. Can you explain the particular problem you need them for?

Bob

Reply

pozorvlak January 26 2007, 19:28:27 UTC
The polynomials concerned had coefficients in finite fields, i.e. all arithmetic was modulo some prime p. I took the view that it didn't make much sense to add (3 mod 7) to (5 mod 11), and wanted the type system to handle that bit for me. I wasn't doing anything particularly complicated other than that.

I want to have a go at it using the library greg_buchholz linked to now :-)
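
For concreteness, here's a rough sketch of the kind of thing I mean: put the modulus into a phantom type parameter, so that adding something mod 7 to something mod 11 is simply a type error. (This sketch uses GHC's type-level naturals, which is a different encoding from the one in the library linked below, so treat it as an illustration of the idea rather than of that library.)

{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
import GHC.TypeLits (Nat, KnownNat, natVal)
import Data.Proxy (Proxy (..))

-- A value of type Mod p is an integer already reduced modulo p;
-- the modulus p lives only at the type level.
newtype Mod (p :: Nat) = Mod Integer deriving (Eq, Show)

-- Smart constructor: reduce modulo the p named in the type.
mkMod :: forall p. KnownNat p => Integer -> Mod p
mkMod n = Mod (n `mod` natVal (Proxy :: Proxy p))

instance KnownNat p => Num (Mod p) where
  Mod a + Mod b  = mkMod (a + b)
  Mod a - Mod b  = mkMod (a - b)
  Mod a * Mod b  = mkMod (a * b)
  negate (Mod a) = mkMod (negate a)
  abs            = id
  signum (Mod a) = Mod (signum a)
  fromInteger    = mkMod

-- (3 :: Mod 7) + 5             --> Mod 1
-- (3 :: Mod 7) + (5 :: Mod 11) --> rejected by the type checker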

Reply


Number-parameterized types greg_buchholz January 26 2007, 17:06:16 UTC
Re: Number-parameterized types pozorvlak January 26 2007, 17:47:23 UTC
Dude. Thanks! I'll check that out...

I'd wondered if it were possible to represent numbers as types, but decided it probably wasn't.

Reply

