I'm of the opinion that laziness is the single hardest thing to get used to in Haskell: not warm fuzzy things, not higher order functions, not syntax. The reason why it's taken me a while, having used Haskell more off than on for small toys over the past couple of years, to really grasp how to use lazy evaluation is that it forces you to think about the combination of your functions and data much more carefully.
Let's take a really silly example such as

head . map (+1)

because it's something that is easy & obvious. If you apply this function to a list of numbers, the evaluation can proceed something like the following:

head . map (+1) $ [1,2,..]
head $ ((+1) 1) : (map (+1) [2,..])
((+1) 1)
2

To anyone remotely familiar with the concept of laziness I'm sure this seems somewhat intuitive. You only need the first element of the list, so only the first element has the argument to map applied to it. It's a simple consequence of the structure of lists and the definition of map, the kind of example that you can feel comfortable with after just a minute with a pencil and paper. So what's my point?
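As a quick sanity check of that trace (this snippet is mine, not from the post): the list below is infinite, yet the expression still terminates, because only the first cons cell of map's output is ever forced.

```haskell
-- Laziness in action: [1..] is infinite, but head demands only the
-- first element of map's result, so evaluation stops after one step.
firstBumped :: Int
firstBumped = head (map (+1) [1..])

main :: IO ()
main = print firstBumped  -- prints 2
```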
My point is that if you're learning Haskell and you're running into problems with laziness, you should break out the pencil & paper and try to calculate what will happen when your function is called. It's something you don't really have to do with other languages, but I think it's the best way to see how your functions and data structures interact during evaluation until you have better intuition.
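One way to do the pencil & paper check mechanically (my own sketch, using Debug.Trace from base; noisyInc is a name I made up): wrap the function you pass to map so it announces each element it actually forces.

```haskell
import Debug.Trace (trace)

-- trace prints its message (to stderr) at the moment its second
-- argument is forced, which reveals the evaluation order.
noisyInc :: Int -> Int
noisyInc n = trace ("forcing " ++ show n) (n + 1)

main :: IO ()
main = print (head (map noisyInc [1, 2, 3]))
-- Only "forcing 1" appears: head never demands the other elements.
```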
5 comments:
Lazy evaluation is the hardest thing to grasp in Haskell for me too. I can understand it in small examples such as the one you posted, but for larger programs and more complex data structures such as Data.Map, it's still really hard for me to get what's going on.
Well, if you come across a harder-to-understand problem again, feel free to e-mail me. We can try to pencil-and-paper our way through the calculation together to clarify what is happening.
Well, I created a thread recently on haskell cafe called "Library design question" which had some discussion about the lazy state monad vs. the strict one (starting here).
One thing I don't understand is why the version where I used strict versions of "modify" and "randomDouble" would still stack overflow, while using the strict state monad (which as far as I understand does the same) solved it.
Eh, it actually starts at http://article.gmane.org/gmane.comp.lang.haskell.cafe/44808 (9th message of the thread).
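For what it's worth, here is a minimal sketch of why the strict monad itself matters (my own example, assuming the mtl package, not the thread's code): in Control.Monad.State.Lazy, (>>=) matches the result pair lazily, so even a strict modify' merely builds a chain of suspended pair-matches that unwinds all at once when the final state is demanded; Control.Monad.State.Strict forces the pair at each bind instead, so the same loop runs in constant stack.

```haskell
import Control.Monad (replicateM_)
import Control.Monad.State.Strict (execState, modify')

-- One million strict increments of the state. The strict monad's
-- (>>=) forces the (result, state) pair at every step, so no chain
-- of suspended pattern matches accumulates.
count :: Int -> Int
count n = execState (replicateM_ n (modify' (+ 1))) 0

main :: IO ()
main = print (count 1000000)
```

Swapping the import for Control.Monad.State.Lazy makes the same program a stack-overflow candidate, which matches what the thread reported.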
Yeah, laziness is tough. The operational semantics of Haskell (er, GHC to be precise) are very complicated. So complicated that they make you not want to learn them.
That's just what I did: I didn't learn what my program is *doing*; instead I was forced to think of my programs like mathematics, what my program *means*. And I'm very happy I did, because Haskell's true beauty shines when you are programming at that level rather than at the operational level.
Until it blinds you with a space leak. Then you have to put on the sunglasses and go into Operational Land again. :-)