
That "Show your steps"-tipp is great, will try it out! Any other tipps for getting more correct (I know that LLMs are essentially autocomplete) output?

Additionally: I wonder whether instructions like "think carefully" or "You are a software engineering interviewee at Google." change anything. It feels like a prime candidate for magical thinking, but I do not understand LLMs well enough to rule it out; such prompts could well improve the answer somehow.


I find CT highly fascinating, have worked through parts of Seven Sketches in Compositionality, and have a functional programming background. I see the appeal, but I came to the conclusion that my time is better spent observing, learning about, and designing with abstractions like monads, applicatives and so on rather than learning the theory behind them.

There seems to be only a tiny handful of people who can use category theory as a resource to craft something relevant to software (the stereotypical example in my mind being Edward Kmett of Haskell fame), but I am certainly not one of them, and that is not something that would change by learning more category theory (whatever that might mean: proving some theorems, discovering more categories, ...).

To the author: I am looking forward to a retrospective at a later time. I wish you a good journey, happy diagram-chasing!


Thank you! I knew about psychological safety, but the other points are new to me, will check them out.


Perfect timing! Last week I struggled to write documentation that would leave me satisfied - this framework explains why: I tried to write a reference, a how-to, and an explanation at the same time. It also offers the solution: split it into three parts (create them if they don't exist yet).


What you are describing is a Functor, one of the two other important Haskell "design patterns" (Functor - Applicative - Monad). "Mappable" is a good name insofar as it is more familiar to newcomers. However, I like the mathematical name because it forces one to embrace the interface for what it is, as opposed to thinking of Functors as collections of data that can be mapped over. Often that is true, but not always.

A monad could be described as "sequenceable/chainable", but that metaphor quickly breaks down, as there are so many ways in which monads show up.
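
For example (a minimal sketch, nothing beyond the Prelude): fmap works on obvious containers, but also on things that are not collections of data at all.

    -- A container:
    doubled :: Maybe Int
    doubled = fmap (* 2) (Just 21)       -- Just 42

    -- An IO action is a Functor too: map over its eventual result.
    lineLength :: IO Int
    lineLength = fmap length getLine

    -- So is a function ((->) r): here fmap is just composition.
    addThenShow :: Int -> String
    addThenShow = fmap show (+ 1)        -- addThenShow 41 == "42"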


> one of the two other important Haskell "design patterns" (Functor - Applicative - Monad)

These days you should probably add Foldable and Traversable to that list, as they are closely related and appear throughout the standard libraries and even in the Prelude.

Traversable is a generalization of Functor to include Applicative effects. It reduces to Functor if the Applicative happens to be the Identity type.

Where Functor and Traversable replace values one-to-one while preserving structure, Foldable represents reducing operations (folds) that consume the values of a structure in some defined order to build up a result. This includes basic functions like sums and products, as well as more complex operations like `sequence_`.
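
A small sketch of the difference (only Text.Read.readMaybe beyond the Prelude):

    import Text.Read (readMaybe)

    -- Functor / Traversable: replace each value one-to-one, keep the shape.
    doubledAll :: [Int]
    doubledAll = fmap (* 2) [1, 2, 3]               -- [2,4,6]

    parsedAll :: Maybe [Int]
    parsedAll = traverse readMaybe ["1", "2", "3"]  -- Just [1,2,3]; any parse failure gives Nothing

    -- Foldable: consume the values and collapse the shape into a result.
    total :: Int
    total = sum [1, 2, 3]                           -- 6

    runAll :: IO ()
    runAll = sequence_ [print 1, print 2]           -- folds a structure of actions into one action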


> A monad could be described as "sequenceable/chainable", but that metaphor quickly breaks down, as there are so many ways in which monads show up.

Hmm, well, it might break down, but simply by choosing a better word you've helped me understand what a monad is more than any article I've read on the matter.


I am gonna be heavily opinionated, take my words with a grain of salt ;)

Here it goes:

1. One has understood monads when one is able to write a monadic interface that behaves the way any intermediate Haskeller would expect it to behave. This is not a hard task.

2. Screw the abstract stuff. While fascinating, really few people understand concepts by going from abstract to concrete rather than the other way around. And don't get me wrong, I don't think those kinds of people are better theorists or problem solvers, they just seem to have a special relationship with symbols. If you are one of those people, you would likely know. In any case, I would encourage you to write 3-5 examples of monads in Haskell or Java or JavaScript or whatever language suits you and go from there.
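
If it helps, a starter set for that exercise (a sketch in Haskell; the names are made up):

    -- Maybe: computations that may produce no result.
    lookupAge :: String -> Maybe Int
    lookupAge name = lookup name [("alice", 30), ("bob", 25)]

    bothAges :: Maybe (Int, Int)
    bothAges = do
      a <- lookupAge "alice"
      b <- lookupAge "bob"
      pure (a, b)

    -- Either: computations that may fail with an error value.
    safeDiv :: Int -> Int -> Either String Int
    safeDiv _ 0 = Left "division by zero"
    safeDiv x y = Right (x `div` y)

    -- []: computations with many possible results.
    pairs :: [(Int, Char)]
    pairs = do
      n <- [1, 2]
      c <- "ab"
      pure (n, c)              -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]

    -- IO: computations that talk to the outside world.
    greet :: IO ()
    greet = do
      name <- getLine
      putStrLn ("hello, " ++ name)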


I offer a simple explanation elsewhere in this thread: https://news.ycombinator.com/item?id=31652379


For 99% of people, monads are best understood by a) using and writing many monadic interfaces and b) forgetting about the whole mathematical background.

What a monad offers is: "1: Do A. After A is done, you can inspect the result of A and do either B or C. 2: You can chain steps in the same monad together." Sounds easy? That's because it is!
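
In code, that is all (>>=) and (>>) are (a tiny sketch, plain Prelude IO):

    askName :: IO ()
    askName =
      getLine >>= \name ->                   -- 1: do A, inspect its result...
        if null name
          then putStrLn "who is there?"      -- ...then do either B
          else putStrLn ("hello, " ++ name)  -- ...or C

    main :: IO ()
    main = askName >> askName                -- 2: chain steps in the same monad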

The big fuss about monads is mainly because, prior to monads, doing IO was awkward in Haskell. Monads offered a convenient way to do IO (and other interactions with the outside world, such as databases, networking, ...) and they turn up basically everywhere. There is nothing stopping one from introducing a monadic interface in the OO world, and it is partly done (e.g. "orElse" or "then") in some domains. However, a bigger gain is to be expected in Haskell, since the type system forces the developer to deal with monads in a disciplined way.

-----

I admit to being guilty of having read only the title prior to writing this comment. But even having read just the word "monad" in the title together with "lazy", I knew the author would describe 1) some way of lazily evaluating one thing after another and 2) some way of combining multiple lazily evaluated things with each other. After skimming, that is indeed what the article is about. As always, the interesting stuff is not the concept of a monad (which is pretty easy), but how to model/code 1) and 2) in a specific context (in this instance, lazy evaluation).


Monads in the OO world often fall flat because monads usually add something to a computation (the ability to store state, to throw exceptions, to do I/O, to return `null`, etc.), and in Haskell the basic building block is the pure function, which can do none of those things. But in (say) Ruby, every method I write has already been given the right to return nil, write to the logging system, delete the database or launch the missiles, no matter where in the program it is called from. The ErrorT monad transformer is much less useful when exceptions are a key part of the language and intended to be used everywhere.
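
To make the contrast concrete on the Haskell side (a minimal sketch, made-up functions):

    -- A pure function: the type guarantees no state, no exceptions, no I/O, no null.
    area :: Double -> Double
    area r = pi * r * r

    -- Granting one extra capability (failure) is visible in the type:
    parsePositive :: String -> Either String Double
    parsePositive s = case reads s of
      [(x, "")] | x > 0 -> Right x
      _                 -> Left ("not a positive number: " ++ s)

In Ruby the second type buys little, because every method can already fail in arbitrary ways.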


Indeed, some design patterns (and monad is a design pattern) are more useful in some languages and less so in others.


There is a famous quote from a mathematician (maybe von Neumann, don't have it to hand) to the effect that you don't understand mathematics, you just get used to it. That's how monads work. You won't figure them out from thinking through the definition or listening to other people's explanations, but you will start to feel comfortable with them if you use them enough.

A mathematical analogy from introductory analysis is the concept of a dense set. If you're encountering the concept for the first time, it's easy to understand the definition, but it's hard to see why it matters. Then you start using it to solve problems and write proofs, and it's incredibly useful. After a while, you get a feeling about it, an intuition about when it might be helpful, and a facility at working through the technical details of using it. You get used to its power, in a way that feels like understanding. But you don't learn anything that can be passed on in words or symbols. You can't say anything to a beginner to clear up their confusion. You can only point them towards the exercises through which you acquired your understanding. Monads work exactly the same way.


https://www.youtube.com/watch?v=ZhuHCtR3xq8

I found Brian Beckman's video Don't Fear the Monad extremely helpful for understanding monads as a practical user. I now use them everywhere. I don't know the first thing about category theory.


What you are doing is prototyping, which is a good thing. However, writing stuff down is another form of prototyping, with different tradeoffs.

With coding, you are confined to formal syntax and semantics, but if the code (even partially) works, you can be more confident in your design.

With paper, you can plan at as high a level as you want, with the danger of being too high-level and overlooking things.


Well, we're veering off into the weeds. The OP mentioned simply thinking about something, for a few weeks, before writing any code. They didn't mention anything about writing stuff down.

I cut my teeth in the days when we were supposed to design the entire program, from start to finish, on a pad of paper, then hand it to a data entry clerk, who would create a deck of cards based on the work.

It would then be scheduled for an expensive slice of time, and, if it screwed up, you got spanked.

It sucked. It really sucked.

Full disclosure: by the time I entered the field, punchcards had been replaced by VT-100 terminals and line printers, but the process was still the same, minus the data entry clerk.

These days, it's totally freewheeling. I try stuff out, screw up, kick myself, then try again.

I write about how I do stuff, here: https://littlegreenviper.com/miscellany/thats-not-what-ships...


Addressing the widespread conception that "it is hard to program in Haskell because it is pure":

If you can write python, you can write Haskell. Don't believe me?

1. Write your program completely in the IO Monad, in a huge do-block

2. Factor out as much pure functionality as possible (= have as little code in your big IO program as possible).

Start at 1. and iterate 2. as many times as you please. It will already be a program that prevents many traps that would bite you in other languages. Haskell knows exactly whether you are looping over an array of strings or an array of chars.
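
To illustrate one round of that (a toy sketch; "numbers.txt" and the program itself are made up, only the shape matters):

    -- Step 1: everything in one big IO do-block.
    main :: IO ()
    main = do
      contents <- readFile "numbers.txt"
      let totals = sum (map read (lines contents) :: [Int])
      print totals

    -- Step 2: factor the pure part out; the IO part shrinks to reading and printing.
    sumOfLines :: String -> Int
    sumOfLines = sum . map read . lines

    main' :: IO ()
    main' = readFile "numbers.txt" >>= print . sumOfLines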

(Why all the buzz about purity, effects and so on? Well, with Haskell you can control with high granularity and reliability which side effect is caused where. But you are not forced to use that feature.)

Other tips:

- Build small projects.

- Read as few tutorials on monads as possible. You might even get by with 0.

- The trifecta of Haskell typeclasses is Functor, Applicative, Monad. I would advise you not to try to understand their mathematical origins, but just to look up how they are used. They will crop up naturally when you build even small projects, and then they will make sense (a small side-by-side sketch follows below).
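
Regarding that last point, here is roughly how the three look in use on Maybe (a sketch; only Text.Read.readMaybe beyond the Prelude):

    import Text.Read (readMaybe)

    halveText :: String -> Maybe Int
    halveText s = fmap (`div` 2) (readMaybe s)           -- Functor: map over the result

    addTexts :: String -> String -> Maybe Int
    addTexts a b = (+) <$> readMaybe a <*> readMaybe b   -- Applicative: combine independent results

    divTexts :: String -> String -> Maybe Int
    divTexts a b = do                                    -- Monad: later steps depend on earlier ones
      y <- readMaybe b
      if y == 0 then Nothing else fmap (`div` y) (readMaybe a)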


> The trifecta .. mathematical origins

Ends up reading Leibniz and converting to Catholicism.


I like the idea of iterating from imperative to functional. Here's the devil's advocate take on your "if you can do it in Python you can do it in Haskell": I use quite a bit of numpy, scipy and matplotlib; are there equivalent libraries for Haskell?


Well... wasn't numpy, at least initially, a Python wrapper around Fortran libraries? Sure, that made them accessible to a bunch more people, but it wasn't some Python-only wonder. Someone could probably write the same bindings for Haskell, if they haven't already.


Maybe some of the experts could name the Haskell equivalent libraries/wrappers.


I'm certainly not an expert (have only dabbled in both Haskell and Python, and never used numpy), but a web search found https://pechersky.github.io/haskell-numpy-docs which compares numpy to https://hackage.haskell.org/package/hmatrix. I also came across https://hackage.haskell.org/package/vector.


The old joke comes to mind: 'How do you recognize a guitar player in the audience at a concert? They stand in a corner, look at the stage, and say "I could also do that".'


What Haskell did with monads is nice, but in the end monads are just tags for what functionality a function uses.

That being said, I like that Nim and Koka did exactly that. You just tag the functions (IO, Async, Whatever) and it works.

In Haskell, you need monad transformers (which have a runtime cost) or whatever else has been invented to allow you to work with multiple different effects.
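
For comparison, a sketch of the transformer style meant here (assuming the mtl package; App, Config and step are made-up names):

    import Control.Monad.Reader (ReaderT, ask, runReaderT)
    import Control.Monad.Except (ExceptT, throwError, runExceptT)
    import Control.Monad.IO.Class (liftIO)

    newtype Config = Config { verbose :: Bool }

    -- What would be two effect tags in Koka/Nim becomes a stack of transformers:
    type App a = ReaderT Config (ExceptT String IO) a

    step :: App ()
    step = do
      cfg <- ask                                    -- reader effect
      if verbose cfg
        then liftIO (putStrLn "running...")         -- IO, lifted through the stack
        else throwError "quiet mode not supported"  -- error effect

    runApp :: Config -> IO (Either String ())
    runApp cfg = runExceptT (runReaderT step cfg)

Each layer adds some wrapping and unwrapping, which is where the runtime cost mentioned above comes from.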


> which have a runtime cost

As a monad is just an interface, it doesn't necessarily cause runtime costs. Identity is a monad too. Effects may not always require sacrificing performance, but since they can be used to implement exceptions, they are not just free compile-time annotations. The differences are also discussed here: https://www.reddit.com/r/haskell/comments/3nkv2a/why_dont_we...
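
For instance, Identity is just a newtype wrapper; here is a sketch of its definition (essentially what Data.Functor.Identity provides), so the monadic interface adds no runtime structure of its own:

    newtype Identity a = Identity { runIdentity :: a }

    instance Functor Identity where
      fmap f (Identity a) = Identity (f a)

    instance Applicative Identity where
      pure = Identity
      Identity f <*> Identity a = Identity (f a)

    instance Monad Identity where
      Identity a >>= f = f a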


Monad transformers are different from monads. Monad transformers do have runtime costs; they add indirection at runtime.


Sometimes - it's pretty cool what GHC can do


It's hard because there are so many concepts to understand. After reading one Python book you can write solid programs in Python. Not so in Haskell: you would also need to understand the popular language extensions and understand the best practices (what to use to compose I/O, and in which context, for example), on top of all the basics. That, and understanding how to work with complex types in libraries: that requires time. It would be too much for one book.


> After reading one Python book you can write solid programs in Python.

Okay, so our goal is to write a solid program. Let's see...

> Not so in Haskell: you would also need to understand the popular language extensions

You can simply go with vanilla Haskell 2010. Dealing with strings will be a bit cumbersome, dealing with records will be a bit cumbersome, but you are still at 50% of the boilerplate of an average Java codebase.

> and understand the best practices (what to use to compose I/O, and in which context, for example)

No! This is what I was aiming at: you don't have to understand these best practices to have a solid program. Throw everything into one massive do-block and the resulting program will be at least as solid as the solid Python program.

> That and understand how to work with complex types in libraries

I concur that Python documentation is heaps better than Haskell documentation, although we are slowly improving. That said, I think the work is not harder: learning how to speak with a Postgres database or do numerical tasks takes time, period. What is different from Python is that the time spent chasing runtime errors there is spent chasing compile errors in Haskell.

Another user linked this comparison of numpy vs. the Haskell equivalent, hmatrix. It does not look more complicated in my opinion: https://pechersky.github.io/haskell-numpy-docs/


> you would need to understand also the extensions of the language

Not really. Extensions typically remove restrictions rather than add features, or rather, the ones you are likely to want to use do.

