(Part 2) Top products from r/haskell

We found product mentions in comments on r/haskell and ranked the 66 resulting products by the number of redditors who mentioned them. Here are the products ranked 21-40. You can also go back to the previous section.

Top comments that mention products on r/haskell:

u/wibbly-wobbly · 13 points · r/haskell

I'm a theorist, so my book recommendations probably reflect that. That said, it sounds like you want to get a bit more into the theory.

As much as I love Awodey, I think that Abstract and Concrete Categories: The Joy of Cats is just as good, and is only $21, $12 used.

Another vote for Pierce, especially Software Foundations. It's probably the best book currently around to teach dependent types, certainly the best book for Coq that has any popularity. You can even download it for free. I recommend getting the source code files and working along with them inline.

I will say that I don't think Basic Category Theory for Computer Scientists is very good.

Real World Haskell is a great book on Haskell programming as a practice.

Glynn Winskel's book The Formal Semantics of Programming Languages is probably the best intro book to programming language theory, and is a staple of graduate introduction to programming languages courses.

If you can get through these, you'll be in shape to start reading papers rather than books. Oleg's papers are always a great way to blow your mind.

u/edwardkmett · 19 points · r/haskell

Types and Programming Languages by Benjamin Pierce covers type theory: the systems of type inference we can have, the ones we can't, and why.

Pearls of Functional Algorithm Design by Richard Bird covers how to think equationally about code. It is probably the best guide out there on how to "think" like a Haskeller. Not directly about a discipline of math you can apply, but the mindset is invaluable.
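As a tiny taste of that equational style (my own example, not one of Bird's pearls), here is the classic map-fusion law, derived by unfolding definitions:

```haskell
-- Claim: map f . map g = map (f . g). A calculational proof of the cons case:
--
--     map f (map g (x:xs))
--   =   { definition of map }
--     map f (g x : map g xs)
--   =   { definition of map }
--     f (g x) : map f (map g xs)
--   =   { induction hypothesis }
--     f (g x) : map (f . g) xs
--   =   { definition of (.) }
--     (f . g) x : map (f . g) xs
--   =   { definition of map }
--     map (f . g) (x:xs)

-- A runnable spot-check of the law on one input:
mapFusionHolds :: Bool
mapFusionHolds =
  map (+1) (map (*2) [1 .. 10 :: Int]) == map ((+1) . (*2)) [1 .. 10]
```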

Wadler's original papers on monads are probably when they finally clicked for me.

The original Idiom paper is also a golden resource for understanding the motivation behind applicatives.

Jeremy Gibbons' The Essence of the Iterator Pattern motivates Traversable, which so nicely encapsulates what folks meant by mapM over the years.
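A small illustration (mine, not from the paper) of how traverse subsumes the old mapM idiom: the same traversal works over any Applicative, not just a monad.

```haskell
import Text.Read (readMaybe)

-- Parse every string or fail as a whole: traverse over the Maybe applicative.
parseAll :: [String] -> Maybe [Int]
parseAll = traverse readMaybe

-- parseAll ["1","2","3"] == Just [1,2,3]
-- parseAll ["1","x","3"] == Nothing
```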

Uustalu and Vene's The Essence of Dataflow Programming captures a first glimmer of how and why you might want to use a comonad, but it can be fairly hard reading.

Awodey's Category Theory is probably the best general purpose category theory text book.

For folks weak on the math side, Lawvere and Schanuel's Conceptual Mathematics can be used to bootstrap up to Awodey, and it provides a lot of drill for the areas it covers.

Dan Piponi's blog is excellent and largely set the tone for my own explorations into Haskell.

For lenses the material is a bit more sparse. The best theoretical work in this space I can point you to is by Mike Johnson and Bob Rosebrugh. (Pretty much anything in the last few papers linked at Michael's publication page at Macquarie will do to get started.) I have a video out there as well from New York Haskell. SPJ has a much more gentle introduction on Skills Matter's website; you need to sign up there to watch it, though.

For comonads you may get some benefit out of my site comonad.com and the stuff I have up on FP Complete, but you'll need to dig back a ways.

u/mightybyte · 16 points · r/haskell

I actually had this exact discussion today. A number of people argue that type classes must have laws. I definitely share the general sentiment that it is better for type classes to have laws. But the extreme view that ALL type classes should have laws is just that...extreme. Type classes like Default are useful because they make life easier. They reduce cognitive load by providing a standardized name to use when you encounter a concept. Good, uniformly applied names have a host of benefits (see Domain-Driven Design for more on this topic). They save you the time and effort of thinking up a name when you're creating a new instance, and they also avoid the need to hunt for the name when you want to use an instance. They also let you build generic operations that work across multiple data types with less overhead. The example of this that I was discussing today was a similar type class we ended up calling Humanizable. The semantics here are that we frequently need to get a domain-specific representation of things for human consumption. This is different from Default, Show, Pretty, Formattable, etc. The existence of the type class immediately solves a problem that developers on this project will encounter over and over again, so I think it's a perfectly reasonable application of a useful tool that we have at our disposal.
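Something like the following minimal sketch; the actual class from that project isn't shown in the comment, so the method name and the instance below are made up for illustration.

```haskell
-- A class for domain-specific, human-facing rendering, distinct in intent
-- from Show/Pretty/Formattable. Names here are illustrative only.
class Humanizable a where
  humanize :: a -> String

-- An amount of money stored in cents.
newtype Money = Money Int

instance Humanizable Money where
  humanize (Money cents) =
    "$" ++ show (cents `div` 100) ++ "." ++ pad (cents `mod` 100)
    where
      pad n | n < 10    = '0' : show n
            | otherwise = show n
```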

EDIT: People love to demonize Default for being lawless, but I have heard one idea (not originally mine) for a law we might use for Default: def will not change its meaning between releases. This is actually a useful technique for making an API more stable. Instead of exporting field accessors and a data constructor, export a Default instance and lenses. This way you can add a field to your data type without any backwards-incompatible changes.
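A rough sketch of that export pattern, with hand-rolled lenses and invented field names just to show the shape of the module interface:

```haskell
{-# LANGUAGE RankNTypes #-}
module Config
  ( Config      -- abstract: no constructor or field accessors exported
  , def
  , timeoutL
  , retriesL
  ) where

data Config = Config
  { _timeout :: Int
  , _retries :: Int
  }

-- The "law": the meaning of def does not change between releases.
def :: Config
def = Config { _timeout = 30, _retries = 3 }

-- van Laarhoven lenses, compatible with view/set/over from the lens package.
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

timeoutL :: Lens' Config Int
timeoutL f c = (\t -> c { _timeout = t }) <$> f (_timeout c)

retriesL :: Lens' Config Int
retriesL f c = (\r -> c { _retries = r }) <$> f (_retries c)
```

Adding a new field later then only means extending def and exporting one more lens; callers that build their configuration by modifying def keep compiling.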

u/nbksndf · 6 points · r/haskell

Category theory is not easy to get into, and you have to learn quite a bit and use it for stuff in order to retain a decent understanding.

The best introductory book I have read is:

Algebra (http://www.amazon.com/Algebra-Chelsea-Publishing-Saunders-Lane/dp/0821816462/ref=sr_1_1?ie=UTF8&qid=1453926037&sr=8-1&keywords=algebra+maclane)

For more advanced stuff, and to solidify that understanding, I recommend this book:

Topoi - The Categorial Analysis of Logic (http://www.amazon.com/Topoi-Categorial-Analysis-Logic-Mathematics/dp/0486450260/ref=sr_1_1?ie=UTF8&qid=1453926180&sr=8-1&keywords=topoi)

Both of these books build up from the basics, but a basic understanding of set theory, category theory, and logic is recommended for the second book.

For type theory and lambda calculus I have found the following book to be the best:

Type Theory and Formal Proof - An Introduction (http://www.amazon.com/Type-Theory-Formal-Proof-Introduction/dp/110703650X/ref=sr_1_2?ie=UTF8&qid=1453926270&sr=8-2&keywords=type+theory)

The first half of the book goes over lambda calculus, the fundamentals of type theory and the lambda cube. This is a great introduction because it doesn't go deep into proofs or implementation details.

u/tel · 19 points · r/haskell

Something like

  • Documentation has a wider goal than just "documenting": it must transition a novice user into an expert
  • To do this you must do more than annotate, you must teach
  • Types, tests, readable source, etc all mystify the beginner—while they have a purpose, they do not serve the total goal
  • Function-level documentation is great, but it's just one piece of the whole too
  • Community-driven documentation without an owner sucks. You need a voice and a guiding principle
  • Teaching is about empathy—your documentation should exude empathy for the novice

Then there's a breakdown and guide:

  • Good documentation comes in, perhaps, four parts
    • First Contact assumes little base knowledge and answers "what is this?" and "why do I care?"
    • It also describes "what's next?"
    • The Black Triangle is a step-by-step guide that takes a user who has decided that they do care to the point of operating the library, simply
    • Get your user using as fast as possible
    • The Hairball is a largeish breakdown of all the things someone must know, each paragraph nudging the novice toward greater understanding bird-by-bird
    • The Reference is support documentation for experts

u/crntaylor · 2 points · r/haskell

That's fine, then. My main concern was that you might be putting your money on the line!

I am not sure that an automated momentum system can't work. In fact, many CTAs (commodity trading advisors... a kind of hedge fund) made consistent returns in 2000-2009 by following pretty simple momentum strategies - generally moving averages or moving average crossovers on a 100-300 day window. Note that the timescale is much longer than yours, and also note that most of those CTAs have been in drawdown since about 2010 (ie they've lost money or just about broken even for the last four years).
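For concreteness, here is a toy Haskell sketch of the kind of moving-average crossover rule described above; the window lengths and the long/short encoding are illustrative only, not a recommendation.

```haskell
import Data.List (tails)

-- Simple moving average over a window of n observations.
sma :: Int -> [Double] -> [Double]
sma n xs = [ sum w / fromIntegral n | w <- windows n xs ]
  where
    windows k = takeWhile ((== k) . length) . map (take k) . tails

-- +1 = long, -1 = short: fast (50-day) average vs slow (200-day) average,
-- with the two series aligned so both windows end on the same day.
crossoverSignal :: [Double] -> [Int]
crossoverSignal prices =
  [ if fast > slow then 1 else -1
  | (fast, slow) <- zip (drop 150 (sma 50 prices)) (sma 200 prices) ]
```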

But I am pretty certain that an intraday trading system based on trashy, discredited technical analysis isn't going to yield consistent profits, especially when applied by someone who is trading for the first time.

One way to tell if a trader knows what they are doing is to listen to their language - if they talk about technical indicators, Fibonacci retracement, Elliott waves, entry and exit points, MACD, etc. then they are a quack. If they talk about regression, signal processing, training and test sets, regularization, bias/variance, etc. then there's a chance that they know what they're talking about.

There is fifty years of history building mathematical tools for analysing random processes, making time series forecasts, building regression models, and analysing models out of sample, all of which is generally ignored by the quacks who rely on spurious "indicators" and "entry/exit points". A good place to start is this book -

http://www-bcf.usc.edu/~gareth/ISL/

and this is a good book for when you've progressed beyond the intermediate level

http://statweb.stanford.edu/~tibs/ElemStatLearn/

There are two books by Ernest Chan about quantitative trading that frankly don't tell you anything that will be immediately applicable to creating a strategy (there's no secret sauce), but they do give you a good high-level overview of what building a quantitative trading system is all about:

http://www.amazon.co.uk/Quantitative-Trading-Build-Algorithmic-Business/dp/0470284889/ref=sr_1_2?ie=UTF8&qid=1406963471&sr=8-2&keywords=ernest+chan
http://www.amazon.co.uk/Algorithmic-Trading-Winning-Strategies-Rationale/dp/1118460146/ref=sr_1_1?ie=UTF8&qid=1406963471&sr=8-1&keywords=ernest+chan

Hope that's helpful.

u/begriffs · 2 points · r/haskell

I found the best way to think about relational data in general is to start with an old book, one that covers the subject in a pure way without reference to any particular system. Then you can translate the concepts into a nice modern system like PostgreSQL. http://www.amazon.com/Handbook-Relational-Database-Candace-Fleming/dp/0201114348

If you just want to jump in and try stuff out, here are some tutorials and docs.

Here are some tutorials about triggers
http://www.postgresqltutorial.com/postgresql-triggers/

Managing roles (the "official" docs are actually pretty good)
http://www.postgresql.org/docs/9.4/static/user-manag.html
http://www.postgresql.org/docs/9.4/static/sql-grant.html
http://www.postgresqltutorial.com/postgresql-roles/

Creating schemas and using the search path
http://www.postgresql.org/docs/9.4/static/ddl-schemas.html

u/po8 · 94 points · r/haskell

If you're trying to make functional programming a pariah, I can heartily recommend writing all your programs in Church encoded lambda calculus.

I mean that's what the author is ultimately advocating, right? That gets rid of all the Booleans and conditionals that make programming hard.

I get it. Ever since Dijkstra (his name is easy to spell if you remember it has the first three Fortran integers in it in order) did his "no gotos" thing, everybody has wanted to be that guy and explain how getting rid of another major programming feature will actually make programs better. Indeed functional programming does exactly that by considering programming with storage harmful.

But "no reification" is a bridge too far. That's what's being advocated here: never lift a decision into the domain of values. When you get beyond the whole mystic "just one bit" thing, Booleans aren't special. For example, anytime you pass an integer to say how many times to do something, you could instead have passed a lambda that does it the desired number of times. That's how Church encoding works: you literally pass the replicable thing to the function representing the integer, and voila.

The resulting mess is almost completely unreadable to most programmers, because reification is a reflection of how humans think about the real world. Counting is reification, for pity's sake: you replace some set with the number of elements precisely to throw away inessential information and retain only the part that is semantically meaningful in your situation. The same is true of Booleanization: you don't need to know or care about where the bit came from: you just care about true or false.

To make this more concrete, think about the cognitive burden on the programmer in the "good" and "bad" examples in the article. In the "bad" case, the programmer has to keep track of which of two Booleans means what. In the "good" case, she has to keep track of two arbitrary computations, and verify that they are appropriate for the use case in question.

I find myself writing less and less Haskell as I have to deal with more and more of other people's code that looks like this. I'm a not-too-dumb guy with an MS in programming languages and 30 years of functional programming experience. I was on a student's math dissertation committee last week. This coding style baffles me.

I've spent the last few days learning Rust, and the amount of this style of code in the standard libraries is close to making me give up. The overgenerality and confusion makes simple things really hard, and hard things impossible.

So if this is where everybody is headed, I'll go back to writing C. It's a horrible language for safety, but the code most people put into production stuff is comprehensible and maintainable. No one gets mad at me for writing an if or using a Boolean. Sorry, but I like that.

Edit: If you want to see how this approach looks in Java, I heartily recommend Felleisen et al's A Little Java, A Few Patterns. It's an amazing book...

u/Herald_MJ · 2 points · r/haskell

I've found Haskell: The Craft of Functional Programming to be great. RWH is naturally better for real-world examples though. That would be CoFP's main downfall.

u/ilkkah · 4 points · r/haskell

This might suffice

> Standard C++ and the design and programming styles it supports owe a debt to the functional languages, especially to ML. Early variants of ML's type deduction mechanisms were (together with much else) part of the inspiration of templates. Some of the more effective functional programming techniques were part of the inspiration of the STL and the use of function objects in C++. On the other hand, the functional community missed the boat with object-oriented programming, and few of the languages and tools from that community benefited from the maturing experience of large-scale industrial use.

I remember that he discussed the idea in The C++ Programming Language book, but I cannot find the right passage on the interwebs.

u/sleepingsquirrel · 1 point · r/haskell

There are some advanced Logo environments:

u/Faucelme · 3 points · r/haskell

I got a chuckle out of the Lol Monoid instance.

That said, there's room for a "Haskell for the impatient"-style book.

u/jberryman · 1 point · r/haskell

Sounds awesome. Think I'll pick up this book and maybe in a year I'll not be utterly unqualified for this.

u/wjv · 6 points · r/haskell

> You can get away with using Python now, in my mind, and this is a feat unimaginable 5 years ago. But I never want to.

Not even with the interactive beauty and wonderfulness of IPython Notebooks? :)

> Bokeh looks nicer than raw matplotlib, but I'm not sure why it reminds you of ggplot

Because both are explicitly based on The Grammar of Graphics (the "gg" in "ggplot").

> Copying Matlab style plotting has always been a mistake in my mind.

Again, it's explicitly a goal of Bokeh to leverage the experience of existing R/ggplot users in much the same way that matplotlib tried to appeal to Matlab users.

Agreed that I don't like matplotlib's imperative style, but much of its functionality is now exposed via multiple APIs — it's now possible to use it much "less imperatively".

u/globules · 2 points · r/haskell

For what it's worth, I'm currently reading Bird & Wadler's Introduction to Functional Programming and they use list comprehensions quite heavily. It's from 1988 though... :-)
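For anyone who hasn't seen that style, a quick example of the comprehension-heavy flavour (mine, not taken from the book):

```haskell
-- All Pythagorean triples with hypotenuse at most 20, comprehension style.
triples :: [(Int, Int, Int)]
triples = [ (a, b, c) | c <- [1 .. 20], b <- [1 .. c], a <- [1 .. b]
                      , a*a + b*b == c*c ]
```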

u/ReinH · 4 points · r/haskell

Check out Simon Thompson's latest edition of Haskell: The Craft of Functional Programming. I have a few other recommendations here.

u/ninereeds314 · 3 points · r/haskell

This is a nice video, but my problem is that I already understood (non-partial) derivatives of regular expressions from the Brzozowski paper. To me, the derivatives method almost but not quite told me what I already knew, as I first learned how to handle regular expressions/grammars from the Grune, Bal, Jacobs and Langendoen book, which uses a dotted-rules approach, and I had already adapted that to a "some, not necessarily canonical, representation of the tail set" mental model.
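For readers who haven't seen the construction, a minimal Haskell sketch of the (non-partial) Brzozowski derivative; the data type and names here are mine, not from the paper or the video.

```haskell
data Re = Empty           -- matches nothing
        | Eps             -- matches only the empty string
        | Chr Char
        | Alt Re Re
        | Seq Re Re
        | Star Re

-- Does the expression accept the empty string?
nullable :: Re -> Bool
nullable Empty     = False
nullable Eps       = True
nullable (Chr _)   = False
nullable (Alt a b) = nullable a || nullable b
nullable (Seq a b) = nullable a && nullable b
nullable (Star _)  = True

-- Derivative of a regular expression with respect to one character.
deriv :: Char -> Re -> Re
deriv _ Empty     = Empty
deriv _ Eps       = Empty
deriv c (Chr d)   = if c == d then Eps else Empty
deriv c (Alt a b) = Alt (deriv c a) (deriv c b)
deriv c (Seq a b)
  | nullable a    = Alt (Seq (deriv c a) b) (deriv c b)
  | otherwise     = Seq (deriv c a) b
deriv c (Star a)  = Seq (deriv c a) (Star a)

-- Matching is repeated derivation followed by a nullability test; the
-- distinct derivatives are the states of a (not necessarily minimal) DFA.
matches :: Re -> String -> Bool
matches r = nullable . foldl (flip deriv) r
```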

However, the little I know about partial derivatives of regular expressions is that in general the method yields NFAs which aren't necessarily DFAs. It's no surprise that the two concepts are very similar (after all derivatives and partial derivatives in calculus are mostly manipulated the same way, but the partial notation reminds you of a few things you can't do, which I have unfortunately forgotten) but there is presumably some difference.

What the video shows ends with not-necessarily-minimal DFAs, not with NFAs, so presumably it's the non-partial derivative method described by Brzozowski and not the partial derivatives method. Because it's dealing with DFAs, the Brzozowski paper can also address minimization (there's one well-defined minimal DFA form for any DFA, which you can't claim for NFAs), also using a derivatives-based approach, though personally I particularly like the Hopcroft method - there's a link to the 1971 paper in the references of the Wikipedia minimization page. Mainly because of a serendipitous bug in my first attempted implementation, which I now think of as "unsafe minimization" (the automaton behaves correctly on any given input, but may accept invalid input sequences - thereby getting a bit more minimization in contexts where you know the input sequences will always be valid anyway).

Anyway, presumably a partial derivative has the possibility of two or more transitions out of the same state for the same token, and thus there can be two or more partial derivatives of the same expression with respect to the same token. But doing that arbitrarily seems doomed to blow-ups, so can anyone explain the actual difference between partial and non-partial derivatives?

BTW - I've tried to read the partial derivatives paper a few times, but not got very far, and the same thing happened on the first few attempts at both Hopcroft's and Brzozowski's papers. A lot of academic papers give me serious headaches, I'm afraid. I'm probably overdue for another attempt, though.