Reddit reviews Introduction to Algorithms, Second Edition
We found 30 Reddit comments about Introduction to Algorithms, Second Edition. Here are the top ones, ranked by their Reddit score.
Thank you all for your responses! I have compiled a list of books mentioned by at least three different people below. Since some books have abbreviations (SICP) or colloquial names (Dragon Book), not to mention the occasional omission of a starting "a" or "the", this was done by hand, and as a result it may contain errors.
edit: This list is now books mentioned by at least three people (was two) and contains posts up to icepack's.
edit: Updated with links to Amazon.com. These are not affiliate - Amazon was picked because they provide the most uniform way to compare books.
edit: Updated up to redline6561
CLR: http://www.amazon.com/exec/obidos/ASIN/0262032937
Three books that come to mind:
Types And Programming Languages by Benjamin Pierce covers the ins and outs of Damas-Milner-style type inference, and how to build the bulk of a compiler. Moreover, it talks about why certain extensions to type systems yield type systems that are not inferrable, or worse, may not terminate. It is very useful in that it helps you build an understanding of what the compiler can and cannot do.
Purely Functional Data Structures by Chris Okasaki covers how to do things efficiently in a purely functional (lazy or strict) setting, and how to reason about asymptotics in that setting. Given the 'functional programming is the way of the future' mindset that pervades the industry, it's a good idea to explore and understand how to reason in this way.
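To make the idea concrete, here's a toy persistent FIFO queue in Python, a simplified cousin of the two-list queues Okasaki analyzes (my own sketch, not code from the book): every operation returns a new queue, and old versions stay valid.

```python
# A persistent (immutable) FIFO queue built from two linked-list stacks.
# Lists are cons chains: (head, rest) pairs ending in None.

EMPTY = (None, None)  # (front list, rear list)

def enqueue(q, x):
    front, rear = q
    return _check(front, (x, rear))          # push onto the rear stack

def dequeue(q):
    front, rear = q
    head, front = front                      # pop from the front stack
    return head, _check(front, rear)

def _reverse(lst):
    out = None
    while lst is not None:
        head, lst = lst
        out = (head, out)
    return out

def _check(front, rear):
    # Invariant: front is empty only when the whole queue is empty,
    # so when front runs out we reverse the rear into a new front.
    if front is None:
        return (_reverse(rear), None)
    return (front, rear)

q1 = enqueue(enqueue(enqueue(EMPTY, 1), 2), 3)
x, q2 = dequeue(q1)
print(x)              # 1
y, _ = dequeue(q2)
print(y)              # 2
y2, _ = dequeue(q2)   # q2 was not mutated by the previous dequeue
print(y2)             # 2
```

The reversal looks expensive, but amortized over a sequence of operations each element is reversed at most once, which is the style of analysis the book teaches.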
Introduction to Algorithms by Cormen et al. covers a ton of imperative algorithms in pretty good detail and serves as a great toolbox for when you aren't sure what tool you need.
That should total out to around $250.
Introduction to Algorithms, Second Edition. Of course, don't miss Mr. Krumins' video lectures, a perfect complement.
CLRS
No. People say Cormen's Intro to Algorithms text is dense and inaccessible (though I thought it was very accessible when I first encountered it in college). TAoCP is way more dense and inaccessible than Intro to Algorithms.
I would recommend Intro to Algorithms if you want to seriously dive into algorithms. The newest version features improved pseudocode, which is pretty clear and instructive—especially compared to the MIX assembly code in TAoCP.
Doesn't get better than CLRS.
EDIT: My bad, that's a dead-trees thing.
Skip Calculus (not really useful unless you do fancy graphics or sound generators or scientific stuff). Discrete mathematics is what you want to look at for CS. You might want to move on to a linear algebra course from there.
Get the CS specific University textbooks. Here are some to get you started.
As others mentioned, your question is pretty hard to answer in a post, even if it is pretty interesting. But having done a bit of research in discrete math & algorithm design, and being a math lover myself, I'll try to talk about why I find my area interesting.
Being a software developer, you should pretty much know what an algorithm is. Given a problem, an algorithm is a recipe/step-by-step set of instructions that, if followed exactly, solves your problem. Algorithms can be classified into a lot of categories: deterministic algorithms (given some input, you always get the same right answer), probabilistic algorithms (you get the right answer 'on average'), approximation algorithms (you get an answer that is within some (provable) factor of the optimal answer) and so on.

The main measure (there are multiple others) of the performance of an algorithm is the time it takes to find an answer with respect to the size of the input you give it. For example, if your problem is to search through an unordered list of n items for a given item, then the trivial solution (look at every item) takes time n. This is what we call linear in n, but algorithms can also run logarithmic in n (time log(n), very fast), exponential in n (c^n for some c, very slow) or polynomial in n (n^k for some k, considered efficient).
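To make the linear-vs-logarithmic contrast concrete, here's a quick sketch (my own illustration, not from any of the books below):

```python
# Linear search looks at every item in the worst case: time proportional to n.
def linear_search(items, target):
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

# Binary search on a SORTED list halves the remaining range each step:
# time proportional to log(n).
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
idx, steps = binary_search(data, 999_999)
print(idx, steps)   # 999999 20  -- 20 comparisons vs. a million for linear search
```

A million items, twenty comparisons: that's the log(n) vs. n gap in practice.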
So the fun part is to find THE best algorithm for your problem. For a particular problem, one may ask how fast can the best algorithm run. Well surprisingly, it turns out that some problems will never admit an efficient EXACT solution. I'll give an example of such problem, and then come to the main point of my discussion.
Consider a graph/network (we love these things in discrete math), which is basically a set of points/nodes/vertices connected by links/edges; its size is usually the number of nodes (the maximum number of edges of a (simple) graph is n(n-1)/2, the number of pairs you can make with n elements). The Internet is the best example: nodes are webpages and edges are hyperlinks between these pages. We say that two nodes are neighbors if they are connected by an edge.

A fundamental problem of discrete mathematics goes as follows: what is the minimum number k such that you can color the nodes of a graph with k colors so that no two neighboring nodes share the same color? (http://en.wikipedia.org/wiki/Graph_coloring). It turns out that this problem can be proven to be inherently hard: if we can find an efficient deterministic algorithm to solve it (we strongly believe we can't), then there is an efficient (=fast) algorithm for many "hard" problems (e.g. proving a theorem, or solving a sudoku! - probably not going to happen). Such hard problems are said to be NP-complete. It also turns out that most real-life (interesting) problems are that kind of hard (http://en.wikipedia.org/wiki/List_of_NP-complete_problems).
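As a concrete taste: finding the true minimum k is NP-complete, but a greedy heuristic colors a graph quickly, with no optimality guarantee (my own sketch):

```python
# Greedy graph coloring: fast, but only a heuristic. Finding the true
# minimum number of colors is NP-complete, as described above.

def greedy_coloring(adjacency):
    """adjacency: dict mapping node -> set of neighboring nodes."""
    color = {}
    for node in adjacency:
        taken = {color[nbr] for nbr in adjacency[node] if nbr in color}
        # Give the node the smallest color not used by any already-colored neighbor.
        c = 0
        while c in taken:
            c += 1
        color[node] = c
    return color

# A 4-cycle a-b-c-d-a is 2-colorable, and greedy finds that here.
graph = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
coloring = greedy_coloring(graph)
print(coloring)   # {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```

On other graphs (or other visit orders) greedy can waste colors badly; closing that gap is exactly where the hardness lives.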
This sounds quite desperate. However, here is where the research starts. These NP-complete problems are believed not to have efficient DETERMINISTIC and EXACT algorithms, but nothing prevents us from producing randomized and approximate solutions. So with some clever analysis, input from other areas (read: algebra, geometry, probability, ...) and other tricks, algorithms are built that find solutions usually within a good factor of the optimal. Hell, some NP-complete problems (e.g. the Knapsack Problem) even admit an arbitrarily precise efficient algorithm. How is this possible? Reading required!
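For a hint of how the Knapsack Problem can be tractable in practice: the exact dynamic program below runs in time O(n * capacity), which is "pseudo-polynomial" (polynomial in the magnitude of the capacity, not its bit length), and the arbitrarily precise scheme mentioned above works by rounding the values to a coarser scale and running a DP like this on the scaled instance. My own sketch:

```python
# 0/1 knapsack by dynamic programming: exact, O(n * capacity) time.

def knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the best achievable value."""
    best = [0] * (capacity + 1)    # best[w] = max value within weight budget w
    for value, weight in items:
        # Descending order over w so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

items = [(60, 10), (100, 20), (120, 30)]
print(knapsack(items, 50))   # 220: take the 100 and 120 items (weight 20 + 30)
```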
I don't know of non-academic books that cover this subject, but if you are motivated, here are the books I suggest:
I guess there weren't a lot of fireworks, but the goal was really to show an area of math that is not often brought up in 'mainstream math discussions'. So feel free to ask questions; I'll try to answer them as far as my knowledge goes.
P.S. I guess I've put down the foundations for the discrete math/algorithm design fortress, so every knowledgeable person should add his/her own comment!
Hi RandomNumber37,
So here's a little bit about me first; I don't want to misrepresent myself. My background is in economics and political science, where I was interested in statistical models that predict rare international events like war and state failure. It's here I became obsessed with statistics, machine learning, etc. Also, I've been programming in many languages since I was a kid, so after my undergraduate work in the social sciences and statistics, I took a job with a bioinformatics group doing coding. I thought this would be a temporary job until graduate school in economics or quantitative political science.
However working with large-scale biological and sequencing data was way more awesome than I expected. This caused me to shift focus. I also did a fair amount of work on computational statistics, i.e. ways of trying to make R better, understanding compiler technologies, etc. So after, I became more purely interested in statistics and computational biology, and I thought I would go to graduate school for pure statistics so I could also devote some time to computational statistics. However, now I work in a plant breeding lab (which I absolutely love). I will do this about another 2-3 years before I transition into a graduate program. This would mean I've worked in the field about 6 years before applying to graduate programs.
So, with that out of the way here are answers to your questions and some advice I offer:
Being that my background isn't in biology, I don't currently work with plants much. However, this is why I moved towards plant biology. Before getting obsessed about social science methods, I loved plants. I worked at an orchid greenhouse, and actually went to UC Davis thinking I'd study plant biology (until an awesome political science professor got me excited about science applied to political data). However, the scientists I work with are often not doing too much work with plants: many grow the plants, do the wet lab work, then spend more than half the time (sometimes up to 90%) analyzing the huge amount of data. I spend my full day in front of a computer, except when a colleague wants me to check out something cool in the lab, etc.
Everything. We get raw sequencing data, and I have to analyze it from start to finish: from raw sequencing files to the point where the numbers behind them tell a story. I also spend a huge amount of my time writing programs that do certain things for biologists in our group. Everything: protein prediction, data quality analysis, statistical modeling, etc.
Yes, at this current position I am starting to (which is why I sought work in plant biology). It depends on what plant you work with (Arabidopsis = short life cycle, you can do lots of stuff, vs. a citrus tree = long life cycle, you can't). But some of the more awesome longer-term projects will take 4 years to fully materialize.
So now, what steps were most important? I will tell you the three things that have helped me the most. As a measure of how much they've helped me, I'll just mention that despite not having a PhD (yet), or much of a background in biology other than what I've taught myself or learned on the job (which is actually quite a lot after 4 years in the field), I've had (and continue to receive) really nice job offers.
Anyways, I'd recommend learning:
Now, optionally (but highly, highly recommended):
Now, even more optionally:
Another quick story: I had a statistics graduate student come tell me he was working for a rather well known genomics professor on campus. He asked me how to analyze RNA-seq data. He said he wanted to use ANOVA. Even though he was a statistics graduate student, he went immediately to normality-assuming models, which was definitely not appropriate for this data. So know your Poisson, negative binomial, gamma, etc. distributions. A probability course should introduce them all to you. It will also mean that when you start learning more theoretical population genetics, you'll be set.
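To see why the count models matter: a Poisson's mean and variance are locked together at lambda, something a normality-assuming model with a free-floating variance ignores. A quick check straight from the pmf (my own illustration):

```python
import math

# For a Poisson(lam) variable, mean and variance both equal lam.
# Plain ANOVA assumes normal errors with a variance unrelated to the mean,
# which mis-specifies count data like sequencing reads.

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 5.0
ks = range(60)   # truncation; the tail mass beyond 60 is negligible for lam=5
mean = sum(k * poisson_pmf(k, lam) for k in ks)
var = sum((k - mean) ** 2 * poisson_pmf(k, lam) for k in ks)
print(round(mean, 6), round(var, 6))   # 5.0 5.0
```

Real RNA-seq counts are usually even more spread out than Poisson allows (variance greater than the mean), which is why the negative binomial shows up so often in that literature.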
Also, buy a book on machine learning (Elements of Statistical Learning II is good, and a free PDF is available). It's dense, but don't let that discourage you. I also like this book; again, it's dense stuff, so don't let that discourage you either.
Have you ever considered reading the SICP book from MIT? You could also try an algorithms book, and brush up on your design patterns.
The book has a different approach than standard textbooks. While all other books I have read classify algorithms by the problem they solve, this book classifies algorithms by the technique used to derive them (TOC). I felt that the book tries to teach the thought process behind crafting algorithms, which, to me, is more important than memorizing details about a specific algorithm. The book is not exhaustive, and many difficult topics are ignored (deletions in Red-Black trees, for example). The book is written in an engaging style, and not the typical dry academic prose. I also liked the use of puzzles as exercises.
The only other comparable book, in terms of style and readability, is that of Dasgupta, Papadimitriou and Vazirani. But I like Levitin's book better (only slightly). These two books (+ the MIT videos) got me started in algorithms; I had to read texts like CLRS for thoroughness later. If someone is starting to study the topic my recommendation would be to read Levitin and get introduced to the breadth of the topic, and then read CLRS to drill deeper into individual problems and their details.
This was the algorithms book we used at UW Madison
http://www.amazon.com/Introduction-Algorithms-Second-Thomas-Cormen/dp/0262032937
And this was the AI book
http://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597/ref=sr_1_1?ie=UTF8&s=books&qid=1278078229&sr=1-1
I suck at math and physics so can't help you there.
Project Euler problems won't help you as much as some other places.
I recommend the following:
Also, read Introduction to Algorithms
I'm no pro, but I have managed to get to online round 2 for the past two years through doing some of the above. If I do get around to reading that book and doing topcoder SRMs, I'm sure I can get past that round.
If you are enjoying your Calc 3 book, I highly recommend reading Topology, which provides the foundations of analysis and calculus. Two other books I would highly recommend to you would be Abstract Algebra and Introduction to Algorithms, though I suspect you're well aware of the latter.
If you want to save money on books for college here's better advice:
I can honestly say it saved me at least a few hundred dollars. The only books I ended up keeping were the literary books (looks good on bookshelves) and the reference-type books (any c.s. grads out there?). Then save the extra $200 and use it on a mini-fridge.
In my EE degree we had an algorithms class, and I happen to have a copy of this book. It is massive, but I probably should consider cracking that thing open and going over it again. As with many things, if you don't use it you tend to lose it, and I bet I'm pretty rusty on most of the things covered in that book.
I will check out "Cracking the Coding Interview". Thanks for the suggestion.
You want to start with an intro to algorithms book.
https://www.amazon.com/dp/0262032937/?tag=stackoverfl08-20
Dynamic programming is all about optimal substructure. Wikipedia's explanation isn't that great, so if you want to really grok it, go get your hands on a copy of Introduction to Algorithms aka CLRS.
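For a taste of optimal substructure, here's the rod-cutting example CLRS uses to introduce dynamic programming, sketched in Python (my own code, with the book's sample price table):

```python
from functools import lru_cache

# Rod cutting: given prices for rod lengths, find the best revenue from
# cutting an n-inch rod into pieces. Optimal substructure: the best revenue
# for length n is the best first cut plus the best revenue for the remainder,
# so sub-solutions can be cached and reused instead of recomputed.

price = {1: 1, 2: 5, 3: 8, 4: 9, 5: 10, 6: 17, 7: 17, 8: 20}

@lru_cache(maxsize=None)
def best_revenue(n):
    if n == 0:
        return 0
    return max(price[i] + best_revenue(n - i) for i in range(1, n + 1))

print(best_revenue(8))   # 22: cut into lengths 2 and 6 (prices 5 + 17)
```

Without the cache this recursion is exponential; with it, each subproblem is solved once, which is the whole trick.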
Buy this book. Code and test every algorithm in any language. Learn discrete math.
IT and web dev are trades, so you need to learn DBs and networking and sysadmin stuff. CS is engineering, so you need to learn abstract concepts and how to apply them to real problems. Lots of faux-CS programs are really trade schools. If that's your situation, then you've got decide which path you want.
Is intro to algorithms really considered one of the hardest across all undergrad?? I've taken it at another university, using the MIT text and exams, and there were many harder courses even in my major...
No. But from my own experience it is not worth it. When new editions are released yearly, the content is not that different (but they love to "reorganize" it). Now, they try to get new books only when it is worth it. For example, you see big differences between Introduction to Algorithms, 2nd edition (2001) and the third edition (2009).
Why is that version $113.29 yet this one is $65?
If you can't afford either, try this.
Anything can be self-taught if you're determined enough. Buy a good book on algorithms (http://www.amazon.com/Introduction-Algorithms-Second-Thomas-Cormen/dp/0262032937) and watch http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/. That book is definitive, so if you finish it and learn it, it's all you'll really need to get ahead of most programmers.
In terms of math, you'll pick it up. Recursion and Big-O notation were surprisingly easy to understand after watching a few lectures, without even looking at the book.
http://www.amazon.com/Purely-Functional-Structures-Chris-Okasaki/dp/0521663504
http://www.amazon.com/Introduction-Algorithms-Second-Thomas-Cormen/dp/0262032937
Seriously, that book is awesome. It seems like, for a while, my CS professors were able to pick whatever books they wanted in teaching their classes. So we had theory of computation with Sipser, algorithms and complexity with this book (not sure if it has a shorthand reference), compilers was the Appel book by preference, with several others acceptable.
Then, I would guess, someone complained about "difficulty". The administration got involved and things are more "approachable" now. Not so wonderful anymore.
Also, for me this book handled abstract algebra in much the same way that Sipser handles the theory of computation.
>So what did you do? Anyone else have a formal CS education and feel like they came out of it with nothing?
I graduated in 2006 and I've been doing web development professionally for almost four years now. Until about two weeks ago, I felt like I could have skipped my entire five years at school because most of the stuff just doesn't apply to web development since it's so far abstracted from the hardware. I was reading my algorithms book on the toilet the other day when I realized that I learned a shitton at school and it gave me an incredible advantage over the guy who learned web development on the fly. It helps to go back and re-read things after you have a context to put them into, so you can apply the theory to what you've learned.
It took me a long time to start getting designs down. You have to make a lot of mistakes before you can learn from them. It's as simple as that. Don't get discouraged. If you haven't read Head First Design Patterns, buy that book right now and read it cover to cover. I had read design pattern catalogs, but none of them conveyed the 'why' as well as HFDP did. They don't do the abstract 'a car has wheels, a Ford is a car' thing. They have real code examples, and they show you why you should favor composition over inheritance, and why you should follow the Law of Demeter.
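The composition-over-inheritance point can be sketched with HFDP's duck example (my own minimal Python rendering, not the book's Java):

```python
# "Favor composition over inheritance": behavior lives in small swappable
# objects instead of being baked into a rigid class hierarchy.

class FlyWithWings:
    def fly(self):
        return "flapping off!"

class CannotFly:
    def fly(self):
        return "staying put."

class Duck:
    def __init__(self, name, fly_behavior):
        self.name = name
        self.fly_behavior = fly_behavior   # composed, not inherited

    def perform_fly(self):
        return f"{self.name}: {self.fly_behavior.fly()}"

mallard = Duck("Mallard", FlyWithWings())
decoy = Duck("Decoy", CannotFly())
print(mallard.perform_fly())   # Mallard: flapping off!
print(decoy.perform_fly())     # Decoy: staying put.

# Behaviors can even be swapped at runtime, which a fixed
# inheritance hierarchy can't do.
decoy.fly_behavior = FlyWithWings()
```

With inheritance you'd need a FlyingDuck/NonFlyingDuck split, and every new behavior multiplies the subclasses; composition keeps one Duck class and N small behaviors.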
I've entertained the notion of starting over several times. Don't quit, and don't get discouraged. If you ever get to the point where you think you've learned all you need to learn and you're writing code that can't be improved, start over.
Look up Introduction to Algorithms by Cormen et al. It is pretty much a computation major's bible, after doing the ACM for 2 years it became my reference guide for almost everything. Really, every CS student should have this on their bookshelf.
Also, it's worth considering doing problems on TopCoder, uva Online Judge etc. These are generally "practical" implementations of the Algorithms, where you are given a scenario which can be solved through some computational algorithm (Normally implemented in a tricky way). The key is that it tests you on all sorts of data, and almost always includes those exceptional/boundary cases. The questions from UVA are generally past ACM Programming Competitions.
The reason I recommend this is that you said you enjoy implementing Algorithms. You may find that getting "real-world" applications (Even of they are a bit... crazy sometimes) makes the work even more enjoyable, and exposes you to using the algorithms in ways you may not have even thought of.
When I took my algorithms class using Cormen's Introduction to Algorithms, the first thing we did in class when looking at a new algorithm was to rewrite its terrible pseudocode with more descriptive names (and more sensible syntax - doubly nested for loops over one-letter indices are silly).
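In that spirit, here's the book's insertion sort with the one-letter pseudocode names (A, i, j, key) replaced by descriptive ones (my own rewrite):

```python
# CLRS insertion sort, renamed for readability.

def insertion_sort(values):
    for unsorted_start in range(1, len(values)):
        current = values[unsorted_start]
        # Shift larger elements of the sorted prefix one slot to the right...
        slot = unsorted_start
        while slot > 0 and values[slot - 1] > current:
            values[slot] = values[slot - 1]
            slot -= 1
        # ...then drop the current element into the gap that opened up.
        values[slot] = current
    return values

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]
```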
I would highly recommend this book, if you are at all interested in algorithm design and analysis.
[EDIT] Just finished a very difficult algorithms course; this was my bible.