(Part 3) Best AI & machine learning books according to redditors


We found 3,368 Reddit comments discussing the best AI & machine learning books. We ranked the 567 resulting products by number of redditors who mentioned them. Here are the products ranked 41-60. You can also go back to the previous section.


Subcategories:

Artificial intelligence & science books
Computer neural networks books
Artificial intelligence books
Machine theory books
Computer vision & graphics books
Natural language processing books

Top Reddit comments about AI & Machine Learning:

u/lukeprog · 172 pointsr/Futurology

I have a pretty wide probability distribution over the year for the first creation of superhuman AI, with a mode around 2060 (conditioning on no other existential catastrophes hitting us first). Many AI people predict superhuman AI sooner than this, though — including Rich Sutton, who quite literally wrote the book on reinforcement learning.

Once AI can drive cars better than humans can, then humanity will decide that driving cars was something that never required much "intelligence" in the first place, just like they did with chess. So I don't think driverless cars will cause people to believe that superhuman AI is coming soon — and it shouldn't, anyway.

When the military has fully autonomous battlefield robots, or a machine passes an in-person Turing test, then people will start taking AI seriously.

Amusing note: Some military big-shots say things like "We'll never build fully-autonomous combat AIs; we'll never take humans out of the loop" (see Wired for War). Meanwhile, the U.S. military spends millions to get roboticist Ronald Arkin and his team to research and write the book Governing Lethal Behavior in Autonomous Robots. (One of the few serious works in the field of "machine ethics", BTW.)

u/zorfbee · 32 pointsr/artificial

Reading some books would be a good idea.

u/seabass · 18 pointsr/datascience

The "bible" is "The Grammar of Graphics" by Leland Wilkinson. (link to amazon). The "gg" of ggplot2 stands for grammar of graphics.

Then we go into other books, resources that help with actually showing visualizations:

u/pt2091 · 17 pointsr/datascience

http://neuralnetworksanddeeplearning.com/chap1.html
For neural networks and understanding the fundamentals behind backpropagation.
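
As a rough illustration of the kind of computation that material builds toward (a generic sketch, not code from that site), here is a single sigmoid neuron trained by gradient descent in Python/NumPy, with the chain-rule steps that backpropagation generalizes to many layers:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy dataset: learn the AND function (values invented for illustration)
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([0.0, 0.0, 0.0, 1.0])

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=2), 0.0

    for epoch in range(5000):
        a = sigmoid(X @ w + b)         # forward pass
        grad_a = a - y                 # derivative of squared error w.r.t. the output
        grad_z = grad_a * a * (1 - a)  # chain rule through the sigmoid
        grad_w = X.T @ grad_z          # chain rule through the weights
        grad_b = grad_z.sum()
        w -= 1.0 * grad_w              # gradient-descent update
        b -= 1.0 * grad_b

    print(np.round(sigmoid(X @ w + b), 2))  # trends toward [0, 0, 0, 1]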

http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Fourth%20Printing.pdf
(the book is free; there's also an online course on it called Statistical Learning)

http://www.amazon.com/Information-Theory-Inference-Learning-Algorithms/dp/0521642981
the author of this also has a set of lectures online:
http://videolectures.net/david_mackay/

Personally, I found it easier to learn linear algebra from the perspective of a physicist. I never really liked the pure theoretical approach. But here's a dude that likes that approach:
https://www.youtube.com/channel/UCr22xikWUK2yUW4YxOKXclQ/playlists

and you can't forget Strang:
http://ocw.mit.edu/courses/mathematics/18-085-computational-science-and-engineering-i-fall-2008/video-lectures/

I think the best community for questions on any of the exercises in these books or concepts in these lectures is CrossValidated. I think it's doubly helpful to answer other people's questions as well.

u/rjtavares · 16 pointsr/Python

> What is/are the benefits of ggplot compared to to matplotlib/pylab?

From my understanding, ggplot2 is an R package that aims to create graphics that follow the design principles from a book called "grammar of graphics". Also, they're pretty as hell with basically no effort required. This is an attempt to mimic that package in native Python.

Matplotlib is more powerful, and more customizable, but the default settings are ugly and sometimes almost illegible.
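
For what it's worth, matplotlib also ships with style sheets that soften those defaults; a quick sketch (made-up data):

    import numpy as np
    import matplotlib.pyplot as plt

    # The built-in "ggplot" style mimics ggplot2's look without extra tweaking
    plt.style.use("ggplot")

    x = np.linspace(0, 10, 100)
    plt.plot(x, np.sin(x), label="sin(x)")
    plt.plot(x, np.cos(x), label="cos(x)")
    plt.legend()
    plt.title("matplotlib with the ggplot style sheet")
    plt.show()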

u/drakonite · 16 pointsr/gamedev

You may want to narrow that down a bit, but okay, here are some highlights, with amazon links to help disambiguate.

u/bigshum · 15 pointsr/compsci

I found the WEKA toolkit to be a nice centralised resource when it came to learning about the multitude of techniques and parameters used out there. There's a book too, which is a very informative read if a little dry in places.

This was used in my Language Identification project for speech signals and it worked quite nicely.

u/CapaneusPrime · 14 pointsr/RStudio

Super minor nitpick:

R Studio is the development environment.

R is the language.

Presumably you want to become well versed in the latter rather than the former. It's an easy mistake to make though, since the two are so intertwined for most people as to become almost indistinguishable.

More to your point though:

Before learning anything, it's a good idea to ask yourself why you want to learn it, and what you hope to be able to do with it. Now, you mentioned two things,

  • Hypothesis testing.

  • Graphing 4 variables.

    Both of these are relatively simple, and if you have even the most rudimentary understanding of R, you could learn to do them in a couple of minutes.

    So, my question to you would be, in using R is your goal to get quick, simple answers to straightforward questions OR are you ultimately looking to be able to do much more complicated tasks? This isn't a judgemental question, not everyone needs to aspire to become an R god, just needing something quick and dirty is perfectly okay.

    If the things you mentioned are more or less the extent of your needs, I'd suggest just googling what you need to do at the time and pick up what you need, more or less, through osmosis.

    However, if you have designs on being able to do amazingly complicated things, if you want to push R to its fullest, you'll need a more structured approach.

    One thing you absolutely must understand is R is a package based language. What this means for you is that beyond the numerous ways you can do any task in any language, people have written countless* packages which contain all sorts of handy functions to do just about anything you could conceivably want to do.

    >* Okay, it's not really countless; there are (as of this writing) 12,620 packages on CRAN and 1,560 additional packages on Bioconductor. There are bunches more unofficial ones scattered about GitHub and others privately maintained, but you get the point: there's lots of them.

    So, for anything you want to do, you can approach it in one of two, very broad, ways:

  • Base R.

  • Using packages.

    When you are starting out, I think it's very important to get a good handle on Base R.

    I would start out with basically any introductory R book. Search on Amazon and just find one you like.

    Personally, I can recommend Using R for Introductory Statistics by John Verzani. It isn't for everyone, but if you're truly a beginner to both R and statistics more generally, it's a good reference text.

    After that, it's up to you where you want to take it. For me, the pantheon of R gods* I would pay tribute to are these four:

  • The god of tidiness - Hadley Wickham GitHub/u/hadley

  • The god of speed - Dirk Eddelbuettel GitHub

  • The god of art - Winston Chang GitHub

  • The god of sharing - Yihui Xie GitHub

    >*I'm sure every single person on that list would balk at being called a "god," but they'd be lying.

    It's no mistake that 3/4 of them work for R Studio.

    The god of tidiness.


    Hadley must be a complete neat-freak because he's the driving force behind the tidyverse,

    >The tidyverse is an opinionated collection of R packages designed for data science. All packages share an underlying design philosophy, grammar, and data structures.

    Once you branch out of base R, the tidyverse should be your first destination. It's not quite a new language unto itself, more like a very sophisticated dialect of the language you already know. Once you can speak "tidy," you can still communicate with the "base" speaking plebs, you just won't be able to imagine ever wanting to.*
    >* this is not exactly true, and might come across as gross and elitist, but the tidy paradigm really is substantially better. If you were designing a completely new language to do statistical computing, from scratch, today, the language would probably feel a lot like the tidyverse.

    Anyway, any book by Hadley Wickham is gold, and they're all available online for free. But R for Data Science is a good first step into a larger world.

    The god of speed.


    I imagine Dirk is not a patient man. He's very active on forums, basically every meaningful response on stackexchange for an Rcpp related question is his (or his collaborator, lesser-god Romain Francois), but sometimes his responses can seem a little... terse?

    Now, R is notoriously slow. It's much maligned for this, usually fairly, sometimes not.

    Much of the perceived slowness can be mitigated in base R by learning the suite of apply functions, which are vectorized. That is, they take a multivalued variable (a vector, matrix, or list) and apply the same function to each element. It's typically much, much faster than using a for-loop. However, you can't always get away from needing a for-loop, and sometimes your loop will need to run thousands (or millions) of times. That's where the Rcpp package, which Dirk maintains, comes into play.
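
    The loop-versus-vectorized trade-off isn't unique to R. As a rough analogy in Python/NumPy (not R's apply family itself, just the same idea of pushing the loop down into optimized native code):

        import numpy as np
        import timeit

        x = np.random.rand(1_000_000)

        def loop_sum_of_squares(values):
            # explicit element-by-element loop, analogous to a plain for-loop
            total = 0.0
            for v in values:
                total += v * v
            return total

        def vectorized_sum_of_squares(values):
            # whole-array operation handled by optimized native code
            return float(np.sum(values * values))

        print(timeit.timeit(lambda: loop_sum_of_squares(x), number=3))
        print(timeit.timeit(lambda: vectorized_sum_of_squares(x), number=3))  # typically orders of magnitude faster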

    It is an interface between R and C++, there's not much to say about the package itself. You'll need to learn at least some rudimentary C++ to make use of it, but simply breaking out a computationally intensive for-loop into an Rcpp function can yield a huge improvement in run times. 10x-100x (or more) depending on how well (or poorly) optimized your R and C++ code is. There's some weirdness involved (like you can't call an Rcpp function in a parallel apply function (separate package) unless your Rcpp function is loaded as part of a package, so for maximum benefit you'll need to learn how to write your own packages - praise be to Hadley).

    Rcpp includes some semantic "sugar" which allows you to write some things in C++ more like you would in R, but that's yet a third thing to learn.

    Also Rcpp, much like the tidyverse is more an ecosystem of interconnected packages than a single package.

    The god of art.


    Base R plots are ugly as sin. They just are, no one should use them ever, for any reason.*

    >*Exaggeration.

    That said, Winston's* ggplot2 is a revelation and a revolution in how graphics are created and presented.

    >* Yes, technically ggplot2 is also Hadley's and is part of the tidyverse, but Winston literally wrote the book on it. Okay, okay, Hadley technically created the package and has written books about it, I just find Chang's book more fitting to my needs.

    The "gg" in ggplot2 stands for "grammer of graphics", a common structure for describing the components of a visualization in a concise way.

    Learning ggplot2 will take you a long way toward being able to make beautiful graphical visualizations.
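
    To make the "grammar" idea concrete, here is roughly what the same layered, declarative style looks like in plotnine, a Python port of ggplot2 (toy data invented for illustration; only a sketch):

        import pandas as pd
        from plotnine import ggplot, aes, geom_point, labs

        # Toy data; the point is the grammar: data + aesthetic mappings + geometry layers
        df = pd.DataFrame({
            "weight": [2.1, 2.8, 3.2, 3.6, 4.0],
            "mpg": [33, 29, 24, 21, 18],
            "cyl": ["4", "4", "6", "6", "8"],
        })

        p = (
            ggplot(df, aes(x="weight", y="mpg", color="cyl"))  # map data columns to aesthetics
            + geom_point(size=3)                               # add a geometry layer
            + labs(x="Weight (1000 lbs)", y="Miles per gallon")
        )
        p.save("mpg_vs_weight.png")  # hypothetical output filename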

    The god of sharing.


    After you've learned all of the above, you can wrangle your messy data into something tidy and manageable, you can work on it cleanly and power through massive computations, and you can create stunning images from your data. But it all means nothing if you're the only one who sees it.

    This is where Yihui shines. He is the maintainer for the knitr package, and the author of Dynamic Documents with R and knitr. This will allow you to turn all of your work into PDFs or web pages to share with the world.

    It's super easy to get started with, much more complicated to master, but definitely worth it.

    To use it effectively, you'll need to learn rmarkdown, also by Yihui. You'll also want to start dabbling with LaTeX (if you're not proficient already), and to truly bend documents to your whim you'll need to learn to tinker with YAML.

    Closing remarks.


    It's a lot to master. Few ever will. Not everyone will agree on everything I've said, but I think the path to true mastery looks something like that.

    Best of luck!

u/maruahm · 13 pointsr/compsci

Always liked Introduction to Automata Theory, Languages, and Computation by Hopcroft and Ullman as an intro text. Undergraduate-level but good treatment of TCS.

If that's too basic, I recommend Theory of Computation by Kozen. It's roughly 1st-year graduate level, intended for those already with some background.

If that's too basic, for a research-level survey of TCS, take a look at Wigderson's Mathematics and Computation.

u/theekrat0s · 12 pointsr/fo4

I could write a book of information and opinions with dozens of sources and stuff, but I am gonna keep it simple and link you to these two things:
https://www.youtube.com/watch?v=r-jMdJHv1Lk
and
http://www.amazon.de/How-Create-Mind-Thought-Revealed/dp/0670025291

The big thing you gotta look at is consciousness. I am gonna copy paste some previous comments of mine in other threads that talk about that. Everything beyond this point is what I think and if you want you can ignore it and just rely on those 2 previous links because they are pretty much the starting point that leads to what I will say:

Someone asked this "Why don't people understand that gen3s are practically human?"

My answer/comment was:
Well, tbh the answer is simply that we don't know. We IRL are not advanced enough to know if they can be considered human. The brain pretty much works like a computer, with electric (and also chemical [PS: there is a supercomputer that uses those too, it's pretty cool]) signals. What makes humans different from machines is that we are conscious of ourselves and our surroundings. Yes, synths are conscious too BUT (this is a big but) they did not achieve that level of consciousness by themselves. It took humans hundreds of thousands of years of evolution to build this consciousness, and every newborn baby gets taught this consciousness passively by their parents. Growing up we just start understanding these things because everyone does; we leech off each other. Synths do that too, but they never achieved this themselves; after they get created, the Institute indoctrinates them with what they need to know. Their consciousness is just as artificial and synthetic as their bodies. The way they think and do and feel is all based upon how they see themselves and their surroundings, and ALL of that has been implanted into them, kinda like programming the basis of an AI, and after a while they also learn. (IRL AI are not as advanced and can only effectively learn simple tasks, but that is changing rapidly.) To TRULY determine if synths can gain and create their own consciousness (biological programming, pretty much), they have to be separated from any human contact. What happens if they grow up with different species, or with themselves, or alone? If they are able to build that up by themselves, then we could start calling them sentient beings. Without any of that data they are just fancy-pants machines. (Nick is an odd exception because he came from a human; for me he is more human than any Gen3.)
PS.: But hey, this is all just my opinion and this is the internet…so who cares.

The brain is grown but not the information inside it. A synth's mind can be wiped and reshaped. They most likely have a basic "programming" in them to begin with, given the fact that they can walk the second they get made/"born". And about that free and happy life thing: most are OK with being workers/slaves/tools. The minority of them escapes, and in the story they never give a reason as to why that movement got started; all synths are created equal, so why do some want to escape and some not? They have had the same life since they were created; they came from the same source and got taught the same basic skills needed for their work. When did they start to "think" for themselves and start to be sentient and want things such as freedom? The only logical answer (given what we know from the game) is that it depended on which humans they got into contact with more. There are people in the Institute that think they are humans, and most likely the synths working in that area are the ones getting those ideas of escaping. Long story short, all of this leads to what I said: they are not creating these thoughts and their consciousness by themselves, they are grabbing it, leeching it passively from the humans they work with. It's very much like an AI learning things step by step. And the synths that NEVER try to escape are the coursers. Why? Because they spend a lot of time with the department (the SRB) that treats the synths more as tools than humans, more than any other place in the Institute.
TLDR: Synths behave a LOT like how a futuristic AI would work and learn: never making their own decisions but instead leeching ideas and learning from their surroundings.

The thing is that humans, throughout their entire evolution, built that up. We build up that consciousness, and every generation benefits from and expands it; that's what a species does.
Yeah, you could consider a synth human because of that, but in my eyes it is just a very advanced AI being taught and given a consciousness that is not its own. The question is IF they can build something like that up by themselves, which is at this point in time simply impossible to answer.
Scientists are estimating that by 2023 they can rebuild how a human brain works with a computer; on top of that, there are supercomputers out right now that use both electrical and chemical signals to send information, just like the brain. If you combined those two and gave it the same ideas, beliefs, and skills that the Institute teaches the synths, would you consider that thing human?
I believe it is not the body that determines if they are human, not the flesh and bone, but what their mind is, and the synths are just a fancy hard drive up there (for me, that is). Nick is an exception because his memories are DIRECTLY from a human, and after that he built his own personality and consciousness in combination with his former self and the people that he ended up with after being kicked out of the Institute. Because he developed so much himself (not completely though, even the SS helps him with his quest), I consider him more human than any Gen3 synth out there.

Hope this helps!

u/lefnire · 12 pointsr/productivity

When I found my passion: machine learning. A few armchair books in (The Singularity Is Near; How to Create a Mind; The Master Algorithm) and I realized humanity is on the cusp of a breakthrough - possibly our most important moment in history - and I can be part of it. I find myself naturally focused on education, unlike my forced productivity while working web/mobile dev for some business analytics company. "I'm passionate about informative data," "I'm driven by beautiful user interfaces" - are you sure you're not convincing yourself? Biotech, AI... shit will alter the course of human history. Hunger is the best sauce; find that thing that drives you.

u/proggR · 10 pointsr/IAmA

Hello Ben, thank you for doing an AMA. I apologize for the long-windedness in advance. I added the bolded text as headings just to break things up a little, so if you don't have time to read through my full post I'd be happy to just get suggested readings as per the AMA Question section.

AMA Question

I'm interested more and more in AI but most of what I know has just been cobbled together from learning I've done in other subjects (psychology, sociology, programming, data modeling, etc), with everything but programming being just as hobby learning. AI interests me because it combines a number of subjects I've been interested in for years and tries to fit them all together. I have Society of Mind by Minsky and How to Create A Mind by Kurzweil at home but haven't started either yet. Do you have any follow up reading you would recommend for someone just starting to learn about AI that I could read once I've started/finished these books? I'm particularly interested in information/data modelling.

Feedback Request for Community AI Model

I had a number of long commutes to work when I was thinking about AI a lot and started to think about the idea of starting not with a single AI, but with a community of AI. Perhaps this is already how things are done and is nothing novel but like I said, I haven't done a lot of reading on AI specifically so I'm not sure the exact approaches being used.

My thought process is that the earliest humans could only identify incredibly simple patterns. We would have had to learn what makes a plant different from an animal, what was a predator and what was prey, etc. The complex patterns we identify now, we're only able to identify because the community has retained these patterns and passed them on to us, so we don't have to go through the trouble of re-determining them. If I were isolated at birth and presented with various objects, teaching myself with no feedback from peers what patterns can be derived from them would be a horribly arduous, if not impossible, task. By brute-forcing a single complex AI, we're locking the AI in a room by itself rather than providing it access to peers and a searchable history of patterns.

This made me think about how I would model a community of AI for which sharing information, for the purpose of bettering the global knowledge, is core to their existence. I've been planning a proof of concept for how I imagine this community AI model, but this AMA gives me a great chance to get feedback long before I commit any development time to it. If you see anything that wouldn't work, or that would work better in another way, or know of projects or readings that are heading in the same direction, I would love any and all feedback.

The Model

Instead of creating a single complex intelligent agent, you spawn a community of simple agents, and a special kind of agent I'm calling the zeitgeist agent, that acts as an intercessor for certain requests (more on that in a bit).

Agents each contain their own neural networks which data is mapped to, and a reference to each piece of information is stored as meta data to which "trust" values can be assigned which would relate to how "sure" the agent is of something.

Agents contain references to other agents they have interacted with, along with meta data about that agent including a rating for how much they trust them as a whole based on previous interactions, and how much they trust them for specific information domain based on previous interactions. Domain trust will also slowly allow agents to become "experts" within certain domains as they become go-tos for other agents within that domain. This allows agents to learn broadly, but have proficiencies emerge as a byproduct of more attention being given to one subject over another and this will vary from agent to agent depending on what they're exposed to and how their personal networks have evolved over time.

As an agent receives information, a number of things take place: it takes into account who gave it the information, how much it trusts that agent, how much it trusts that agent in that domain, how much trust the sending agent has placed on that information, and whether conflicting information exists within its own neural network; the receiving agent then determines whether to blindly trust the information, blindly distrust the information, or verify it with its peers.
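
As a purely hypothetical sketch of what one trust-weighted decision might look like (every class name, field, and threshold here is invented for illustration, not part of any existing framework):

    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        word: str                   # hash-like identifier provisioned by the zeitgeist agent
        value: object               # the information itself
        sender_trust: float         # overall trust in the sending agent (0..1)
        sender_domain_trust: float  # trust in the sender for this domain (0..1)
        sender_confidence: float    # trust the sender placed on the claim (0..1)

    @dataclass
    class Agent:
        name: str
        beliefs: dict = field(default_factory=dict)  # word -> (value, own trust)

        def evaluate(self, claim, accept=0.8, reject=0.2):
            # Combine the trust signals into one score; a fuller model would also
            # weigh conflicts against the agent's own neural network.
            score = claim.sender_trust * claim.sender_domain_trust * claim.sender_confidence
            known = self.beliefs.get(claim.word)
            if known is not None and known[0] != claim.value:
                score *= 0.5  # a conflicting prior belief lowers the score
            if score >= accept:
                self.beliefs[claim.word] = (claim.value, score)
                return "trust"
            if score <= reject:
                return "distrust"
            return "verify-with-peers"  # would trigger peer requests, then the zeitgeist

    agent = Agent("a1")
    print(agent.evaluate(Claim("w42", "predators have forward-facing eyes", 0.9, 0.95, 0.9)))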

Requests for verification are performed by finding peers who also know about this information, which is why a "language" will need to be used to allow for this interaction. I'm envisioning the language simply being a unique hash that can be translated to the received inputs that are used by the neural networks, and whenever a new piece of information is received the zeitgeist provisions a new "word" for it and updates a dictionary it maintains that is common to all agents within the community. When a word is passed between agents, if the receiving agent doesn't know the word, it requests the definition from the zeitgeist agent and then moves on to judging the information associated with the word.

When a verification request is made to peers, the same trust/distrust/verify evaluation is performed on the aggregate of responses, and if there is still some doubt, but not enough to dismiss the information entirely, the receiving agent can make a request to the zeitgeist. This is where I think the model gets interesting, but again it may be commonplace.

As agents age and die, rather than lose all the information they've collected, their state gets committed to the zeitgeist agent. Normal agents and the zeitgeist agent could be modelled relatively similarly, with these dead agents just acting as a different type of peer in an array. When requests are made to the zeitgeist agent, it can inspect the states of all past agents to determine if there is a trustworthy answer to return. If, after going through the trust/distrust/verify process, it's still in doubt, I'm imagining a network of these communities (because the model is meant to be distributed in nature) that can have the same request passed on to the zeitgeist agent of another community in order to pull "knowledge" from other, perhaps more powerful, communities.

Once the agent finally has its answer about how much trust to assign that information, if it conflicts with information received from other peers during this process, it can notify those peers that it has a different value for that information and inform them of the value, the trust it has assigned, and some way of tracing where this trust was derived from, so that the agent being corrected can perform its own trust/distrust/verify process on the corrected information. This correction process is meant to make the system generally self-correcting, though bias can still present itself.

I'm picturing a cycle the agent goes through that includes phases of learning, teaching, reflecting, and procreating. Their lifespan and reproductive rates will be determined by certain values, including the amount of information they've acquired and verified, the amount of trust other agents have placed in them, and (this part I'm entirely unsure of how to implement) how much information they've determined a priori, which is to say that, through some type of self-reflection, they will identify patterns within their neural network, posit a "truth" from those patterns, and pass it into the community to be verified by other agents. There would also exist the ability to reflect on inconsistencies within their "psyche", or, put differently, to evaluate the trust values and make corrections as needed by making requests against the community to correct their data set with more up-to-date information.

Agents would require a single mate to replicate. Agent replication habits are based on status within the community (as determined by the ability to reason and the aggregate trust of the community in that agent), peer-to-peer trust, relationships (meaning the array of peers determines who the agent can approach for replicating with), and hereditary factors that reward or punish agents who are performing above/below par. The number of offspring the agent is able to create will be determined at birth, perhaps having a degree of flexibility depending on events within its life, and would be known to the agent so the agent can plan to have the most optimized offspring by selecting or accepting from the best partners. There would likely also be a reward for sharing true information, to allow for some branches to become just conduits of information, moving it through the community. Because replication relies on trust and the ability to collect validated knowledge, as well as being dependent on finding the most optimal partner, lines of agents who are consistently wrong or unable to reflect and produce anything meaningful to the community will slowly die off as their pool of partners shrinks.

The patterns at first would be incredibly simple, but by sharing information between peers, as well as between extended networks of peers, they could become more and more complex over time with patterns being passed down from one generation of agent to the next via the zeitgeist agent so the entire community would be learning from itself, much like how we have developed as a species.


Thanks again

I look forward to any feedback or reading you would recommend. I'm thinking of developing a basic proof of concept so feedback that could correct anything or could help fill in some of the blanks would be a huge help (especially the section about self reflection and determining new truths from patterns a priori). Thanks again for doing an AMA. AI really does have world changing possibilities and I'm excited to see the progress that's made on it over the next few decades and longer.

u/[deleted] · 9 pointsr/programming

>if one really cares about the programming craft, during his/her journey of ever improving his/her efficiency and effectiveness as a software programmer, one tends to come full circle and go back to the root, which is made of the stuff CS grads are forced to read about in university.

Well said, and a great book for those of us self-taught Java and/or scripting language programmers, for just this very purpose is Hyde's "Write Great Code: Understanding the Machine"

u/dan_lessin · 9 pointsr/videos

I learned about genetic algorithms from this book:

http://www.amazon.com/Introduction-Genetic-Algorithms-Complex-Adaptive/dp/0262631857

If you're at that point, I think you can code up some of the examples yourself without too much difficulty to gain a real understanding of it. I'm sure there are less technical introductions, too.
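
In that spirit, a bare-bones genetic algorithm (the classic "maximize the number of 1 bits" toy problem) fits in a few dozen lines of Python; this is a generic sketch with arbitrary parameters, not an example taken from the book:

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 40, 60, 80
    MUTATION_RATE = 1.0 / GENOME_LEN

    def fitness(genome):
        return sum(genome)  # number of 1 bits ("OneMax")

    def tournament(pop, k=3):
        return max(random.sample(pop, k), key=fitness)  # fittest of k random individuals

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)  # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        population = [mutate(crossover(tournament(population), tournament(population)))
                      for _ in range(POP_SIZE)]
        best = max(population, key=fitness)
        if fitness(best) == GENOME_LEN:
            break

    print(gen, fitness(best), "".join(map(str, best)))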

For neural networks, I'm less familiar, since my work doesn't use neural networks exactly, but rather a more general network of computing nodes (a lot like what Karl Sims used).

u/sleepingsquirrel · 9 pointsr/ECE
u/xamomax · 7 pointsr/Futurology

To understand what Google is likely to be doing, I highly recommend How to Create a Mind by Ray Kurzweil. Keep in mind that Kurzweil is now at Google, probably specifically for this project.

u/BobBeaney · 7 pointsr/compsci

Have a look at Nine Algorithms That Changed The Future. It sounds like this is just the book you're looking for.

u/kanak · 6 pointsr/compsci

I would start with Cover & Thomas' book, read concurrently with a serious probability book such as Resnick's or Feller's.

I would also take a look at Mackay's book later as it ties notions from Information theory and Inference together.

At this point, you have a grad-student level understanding of the field. I'm not sure what to do to go beyond this level.

For crypto, you should definitely take a look at Goldreich's books:

Foundations Vol 1

Foundations Vol 2

Modern Crypto

u/wjv · 6 pointsr/haskell

> You can get away with using Python now, in my mind, and this is a feat unimaginable 5 years ago. But I never want to.

Not even with the interactive beauty and wonderfulness of IPython Notebooks? :)

> Bokeh looks nicer than raw matplotlib, but I'm not sure why it reminds you of ggplot

Because both are explicitly based on The Grammar of Graphics (the "gg" in "ggplot").

> Copying Matlab style plotting has always been a mistake in my mind.

Again, it's explicitly a goal of Bokeh to leverage the experience of existing R/ggplot users in much the same way that matplotlib tried to appeal to Matlab users.

Agreed that I don't like matplotlib's imperative style, but much of its functionality is now exposed via multiple APIs — it's now possible to use it much "less imperatively".

u/mredding · 5 pointsr/compsci

I can't speak of a specific book that is a comprehensive history of computing, but I will speak to books that speak of our culture, our myths, and our heroes.

Hackers and Painters, by Paul Graham. People are polarized about the man, whether he's too "pie in the sky" - full of shit and ego, or if he speaks as an ambassador to our most optimistic ideals of our (comp-sci) culture. This book is a collection of his inspirational essays. It made me forego the societal pressures within our culture and reject popular opinion because it is merely popular and just an opinion, which is a virtue no matter who you are, where you are, or what you do. All these essays are on his website, though. If you want to review them, I recommend Hackers and Painters (the essay), What You Can't Say, Why Nerds are Unpopular, and The Age of the Essay; his oldest essays are at the bottom of the page and go up - he writes about what he's thinking about or working on at the time, so you'll see the subject matter change over time. So much of this will have direct application to his middle school and high school life. I cannot recommend this book, and the rest of his essays, enough.

If he wants to get into programming, I recommend The Pragmatic Programmer. This book talks about the software development process. I'm not going to lie, I don't know when best to introduce this book to him. It's not a hard read whatsoever, but it's abstract. I read it in college in my first months and said, "Ok," and put it down. Approaching the end of college and my first couple years in my profession, I would reread it every 6 months. It's a kind of book that doesn't mean anything, really, without experience, without having to live it, when he has an obligation to his craft, his profession. I tell you about this one since you're asking about books to tell him about, because this isn't something someone would normally come across without being told about it.

The Cathedral and the Bazaar is a telling book about the cultural differences between the proprietary monoliths like Apple and Microsoft, and the Free and Open Source Software communities that back such popular software as Linux (the most popular operating system on the planet, running on all top 500 supercomputers, most server computers on the internet, and all Android phones) and Chrome (the world's most popular web browser). Indeed, this book directly reflects the huge cultural battle that was duked out in the field, in the industry, and in the courts from the mid-90s and into the 2000s. It advocates helping the community, contributing to something larger than yourself, and the idea that none of us are as good as all of us. To paraphrase Linus Torvalds (creator of Linux): "Given enough eyeballs, all bugs are shallow."

It's important to know who the heroes are in our culture, and they are diverse and varied, they're not just computer scientists, but mathematicians, physicists, philosophers, science fiction writers, and more. I would find a good book on Nikola Tesla, since he invented basically everything anyway (Thomas Edison was a great businessman, but a bit of a tosser), Richard Feynman was a physicist who is still celebrated in his field, and he even worked for Thinking Machines, back in the day, which was a marvel of its time. Seymour Cray founded Cray Supercomputers and they have a lasting legacy in the field, a biography on that would be interesting. A biography on Symbolics and their Lisp Machines will make him yearn to find one still functioning (a rare gem that crops up every now and again, though he can run one in an emulator), and about the "AI Winter", a significant historic era (note: the AI Winter is over, and we are in spring, the history is both compelling and enthralling). Anything Isaac Asimov published (in nearly every category of the Dewey decimal system) is also compelling, and hardly dated. In fact, he's the originator of a lot of modern sci-fi. Charles Babbage invented the modern computer (though it was entirely mechanical in his day, and it wasn't actually built until 1996-2002) and Ada Lovelace was the world's first computer programmer. A woman! Speaking of women, and it's worth young men learning this about our history, Grace Hopper was a military computer engineer who popularized the term "bug".

And speaking of women, someone I have respect for is Sheri Graner Ray; especially if your boy wants to get into game development, her Gender Inclusive Game Design may be more appropriate when he's in high school, and I consider it required reading for anyone who wants to enter the gaming industry. The book lays out plainly how video games hyper-sexualize both women and - for some reason surprisingly to many - men, the disastrous effects this has for the game industry, the game market, and the gaming community, and insights on how we may combat it. I have seen colleagues (men) become indignant and personally offended at reading this book, but they were absolutely humbled when they took the fight to Sheri directly (we had a few phone interviews with her, always fantastic). If your boy found a problem with this book, he would do well to read Paul Graham's essay on keeping his identity small... The subject matter is not a personal attack on the individual, but on the blight, and he would be better served finding himself on the right side of history with this one; it would serve him well if he were to pursue this craft specifically, but also any forward-facing media in general.

And I also recommend some good books on math: algebra, linear algebra, calculus, and statistics. You can get very far, lead an entire career unto retirement, without knowing anything more than arithmetic and basic, basic algebra, but he could only serve himself well if he makes the decision that he is going to like maths and chooses to willfully become good at it. Outside the context of school and terrible teachers, it's actually an enthralling subject. Just get him a copy of Flatland, Flatterland, and Sphereland. Try this. There are books about proofs that break them down into layman's terms so that anyone can celebrate how special they are. My wife has a few on the shelf and I can't remember their titles off hand. Also this, the book is the narrative of some witty laymen who discover a whole branch of mathematics from first principles, the surreal numbers, an extension of the real numbers. It's really quite good, but might keep him occupied for a couple of years in high school.

I should stop here or I never will.

u/icannotfly · 5 pointsr/botsrights

Check out Our Final Invention by James Barrat; if we're lucky, the difference between the first artificial general intelligence and the first artificial general superintelligence will be a few hours at most. Exponential growth is a bitch.

u/HGlpIyHk9LiGP · 5 pointsr/learnprogramming

That said, it's not uncommon to do something like this:

typedef struct _LinkedList {
    struct _LinkedList *next;
    void *data;
} LinkedList;

LinkedList *list_CreateList() { return (LinkedList *)malloc(sizeof(LinkedList)); } // i.e. a constructor
void list_AppendTail(LinkedList *list, void *data) { ... }
void list_Destroy(LinkedList *list) { ...; free(list); ... } // i.e. a destructor

What you have there is a linked list struct typedef'd to LinkedList. You can initialize a list on the heap using list_CreateList(), and destroy it by calling list_Destroy(). It's not object-oriented (edit: you can still do composition and use function pointers), but it does behave like it.

edit: Accidentally a pointer
edit2: This book has a whole bunch of datastructures written in C like this http://www.amazon.com/Mastering-Algorithms-C-Kyle-Loudon/dp/1565924533

u/CrimsonCuntCloth · 4 pointsr/learnpython

Depending on what you want to learn:

PYTHON SPECIFIC

You mentioned building websites, so check out the flask mega tutorial. It might be a bit early to take on a project like this after only a month, but you've got time and learning-by-doing is good. This'll teach you to build a twitter clone using python, so you'll see databases, project structure, user logons etc. Plus he's got a book version, which contains much of the same info, but is good for when you can't be at a computer.

The python cookbook is fantastic for getting things done; gives short solutions to common problems / tasks. (How do I read lines from a csv file? How do I parse a file that's too big to fit in memory? How do I create a simple TCP server?). Solutions are concise and readable so you don't have to wade through loads of irrelevant stuff.
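
For a flavor of the kind of recipes it covers (these snippets are generic Python, not reproduced from the book, and the file names are placeholders):

    import csv

    # Read rows from a CSV file
    with open("data.csv", newline="") as f:
        for row in csv.DictReader(f):
            print(row["name"], row["score"])

    # Process a file too big to fit in memory by streaming it line by line
    def matching_lines(path, needle):
        with open(path) as f:
            for line in f:  # file objects are lazy iterators
                if needle in line:
                    yield line.rstrip("\n")

    for hit in matching_lines("huge.log", "ERROR"):
        print(hit)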

A little while down the road if you feel like going deep, fluent python will give you a deeper understanding of python than many people you'll encounter at Uni when you're out.

WEB DEV

If you want to go more into web dev, you'll also need to know some HTML, CSS and Javascript. Duckett's books don't go too in depth, but they're beautiful, a nice introduction, and a handy reference. Once you've got some JS, Secrets of the javascript ninja will give you a real appreciation of the deeper aspects of JS.

MACHINE LEARNING
In one of your comments you mentioned machine learning.

These aren't language specific programming books, and this isn't my specialty, but:

Fundamentals of Machine Learning for Predictive data analytics is a great introduction to the entire process, based upon CRISP-DM. Not much of a maths background required. This was the textbook used for my uni's first data analytics module. Highly recommended.

If you like you some maths, Flach will give you a stronger theoretical understanding, but personally I'd leave that until later.

Good luck and keep busy; you've got plenty to learn!

u/Buckwheat469 · 4 pointsr/opensource

I suggest reading the Cathedral and the Bazaar by Eric S Raymond.

u/claytonkb · 4 pointsr/AskComputerScience

I'm partial to Hopcroft and Ullman. I'm usually not the "open a book, read the explanations..." type of learner but Hopcroft&Ullman is very clear and concise, so I found myself doing just that. No need to skip ahead to figure out the point of it all, they just explain it in logical order. This text goes quite a bit deeper than most algorithms courses will go but it's a great way to pump iron with your brain and everything you learn will be applicable to whatever curriculum is taught in your algorithms course.

u/Soupy333 · 4 pointsr/Fitness

If you're interested in this stuff (and just getting started), then I highly recommend this book - http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597

When you're ready to go deeper, then this one is even better http://www.amazon.com/Machine-Learning-Tom-M-Mitchell/dp/0070428077/ref=sr_1_2?s=books&ie=UTF8&qid=1341852604&sr=1-2&keywords=machine+learning

That second book is a little older, but all of its algorithms/techniques are still relevant today.

u/chancegold · 3 pointsr/videos

Our rate of technological advancement is getting to a very interesting point in its exponential rate increase. We're on the cusp of technologies that will radically change the world damn near overnight.

On the one hand, we are getting absurdly close to sustainable fusion; hell, skunkworks said 2(?) years ago that they would have a working prototype in 5 years and a production model in 10. Anyone else, yeah, it's been said for decades by dozens, but Skunkworks? These are the people that were tasked with coming up with an aircraft that had a 40(?)% smaller than actual radar cross section and showed up with a basically invisible plane. Absurdly cheap, plentiful, portable energy like their proposed reactor design will change everything from CO2 emissions to clean/fresh water availability.

On the other hand..

I'm cautiously optimistic about AI, but there are a stupid amount of ways that AI development could go wrong, and it's unlike anything humanity has developed before. Nukes? Yeah, very.. VERY destructive invention, but it only took two actual uses before humanity as a whole stepped back and said, "Waaaaait a minute. Yeah, we should probably not ever do this again." The thing with AI is that if someone fucks up once, it could very well be game over almost literally overnight.

And those are just the things we know about. For all I know, some dufus is in his garage building some type of quantum computing adamantium cake based portal gate that will accidentally cause a 3au diameter explosion wiping out our solar system.

*Fixed link.

u/drknowledge · 3 pointsr/linux
u/root_pentester · 3 pointsr/blackhat

No problem. I am by no means an expert in writing code or buffer overflows but I have written several myself and even found a few in the wild, which was pretty cool. A lot of people want to jump right into the fun stuff but find out rather quickly that they are missing the skills to perform those tasks. I always suggest to people to start from the ground up when learning to do anything like this. Before going into buffer overflows you need to learn assembly language. Yes, it can be excellent sleep material but it is certainly a must. Once you get an understanding of assembly you should learn basic C++. You don't have to be an expert or even intermediate level; just learn the basics of it and be familiar with it. The same goes for assembly. Once you get that, writing things like shellcode should be no problem. I'll send you some links for a few books I found very helpful. I own these myself and it helped me tremendously.

Jumping into C++: Alex Allain

Write Great Code: Volume1 Understanding the Machine

Write Great Code: Volume2 Thinking Low-Level, Writing High Level

Reversing: Secrets of Reverse Engineering

Hacking: The Art of Exploitation I used this for an IT Security college course. Professor taught us using this book.

The Shellcoders Handbook This book covers EVERYTHING you need to know about shellcodes and is filled with lots of tips and tricks. I use mostly shells from metasploit to plug in but this goes really deep.

.

If you have a strong foundation of knowledge and know the material from the ground-up you will be very successful in the future.

One more thing, I recently took and passed the course from Offensive Security to get my OSCP (Offensive Security Certified Professional). I learned more from that class than years in school. It was worth every penny spent on it. You get to VPN into their lab and run your tools using Kali Linux against a LOT of machines ranging from Windows to Linux and find real vulnerabilities of all kinds. They have training videos that you follow along with and a PDF that teaches you all the knowledge you need to be a pentester. Going in I only had my CEH from EC-Council and felt nowhere close to being a pentester. After this course I knew I was ready. At the end you take a 24-hour-long test to pass. No questions or anything, just hands-on hacking. You have 24 hrs to hack into a number of machines and then another 24 hours to write a real pentest report like you would give a client. You even write your own buffer overflow in the course and they walk you through step by step in a very clear way. The course may seem a bit pricey but I got to say it was really worth it. http://www.offensive-security.com/information-security-certifications/oscp-offensive-security-certified-professional/

u/DevilsWeed · 3 pointsr/darknetplan

As someone with zero programming experience, thank you for the reading list. I was just planning on trying to learn python but I don't know if that's the best language to start with. Would you recommend just reading those books and starting with C?

Also, since I have no experience a technical answer would probably go right over my head but could you briefly explain how someone would go about messing around with an OS? I've always wondered what people meant by this. I have Linux installed on a VM but I have no idea what I could do to start experimenting and learning about programming with it.

Edit: Are these the books you're talking about? Physical Computing, C programming, and Writing Great Code?

u/davodrums · 3 pointsr/math

For a much better explanation than this blog post, check out this text: http://www.amazon.com/Introduction-Genetic-Algorithms-Complex-Adaptive/dp/0262631857

u/obscure_robot · 3 pointsr/occult

HTML represents a particular line of thinking about how best to present information for machine processing and ultimately rendering so that a human can read it. The weight that HTML carries is entirely due to its success in the marketplace, it isn't a particularly good or bad exemplar as to how things should be done.

Some languages use paired symbols to indicate the beginning and end of a block of code, such as { and } or even begin and end. Others don't have an explicit beginning, but whatever valid command is first encountered is the beginning, and the end-of-file marker (a thing that no one but programmers and other very curious people ever see) signifies the end.

Hopcroft & Ullman's book on Automata Theory is a great place for someone with a magickal background to dive into computer science.

If you just want to see the many different ways of composing a very simple program (one that prints "hello, world!"), check out this list at Wikipedia.

I'm happy to answer more specific questions here or via private messages.

u/timelick · 3 pointsr/Physics

I was glad when someone pointed out David MacKay's book to me. Now I can pass it along to you. I don't know if it is directly relevant to what you are pursuing in physics, but it is a wonderful, and FREE, book. Check out the amazon reviews and see if it would be worth your time.

u/bryanv_ · 3 pointsr/Python

FWIW, although Bokeh is itself not an implementation of the Grammar of Graphics, many of the original Bokeh authors, including and especially myself, were avid fans of Wilkinson's approach. I would say those ideas (and their structure and consistency) greatly inform the structure of Bokeh.

u/CyberByte · 3 pointsr/artificial

The most obvious answer is Bostrom's Superintelligence, and you can indeed find more info on this whole topic on /r/ControlProblem. (So basically I agree 100% with /u/Colt85.)

The other book closest to what you're asking for is probably Roman Yampolskiy's Artificial Superintelligence: A Futuristic Approach (2015). I would also recommend his and Kaj Sotala's 2014 Responses to catastrophic AGI risk: a survey, which isn't a book, but does provide a great overview.

Wendell Wallach & Colin Allen's Moral Machines: Teaching Robots Right from Wrong (2010) does talk about methods, but is not necessarily about superintelligence. There are also some books about the dangers of superintelligence that don't necessarily say what to do about it: 1) Stuart Armstrong's Smarter Than Us: The Rise of Machine Intelligence (2014), 2) James Barrat's Our Final Invention: Artificial Intelligence and the End of the Human Era (2015), and 3) Murray Shanahan's The Technological Singularity (2015). And probably many more... but these are the most relevant ones I know of.

u/umib0zu · 3 pointsr/compsci

TTFP. I would rank SICP, TTFP, then Ullman/Hopcroft as a sort of nice set of introductions to formal language theory and computation. SICP being the closest to programming, and Ullman/Hopcroft being the furthest. TTFP is a nice "in between".

u/kukhuvudet · 3 pointsr/science

This is not only relevant to visual stimuli. It has been speculated that one fundamental function of the brain is prediction, and that it occurs everywhere in the cortex, in every region, at all cognitive levels.

I encourage anyone to look further into it:

http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/0805074562

http://www.numenta.com/

u/Neres28 · 3 pointsr/learnprogramming

Might ask in /r/CompSci as well.

Your best bet is probably to use Google Scholar to look for papers in the field, but there are a handful of books that might be useful. The "original" text book on GAs is this one, but you'll notice that it was written in '89 and the niche has advanced since then. This book was suggested for my Evolutionary Algorithms class by the professor, but I found it too short and outdated to be of much use; I found better introductions on-line through simple Google searches.

Are you interested primarily in Genetic Algorithms, or Evolutionary algorithms at large? The Field Guide to Genetic Programming was pretty good, and reasonably cheap. Particle swarm based EA's are neat, and also very simple to implement.

If I were you I'd find a good GA framework in the language of your choice (I like and use the Watchmaker framework for Java), and start playing around.

Also, GA's are not known for being efficient, just generally good at evading local optima and finding the global optimum.

u/fuckdragons · 3 pointsr/DoesAnybodyElse

You need to read about the human brain, not the birth of the universe. I liked On Intelligence.

u/flebron · 2 pointsr/compsci

I studied from Introduction to Automata, Languages, and Computation, and used their definitions here :)

Certainly one can define FSMs to always have two tapes, and just not use the output one.

u/Yare_Owns · 2 pointsr/Stormlight_Archive

Computer science isn't about programming, it's about math and logic. Computer science has actually been around since before computers!

Most schools with good comp sci programs will focus heavily on math, logic, algorithms, data structures... I think most schools also offer courses in Machine Learning? They may be graduate-level at some schools but I think you can take them as electives.

Anyway, it's never too late! Here are the topics that were covered in my intro class > 10 years ago. Now, you can find all sorts of cool open source libraries to play around with instead of writing it all from scratch in C with nothing but obscure mathematical notation to work from!

http://en.wikipedia.org/wiki/Decision_tree_learning


http://en.wikipedia.org/wiki/Artificial_neural_network


http://en.wikipedia.org/wiki/Genetic_algorithm


http://en.wikipedia.org/wiki/Genetic_programming


http://en.wikipedia.org/wiki/Naive_Bayes_classifier


http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm


http://en.wikipedia.org/wiki/Association_rule_learning

You could use this text, but it reads like a math journal so I advise finding a friendlier, programming-centric treatment.
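
For example, several of the topics in that list (decision trees, k-nearest neighbors, naive Bayes) are only a few lines each with an open source library like scikit-learn; a quick sketch using its bundled iris dataset:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier

    # Two of the classic algorithms from the list above, run on a small built-in dataset
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    for model in (DecisionTreeClassifier(max_depth=3), KNeighborsClassifier(n_neighbors=5)):
        model.fit(X_train, y_train)
        print(type(model).__name__, model.score(X_test, y_test))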

u/sandsmark · 2 pointsr/compsci

if you want machine learning, I'd recommend Mitchell: http://www.amazon.com/Machine-Learning-Tom-M-Mitchell/dp/0070428077/

u/nhjk · 2 pointsr/AskComputerScience

Thanks, I already have a book on this I was planning on reading: Introduction to Automata Theory, Languages, and Computation. I just started reading CLRS though, do you think it would be helpful to finish it or are the two mostly unrelated?

u/Noobflair · 2 pointsr/learnprogramming

Hey, don't underestimate the theoretical side of computer science. How about answering the age-old question "What are the fundamental capabilities and limitations of computers?" (http://www.amazon.in/Introduction-Automata-Theory-Languages-Computation/dp/0321455363), or how about the design and workings of computers and software (http://www.goodreads.com/book/show/44882.Code)?

u/restorethefourthVT · 2 pointsr/learnprogramming

Here is a really good book if you want to get into the nitty-gritty stuff.

Write Great Code Volume 1

Volume 2 is good too. It's not just a rewrite of Volume 1 either.

u/IjonTichy85 · 2 pointsr/compsci

I think before you start you should ask yourself what you want to learn. If you're into programming or want to become a sysadmin you can learn everything you need without taking classes.

If you're interested in the theory of cs, here are a few starting points:

Introduction to Automata Theory, Languages, and Computation

The book you should buy

MIT: Introduction to Algorithms

The book you should buy


Computer Architecture<- The intro alone makes it worth watching!

The book you should buy

Linear Algebra

The book you should buy <- Only scratches the surface but is a good starting point. Also it's extremely informal for a math book. The MIT channel offers many more courses and is great for autodidactic studying.

Everything I've posted requires no or only minimal previous education.
You should think of this as a starting point. Maybe you'll find lessons or books you'll prefer. That's fine! Make your own choices. If you've understood everything in these lessons, you just need to take a programming class (or just learn it by doing), a class on formal logic, and some more advanced math classes, and you will have developed a good understanding of the basics of CS. The materials I've posted roughly cover the first year of studying CS. I wish I could tell you where you can find some more math/logic books, but I'm German and always used German books for math because they usually follow a more formal approach (which isn't necessarily a good thing).
I really recommend learning these things BEFORE starting to learn the 'useful' parts of CS like SQL, XML, design patterns etc.
Another great book that will broaden your understanding is this Bertrand Russell: Introduction to mathematical philosophy
If you've understood the theory, the rest will seem 'logical' and you'll know why some things are the way they are. Your working environment will keep changing, and 20 years from now we will be using different tools and different languages, but the theory won't change. If you've once made the effort to understand the basics, it will be a lot easier for you to switch to the next 'big thing' once you're required to do so.

One more thing: PLEASE, don't become one of those people who need to tell everyone how useless a university is and that they know everything they need just because they've been working with Python for a year or two. Of course you won't need 95% of the basics unless you're planning on staying in academia, and if you've worked instead of studying you will have a head start. But if someone is proud of NOT having learned something, that always makes me want to leave this planet, you know...

EDIT: almost forgot about this: use Unix, use Unix, and I can't emphasize this enough: USE UNIX! Building your own Linux from scratch is something every computer scientist should have done at least once in their life. It's the only way to really learn how a modern operating system works. Also try to avoid Apple/Microsoft products, since they're usually closed source and don't give you the chance to learn how they work.

u/001Guy001 · 2 pointsr/ifyoulikeblank

Don't know if it's exactly what you wanted but check out these non-fiction books:

The Singularity Is Near (Ray Kurzweil)

Augmented: Life In The Smart Lane

Surviving AI: The Promise And Peril Of Artificial Intelligence

Edit: Bicentennial Man (was already mentioned) is one of my all-time favorite films

u/squirreltalk · 2 pointsr/AskStatistics

I've heard this one's a classic, but I haven't read it, so I can't comment personally:

https://smile.amazon.com/Grammar-Graphics-Statistics-Computing/dp/0387245448?sa-no-redirect=1

u/playaspec · 2 pointsr/embedded

Not a direct answer, but a fantastic resource for dealing with every datatype on small and large systems. Check out:

Write Great Code: Volume 1: Understanding The Machine

Of all the computer books I've read, this is my #1 MUST-read for ANY programming discipline. Chapter 4 covers the various floating-point formats and includes C code examples. For more in-depth coverage, check out Don Knuth's "The Art of Computer Programming: Volume Two".

u/kolrehs · 2 pointsr/learnprogramming

There is this O'Reilly book. It has lots of examples, which might be good for picking up Python as you go, but I don't know for sure because I haven't read it.

u/EvilActivity · 2 pointsr/gamedev

Take a look at the AI Game Programming Gems series.

u/SOberhoff · 2 pointsr/compsci

The Nature of Computation. 950 pages might take a small lifetime to read, but it's the best book regarding theoretical computer science by a large margin.

u/LiamMayfair · 2 pointsr/C_Programming

As far as books go, this one is good.

It covers linked lists and all the variants (doubly linked lists, circular...) alongside their Big O assessment, including examples of where you'd use these data structures as well as full, working code examples in C.

It covers much more than linked lists though. Pretty much all the major data structures and well-known computing algorithms are explained too.

u/tpintsch · 2 pointsr/datascience

Hello, I am an undergrad student. I am taking a Data Science course this semester. It's the first time the course has ever been run, so it's a bit disorganized, but I am very excited about this field and I have learned a lot on my own. I have read 3 Data Science books that are all fantastic and are suited to very different types of classes. I'd like to share my experience and book recommendations with you.

Target - 200 level Business/Marketing or Science departments without a programming/math focus. 
Textbook - Data Science for Business https://www.amazon.com/gp/product/1449361323/ref=ya_st_dp_summary
My Comments - This book provides a good overview of Data Science concepts with a focus on business-related analysis. There is very little math or programming instruction, which makes this ideal for students who would benefit from an understanding of Data Science but do not have math/CS experience.
Pre-Reqs - None.

Target - 200 level Math/Cs or Physics/Engineering departments.
Textbook -Data Mining: Practical Machine Learning Tools and Techniques https://www.amazon.com/gp/aw/d/0123748569/ref=pd_aw_sim_14_3?ie=UTF8&dpID=6122EOEQhOL&dpSrc=sims&preST=_AC_UL100_SR100%2C100_&refRID=YPZ70F6SKHCE7BBFTN3H
My comments: This book is more in depth than my first recommendation. It focuses on math and computer science approaches with machine learning applications. There are many opportunities for projects from this book. The biggest strength is the instruction on the open source workbench Weka. As an instructor you can easily demonstrate data cleaning, analysis, visualization, machine learning, decision trees, and linear regression. The GUI makes it easy for students to jump right into playing with data in a meaningful way. They won't struggle with knowledge gaps in coding and statistics. Weka isn't used in industry as far as I can tell, and it also fails on large data sets. However, for an Intro to Data Science course without many pre-reqs this would be my choice.
Pre-Reqs - Basic Statistics, Computer Science 1 or Computer Applications.

Target - 300/400 level Math/Cs majors
Textbook - Data Science from Scratch: First Principles with Python
http://www.amazon.com/Data-Science-Scratch-Principles-Python/dp/149190142X
My comments: I am infatuated with this book. It delights me. I love math, and am quickly becoming enamored by computer science as well. This is the book I wish we used for my class. It quickly moves through some math and Python review into a thorough but captivating treatment of all things data science. If your goal is to prepare students for careers in Data Science this book is my top pick.
Pre-Reqs - Computer Science 1 and 2 (hopefully using Python as the language), Linear Algebra, Statistics (basic will do, advanced preferred), and Calculus.

Additional suggestions:
Look into using Tableau for visualization.  It's free for students, easy to get started with, and a popular tool. I like to use it for casual analysis and pictures for my presentations. 

Kaggle is a wonderful resource and you may even be able to have your class participate in projects on this website.

Quantified Self is another great resource. http://quantifiedself.com
One of my assignments that's a semester long project was to collect data I've created and analyze it. I'm using Sleep as Android to track my sleep patterns all semester and will be giving a presentation on the analysis. The Quantified Self website has active forums and a plethora of good ideas on personal data analytics.  It's been a really fun and fantastic learning experience so far.

As far as flow goes? Introduce visualization from the start, before wrangling and analysis. Show or share videos of exciting Data Science presentations. Once your students have their curiosity sparked and have played around in Tableau or Weka, then start in on the practicalities of really working with the data. To be honest, your example data sets are going to be pretty clean, small, and easy to work with. Wrangling won't really be necessary unless you are teaching advanced Data Science/Big Data techniques. You should focus more on Data Mining. The books I recommended are very easy to cover in a semester; I would suggest that you model your course outline on the book. Good luck!

u/elliot_o_brien · 2 pointsr/deeplearning

Read https://www.amazon.in/Reinforcement-Learning-Introduction-Richard-Sutton/dp/0262193981.
It's a great book for beginners in reinforcement learning.
If you're a lecture guy, then watch DeepMind's reinforcement learning lectures by David Silver.
School of AI's move 37 course is also good.
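
If you want something small to tinker with alongside the book, here is a rough ε-greedy bandit sketch in Python, in the spirit of the early chapters of Sutton and Barto; the arm probabilities and parameters are made up for illustration.

```python
import random

def epsilon_greedy_bandit(arm_probs, steps=10000, epsilon=0.1):
    """Estimate the value of each arm of a Bernoulli bandit, mostly exploiting
    the current best estimate and exploring with probability epsilon."""
    counts = [0] * len(arm_probs)
    values = [0.0] * len(arm_probs)
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(arm_probs))                     # explore
        else:
            arm = max(range(len(arm_probs)), key=lambda i: values[i])  # exploit
        reward = 1.0 if random.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]            # incremental mean
    return values

print(epsilon_greedy_bandit([0.2, 0.5, 0.8]))  # estimates should drift toward the true probabilities
```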

u/adventuringraw · 2 pointsr/learnmachinelearning

I always like finding intuitive explanations to help grapple with the 'true' math. It's sometimes really hard to extract meaning from hard books, but at some point the 'true' story, and the kind of challenging practice that goes with it, is something you still need. If you just want to see information theory from a high level, Khan Academy is probably a great place to start. But when you say 'deep learning research'... if you want to write an original white paper (or even read an information-theoretic paper on deep learning) you'll need to wade deep into the weeds and actually get your hands dirty. If you do want to get a good foundation in information theory for machine learning, I've gone through the first few chapters of David MacKay's information theory book and it's been great so far; I'm excited to go through it properly at some point soon. I've heard Cover and Thomas is considered more the 'bible' of the field for undergrad/early grad study, but it takes a more communication-centric approach instead of a specifically machine-learning-based one.

Um... though reading your comment again, do you also not know probability theory and statistics? Wasserman's All of Statistics is a good source for that, but you'll need a very high level of comfort with multivariate calculus and a reasonable level of comfort with proof-based mathematics to be able to weather that book.

Why don't you start looking at the kinds of papers you'd be interested in? Some research is more computational than theoretical. You've got a very long road ahead of you to get a proper foundation for original theoretical research, but getting very clear on what exactly you want to do might help you narrow down what you want to study. You really, really can't go wrong with starting with stats, though, even if you do want to focus on a more computer science/practical implementation direction.

u/c3534l · 2 pointsr/computerscience

From what I know, there are two basic approaches most music recommendation services use. One technique is to use an efficient comparison method called minhashing. The basic idea is that you represent every song as the collection of users who like that song. The similarity between one song and another is then the Jaccard similarity (the size of the overlap between the two sets of listeners divided by the size of their union). Minhashing is then used as more of a search algorithm for finding which sets have high Jaccard similarity.

This works okay for a lot of things, but the music service Pandora actually does not use that method. They have a unique approach where someone (I think mostly grad students in musicology) actually sat down and listened to every song and filled out a little chart that said things like "minor key tonality", and you write in the tempo and all that. Like, just an exhaustive list. Then to find similar music they're using a distance metric of some kind, although I don't know all the details. But basically, if you imagine every attribute a song can have as a dimension, a song is a point in high-dimensional space and you're trying to find music that's physically closer. Pandora also learns a little bit about which attributes are important to you.
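
To make the Jaccard idea concrete, here is a tiny sketch over "the set of users who like each song" (the song names and user IDs are invented for illustration; real services layer minhashing on top so they never have to compare every pair directly):

```python
def jaccard(a, b):
    """Similarity between two songs, each represented as the set of users who like it."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# toy data: song -> set of user ids who liked it
likes = {
    "song_x": {1, 2, 3, 4},
    "song_y": {2, 3, 4, 5},
    "song_z": {8, 9},
}

query = "song_x"
best = max((s for s in likes if s != query), key=lambda s: jaccard(likes[query], likes[s]))
print(best, jaccard(likes[query], likes[best]))  # song_y 0.6
```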

In general, this sort of topic is part of a field called machine learning. I personally enjoyed this ML book which was maybe a bit heavy on math and theory and not so much on practicality, but I do think quite a few other more down-to-earth books on the subject have been published if you want to look around and find a good one. I also hear great things about the coursera class on machine learning and data science.

u/SuchStealth · 2 pointsr/CapitalismVSocialism

None of these authors would probably call themselves modern communists, but I do view them as such. Some of the material here goes into great depth to outline a possible post-scarcity scenario while some stays on the surface, but it is all nonetheless a great read and a great thinking exercise about a possible future.


Peter Joseph - The New Human Rights Movement: Realizing a New Train of Thought

https://www.amazon.com/New-Human-Rights-Movement-Reinventing-ebook/dp/B01M3NWW48/ref=sr_1_1?ie=UTF8&qid=1550425640&sr=8-1&keywords=peter+joseph+the+new+human+rights+movement


Jacque Fresco - The Best That Money Can't Buy

https://www.amazon.com/Best-That-Money-Cant-Buy-ebook/dp/B0773TB3GX/ref=sr_1_2?ie=UTF8&qid=1550425758&sr=8-2&keywords=jacque+fresco


Buckminster Fuller - Operating Manual for Spaceship Earth

https://www.amazon.com/Operating-Manual-Spaceship-Buckminster-Fuller-ebook-dp-B010R3HVOW/dp/B010R3HVOW/ref=mt_kindle?_encoding=UTF8&me=&qid=1550425647


Jeremy Rifkin - The Third Industrial Revolution: How Lateral Power Is Transforming Energy, the Economy, and the World

https://www.amazon.com/Third-Industrial-Revolution-Lateral-Transforming-ebook/dp/B005BOQBGW/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=1550426107&sr=8-1


Peter Diamandis - Abundance: The Future Is Better Than You Think

https://www.amazon.com/Abundance-Future-Better-Than-Think-ebook/dp/B005FLOGMM/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=1550426273&sr=8-1


Ray Kurzweil - The Singularity Is Near: When Humans Transcend Biology

https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook-dp-B000QCSA7C/dp/B000QCSA7C/ref=mt_kindle?_encoding=UTF8&me=&qid=

u/Neutran · 2 pointsr/MachineLearning

Count me in!
I really want to read through this book: "https://www.amazon.com/Reinforcement-Learning-Introduction-Adaptive-Computation/dp/0262193981" by Richard Sutton, as well as a few other classical ML books, like Christopher Bishop's and Kevin Murphy's.

I know many concepts already, but I've never studied them in a systematic manner (e.g., followed a 1000-page book from end to end). I hear from multiple friends that it's super beneficial in the long run to build a strong mathematical/statistical foundation.
My current model of "googling here and there" might work in the short term, but will not help me invent new algorithms or improve state-of-the-art.

u/treerex · 2 pointsr/csbooks

Introduction to Automata Theory, Languages, and Computation is the standard text. Jeff Ullman has a page for the book, and has taught the subject a couple of times on Coursera.

u/webauteur · 2 pointsr/artificial

I'm currently reading Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality by Robert M. Geraci. This book explores how religious ideas have infested our expectations for AI. Its arguments are quite similar to those of The Secret Life of Puppets by Victoria Nelson, which was an even deeper consideration of the metaphysical implications of uncanny representations of human beings, whether in the form of dolls, puppets, robots, avatars, or cyborgs. I think it is really important to understand what is driving the push for this technology.

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is also a good book on the dangers of AI.

You want more book recommendations? Well, one of the creepiest aspects of AI is that Amazon is using it for its recommendation engine. So just go on Amazon and it will be an AI that recommends more books for you to read!

u/inspectorG4dget · 2 pointsr/learnprogramming

I used Machine Learning: The Art and Science of Algorithms that Make Sense of Data and Evaluating Learning Algorithms: A Classification Perspective in my grad course. They're both super to-the-point and are written in non-overcomplex language

u/ewiethoff · 2 pointsr/learnprogramming

Mastering Algorithms with C has a chapter with code for defining a hash.
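
The book's examples are in C, but the core idea of that chapter (hash a key to a bucket, keep colliding entries in a list) translates directly; here is a rough separate-chaining sketch in Python, with the table size and use of the built-in hash picked arbitrarily for illustration.

```python
class ChainedHashTable:
    """Toy hash table using separate chaining: each bucket holds (key, value) pairs."""

    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return default

table = ChainedHashTable()
table.put("alpha", 1)
table.put("beta", 2)
print(table.get("alpha"), table.get("gamma", "missing"))  # 1 missing
```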

u/Ars-Nocendi · 2 pointsr/computerscience

Let me refer this excellent book for you ....

It will take you on a wonderful journey through the science part of Computer Science ....

u/clavalle · 2 pointsr/zeroxtenc

This is a great resource.

Ninja Edit: This book is also good, and free! (PDF warning)

u/ItsAConspiracy · 2 pointsr/Futurology

My suggestion is to opensource it under the GPL. That would mean people can use your GPL code in commercial enterprises, but they can't resell it as commercial software without paying for a license.

By opensourcing it, people can verify your claims and help you improve the software. You don't have to worry about languishing as an unknown, or taking venture capital and perhaps ultimately losing control of your invention in a sale or IPO. Scientists can use it to help advance knowledge, without paying the large license fees that a commercial owner might charge. People will find all sorts of uses for it that you never imagined. Some of them will pay you substantial money to let them turn it into specialized commercial products, others will pay you large consulting fees to help them apply the GPL version to their own problems.

You could also write a book on how it all works, how you figured it out, the history of your company, etc. If you're not a writer you could team up with one. Kurzweil and Jeff Hawkins have both published some pretty popular books like this, and there are others about non-AGI software projects (eg. Linux, Doom). If the system is successful enough to really make an impact, I bet you could get a bestseller.

Regarding friendliness, it's a hard problem that you're probably not going to solve on your own. Nor is any large commercial firm likely to solve it on their own; in fact, they'll probably ignore the whole problem and just pursue quarterly profits. So it's best to get it out in the open, so people can work on making it friendly while the hardware is still weak enough to limit the AGI's capabilities.

This would probably be the ideal situation from a human survival point of view. If someone were to figure out AGI after the hardware is more powerful than the human brain, we'd face a hard takeoff scenario with one unstoppable AGI that's not necessarily friendly. With the software in a lot of hands while we're still waiting for Moore's Law to catch up to the brain, we get a much more gradual approach: we can work together on getting there safely, and when AGI does get smarter than us there will be lots of them with lots of different motivations. None of them will be able to turn us all into paperclips, because doing that would interfere with the others and they won't allow it.

u/DrJosh · 2 pointsr/IAmA

I'd recommend Melanie Mitchell's very accessible (and short) book.

u/verandaguy · 2 pointsr/learnprogramming

Can confirm this one, also try Mastering Algorithms with C, another O'Reilly title. By the way OP, get used to "O'Reilly" as a household name.

u/videoj · 2 pointsr/MachineLearning

O'Reilly has published a number of practical machine learning books such as Programming Collective Intelligence: Building Smart Web 2.0 Applications and Natural Language Processing with Python that you might find good starting points.

u/sudoatx · 2 pointsr/linux

"The art of Unix Programming" by Eric S. Raymond. Not as intimidating or outdated as you might think - This book goes over the history and philosophical concepts behind not only Unix (and Linux), but also the Open Source initiative behind Linux. ESR's other work is "The Cathedral & the Bazaar" which is also worth a look, but I would argue is dated now, as much of what was suggested in this book has already come to pass with too many real world examples to mention.

u/gogromat · 2 pointsr/Anarchism

What I was thinking recently was to ask/suggest forming a board/forum where everyone can discuss these issues.

I am a developer, and as you may know, a lot of software is open sourced. The Apache project, for example, has created a multitude of products made by volunteer contributions. Valve Corporation is internally organized as anarcho-syndicalism (you may recognize the author of this blog, Yanis Varoufakis, the current Finance Minister of Greece (newly elected left-wing government party)). A lot of the thinking about openness and the sharing of ideas has been turned into software books, like The Cathedral and the Bazaar. A lot of software projects breathe anarchy.

My point is software developers have a great open platforms where everyone can make projects, submit issues and form discussions. For example the Github is such a platform.

Without going into long detail, my suggestion was going to be making a project on Github that is well-organized and supported by anarchist community. It would consist of main points of how/what can/should be done in plain, Simple English.

This will make it highly accessible, easy-to-replicate (In software terms "forking" - on github you can easily copy the project for yourself, store it on any number of computers that you have, all done securely).

Inside, it can give key insights/ideas of anarchists that are easy to modify, and let people submit new/alternative strategies for each entry. You could even make pages about "revolution" and the steps that are necessary/advised for people to take.

Of course it takes a little time to get used to using this platform, but we can even take it step further and create something even simpler out of it.
I don't know how many developers are on this subreddit, but even a small number is enough to keep track of this project.
I think it can be a very successful endeavour. Comments/Questions/Suggestions?

u/Jivlain · 2 pointsr/programming

Are you referring to "A Field Guide to Genetic Programming"? It's excellent, though obviously it's about genetic programming, not GAs. It's highly readable (unfortunately uncommon in this field) and covers a lot of material, though it could have done with some more detail on things like ADFs and Pareto fronts, IMO.

Of the GA books I've read so far, my favourite introduction has been Melanie Mitchell's An Introduction to Genetic Algorithms. Or you could read seanluke's book.
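
For anyone curious what the bare bones of a GA look like before opening any of these books, here is a rough sketch of the classic OneMax toy problem in Python (all parameter choices are arbitrary illustrations, not taken from Mitchell's book):

```python
import random

def one_max_ga(genome_len=20, pop_size=30, generations=60, mutation_rate=0.02):
    """Toy genetic algorithm that maximizes the number of 1s in a bit string:
    tournament selection, one-point crossover, and bit-flip mutation."""
    def fitness(genome):
        return sum(genome)

    def tournament(pop):
        return max(random.sample(pop, 3), key=fitness)

    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            a, b = tournament(pop), tournament(pop)
            cut = random.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

print(one_max_ga())  # usually all, or nearly all, 1s
```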

u/CSMastermind · 1 pointr/AskComputerScience

Entrepreneur Reading List


  1. Disrupted: My Misadventure in the Start-Up Bubble
  2. The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win
  3. The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It
  4. The Art of the Start: The Time-Tested, Battle-Hardened Guide for Anyone Starting Anything
  5. The Four Steps to the Epiphany: Successful Strategies for Products that Win
  6. Permission Marketing: Turning Strangers into Friends and Friends into Customers
  7. Ikigai
  8. Reality Check: The Irreverent Guide to Outsmarting, Outmanaging, and Outmarketing Your Competition
  9. Bootstrap: Lessons Learned Building a Successful Company from Scratch
  10. The Marketing Gurus: Lessons from the Best Marketing Books of All Time
  11. Content Rich: Writing Your Way to Wealth on the Web
  12. The Web Startup Success Guide
  13. The Best of Guerrilla Marketing: Guerrilla Marketing Remix
  14. From Program to Product: Turning Your Code into a Saleable Product
  15. This Little Program Went to Market: Create, Deploy, Distribute, Market, and Sell Software and More on the Internet at Little or No Cost to You
  16. The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully
  17. The Innovator's Solution: Creating and Sustaining Successful Growth
  18. Startups Open Sourced: Stories to Inspire and Educate
  19. In Search of Stupidity: Over Twenty Years of High Tech Marketing Disasters
  20. Do More Faster: TechStars Lessons to Accelerate Your Startup
  21. Content Rules: How to Create Killer Blogs, Podcasts, Videos, Ebooks, Webinars (and More) That Engage Customers and Ignite Your Business
  22. Maximum Achievement: Strategies and Skills That Will Unlock Your Hidden Powers to Succeed
  23. Founders at Work: Stories of Startups' Early Days
  24. Blue Ocean Strategy: How to Create Uncontested Market Space and Make Competition Irrelevant
  25. Eric Sink on the Business of Software
  26. Words that Sell: More than 6000 Entries to Help You Promote Your Products, Services, and Ideas
  27. Anything You Want
  28. Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers
  29. The Innovator's Dilemma: The Revolutionary Book that Will Change the Way You Do Business
  30. Tao Te Ching
  31. Philip & Alex's Guide to Web Publishing
  32. The Tao of Programming
  33. Zen and the Art of Motorcycle Maintenance: An Inquiry into Values
  34. The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity

Computer Science Grad School Reading List


  35. All the Mathematics You Missed: But Need to Know for Graduate School
  36. Introductory Linear Algebra: An Applied First Course
  37. Introduction to Probability
  38. The Structure of Scientific Revolutions
  39. Science in Action: How to Follow Scientists and Engineers Through Society
  40. Proofs and Refutations: The Logic of Mathematical Discovery
  41. What Is This Thing Called Science?
  42. The Art of Computer Programming
  43. The Little Schemer
  44. The Seasoned Schemer
  45. Data Structures Using C and C++
  46. Algorithms + Data Structures = Programs
  47. Structure and Interpretation of Computer Programs
  48. Concepts, Techniques, and Models of Computer Programming
  49. How to Design Programs: An Introduction to Programming and Computing
  50. A Science of Operations: Machines, Logic and the Invention of Programming
  51. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology
  52. The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation
  53. The Annotated Turing: A Guided Tour Through Alan Turing's Historic Paper on Computability and the Turing Machine
  54. Computability: An Introduction to Recursive Function Theory
  55. How To Solve It: A New Aspect of Mathematical Method
  56. Types and Programming Languages
  57. Computer Algebra and Symbolic Computation: Elementary Algorithms
  58. Computer Algebra and Symbolic Computation: Mathematical Methods
  59. Commonsense Reasoning
  60. Using Language
  61. Computer Vision
  62. Alice's Adventures in Wonderland
  63. Gödel, Escher, Bach: An Eternal Golden Braid

Video Game Development Reading List


  64. Game Programming Gems - 1 2 3 4 5 6 7
  65. AI Game Programming Wisdom - 1 2 3 4
  66. Making Games with Python and Pygame
  67. Invent Your Own Computer Games With Python
  68. Bit by Bit

u/mhornberger · 1 pointr/artificial

I find that more threatening than promising. I recently re-read Blindsight and Echopraxia by Peter Watts. One of his main themes is that transhumans and AIs are making scientific advances that are so far out there that "baseline" humans can't even understand what they're talking about.

The interesting non-fiction book Our Final Invention also touches on this at some length. We might get ever-more amazing discoveries, but the price would be that we really don't know how anything works. We would be as children, taking everything on trust because we're not smart enough to understand the answers or contribute to the conversation. But this presupposes that the AIs or augmented intelligences would be vastly smarter than us, not just tools we ourselves use to ask better questions. Who knows. But an interesting set of questions, in any case.

u/crwcomposer · 1 pointr/atheism

You should check out the book "On Intelligence" by Jeff Hawkins.

http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/0805074562

It's an amazing book that goes into the neuroscience of consciousness, intelligence, and yes, free will.

u/sw4yed · 1 pointr/mlclass

I'm taking an ML course at my institution alongside this course. The book assigned in my other course was Machine Learning by Mitchell. It's pretty old, but my professor referred to it as the bible for ML, and I've heard that Bible reference many times before.

I've been doing the readings and I like the book. The way it reads is very nice and it's an awesome supplement for anyone interested in ML.

The Bishop book mentioned here was a (strongly) recommended supplement in the course as well. I got both and although Bishop requires more focus to read (IMO) it has tons of great information.

u/FrogBoiling · 1 pointr/AskReddit

I would recommend this book and this book

u/as4nt · 1 pointr/italy

There are several publishers; if you're looking for an introduction to NLP with an academic approach, I recommend: Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition.

Alternatively, Natural Language Processing with Python.

u/me2i81 · 1 pointr/compsci

A few more to consider:
A recent book that looks like fun is the Nature of Computation.
Sipser's book on theory of computation is good.
Algorithms by Dasgupta, Papadimitriou, and Vazirani is a very accessible algorithms book, as is Skiena. CLRS is a good reference, but dull as a read.
Comprehensive Mathematics for Computer Scientists volumes one and two might be interesting to look at.

u/stateful · 1 pointr/programming

Some great responses here everyone, thank you. The book Write Great Code: Volume 1: Understanding the Machine helped me understand.

u/gnuvince · 1 pointr/programming

I found good reviews for this book. Anyone has opinions on it?

u/SupportVectorMachine · 1 pointr/MLQuestions

I used Weka a lot when I was first starting out, and I can confidently recommend it. Data Mining: Practical Machine Learning Tools and Techniques is essentially a companion volume to Weka and its documentation, and it provides a great introduction to machine learning methodology in general; I recommend it, too. For user friendliness and visualization, I think it's a very good place to start.

Over time, I moved to R, which has the advantage of being more likely to incorporate new, cutting-edge methods that people have coded and released in packages. (There are also other R-based ML suites, such as Rattle.) If you like Weka, the transition into R can be pretty smooth, since R and Weka can talk to each other through R's Java interface. R is also good for applying command-line options (which can also be done in Weka's console), which you will eventually want to do as you get more familiar with your techniques of choice, whether they're found in Weka or not.

Python is a popular option for a lot of users (and with it you can use, among other things, Google's open-source TensorFlow suite), and it has the advantage of generally having pretty easy-to-read code, good visualization options, and a huge and very dedicated user base.

u/Kacawi · 1 pointr/rstats

As a book for beginning R programmers, I would recommend The Art of R Programming: A Tour of Statistical Software Design, written by Norman Matloff. As a general machine learning book, I liked this book, written by Peter Flach.

u/derwisch · 1 pointr/statistics

If you are geeky about it, "The Grammar of Graphics" is a bit theory-heavy, but may be worth the dive.

After all, it's where the "gg" in the "ggplot" package comes from.

u/stewartr · 1 pointr/programming

In my MSCS program we spent a semester going through this. It was a hard class! As we hurled question after confused question about our text

http://www.amazon.com/Introduction-Automata-Theory-Languages-Computation/dp/0201441241

(first edition), the instructor exclaimed, "No, yes, this is good!"

u/Philosopher_King · 1 pointr/atheism

You probably noticed, but maybe not, that he edited his comment to indicate that it was a movie quote meant mostly as sarcasm.

BTW, I agree with your stance on morals. Reminds me of something I read recently... brb...

Got it: On Intelligence, by Jeff Hawkins. I think you'll like it.

u/Vultyre · 1 pointr/EngineeringStudents

I haven't used Origin 9 before, but from what I can see online it looks like a GUI-driven plotter. How comfortable are you with basic coding? Depending on circumstances, I use either Matplotlib or ggplot2 for data visualization at work and school.

Matplotlib is a plotting library for Python. Its standard plots can look pretty bland, but the Seaborn library helps a lot with appearance. You can handle interactivity with Jupyter Notebooks and Jupyter Widgets.
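
For anyone weighing the options, a minimal Matplotlib-plus-Seaborn sketch looks roughly like this (the data and column names are invented, and it assumes pandas, Matplotlib, and a reasonably recent Seaborn are installed):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# made-up data standing in for real measurements
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "load": np.linspace(0, 10, 50),
    "deflection": np.linspace(0, 10, 50) * 0.8 + rng.normal(0, 0.5, 50),
})

sns.set_theme()  # nicer defaults than bare Matplotlib (sns.set() on older versions)
fig, ax = plt.subplots()
sns.scatterplot(data=df, x="load", y="deflection", ax=ax)
ax.set_title("Load vs. deflection (toy data)")
plt.show()
```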

ggplot2 is a library for R that is built around the idea of The Grammar of Graphics. Examples of some of the available plots are here. You can also use the ggthemes package to change plot appearance on-the-fly. An option for interactive plots is Plotly's ggplot2 library.

u/Eurchus · 1 pointr/MachineLearning

Sutton and Barto wrote the standard text in reinforcement learning.

Here is the AlphaGo paper in Nature.

The FAQ has a list of resources for learning ML including links to Hinton's Coursera course on neural nets.

u/descoladan · 1 pointr/compsci

http://www.amazon.com/Mastering-Algorithms-C-Kyle-Loudon/dp/1565924533#
While the title is algorithms, it taught me a great deal about data structures using C

u/o0o · 1 pointr/programming

They're called ε transitions in the book from which I first learned (HMU):

http://www.amazon.com/Introduction-Automata-Theory-Languages-Computation/dp/0201441241

But, all subsequent classes I've been in have called them lambda.

I continue to call them ε transitions because before I had heard otherwise, I read a paper discussing the sorts of (equivalent) automata that are created by allowing multiple NFAs to run concurrently.

http://citeseer.ist.psu.edu/297888.html

In the paper, the sort of transition that is created due to the shuffle operator is called a "lambda transition," so that is what I think of when I hear "lambda".

I recommend the read. It introduces the shuffle operator, adds a new construct to the "Thompson Construction" method of RE->NFA, and discusses their equivalence to binary Petri Nets.

It should be noted that a review problem in HMU discusses the closure properties of shuffling two regular languages (i.e., the shuffle of 2 regular languages is still regular). Kozen also discusses the shuffle in his intro to automata book.
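
For anyone who hasn't met ε (or λ) transitions before: the workhorse operation is the ε-closure, the set of states reachable from a given set of states using only ε moves. A rough sketch in Python, with an invented transition table purely for illustration:

```python
def epsilon_closure(states, eps_moves):
    """Return every state reachable from `states` using only epsilon transitions.
    eps_moves maps a state to the set of states one epsilon move away."""
    closure = set(states)
    stack = list(states)
    while stack:
        state = stack.pop()
        for nxt in eps_moves.get(state, set()):
            if nxt not in closure:
                closure.add(nxt)
                stack.append(nxt)
    return closure

# toy NFA fragment: 0 --eps--> 1, 1 --eps--> 2, 2 has no epsilon moves
eps_moves = {0: {1}, 1: {2}}
print(epsilon_closure({0}, eps_moves))  # {0, 1, 2}
```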

u/zEconomist · 1 pointr/askscience

I highly recommend Nine Algorithms That Changed the Future. Chapter 5 covers error-correcting codes, which are summarized somewhat well here.

Your home PC is (probably?) not utilizing these algorithms.
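
If you want the flavor of that chapter in a few lines, the simplest member of the error-correcting family is a repetition code. This toy sketch (my own illustration, not from the book) repeats each bit three times and decodes by majority vote, which corrects any single flipped bit per block:

```python
def encode(bits, n=3):
    """Repeat every bit n times."""
    return [b for b in bits for _ in range(n)]

def decode(coded, n=3):
    """Recover each original bit by majority vote over its n copies."""
    out = []
    for i in range(0, len(coded), n):
        block = coded[i:i + n]
        out.append(1 if sum(block) > n // 2 else 0)
    return out

message = [1, 0, 1, 1]
sent = encode(message)
sent[1] = 1 - sent[1]           # flip one bit to simulate channel noise
print(decode(sent) == message)  # True: the single error is corrected
```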

u/KoOkIe_MoNsTeR93 · 1 pointr/learnmachinelearning

The book that I followed, and which I think is pretty standard, is

https://www.amazon.com/Reinforcement-Learning-Introduction-Adaptive-Computation/dp/0262193981

Curated lists available on Github

https://github.com/muupan/deep-reinforcement-learning-papers

https://github.com/aikorea/awesome-rl

The deepmind website


https://deepmind.com/blog/deep-reinforcement-learning/

The above content is what I am familiar with. Perhaps there are better resources others can point toward.

u/Boris999 · 1 pointr/learnprogramming

I bought this to read while I travel around Europe; it might be what you are looking for.

Nine Algorithms that Changed the Future

Edit: Also, listen to the talking machines podcast for some insight into machine learning.

u/trillic · 1 pointr/computerscience

I enjoyed The Nature of Computation during my theory course.

u/aDFP · 1 pointr/Games

The same way consumers always push for the things they want, by seeking it out and spending money on it.

For starters, go play Prom Week and talk about it on Reddit, or find the other games and tools that are in development.

Or pick up a good book on the subject, and get involved with the discussion.

Money is power, and Reddit is a powerful collective.

u/fanglet · 1 pointr/linguistics

Hell yes. If you want a slightly less intense introduction to computational linguistics, I'd also recommend Natural Language Processing with Python.

u/Lamez · 1 pointr/linux

I noticed it was online, is there a place where I can get a tangible copy?

Is this the one?

u/artpendegrast · 1 pointr/compsci

This might have some material that will help you out.
Natural Language Processing with Python
It can also be found for free, too.

u/evolvingstuff · 1 pointr/programming

Here is a great book, in medium-sized words (it gets technical, but is still very readable): http://www.amazon.com/Introduction-Genetic-Algorithms-Complex-Adaptive/dp/0262631857

u/cerules · 1 pointr/computerscience

I had to read this book for my first year seminar in college.
It explains some important algorithms in ways that someone with no experience in computer science can understand. I enjoyed it.

http://www.amazon.com/Nine-Algorithms-That-Changed-Future/dp/0691158193

u/robokind · 1 pointr/IAmA

There is a lot of code running on the robot, but we generally avoid recursion to keep things more maintainable. Its goals are defined by the behaviors and motivations you have specified, which can become a complex subject.

I'd recommend Machine Learning by Tom Mitchell

u/macemoth · 1 pointr/computerscience

Here's a PDF which I used as reference and which covers most concepts:

https://cglab.ca/~michiel/TheoryOfComputation/TheoryOfComputation.pdf

​

For more depth, this book is often seen as "the bible" of this topic:

https://www.amazon.com/Introduction-Automata-Theory-Languages-Computation/dp/0321455363

​

If you're looking for exercises, this could be a good resource (designing Turing Machines in particular is a matter of practice):

https://www.cl.cam.ac.uk/teaching/exams/pastpapers/t-ComputationTheory.html

​

If primitive recursive functions are also relevant for you, I can strongly recommend this video:

https://www.youtube.com/watch?v=cjq0X-vfvYY

u/crabpot8 · 1 pointr/compscipapers

Regarding the AI, have you read this book? I'm in the middle of it and it's been really interesting so far!

u/leokassio · 1 pointr/datascience

Many thanks for the Kaggle tip and the book!
I don't know the book you mentioned, but I'd also like to recommend Data Mining, from the authors of Weka, which is a very good book too. (http://www.amazon.ca/Data-Mining-Practical-Learning-Techniques/dp/0123748569/ref=sr_1_1?s=books&ie=UTF8&qid=1425389007&sr=1-1&keywords=data+mining)

u/ryanbuck_ · 1 pointr/MachineLearning

I’d recommend Melanie Mitchell’s book on Genetic Algorithms. If you came up with all that on your own I’m thinking you’d find it mighty fascinating.

edit: you might also check out the field called ‘artificial life ’ (sorry I’m only partially educated, so this could be a false trail) if it’s the population dynamics/emergent behaviors that intrigue you.

u/duppy · 1 pointr/compsci

you beat me to this, but you didn't include a link.


u/eco32I · 0 pointsr/Python

ggplot is an implementation of the ideas from "Grammar of Graphics". It is so much more than just pretty looks.