Reddit reviews Computer Systems: A Programmer's Perspective (2nd Edition)
We found 32 Reddit comments about Computer Systems: A Programmer's Perspective (2nd Edition). Here are the top ones, ranked by their Reddit score.
Here is a "curriculum" of sorts I would suggest, as it's fairly close to how I learned:
Generally you'll probably want to look into IA-32, and the best starting point is the Intel Architecture manual itself; the .pdf can be found here (pdf link).
Because of the depth of that .pdf, I would suggest using it mainly as a reference guide while studying "Computer Systems: A Programmer's Perspective" and "Secrets of Reverse Engineering".
Of course, if you just want to do "pentesting/vuln assessment", in which you rely more on toolsets (for example, Nmap>Nessus>Metasploit) structured around a methodology/framework, then you may want to look into one of the PACKT books on Kali or BackTrack, get familiar with the tools you will use such as Nmap and Wireshark, and learn basic networking (a simple CompTIA Network+ book will be a good enough start). I personally did not go this route, nor would I recommend it, as it generally shies away from the foundations and seems to me to be settling for becoming comfortable with tools that abstract you from the real "meat" of exploitation and all the things that make NetSec great, fun and challenging in the first place. But everyone is different and it's really more of a personal choice. (By the way, I'm not suggesting this is "lame" or anything, it was just not for me.)
*edited a name out
Sounds like you don't know some of your low-level computing fundamentals as well as you should for the jobs you want. I recommend studying up on those, and then developing more familiarity with them by tinkering or doing relevant projects.
If you're looking for a book recommendation, try Computer Systems: A Programmer's Perspective. If you read and understand chapter 2 (it's dry, hang in there), your question #1 will seem trivial to you (and you'll learn much more as well; pretty much all of it is important material). The book overall is a great read for embedded programmers, and anyone doing any form of low-level computing. There is a newer edition but the one I linked is the one I read.
I'm always sad to see this book never get mentioned in these sorts of discussions: Computer Systems: A Programmer's Perspective.
This book is awesome, and should be required reading for any serious programmer. It covers so much, and does it clearly.
You cover the complete system of computing as seen by a programmer, from the lowest level representation of numbers, more complicated data structures, programming language constructs, and entire programs. By this I mean how n-bit integers are represented, floats and doubles, structs, classes, and entire binaries including how they are linked and loaded.
You see programming languages from the lowest level machine code to modern x86/x64 assembly and higher level languages like C and Java.
You learn about processor architecture from simple sequential execution through modern pipelined architectures.
You learn about the memory hierarchy, from I/D caches to L1, L2, main memory, disk, network, etc. You learn about operating system constructs like virtual memory.
You learn system-level I/O, network programming, and concurrent programming including both I/O multiplexing and threads exploiting real parallelism.
By the concluding chapter, you've built a small toy threaded web server. And you understand its execution from loading the binary into memory down through how it's executed on modern architectures.
Computer Systems - A Programmer's Perspective is a great book IMO, from Carnegie Mellon's CS course.
I read Computer Systems: A Programmer's Perspective in my first BSc year in Software Engineering.
Computer Systems: A Programmer's Perspective
Are you in high school or college?
C# is very similar to Java - it's object oriented, has garbage collection (meaning you can get away with not learning about memory), and strongly typed. I wouldn't really say it's that useful to learn if you already know Java unless you end up working for a software company that does work in C#.
C doesn't have any of those nice features of Java and C# (strong typing, garbage collection), and all variables - pointers, integers, characters - are treated as bits stored somewhere in memory, either on the stack or the heap. Arrays and structs (similar to objects in Java, sort of) are longer blocks of memory. C++ is an object-oriented version of C, and if you already know C and Java you would be able to pick up C++ fairly quickly.
Learning C forces you to learn a lot of memory and system concepts. It's not really used in the software industry as much because, since it's missing all those nice Java and C# features, it can be difficult to write huge, complicated systems that are maintainable. If you want to be a serious developer, you DO need to learn these things before you graduate from college. Most major software companies ask systems/memory type questions in their interviews.
However, if you're in high school, I wouldn't say it's really necessary to try to learn C on your own unless you really want to. A good computer science program in college would require at least one class on C programming. If you are really interested, I would look at this to learn C, and later this for more information on how computers work.
TL;DR: Learn C in college if you want to be a software engineer and, if you're in high school, learn whatever you find interesting.
Umm... if you want to do that... then write a simple program to, let's say, sum 10 numbers in C. Now, compile this file and "step" through the program in gdb... as you see each assembly line executed you will have an understanding of what's going on.
However, for some sanity, please refer to the Intel manual or use this book (there might be other references as well): http://www.amazon.com/Computer-Systems-Programmers-Perspective-Edition/dp/0136108040
There's a free beta edition somewhere... and you will need Chapters 2 and 3. Read both of them thoroughly in one full day and you'll be golden. Let me know how it goes.
If you are looking to go a little deeper, I can recommend this book as well:
"Computer Systems: A Programmer's Perspective"
http://www.amazon.com/Computer-Systems-Programmers-Perspective-2nd/dp/0136108040/ref=sr_1_1?ie=UTF8&qid=1292882697&sr=8-1
This book has a similar thread, but is much more in-depth and covers modern Intel and AMD processors, modern Operating Systems and more.
I learned all about linkers, compiler optimization, the crazy memory management structure of x86...
tl;dr: Practice & perseverance are the main points. No one is really any good at programming until they've got a few years of churning out code, so don't get discouraged. Finally: don't let the breadth of the computer science/software world overwhelm you. Focus on small pieces, and in a few years you'll have learned more than you would have expected.
Because unlike what the OP said, IPs that are actively being used to route information are ints in memory. When I ping Facebook.com, the program doesn't get 173.252.120.6 back, it gets 2919004166 back (which is identical to getting 0xADFC7806 back, because it's just a bit pattern; there's no way to differentiate the decimal and the hex, it's just a display thing). It has to convert that number to the 4 octets that make sense to humans.
Same thing in reverse. When I go to http://173.252.120.6, it can't just start sending data out to 173.252.120.6, it has to convert it to 0xADFC7806 first. This is true of any program that does networking. Why is that? It saves space in an area where space is most important. So you see, it's not like the program is doing an extra step to allow this; it's actually doing one less step. What's more, I bet Mozilla (I use Firefox, but this is probably true of all browsers) didn't implement a specific "convert IP to int" function; this is default functionality of networking libraries, libraries that would routinely be handling both octet notation and ints, because most programs dealing with this would already be dealing with low-level ints. So not only are they doing one less step, Mozilla would have to go out of their way to specifically disallow this.
And again, for completeness' sake, an IP address that is traversing a network is stored in a big-endian int, meaning the bit order you see here (1010 1101 1111 1100 0111 1000 0000 0110) isn't actually the bit order a network switch will receive when it needs to route the packet. Also, it's technically not an int; it's a structure that only contains a single int.
/* Internet address. */
struct in_addr {
    uint32_t s_addr; /* address in network byte order */
};
If you want to learn about low-level hardware and what is actually happening behind the scenes, I strongly recommend Computer Systems: A Programmer's Perspective. It's a hard book, no doubt, but it will show you everything that happens that you don't see when you compile a program and then run it, including memory management, cache fetching, how hard drives store data, and how processor pipelining works, all to the level of detail in my posts.
http://www.amazon.com/Computer-Systems-Programmers-Perspective-Edition/dp/0136108040
You're looking for a clear dividing line, and there isn't one. The term "emulator" is more descriptive of the problem you're trying to solve (I have a program for X but I only have Y, how can I get it to run on Y?) than any particular implementation. It's all in the name, "emulate" means to copy or imitate. If that's the goal of the software, or even just how you're using software that was designed for another purpose, it could be considered an emulator.
> So, [...] it has to be called "emulating"?
No, but if it fits the definition you shouldn't complain if someone says it is.
> I really think that "computer system" refers to hardware, not software.
Maybe you'll trust the textbook I was taught from.
> A computer system consists of hardware and systems software that work together to run application programs.
Maybe you're not looking for this sort of thing, and it's a bit more advanced (it expects you to know C or Java, though any programming experience will be good), but it's a goldmine of information and covers a broad range of topics.
http://www.amazon.com/dp/0136108040/
From a quick Googling you can find a pdf of the most recent version of it on this guy's Github:
https://github.com/largetalk/datum/blob/master/others/Computer%20Systems%20-%20A%20Programmer's%20Perspective%20(2nd%20Edition).pdf
You can view the raw file to download the PDF.
The two starting books that gave me a great deal of understanding on systems (which I think is one of the toughest things to grasp and CLRS and the Art of Programming have already been mentioned):
[Computer Systems: A Programmer's Perspective] (http://www.amazon.com/Computer-Systems-Programmers-Perspective-Edition/dp/0136108040/ref=sr_1_2?ie=UTF8&qid=1407529949&sr=8-2&keywords=systems+computer)
This along with its labs served as a crash course in how the system works, particularly a lot about assembly and low-level networking.
The Elements of Computing Systems: Building a Modern Computer from First Principles
I've mostly only done the low-level stuff but it is the most fun way I have found to learn starting all the way at gate architecture. It pairs well if you have read Petzold's Code. A great introduction to the way computers work from the ground up.
This book has a pretty strong breakdown of how computers and processors work, and goes into more advanced things that modern day hacks are based off of, like address translation and virtualization with the recent Intel bugs:
https://www.amazon.com/Computer-Systems-Programmers-Perspective-2nd/dp/0136108040
The book can be found online for free. The authors' website has practice challenges that you can download, one of them being reverse engineering a "binary bomb". I did a similar challenge, and it felt pretty awesome when I was able to get around safeguards by working with the binaries and causing buffer overflows.
Senior Level Software Engineer Reading List
Read This First
Fundamentals
Development Theory
Philosophy of Programming
Mentality
Software Engineering Skill Sets
Design
History
Specialist Skills
DevOps Reading List
Computer Systems
While being a self-taught sysadmin is great, learning the internals of how things work can really extend your knowledge beyond what you may have considered possible. This starts to get more into the CS portion of things, but who cares. It's still great stuff to know, and if you know this you will really be set apart. I'm not sure if it will help you directly as a sysadmin, but it may quench your thirst. I'm both a programmer and a Unix admin, so I tend to like both. I own or have owned most of these and enjoy them greatly. You may also consider renting them or just downloading them. I can say that knowing how things operate internally is great; it fills in a lot of holes.
OS Internals
While you obviously are successful at running and maintaining Unix-like systems, how much do you know about their internal functions? While reading source code is the best method, some great books will save you many hours of time and will be a bit more enjoyable. These books are amazing:
The Design and Implementation of the FreeBSD Operating System
Linux Kernel Development
Advanced Programming in the UNIX Environment
Networking
Learning the actual function of networking at the code level is really interesting. There's a whole other world below the implementation. You likely know a lot of this.
Computer Networks
TCP/IP Illustrated, Vol. 1: The Protocols
Unix Network Programming, Volume 1: The Sockets Networking API
Compilers/Low Level computer Function
Knowing how a computer actually works, from electricity, to EE principles, through assembly to compilers, may also interest you.
Code: The Hidden Language of Computer Hardware and Software
Computer Systems: A Programmer's Perspective
Compilers: Principles, Techniques, and Tools
There's a course at my school that covers exactly that (216 at UMD).
The book that's recommended is Computer Systems: A Programmer's Perspective; it's exactly what you're looking for. Code is only used for examples (C and, more often, assembly). It's mostly details on CPU instructions, hardware implementation, and the creation of Unix.
If you want x86 assembly, this book is very good: http://www.amazon.com/Computer-Systems-Programmers-Perspective-Edition/dp/0136108040/ref=dp_ob_title_bk/180-6741587-3105245
I'm taking an assembly class this semester that involves writing assembly from scratch, and this book (which is required for the class) is a lifesaver because the professor isn't that great at summarizing the important points.
I think it's a good book. It starts easy, and it has a lot of exercises with answers at the back of the chapter so you can check your work pretty easily.
Just for my own clarification, is that just applied programming? I am also a freshman, and I have to make a decision about this.
On one side it really is not that hard to learn how to program. Anyone can make it through LPTHW or hell even K&R... but being able to grapple SICP is a whole other story.
I really enjoy the whole spectrum, but what I am really looking for is the traditional theoretical courses. These sorts of lessons are what really make me a better programmer, I have found. I was a crappy PHP dev until I learned C; then I was a crappy C dev until I picked up Computer Systems: A Programmer's Perspective.
The one thing I want to avoid is sitting through garbage I am never gonna use, like C#. First off, it's non-standard (no ISO or ECMA standard for the later versions). Secondly, non-free software doesn't teach you anything; it merely makes you memorize what buttons and knobs to press.
So any upper classmen want to give advice to clueless youngins :D
Computer Systems: A Programmer's Perspective is the one we use at my school, and it is pretty awesome. It's engaging and entertaining inasmuch as a book on systems programming can be. There are tons of exercises and there is a website where you can work on lab assignments that the authors created.
For this case I usually recommend Computer Systems: A Programmer's Perspective. However it's a layer deeper than the OP is looking for:
> This book focuses on the key concepts of basic network programming, program structure and execution, running programs on a system, and interaction and communication between programs. (Amazon)
However, I think that this should give her a better understanding of system and networking internals and will allow her to pick up any other topic. Additionally, I think it's one of those books that contains timeless knowledge, which stands in contrast to a lot of other IT books out there.
My physical copy stands directly next to Physically-Based Rendering.
I haven't read it in years, but I remember The C Programming Language being very useful.
If you want to learn more about the low level details of how computers work in general, I own the following books and recommend them:
---
Computer Systems: A Programmer's Perspective
Computer Organization and Embedded Systems
Hacking: The Art of Exploitation
Operating System Concepts Essentials
Computer Networking: A Top-Down Approach
The question you want to ask is how dynamic memory allocation is implemented. You can probably find something online that walks through an implementation of malloc. Essentially, it has to take a big block of memory, and when a request is received it has to find space for it in this big block. A naive implementation might have a header for each block of memory, where that header points to the next block of memory and records the current state of the block (free or allocated). When a memory allocation request is received, it finds a free block large enough by "jumping" from header to header, alters that header to say allocated, and then adds a header at the end of the allocated bit of memory which says how much is left over. When memory is freed, the header is changed to reflect its status and the block is merged with surrounding free blocks. Clearly, I'm brushing over a lot of details, and a challenge is finding strategies for avoiding fragmenting memory. You should read the chapter on memory allocation from Computer Systems.
We used this one when I took the course two years ago. I don't think it's changed, but you might want to double check that.
This kind of stuff is commonly taught in university CS courses called something like "Computer Architecture." The book that was used in my computer architecture course was Computer Systems: A Programmer's Perspective, by Bryant and O'Hallaron (book home page). This book uses C and IA32 assembly (or rather something the authors call "Y86," which is a simplified version of IA32).
I cannot support copyright violations, so I will not say anything that might lead you to believe you might be able to find a PDF of this book on the Web if you Google for the title.
> I wanted to write a program to emulate a CPU so I could fully understand how its operation actually worked, but I have no idea what the "start up" process is, or how we determine when to get a new instruction
The CPU begins loading instructions at a fixed address known as its reset vector. On AVR microcontrollers the reset vector is always at address 0, and it is always a jump instruction with the address where startup code actually begins. On x86 processors the reset vector is at address 0xFFFFFFF0. For a CPU emulator, which presumably doesn't need to deal with interrupt vectors or anything like that, I would just start loading instructions from address 0.
Also, you should look at some of the simplified CPU designs that have been made for teaching. In my classes we used the LC-3 (a very simple and straightforward design) then moved to the y86 (a simplified x86 design mainly used to teach pipelining). It will be much more realistic to make an emulator for one of them rather than an extremely complex real-world design. I've linked below to textbooks that are based around each of those designs.
http://highered.mheducation.com/sites/0072467509/index.html
http://www.amazon.com/Computer-Systems-Programmers-Perspective-Edition/dp/0136108040
> I know basically nothing about x86 internals to make an accurate statement
If you're interested in learning about the internals, check out some real world technologies articles. For instance, Intel’s Haswell CPU Microarchitecture. On page 3, Haswell Out-of-Order Scheduling, it talks about the register renaming that goes on to support out-of-order execution.
It's more detail than most people really need to know, but it's interesting to understand what modern microprocessors are doing under the hood during program execution.
For anyone else reading, an even easier introduction to the topic is in the awesome Computer Systems: A Programmer's Perspective. It'll get you comfortable with the machine language representations of programs first, and then move on to basic architecture for sequential execution, and finally pipelined architecture. It's a solid base to move forward from to modern architecture articles like on real world technologies. There are more detailed treatments if you're really interested, e.g. Computer Architecture, A Quantitative Approach, but I have never read it so can't say much about it.
Hey dude, definitely do Computer Systems: A Programmer's Perspective. Also, I saw you mentioned that Google didn't help, but it definitely would have. Look up Stack Overflow's most recommended books and you'll find some awesome C stuff. Good luck!
Sorry, I just didn't put that right. I meant how data is stored in memory, and how it can be manipulated using C. I'm using this book for the course, if that helps clear what I'm trying to say.