Reddit reviews: The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations

We found 4 Reddit comments about The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations. Here are the top ones, ranked by their Reddit score.


4 Reddit comments about The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations:

u/mdaffin · 33 points · r/devops

First, read The Phoenix Project and The DevOps Handbook as well as the Google SRE book.

After this you should have a good idea of what DevOps is really meant to be about and can start to learn how to implement the details. The key to it is not technology but process: don't settle on one particular bit of tech, as it may or may not be relevant to any job you get. Instead, focus on your problem-solving skills and on being able to find and pick up new technology to solve the problems you encounter.

For this you should focus on the whole process of getting something from development into production. Start with a simple application written in any language of your choice (learning to program is a key part of being able to apply DevOps principles effectively) and learn how to deploy and run it in production by manual means.
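For instance, a "simple application" here can be as small as a single file. The sketch below uses only Python's standard library; the file name, port and greeting are made up, and any language would do just as well.

```python
# app.py - a deliberately tiny service to practice deploying by hand.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with a plain-text greeting.
        body = b"hello from my first deployable app\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind to all interfaces so it is reachable once copied onto a server.
    HTTPServer(("0.0.0.0", 8000), HelloHandler).serve_forever()
```

Deploying something this small manually (copy it to a server, run it, put it behind a reverse proxy) makes the later automation steps much easier to appreciate.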

You should get very familiar with version control, especially git which almost all of DevOps revolves around.

From here, figure out the most painful points of the process and attempt to solve them one by one, increasing the level of automation and tooling you introduce as you go.

Typically you will want to start with environment automation: being able to bring up an environment with a single command, reusable for dev, staging and production.
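As a rough illustration of the "single command" idea, here is a sketch that assumes Docker Compose is installed and that per-environment files such as docker-compose.staging.yml already exist (the filenames and environment names are hypothetical):

```python
# bring_up.py - bring up an environment with one command:
#   python bring_up.py staging
import subprocess
import sys

VALID_ENVS = {"dev", "staging", "production"}

def bring_up(env: str) -> None:
    if env not in VALID_ENVS:
        raise SystemExit(f"unknown environment: {env}")
    # The command is identical for every environment; only the config file differs.
    compose_file = f"docker-compose.{env}.yml"
    subprocess.run(["docker", "compose", "-f", compose_file, "up", "-d"], check=True)

if __name__ == "__main__":
    bring_up(sys.argv[1] if len(sys.argv) > 1 else "dev")
```

Tools like Terraform or Vagrant fill the same role at other layers; the point is that the environment is reproducible from a single entry point.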

Then look at deployments and getting your code onto each environment. Typically Ansible, SaltStack, Chef or Puppet are used for this.

Next, learn about CI and CD and how a push or a tag in git can trigger a deployment of your application.
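One minimal way to picture "deploy from a push or tag": a CI job runs after the tests pass and picks the target environment from the git ref. The sketch below only checks whether HEAD carries a tag; the actual deploy step is left as a comment because it depends entirely on your setup.

```python
# deploy_on_tag.py - decide where to deploy based on the current git ref.
import subprocess
from typing import Optional

def current_tag() -> Optional[str]:
    # `git describe --tags --exact-match` succeeds only if HEAD sits exactly on a tag.
    result = subprocess.run(
        ["git", "describe", "--tags", "--exact-match"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else None

if __name__ == "__main__":
    tag = current_tag()
    if tag:
        print(f"HEAD is tagged {tag}: deploy to production")
        # call your deployment tooling here
    else:
        print("no tag on HEAD: deploy to staging only")
```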

Now that you have a basic pipeline, you should add automated testing (unit and integration tests, as well as static analysis such as linters) to give you better confidence in your deployments.
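A sketch of what the unit-test layer can look like, assuming pytest as the runner (the function under test is invented for the example); a linter such as flake8 or ruff would run as a separate pipeline step:

```python
# test_pricing.py - run by the pipeline on every push, e.g. with `pytest`.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """The code under test; in a real project it would live in the application."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_reduces_price():
    assert apply_discount(100.0, 25) == 75.0

def test_invalid_discount_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```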

Once your pipeline is fully automated it is time to look at metrics and logging. Prometheus + Grafana (metrics) and the ELK stack (logging) are good for this.
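To make the metrics side concrete, here is a sketch using the prometheus_client Python package; the metric names and port are made up, and Grafana would then graph whatever Prometheus scrapes from the /metrics endpoint:

```python
# metrics.py - expose application metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                 # record how long the "work" took
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.inc()                       # count the request

if __name__ == "__main__":
    start_http_server(8001)              # serves http://localhost:8001/metrics
    while True:
        handle_request()
```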

At this point you should have a good grasp of DevOps processes and can continue to expand from here. Start to look at containerisation, including Docker and then Kubernetes, and how they can solve some of the problems you had above - but also where they are less suitable.

The tech mentioned above is a good starting point, but it is far from the only thing you should be learning. The exact tech stacks you use are not very important - the important part is knowing how to apply tech to solve problems. Every company you work for will solve these problems in different ways, so to be good at DevOps you need to be flexible about which tech you use.

Note that a lot of roles labelled DevOps are not really DevOps but more SysAdmin or Automation Engineer roles; while these are still useful roles, they can be very misleading, disappointing or not hugely suitable for properly learning DevOps principles.

I also do not fully believe there is such a thing as a junior DevOps role - DevOps is a combination of both Dev and Ops and so tends to require a lot of knowledge of both areas. Typically people get into it from either a Dev or an Ops role by cross-learning towards the other side, starting to apply DevOps principles in their day-to-day work.

Most roles labelled as junior roles are really just junior SysAdmin roles with a heavy slant on automation. That can be one way to start, but keep in mind that you will be learning more of the sysadmin side of things than full DevOps and will need to branch out to the Dev side, like with any SysAdmin role.

Even fully fledged DevOps roles can be misleading and can mean anything from a Dev who works on deployment pipelines to a SysAdmin who automates infrastructure. There is no good definition of a DevOps role, as it is really a multi-team discipline about bringing the experience of Devs and Ops together rather than one person knowing both sides (though some cross-learning is required).

You will find quite a lot of DevOps roles that do not actually follow DevOps principles at all - one key sign of this is the DevOps team being separate from the Dev and Ops teams; in fact, if any team is separate, you are not fully following DevOps principles.

u/TheBigLewinski · 20 points · r/webdev

The best single resource, IMO, is The DevOps Handbook.

Devops is a spectrum subject, like photography or design. Yes, a professional photographer would commonly use Photoshop, Lightroom, and a Canon EOS 1D. But you don't become a photographer by using those tools; you become a photographer when the tools are solutions to what you want to accomplish.

Similarly, you don't "do devops" by simply building a Docker-enabled CI/CD pipeline. You're doing devops when you create measurable visibility into productivity, when you create organizational memory and learning among teams, and when you create autonomy and cohesion among engineers - unobstructed by meaningless process - to build a sustainable, performant and competitive product.

It helps to understand that devops is largely a product of large corporations facing the challenge of maintaining a complex product which must always be available, secure, as bug-free as possible and performant over the course of years. And they must manage anywhere from a 4-person team to thousands of engineers who are constantly working on the product and intrinsically challenging every one of the attributes above.

To that end, there are tenets of devops more than there are hard rules, far too numerous and deep to cover in a reddit comment, but here are some surface-level answers to your questions...

Basic workflow starts with trunk-based development. Everyone constantly publishes to master. This is typically achievable by having a set of (integration) tests run on each pull request to ensure the code is deployable. The level of testing done varies considerably between companies. Typically, unit testing is a minimum requirement, but there are also security, performance, UI and QA tests, and more.

It's important in devops that each developer is provided with immediate and thorough feedback. Tests and failures should happen quickly. If a developer is waiting for two hours each time new code is pushed, it decreases productivity. If messages or failures are unclear, this is also a detriment.

When tests are successful and the pull request is granted, your code is merged into master. This is the "delivery" end of continuous delivery.

"Deployment" is sending the code to production so its publicly available. This process varies tremendously by company, and even by project. Common approaches are "blue/green" and "canary" deployments which deliver the new code to a specific percentage of traffic to help find bugs before full deployment, test performance (make sure new queries don't kill the database when thousands of users hit it, for instance), or sometimes conduct marketing tests (A/B testing).

Engineers should also have simple access to perpetual, easy to understand feedback on the quality of their services after deployment, ranging from usage statistics, to performance, to failures. Again, plenty of tools available to help with this, but the end result of helping engineers refine the product is the goal.

Docker: In the theme of understanding devops, it might help to understand the problems being solved by Docker. They're numerous, and this is not exhaustive.

Docker is a "container" technology, but it's not the only one, it's just the most popular. Like what Kleenex is to tissues.

"Containerizing" your code means you deliver everything needed to run your code. This has cascading effects when it comes to operations:

It bundles all the configuration needed with your code. Before Docker, you would install XAMPP on your local machine in order to run the code base, then push the code to a server, which had all of the configuration needed for its respective environment (e.g. dev, stage, live).

In this scenario, you're running an approximation of the live environment on your local machine. You have a different OS, maybe a different version of PHP, a different php.ini. Maybe production is running Nginx, and you're running Apache locally.

With Docker, your code is packaged, so your local development gets pushed to production and runs that way, as-is: your nginx version and configuration, your PHP version and configuration, and so on.
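A small sketch of that "build once, run the same artifact everywhere" idea, assuming Docker is installed and a Dockerfile exists in the project directory (the image name is invented):

```python
# build_and_run.py - build one image and run that exact artifact anywhere.
import subprocess
import sys

IMAGE = "myapp"  # hypothetical image name

def build(version: str) -> str:
    tag = f"{IMAGE}:{version}"
    subprocess.run(["docker", "build", "-t", tag, "."], check=True)
    return tag

def run(tag: str, port: int = 8000) -> None:
    # Locally and in production the same image runs; only runtime settings differ.
    subprocess.run(["docker", "run", "-d", "-p", f"{port}:8000", tag], check=True)

if __name__ == "__main__":
    run(build(sys.argv[1] if len(sys.argv) > 1 else "dev"))
```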

This has cascading benefits:

  • It's far simpler and more robust to set up local development (once your Docker image is set up). New team members can be provided quick, simple-to-understand instructions, and have the exact environment the rest of the team has, in moments.
  • It abstracts the server configuration away from the code configuration. So, if you want to run a Python 3 app, a PHP 7.1 app and a PHP 7.4 experiment on the same server, you can do so safely and easily.
  • Version updates, security updates and patches to the OS and languages no longer need to be delicately performed on live servers. They can be built and tested locally before deployment (and in the pipeline, if setup correctly).
  • Updates to entire operating systems or language versions are often as simple as switching out version numbers in the build process.
  • All of your changes can now be captured in your github repo. Now we can track down the exact date an OS update or language version update was made, and we can easily roll back if it causes issues.
  • It provides deployment flexibility (the aforementioned blue/green and canary deployments), and since most container builds are stored in their own repo, it's simple to redeploy a known working copy in the event of a critical bug deployment.
  • Containers -or properly containerized apps- are easier, more granular and more efficient to scale. When everything is properly containerized, you can simply deploy more containers of the service receiving the most demand, instead of deploying more servers and code, just because of a specific subset of functionality.

That's all I have energy for. There's a lot, lot more. Hopefully that helps somewhat.

TL;DR: Read The DevOps Handbook.

u/ferstandic · 2 points · r/ADHD

I'm a software developer with about 5 years of experience, and I used to have the same sorts of problems where I would over-commit to getting work done and under-deliver. To summarize, I now only commit to tasks that will take 1-2 days or less at a time, and I make it very, very public what I'm working on in order to manage both my and my team's expectations. Here are the gritty details (ymmv of course):


  1. I got my team to start using a ticketing system and explicitly define what we are working on, with explicit acceptance criteria for each ticket. That way you know where your finish line is. There are other huge benefits to this, but they're outside the scope of your personal workflow. This of course takes buy-in from your team, but at the very least start a board on trello with "todo", "in progress", and "done" columns, try to keep the number of items "in progress" to a minimum, and work on them until they are finished. A cardinal sin here is to move something from "in progress" back to "todo". This thing you're setting up is called a kanban board.

  2. I break the work I do into 1- or 2-workday 'chunks' on our team board, so I don't lose interest or chase another issue before the work I'm doing gets finished. Keep in mind that, depending on how heinous your meeting schedule is, a workday may only be 4 (or fewer :[ ) hours long. An added bonus is that it's easier to express to your team what you're working on, and after practice chunking up your work, you and they can reasonably expect you to finish 2-3 tasks a week. There are always snags because writing software is hard, but in general smaller tasks have less variability.

  3. As I'm coding, I practice test-driven development, which has the benefit of chunking up the work into roughly 30-minute increments. While I'm making tickets for the work I do, I explicitly define the acceptance criteria on the ticket in the form of the tests I'm going to write as I'm coding (the BDD given-when-then form is useful for this), so the flow goes: write tests on the ticket -> implement (failing) test -> implement code to make the test pass -> refactor code (if necessary).

  4. This is a little extreme, but I've adopted a practice called 'the pomodoro technique' to keep me focused on performing 30-minute tasks. Basically you set a timer for 30 minutes, work that long, and when the time elapses take a 5-minute break. After 5 or so 30-minute intervals, you take a 20-30 minute break. There's more to it, but you can read more here. Again, this is a little extreme and most people don't do things like this. Here is the timer I use at work when it's not appropriate to use an actual kitchen timer (the kitchen timer is way more fun though). There's a build for Mac and Windows, but it's open source if you want to build it for something else.
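For what it's worth, the mechanics of the technique fit in a few lines; this is not the linked timer, just a bare-bones sketch of the 30/5-minute variant described above:

```python
# pomodoro.py - a minimal command-line pomodoro timer.
import time

WORK_MINUTES = 30
SHORT_BREAK_MINUTES = 5
LONG_BREAK_MINUTES = 25          # the post suggests 20-30 minutes
INTERVALS_BEFORE_LONG_BREAK = 5

def countdown(minutes: int, label: str) -> None:
    print(f"{label}: {minutes} minutes")
    time.sleep(minutes * 60)     # a real tool would show a ticking display

if __name__ == "__main__":
    interval = 0
    while True:
        interval += 1
        countdown(WORK_MINUTES, f"work block {interval}")
        if interval % INTERVALS_BEFORE_LONG_BREAK == 0:
            countdown(LONG_BREAK_MINUTES, "long break")
        else:
            countdown(SHORT_BREAK_MINUTES, "short break")
```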


Side note: in general I limit my work in progress (WIP limit) to one large task and one small task. If there are production issues or something, I break my WIP limit by 1 and take on a third task (it has to be an emergency, like the site is down and we are losing money), and I make sure that whatever caused the WIP limit to break gets sufficient attention so that it doesn't happen again (usually in the form of a blameless postmortem). If someone asks me to work on something that will break the WIP limit by more than one, I lead them to negotiate with the person who asked me to break it in the first place, because there is no way one person can work on two emergencies at the same time.
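The WIP rule itself is simple enough to express as a tiny sketch (the task names and the emergency example are made up):

```python
# wip_limit.py - one large task + one small task, plus one emergency slot.
from dataclasses import dataclass, field

@dataclass
class Board:
    in_progress: list = field(default_factory=list)
    wip_limit: int = 2  # one large and one small task

    def start(self, task: str, emergency: bool = False) -> None:
        limit = self.wip_limit + (1 if emergency else 0)
        if len(self.in_progress) >= limit:
            # Over the limit: the requesters negotiate, not the engineer.
            raise RuntimeError(f"WIP limit reached, cannot start {task!r}")
        self.in_progress.append(task)

    def finish(self, task: str) -> None:
        self.in_progress.remove(task)

if __name__ == "__main__":
    board = Board()
    board.start("migrate billing service")          # large task
    board.start("fix typo in docs")                 # small task
    board.start("site is down!", emergency=True)    # breaks the limit by exactly 1
```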

Here are some books I've read that led me to work like this:

u/pertymoose · 1 point · r/PowerShell

>Read powershell books. Read C# books. Read .NET books. Read C++ books. Read infrastructure books. Read modeling books. Read architecture books. Read everything.

Powershell books

C# books

.NET books (and chromebooks)

Infrastructure book


Data modeling books

Enterprise/application architecture books

If it has a reasonable rating then read it, evaluate it, form your own opinion on it, and learn something.