edit: Fixed the link to the article 

There's an interesting article on the LogicWorks Blog titled "Can (or should) DevOps Do It All?". It's a good read; you should check it out. I have some thoughts on the topic, so I went to comment, but their comment system seems to be down, so I've taken the liberty of adding my comments here:

So, I think the DevOps movement isn't without tradeoffs, but the things you've mentioned here are obstacles, not roadblocks.

One thing I've noticed is that the layers of abstraction we use make a system easier to use but harder to administer: not only do you need to master the underlying layers, as you always have, but you've also taken on the added burden of the abstraction layer itself.

It's a well-known property of redundancy that increasing the number of redundant parts increases the number of individual part failures you'll see, even as it protects the system as a whole. In the same way, building a more complex infrastructure leads to more failures: with so many pieces inter-operating, the system grows so interdependent that over time the complexity isn't just hard to grasp, it's impossible to grasp.
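To make the redundancy point concrete, here's a minimal sketch. It assumes n independent parts that each fail with some probability p in a given period; the numbers are purely illustrative, not from any real system:

```python
# Redundancy tradeoff sketch: assumes n independent parts, each failing
# with probability p in a given period. Illustrative numbers only.

def expected_part_failures(n: int, p: float) -> float:
    """Expected count of individual part failures: grows linearly with n."""
    return n * p

def system_failure_probability(n: int, p: float) -> float:
    """Probability that ALL redundant parts fail at once: shrinks with n."""
    return p ** n

if __name__ == "__main__":
    p = 0.05  # hypothetical per-part failure probability
    for n in (1, 2, 4, 8):
        print(f"n={n}: expected part failures={expected_part_failures(n, p):.2f}, "
              f"whole-system failure probability={system_failure_probability(n, p):.2e}")
```

So the system gets more reliable while the ops team sees more things break, which is exactly the "more parts, more failures" experience described above.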

This is one of the reasons it's becoming difficult to hire new people into DevOps roles. What everyone is looking for is someone who's "full stack", and the stack keeps growing. There's a critical mass we're going to hit (and a lot of large organizations are already there) where you just can't have someone who's "full stack". The complexity of the system won't allow it within the confines of the human brain and lifespan.

I suspect that DevOps will continue (or maybe return, depending on who you ask) to being a relationship and a set of methodologies: agile, quick-turnaround techniques for making changes, absorbing failures, and building a system resilient to (and maybe even reliant upon) continual, unpredictable, low-grade errors rather than one that builds up to large-scale, cataclysmic failures.

What do you think? Log in to comment!