While developers and IT operations professionals have been excited about the concept of DevOps, data center operators, the people who run the infrastructure for the teams upstream, haven’t generally been part of the conversation. Jack Story, distinguished technologist at Hewlett Packard Enterprise, thinks that is a mistake.
People make that mistake because there is a lot of confusion about what DevOps is and isn’t. In a session at this week’s Data Center World Global conference in Las Vegas, Story made the case that data center operators should be part of the DevOps process, and tried to explain what that process actually is.
A lot of confusion about DevOps comes from the misconception that it is about tools and automation. “It is not about automating the processes that you have today,” Story said. “It is not a tool. It is a cultural and organizational change.”
More than anything, a switch to DevOps is a cultural one, and that is the kind of change that is the most difficult for any organization. “We resist change in our profession; we resist change as human beings,” he said.
But change in IT today is a necessity. Even if you aren’t providing cloud services as an IT organization, your customers, be they marketing directors or developers, generally expect you to provide services the way a cloud provider would.
“It’s the perception that cloud can be cheaper; it’s the perception that cloud can be faster.”
What do they expect exactly? They expect all the attributes of cloud computing the National Institute of Standards and Technology identified five years ago in its definition of the model:
- On-demand self-service
- Broad network access
- Resource pooling
- Rapid elasticity
- Measured service
This pressure is forcing IT organizations to rethink the silos they know and love. The strict delineation between network, server, storage, and facilities teams just doesn’t work for DevOps.
It is about enabling developers to switch from the “waterfall” model, in which a new software deployment is given a long timeframe that allows plenty of room for preparation and planning before it goes live, to agile methodologies, whose core elements are cross-team collaboration, rapid deployment, and continuous improvement.
It is about tightly aligning software development with business needs, because the waterfall model doesn’t work in this day and age: the feature or product is simply outdated by the time it goes live.
Business requirements today don’t stay static, so the technology teams that serve the business can’t afford to stay static either.
Marrying operations with development to enable the DevOps model is basically about drawing all stakeholders, from development down to data center operations, into a single collaborative process.
HP’s internal IT organization made the transition and saw the benefits almost immediately, Story said. Those benefits were greater velocity, quality, and stability.
The recent split of the company in two went a lot more smoothly because the IT organization was functioning this way, he added. The split into HPE and HP Inc. went almost unnoticed by employees, as far as the tools they were using were concerned, but the work done in the background by the IT teams was “tremendous,” Story said.
The process is still ongoing at HPE, because there is never a point at which you can say, “OK, we are now DevOps.” It is a philosophy and a methodology, and not everybody in the organization will like it, he warned, so not everybody will stay on board.
“At a couple of environments we had to have people removed,” he said. “A lot of conversations are going to get emotional, because it is change.”
Original article appeared here: Why Data Center Managers Should Care about DevOps