
Tuesday, April 07, 2015

ITIL-ITSM Tagteam Boosts Mexican ISP INFOTEC's Service Desk and Monitoring Performance

Transcript of a BriefingsDirect podcast on how an IT provider in Mexico uses ITSM tools to improve service to customers.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.

Our next innovation case study interview highlights how INFOTEC in Mexico City improves its service desk and monitoring operations and enjoys impressive results from those efforts.

To learn more, we're joined by Victor Hugo Piña García, the Service Desk and Monitoring Manager at INFOTEC. Welcome.
Victor Hugo Piña García: Hello. Thank you.

Gardner: Tell us about INFOTEC, what your organization is and does.

Piña: INFOTEC is a government research center. We have many activities. The principal ones are teaching, technology innovation, and IT consulting. The goal is to provide IT services. We offer many IT services, such as data centers, telecommunications, service desk, monitoring, and manpower.

Gardner: This is across Mexico, the entire country?

Piña: Yes, it covers the entire national territory. We have two locations. The principal one is in Mexico City, at San Fernando, and Aguascalientes is the other location from which we offer the services.

Gardner: Explain your role as the Service Desk and Monitoring Manager. What are you responsible for?

Three areas

Piña: My responsibility is in three areas. The first is monitoring, to review all of the services and IT components for the clients.

The second is the service desk, the management of incidents and problems. The third is the generation of the deliveries for all of INFOTEC's services. We make deliveries for IT service management and service delivery.

Gardner: So it's important for organizations to know their internal operations, all the devices, and all the assets and resources in order to create these libraries. One of the great paybacks is that you can reduce time to resolution and you can monitor and have much greater support.

Give us a sense of what was going on before you got involved with ITIL and IT service management (ITSM), so that we can then better understand what you got as a benefit from it. What was it like before you were able to improve on your systems and operations?

Piña: We support the services with HP tools and products. We have many types of assets for adaptation and for solutions. Then we created a better process and aligned it with the HP tools and products. Within two years we began to see benefits in how we serve customers.

We attained a better service level in two ways. First is the technical report, the failures. And second, the moment the failure is reported, we send specialists to attend to the failure. That reduces considerably the time to repair. As a consequence, users have a better level of service. Our values changed in the delivery of the service.

Gardner: I see that you have had cost reductions of up to one-third in some areas and a 40 percent reduction in time to compliance, with service desk requests going from seven or eight minutes previously down to five minutes. It's a big deal, along with an incident reduction of more than 20 percent. How is this possible? How were these benefits generated? Is it the technology, the people, the process -- all of the above?
Piña: Yes, we consider four things. The first is the people and their service. The second is the process, approached with an innovative mindset. The third is the technology, fully enabled to align with the previous two points. And the fourth is consistent, integrated work across the three points above.

Gardner: It sounds to me as if together these can add up to quite a bit of cost savings, a significant reduction in the total cost of operations.

Piña: Yes, that’s correct.

Gardner: Is there anything in particular that you're interested in and looking for next from HP? How could they help you do even more?

New concept and model

Piña: I've discovered many things. First, we need to understand them better and think about how we take them to generate a new concept, a new model, and a new process to operate and offer services.

There have been so many ideas. We need to process and understand them, and we need support from HP Mexico to know how to deal with these new things.

Gardner: Are there any particular products that you might be going to, now that you've been able to attain a level of success? What might come next, more ITIL, more configuration management, automation, business service management? Do you have any  thoughts about your next steps?

Piña: Yes. We use the ITIL methodology to make changes. When we present a new idea, we look at the impact -- economic, social, and political -- for when the committee meets to decide.

This is a good idea. This has a good impact. It's possible and proven, and then right there, we make it the new model of business for delivering our new service. We're thinking about the cloud, about big data, and about security. I don’t want to promise anything.

Gardner: Very good. I'm afraid we will have to leave it there. We've been learning how INFOTEC in Mexico City has been improving its service desk and monitoring operations, with some impressive reductions in costs and in time to compliance with service requests, and an overall incident reduction of more than 20 percent.

I'd like to thank our guest. We've been joined by Victor Hugo Piña García, the Service Desk and Monitoring Manager at INFOTEC. Thank you so much.
Piña: Thank you very much.

Gardner: And thank you to our audience for joining us for this special new style of IT discussion.

I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how an IT provider in Mexico uses HP ITSM tools to improve service to customers. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Wednesday, September 10, 2014

How Waste Management Builds a Powerful Services Continuum Across Operations, Infrastructure, Development, and IT Processes

Transcript of a BriefingsDirect podcast on how a large environmental services company uses HP BSM tools to provide always-on, always-available services to customers and internal users.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Our next innovation case study interview highlights how Waste Management in Houston, Texas is improving the quality of their services and operations in IT for a variety of their users, both internal and external.
To help us learn more about Waste Management’s experience, we're here with Gautam Roy, Vice President of Infrastructure, Operations and Technical Services at Waste Management. Welcome.

Gautam Roy: Hi. Thank you.

Gardner: You're a very large organization across North America with more than 20 million customers. This size and scale requires an awful lot of IT. Tell us about the scope and size of your operation.

Roy: Waste Management is an environmental services company. We have primarily three lines of business. First is waste service. This is our traditional waste pickup, transfer, and disposal. Our second line of business is renewable energy, or green energy, and our third is recycling.

What makes Waste Management different from others in the waste industry is that we also invest quite a lot of effort in next-generation waste technology. We invest in companies like Agilyx, which converts very hard-to-recycle waste, such as plastic, into crude oil. We convert organic food waste into natural gas. We pressurize, scrub, and dry municipal solid waste into solid fuel, which burns cleaner than coal.

And we're quite diverse, a global company. We have operations in the US and Canada, Asia, and Europe. We have our renewable energy plants. There is quite a large array of technology and IT to support these business processes to ensure consistent business-services availability.

Gardner: As with many organizations, gaining greater visibility into operations -- having earlier detection of problems, and therefore earlier remediation -- means better performance. What were some of the drivers for your organization specifically to mature your IT operations?

Business transformation

Roy: I'll give a few business reasons, and a couple of technology reasons. From the business side, we began business transformation a couple of years ago. We wanted to ensure that we unlocked the value for our customers and for us, and to institutionalize the benefits for Waste Management.

Customer care -- providing outstanding, world-class customer service -- is aligned completely with our business strategy. Business-service availability is crucial; it's in our DNA. Our IT business service availability scorecard a few years ago wasn't too good. So we had to put the focus on people, process, and technology to ensure that we provide a very consistent service set to our customers.

Gardner: Moving across the spectrum of development, test, and operations can be challenging for many organizations. You have put in place standardized processes to measure, organize, and perform better across the DevOps spectrum. Tell us how you accomplished that. How did you get there?

Roy: That's a very good question. For us, IT business-service availability is really not about having a great monitoring solution. It starts even before the services are in production. It starts with partnership with our business and business requirements. It starts with having a great development methodology and a robust testing program. It starts with architecture processes, standardization, and communication. All those things have to be in place. And you have to have security services and a monitoring solution to wrap it up.

What we are trying to do is to not fight the issue at the back-end. If a service is down, our monitoring software picks it up, our operational and engineering teams jump on it, and we're able to fix the problem ASAP before it impacts the customer. Great. But, boy, wouldn't it be nice if those services weren't going down in the first place? So we try to approach it from the front-end, instead of just chasing it from the back-end.

Gardner: So it’s Application Lifecycle Management (ALM) and Business Service Management (BSM), not one or the other, but really both -- and simultaneously?

Roy: Exactly, ALM, BSM, testing, and security products. We also want to make sure that the services are not down from intentional disruption. We want to make sure that we produce code with quality and velocity, and code that is consistent with the experience of our customer.

With our operational processes, ITIL and Lean IT, we want to make sure that the change management and incident management are followed to our prescription. We want to make sure that the disaster-recovery (DR) program, the high-availability (HA) program, the security operation center (SOC), the network operation center (NOC), and the command centers are all working together to ensure that the services are up 24/7, 365.

Gardner: And when you do this well, when you have put in place many of the capabilities that we have been describing, do you have any sense of payback? Do you keep score?

Availability scorecard

Roy: A few years ago, when we were not as good at it, we started rebuilding this all from the ground up, and our availability scorecard was pretty bad. Our services were down. At times, we didn’t know that our services were down. Our first indication of a problem was from customers calling us.

Now, fast-forward a few years, having made the appropriate choices and investments in technology -- as well as in people and processes -- and our scorecard is very good. We know of problems rapidly. We proactively detect problems and fix them before they impact our customers.

We have 4 9s availability for our critical applications. We're able to provide services to our customers via wm.com, our digital channel, and it has been quite a success story. We still have work to do, but it has been following the right trajectory.

Gardner: Here at HP Discover, are there any developments that you're monitoring closely? Are there some things that you're particularly interested in that might help you continue to close the gap on quality?

Roy: Sure. Things like understanding what's happening in the world of big data and HP's views and position on that. I want to understand and learn about testing -- software testing -- how to test faster and produce better code, and to ensure, on a continuous basis, that we're reducing the cost of running the business. We want to provide optimal solutions at the right price point for our customers and our business.

Gardner: On that topic of big data, are you referring to the data generated within IT, in your systems, to be able to better analyze and react to that? Or perhaps also the data from your marketplace, things that your customers might be saying in social media, for example? Or is it all of the above?

Roy: It’s all of the above. We have internal data that we're harvesting. We want to understand what it’s telling us. And we'd like to predict certain trends of our system, across the use of our applications.

Externally, we have 18 call centers. We get user calls. We also want to know our customer better and serve them the best. So we want to move into a situation where we can take their issues, frame them into solutions, and proactively service them the best in our industry.

Gardner: I'm afraid we will have to leave it there. We've been discussing how Waste Management improves their IT operations across the BSM spectrum, from development through operations, and then embarking on more use of big data to analyze their business requirements as well as their marketplace.

So a big thank you to Gautam Roy, Vice President of Infrastructure, Operations and Technical Services at Waste Management in Houston. Thanks so much.

Roy: Thank you, Dana.

Gardner: And thank you, too, to our audience for joining this special HP Discover new style of IT discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for joining, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how a large environmental services company uses HP BSM tools to provide always-on, always-available services to customers and internal users. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Friday, March 07, 2014

Fast-Changing Demands on Data Centers Drive Need for Automated Data Center Infrastructure Management

Transcript of a BriefingsDirect discussion on how organizations need to better manage the impact that IT and big data now have on data centers and how Data Center Infrastructure Management helps.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on improving the management and automation of data centers. As data centers have matured and advanced to support unpredictable workloads like hybrid cloud, big data, and mobile applications, the ability to manage and operate that infrastructure efficiently has grown increasingly difficult.

At the same time, as enterprises seek to rationalize their applications and data, centralization and consolidation of data centers has made their management even more critical -- at ever larger scale and density.

So how do enterprise IT operators and planners keep their data centers from spinning out of control despite these new requirements? How can they leverage the best of converged systems and gain increased automation, as well as rapid analysis for improving efficiency?

We’re here to pose such questions to two experts from HP Technology Services, and thereby explore how new integrated management capabilities are providing the means for better and automated data center infrastructure management (DCIM).

Here now to explain how disparate data center resources can be integrated into broader enterprise management capabilities and processes, we’re here with Aaron Carman, HP Worldwide Critical Facilities Strategy Leader. Welcome to BriefingsDirect, Aaron. [Learn more about DCIM.]

Aaron Carman: It's a pleasure to be here. Thank you.

Gardner: We’re also here with Steve Wibrew, HP Worldwide IT Management Consulting Strategy and Portfolio Lead. Welcome, Steve.

Steve Wibrew: Hello, and glad to be here. Thank you.

Gardner: Aaron, let me start with you. From a high level, what’s forcing these changes in data center management and planning and operations? What are these big new requirements? Why is it becoming so difficult?

Carman: It's a very interesting question that people are actually trying to deal with. What it comes down to is that in the past, folks were dealing with traditional types of services that were on a traditional type of IT infrastructure.

Standard, monolithic-type data centers were designed one-off. In the past few years, with the emergence of cloud and hybrid service delivery, as well as some of the different solutions around convergence like converged infrastructures, the environment has become much more dynamic and complex.

Hybrid services

So, many organizations are trying to grapple with, and deal with, not only the traditional silos that are in place between facilities, IT, and the business, but also deal with how they are going to host and manage hybrid service delivery and what impact that’s going to have on their environment.

It's not only about the impact of rolling out new infrastructure solutions like converged infrastructure from multiple vendors, but also about how to provide increasing flexibility and more services to end users as digital services.

It's become much more complex and a little bit harder to manage, because there are many separate types of tools used to manage these environments, and their number has continued to increase.

Gardner: Steve, do you have anything more to offer in terms of how the function of IT is changing? I suppose that with ITIL v3 and more focus on a service-delivery model, even the goal of IT has changed.

Wibrew: That's very true. We’re seeing a trend in the change and role of IT to the business. Previously IT was a cost center, an overhead to the business, to deliver the required services. Nowadays, IT is very much the business of an organization, and without IT, most organizations simply cease to function. So IT, its availability and performance, is a critical aspect of the success of the business.

Gardner: What about this additional factor of big data and analysis as applied to IT and IT infrastructure. We’re getting reams and reams of data that needs to be used and managed. Is that part of what you’re dealing with as well, the idea that you can be analyzing in real-time what all of your systems are doing and then leverage that?

Wibrew: That's certainly a very important part of the converged-management solution. There's been a tremendous explosion in the amount of data -- the amount of management information -- that's available. If you narrow that down to the management information associated with operating and supporting data centers, from the facility to the platforms and applications, right up to the services to the business, that's clearly a huge amount of information that's collected and maintained on a 24×7 basis.

Making good and intelligent decisions on that is quite a challenge for many organizations. Quite often, we see that people still remain in isolated, siloed teams without good interaction between the different teams. It's a challenge to draw that information together so businesses can make intelligent choices based on analytics of that end-to-end information.

Gardner: Aaron, I’ve heard that word "silo" now a few times, siloed teams, siloed infrastructure, and also siloed management of infrastructure. Are we now talking about perhaps a management of management capabilities? Is that part of your story here now?

Added burden

Carman: It is. For the most part, most organizations when faced with trying to manage these different areas, facilities IT and service delivery, have come up with their own set of run books, processes, tools, and methodologies for operating their data center.

When you put that onto an organization, it's just an added burden for them to try to get vendors to work with one another and integrate software tools and solutions. What the folks that provide these solutions have started to realize is that there needs to be interoperability between these tools. There has never really been a single tool that could do that, except for what has just emerged in the past few years, which is DCIM.

HP really believes that DCIM is a foundational, operational tool that will, when properly integrated into an environment, become the backbone for operational data to traverse from many of the different tools that are used to operate the data center, from IT service management (ITSM), to IT infrastructure management, and the critical facilities management tools.

Gardner: I suppose yet another trend that we’re all grappling with these days is the notion of things moving to as-a-service, on-demand, or even as a cloud technology. Is that the case, too, with DCIM, that people are looking to do this as a service? Are we starting to do this across the hybrid model as well?

Carman: Yes. These solution providers are looking toward how they can penetrate the market and provide services to all different sizes of organizations. Many of them are looking to a software-as-a-service (SaaS) model to provide DCIM. There has to be a very careful analysis of what type of a licensing model you're going to actually use within your environment to ensure that the type of functionality you're trying to achieve is interoperable with existing management tools.

Gardner: Steve, do you have anything more to offer in terms of where this is going, perhaps over time on that services delivery question? [Learn more about DCIM.]

Wibrew: Today, clients have a huge amount of choice in terms of how they provision and obtain their IT. Obviously, there are the traditional legacy environments and the converged systems and clients operate in their own cloud solutions.

Or maybe they're even going out to external cloud providers, which creates some interesting dynamics that really do increase the complexity of where they get services from. This needs to be baked into the converged solution, around the interoperability and interfacing between multiple systems. So IT is truly a business function supporting the organization and providing end-to-end services.

Gardner: Well, I can certainly see why IDC recently named 2014 the year of DCIM. It seems that the timing now is critical. If you let your systems languish in legacy status for too long, you won't be able to keep up with new demand. If you don't create management-of-management capabilities, you won't be able to cross these boundaries of service delivery and hybrid models, and you certainly won't be able to exploit the analysis of all that data.

So it seems to me that this is really the time to get on this, before you lose ground and/or can't keep up with the modern requirements. What's happening right now in terms of HP and how it's trying to help organizations get there sooner rather than later? Let me start with you, Aaron.

Organizations struggling

Carman: Most organizations are really struggling to introduce DCIM into their environment, since at this point it's viewed more as a facilities-type tool. The approach from different DCIM providers varies greatly in the functions and features they provide in their tools. Many organizations are struggling just to understand which DCIM product is best for them and how to incorporate it into a long-term strategy for operations management.

So the services that we brought to market address that specifically, not only from which DCIM tool will be best for their environment, but how it fits strategically into the direction they want to take from hosting their digital services in the future.

Gardner: Steve, I think we should also be careful not to limit the purview of DCIM. This is not just IT. It also includes facilities, hybrid service-delivery models, and management capabilities. Maybe you could help us put the proper box around DCIM. How far does it go, and why, or should we narrow it so that it doesn't become diluted or confused?

Wibrew: Yeah, that’s a very good question, an important one to address. What we’ve seen is what the analysts have predicted. Now is the time, and we’re going to see huge growth in DCIM solutions over the next few years.

DCIM has really been the domain of the facilities team, and there’s traditionally been quite a lack of understanding of what DCIM is all about within the IT infrastructure management team. If you talk to lot of IT specialists, the awareness of DCIM is still quite limited at the moment. So they certainly need to find out more about it and understand the value that DCIM can bring to IT infrastructure management.

I understand that features and functions do vary, and the extent of what DCIM delivers will vary from one product to another. It’s very good certainly around the facilities space in terms of power, cooling, and knowing what’s out on the data center floor. It’s very good at knowing what’s in the rack and how much power and space has been used within the rack.

It’s very good at cable management, the networks, and for storage and the power cabling. The trend is that DCIM will evolve and grow more into the IT management space as well. So it’s becoming very aware of things like server infrastructure and even down to the virtual infrastructure, as well, getting into those domains.

DCIM will typically have workflow capabilities for change and activity management. But DCIM alone is not the end-to-end solution, and we realized the importance of integrating it with the full ITSM and platform-management solutions. A major focus over the past few months has been to make sure that DCIM solutions integrate very well with the wider IT service-management solutions to provide an integrated, end-to-end, holistic management solution across the entire data-center ecosystem.

Gardner: Aaron, when I hear Steve talking about this more general inclusion description of DCIM, it occurs to me that this isn’t something you buy in a box. This is not just a technology or a product that we’re talking about. We’re talking about methodology. We’re talking about consulting, expertise, and tribal knowledge that’s shared. Maybe you could help us better understand not only HP’s approach to this, but how one attains DCIM. What is the process by which one becomes an expert in this? [Learn more about DCIM.]

Great variation

Carman: With DCIM being a newer solution within the industry, I want to be very careful about calling folks DCIM specialists. We feel that we have a very great knowledge of the solutions out there. They vary so greatly.

It takes a collaborative team of folks within HP, as well as with the client, to truly understand what they’re trying to achieve. You could even pull it down to what types of use cases they’re trying to achieve for the organization, which tool works best and in interoperability and coordination with the other tools and processes they have.

We have a methodology framework called the Converged Management Framework that focuses on four distinct areas for an optimized solution and strategy. It starts with business goals and understanding what the true key performance indicators are and what dashboards are required.

It looks at what the metrics are going to be for measuring success and couples that with understanding, organizationally, who is responsible for which services we ultimately provide to the end user. Most of the time, we're focusing on the facilities and IT organizations.

Also, those need to be aligned to the processes and workflows for provisioning services to the end users, supported directly by a systems reference architecture, which is primarily made up of operational management tools and software. All of those need to support one another and be purposefully designed, so that you can meet the goals of the business.

When you don't do that, the time it takes for you to deliver services to your end user lengthens and costs money. When you have separate tools that are not referencing single points of data, you're spending a lot of time rationalizing and understanding whether you have accurate data in front of you. All this boils down to not only cost but also resilient operations -- knowing that when you're looking at a particular device or set of devices, you truly understand what it's providing end to end to your users.

Gardner: Steve, it seems to me that this is a little bit of a chameleon. People who have a certain type of requirement can look at DCIM, some of the methodologies and framework, and get something unique or tailored.

If someone has real serious energy issues, they’re worried about not being able to supply sufficient energy. So they could approach DCIM from that energy vantage point. If someone is building a new data center, they could bring facilities planning together with other requirements and have that larger holistic view.

Am I reading this right? Is this sort of a chameleon or an adaptive type of affair, and how does that sort of manifest itself in terms of how you deliver the service?

Wibrew: If you think about the possibilities, the management of facilities and IT infrastructure, right up to the services of a business, end to end, is very large and very, very complex. We have to break it down into smaller, more manageable chunks and focus on the key priorities.

Most-important priorities

So we look across the organization and work with them to identify what their most important priorities are in terms of their converged-management solution and their journey.

It's heavily structured around ITSM and ITIL processes, and we've identified some great candidates within ITIL for integration between facilities and IT. It's really a case of working out the prioritized journey for that particular client. Probably one of the most important integrations would be to have a single view of the truth for operational data -- so, unified asset information.

CMDBs within a configuration management system might be the very first and most important integration between the two, because that's the foundation for other follow-on services. Until you know what you've got, it's very difficult to plan what you need in the future in terms of infrastructure.

Another important integration that is now possible with these converged solutions is the integration of power management in terms of energy consumption between the facilities and the IT infrastructure.

If you think about managing power consumption and the efficiency of the data center with PUE (power usage effectiveness), generally speaking, in the past that would be the domain of the facilities team. The IT infrastructure would simply be hosted in the facility.

The IT teams didn't really care about how much power was used. But these integrated solutions can be more granular and far more dynamic around energy consumption, with much more information being collected -- not just at a facility level, but within the racks, in the power-distribution units (PDUs), and in the blade chassis, right down to individual servers.

We can now know what the energy consumption is. We can now incentivize the IT teams to take responsibility for energy management and energy consumption. This is a great way of actually reducing a client's carbon footprint and energy consumption within the data center through these integrated solutions.

Gardner: Aaron, I suppose another important point to be clear on is that, like many services within HP Technology Services, this is not just designed for HP products. This is an ecumenical approach to whatever is installed in terms of product facility management capability. I wonder if you could explain a bit more HP’s philosophy when it comes to supporting the entire portfolio. [Learn more about DCIM.]

Carman: The professional services HP is offering in this space are really agnostic to the final solution. We understand that a customer has been running their environment for years and has made investments in a lot of different operational tools over that time.

That's part of our analysis and methodology: to come in and understand the environment and what the client is trying to achieve. Then we put together a strategy and a roadmap of different, interoperable products that will help them achieve their goals.

Next level

We continue to transform them to the next level of capabilities they're looking to achieve, especially around how they provision services, and help them become, in the end, a cloud-service provider to their end users, with heavy levels of automation built in, so that they can deliver digital services in a much shorter period of time.

Gardner: One of the things I really like in talking about technology is to focus on the way it’s being used, to show rather than just tell. I’m hoping that either of you, Aaron or Steve, have some use cases or examples where this has been put to good use -- DCIM processes, methodologies, the über-holistic approach, and planning right down to the chassis included.

I hope you can not only discuss a little bit about who is doing this and how, but also what they get for it. Are there any data points we can look to that tell us, when people do this right -- and here are some folks that have done it right -- what they got back for their efforts? Why don't we start with you, Aaron?

Carman: HP has been offering operational services for years. So this is nothing new to HP, but traditionally, we've been providing these services in silos. When we reorganized ourselves just recently and really started to put together the IT-plus-facilities story, it quickly became very apparent that, from an operations-management perspective, a lot of the services we provide really needed to have a lifecycle approach and be brought together.

So we have a lot of different examples. We’ve rolled out different forms of converged-management consulting to other clients, and there are a lot of different benefits you get from the different tools that are a part of the overall solution.

You can point to DCIM and a lot of the benefits you get from understanding your assets and being able to decommission those more quickly, understanding the power relationship, and then understanding many different elements of tying the IT infrastructure chain to the facilities chain.

In the end, when you look at all these together, it’s going to be different for every client. You have to come in and understand the different components that are going to make up a return on investment (ROI) for the client based upon what they’re willing to do and what they’re trying to achieve.

In the end, we’re providing folks with a means of optimizing how they provision services, which is going to lower their cost structures. Everyone is looking to lower cost, but also increase resiliency, as well as then possibly defer large capital expenditures like expanding data centers. So many of these different outcomes could apply to a customer that engages with converged management.

Gardner: I realize this is fairly new. It was just on Jan. 23 that HP announced some new services for converged-management consulting, and the management framework was updated with new technical requirements. You have four new services organized around a management workshop, roadmap, design, implementation, and so forth. [Learn more about DCIM.]

So this is fairly new, but Steve Wibrew, is there any instance where you've worked with an organization and some of the really powerful benefits of doing this properly have shown through? Do you have any anecdotes you can recall of an organization that's done this, and maybe some interesting ways that it's benefited them -- maybe unintended consequences?

Data-center transformation

Wibrew: I certainly can give some real examples. Where I've worked in the past with some major projects for transformation within the data center, we would be deploying large amounts of new infrastructure within the data center.

The starting point is to understand what’s there in the first place. I’ve been engaged with many clients where if you ask them about inventory, what’s in the data center, you get totally different answers from different groups of people within the organization. The IT team wants to put more stuff into the data center. The facilities team says, “No more space. We’re full. We can’t do that.”

I found that when you pull this data together from multiple sources and get a consistent feel of the truth, you can start to plan far more accurately and efficiently. Perhaps the lack of space in the data center is because there may be infrastructure that’s sitting there, powered on, and not being utilized by anybody.

That equipment is, in fact, redundant. I've had many situations where, in pulling together a consistent inventory, we could get rid of a lot of redundant equipment, freeing up space for major initiatives and expansion projects. So those are some examples of the benefits of consolidated inventory and information.

Gardner: We’re almost out of time, but I just wanted to look towards the future about the requirements and the dynamic nature of workloads and the scale and density of consolidated data centers. I have to imagine that these are only going to become more urgent and more pressing.

So what about that, Aaron, as we look a few years out at big-data requirements, hybrid cloud requirements, infrastructure KPIs for service delivery, energy, and carbon pressures? What’s the outlook in terms of doing this, and should we expect that there will be an ongoing demand, but also ongoing and improving return on investments you make, vis-à-vis these consulting services and DCIM?

Carman: Based upon a lot of the challenges that we outlined earlier in the program, we feel that in order to operate efficiently, this type of a future state operational-tools architecture is going to have to be in place, and DCIM is the only tool poised to become that backbone between the facilities and IT infrastructures.

So more and more, with challenges like my compute footprint shrinking and having different requirements than I had in the past, we're now dealing with a storage or data explosion, where my data center is filled up with storage.

As these new demands from the business come down and force organizations onto new types of technology infrastructure platforms they haven't dealt with in the past, it requires them to be much more flexible when they have, in most cases, very inflexible facilities. That's the strength of DCIM and what it can provide just in that one instance.

But more and more, the business is expecting digital services to be almost instant. They want to capitalize on the market at that time. They don't want to wait weeks or months for enterprise IT to provide them with a service to take advantage of a new service offering. So it's forcing folks into operating differently, and that's where converged management is poised to help these customers.

Looking to the future

Gardner: Last word to you, Steve. When you look into your crystal ball and think about how things will be in three to five years, what is it about DCIM and some of these services that you think will have the most impact?

Wibrew: I think the trend we're going to see is a far greater adoption of DCIM. It's only deployed in a small number of data centers at the moment. That's going to increase quite dramatically, and this could be a much tighter alignment between how the facilities are run and how the IT infrastructure is operated and supported. It could be far more integrated than it is today.

The roles of IT are going to change, and a lot of the work now is still around design, planning, scripting, and orchestrating. In the future, we're going to see people, almost like a conductor in an orchestra, overseeing the operations within the data center through leading highly automated and optimized processes, which are actually delivered by automated solutions.

Gardner: Very good. I should also point out that I benefited greatly in learning more about DCIM on the HP website. There were videos, white papers, and blog posts. So there's quite a bit of information for those interested in learning more about DCIM. The HP Technology Services website was a great resource for me. [Learn more about DCIM.]

We'll have to leave it there, gentlemen. You’ve been listening to a sponsored BriefingsDirect discussion on improving the management and automation of data centers and facilities. We’ve seen how IT operators and planners can keep their data centers from spinning out of control via exploiting new data-center infrastructure management capabilities.

I want to thank our guests, Aaron Carman, the HP Worldwide Critical Facilities Strategy Leader. Thanks so much, Aaron.

Carman: It's my pleasure. Thank you.

Gardner: And also Steve Wibrew, HP Worldwide IT Management Consulting Strategy and Portfolio Lead. Thanks so much, Steve.

Wibrew: Thank you for listening.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience and come back next time for the next BriefingsDirect podcast discussion.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect discussion on how organizations need to better manage the impact that IT and big data now have on data centers and how Data Center Infrastructure Management helps. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Wednesday, August 03, 2011

Case Study: MSP InTechnology Improves Network Services Via Automation and Consolidation of Management Systems

Transcript of a BriefingsDirect podcast discussion on how InTechnology uses network management automation to improve delivery and service performance for network and communications services.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on a UK-based managed service provider’s journey to provide better information and services for its network, voice, VoIP, data, and storage customers. Their benefits have come from an alignment of many service management products into an automated lifecycle approach to overall network operations.

We'll hear how InTechnology has implemented a coordinated, end-to-end solution using HP solutions that actually determine the health of its networks by aligning their tools to ITIL methods. And, by using their system-of-record approach with a configuration management database, InTechnology is better serving its customers with lean resources by leveraging systems over manual processes. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We're here with an operations manager from InTechnology to learn about their choices and outcomes when it comes to better operations and better service for their hundreds of enterprise customers.

Please join me now in welcoming Ed Jackson, Operational System Support Manager at InTechnology. Welcome, Ed.

Ed Jackson: Thanks. Hi.

Gardner: Your organization is a managed service provider (MSP) for both large enterprises and small to medium-sized companies, and you've been facing an awful lot of growth over the past several years. But you have also been dealing with heterogeneity in terms of many different products in place for network operations. It sounds like you've tried to tackle two major things at once: growth and complexity. How has that worked out?

Jackson: In terms of our network growth, we've basically been growing exponentially year over year. In the past four years, we've grown our network about 75 percent. In terms of our product set, we've basically tripled that in size, which obviously leads to major complexity on both our network and how we manage the product lifecycle.

Previously, we didn’t have anything that could scale as well as the systems that we have in place now. We couldn’t hope to manage 8,000 or 9,000 network devices, plus being able to deliver a product lifecycle, from provisioning to decommission, which is what we have now.

Gardner: So our audience better understands the hurdles and challenges you've faced, you're providing voice, both VoIP and traditional telephone, and telephony services. You have data, managed Microsoft Exchange, managed servers, and virtual hosting. You're providing storage, backup and restore, and of course a variety of network services. So this is a really full set of different services and a whole lot of infrastructure to support that.

Jackson: Yeah. It's pretty massive in terms of the technologies involved. A lot of them are cutting-edge. We have many partners. And you are right, our suite of cloud services is very diverse and comprises what we believe is the UK’s most complete and "joined-up" set of pay-monthly voice and data services.

Their own pace

In practice what we aim to do is help our customers engage with the cloud at a pace that works for them. First, we provide connectivity to our nationwide network ring – our cloud. Once their estate is connected they can then cherry pick services from our broad pay-as-you-go (PAYG) menu.

For example, they might be considering replacing their traditional "tin" PBXs with hosted IP telephony. We can do that and demonstrate massive savings. Next we might overlay our hosted unified communications (UC) suite providing benefits such as "screen sharing," "video calling," and "click-to-dial." Again, we can demonstrate huge savings on planes, trains and automobiles.

Next we might overlay our exciting new hosted call recording package -- Unity Call Recording (UC) -- which is perfect if they are in a regulated industry and have a legal requirement to record calls. It’s got some really neat features including the ability to tag and bookmark calls to help easy searching and playback.

While we're doing this, we might also explore the data path. For example our new FlexiStor service provides what we think is the UK’s most straightforward PAYG service designed to manage data by its business "value" and not just as one big homogenous lump of data.

It treats data as critical, important or legacy and applies an appropriate storage process to each ... saving up to 40 percent against traditional data management methods. There’s much more of course, but that gives you a flavor, I hope.

Imagine trying to manage this disparate set of systems. It would be pretty impossible. But due to the HP product set that we have, we've been able to utilize all the integrations and have a fully managed, end-to-end lifecycle of the service, the devices, and the product sets that we have as a company.

Gardner: I have to imagine too that customer service and support is a huge part of what you do, day in and day out. You also have had to manage the help desk and provide automated alerts, fixes, and notifications, so that the manual help desk, which is of course quite costly, doesn’t overwhelm you. Can you address what you've attempted to do and what you have managed to do when it comes to automated support?

Jackson: In terms of our service and support, we've basically grown the network massively, but we haven’t increased any headcount for managing the network. Our 24/7 guys are the same as they were four or five years ago in terms of headcount.

We get on average around 5,000 incidents a month automatically generated from our systems and network devices. Of these incidents, only about 560 are linked to customer-facing Interactions using our Service Desk module in the Service Manager application.

Approximately 80 percent of our total incidents are generated automatically. They are either proactively raised, based on things like the CPU and memory of network devices, virtual devices, or even physical servers in our data centers, or reactively raised based on, for example, device or interface downs.

Massive burden

When you've got like 80 percent of all incidents raised automatically, it takes a massive burden off the 24/7 teams and the customer support guys, who are not spending the majority of their time creating incidents but actually working to resolve them.

Gardner: Let's back it up. Five years ago, when you didn't have any integrated systems and you were dealing with lots of data, perhaps spurious data, what did you think? I know that you're an ITIL shop and so you had to bring in that service management mindset, but what did you do in order to bring these products together or even add more products, but without them being also unwieldy in terms of management?

Jackson: It was spurred by really bad data that we had in the systems. We couldn't effectively go forward. We couldn't scale anymore. So, we got the guys at HP to come in and design us a solution based on products that we already had, but with full integration, and add in additional products such as HP Asset Manager and device Discovery and Dependency Mapping Inventory (DDMI).

With the systems that we already had in place, we utilized mainly HP Service Desk. So we decided to take the bold leap to go to Service Manager, which then gave us the ability to integrate it fully into the Operations Manager product and our Network Node Manager product.

Since we had the initial integrations, we've added extra integrations like Universal Configuration Management Database (UCMDB), which gives us a massive overview on how the network is progressing and how it's developing. Coupled with this, we've got Release Control, and we've just upgraded to the latest version of Service Manager 9.2.


So it has given us a huge benefit in terms of process control and how it relates to ITIL. More importantly, one of the main things we're going for at the moment is payment card industry (PCI) and ISO 27001 compliance.

For any auditor that comes in, we have a documented set of reports that we can give them. That will hopefully help us get this compliance and maintain it. One of the things as an MSP is that we can be compliant for the customer. The customer can have the infrastructure outsourced to us with the compliance policy in that. We can take the headache of compliance away from our customers.

Gardner: Having that full view and the ability to manage also discreetly is not only good business, but it sounds like it's an essential ingredient for the way in which you go to market?

Jackson: More and more these days, we have a lot of solicitors and law firms on our books, and we're getting "are you compliant" as a request before they place business with us. We're finding all across the industry that compliance is a must before any contract is won. So to keep one step ahead of the game, this is something that we're going to have to achieve and maintain, and the HP product set that we have is key in that.

Gardner: I suppose too that a data flow application like Connect-It 4.1 provides an opportunity to not only pull together disparate products and give that holistic view, but also provides that validation for any audits or compliance issues?

Recently upgraded

Jackson: We recently upgraded Connect-It from 4.1 to 9.3, and with that, we upgraded our Asset Manager system to 9.3. Connect-It is the glue that holds everything together. It's a fantastic application that you can throw pretty much any data at -- from a CSV file, to another database, to web services, to emails -- and it will format it for you. You can do some complex integrations with it. It will give you the data that you want on the other side, and it cleanses and parses it, so that you can pass it on to other systems.

From our DDMI system, right through to our Service Manager, then into our Network Node Manager, we now have a full set of solutions that are held together by Connect-It.

We can discover the device on the network. We can then propagate it into Service Manager. We can add lots of financial details to it from other financial systems outside of the HP product set, which are easy to integrate. We can therefore provision the circuit, provision the device, and add it to monitoring automatically, without any human intervention, simply because the device gets shipped to the site.

It gets loaded up with the configuration, and then it's good to go. It's automatically managed right through to the decommissioning stage, or the upgrade stage, where it's replaced by another device. HP systems give us that capability.

Gardner: So these capabilities really do allow you to take on a whole new level of business and service. It sounds like the maintenance of the network, the integrity, and then the automation really helps you go to market in a whole new way than you could have just several years ago.


Jackson: Definitely. One of the key benefits is it gives us a unique calling card for our potential customers. I don’t know of many other MSPs that have such an automated set of technology tools to help them manage the service that they provide to their customers.

Five years ago, this wasn't possible. We had disparate systems and duplicate data held in multiple areas. So it wasn't possible to have the integration and the level of support that we give our customers now for the new systems and services that we provide.

Gardner: Of course, HP has been engineering more integration into its product and you have been aggressive in adopting some of the newer versions, which is an important element of that, but I have to imagine that there is also a systems integrations function here or professional services. Have you employed any professional services or relied on HP for that?

Jackson: When we originally decided to take the step to upgrade from Service Desk to Service Manager and to get the network discovery product set in, we used HP’s Professional Services to effectively design the solution and help us implement it.

Within six months, we had Service Desk upgraded to Service Manager. We had an asset manager system that was fully integrated with our financials, our stock control. And we also had a Network Discovery toolset that was inventorying our estate. So we had a fully end-to-end solution.

Automatic incidents

Into that, we have helped to develop the Network Operations Management solution so that it can generate automatic incidents. HP Professional Services played a pivotal role in providing us with the kind of solutions that we have now.

Since then, we took that further, because we have very good in-house knowledgeable guys that really understand the HP systems and services. So we've taken it a step further, and most of the work that we do now in terms of upgrades is done in-house.

Gardner: It's a very compelling story. I wonder if we have more than just the show-and-tell here. Do we have any metrics of success? Have you been able to point to faster time to resolution, maintaining service-level agreements (SLAs), or something along those lines, that we could help people appreciate what this does, not only functionally in terms of bringing new services to your customers, but also in terms of how you operate and some important metrics that affect your bottom line?

Jackson: Mean time to restore has come down significantly, by well over 15 percent. As I said, there has been zero increase in headcount across our systems and services. We started off with a few thousand network devices and only three or four different products, in data, storage, networks, and voice. Now we've got 16 different kinds of product sets, with about 8,000 to 9,000 network devices.

In terms of cost saving and increased productivity, this has been huge. Our 24/7 teams and customer support teams are more proactive in using knowledge bases and Level 1 triage. Resolution of incidents by customer support teams and Level 1 engineers has gone up by 25 percent; this enables the Level 3 engineers to concentrate on more complex issues.


If you take a Priority 3 or Priority 4 incident, 70 percent of those are now fixed by Level 1 engineers, which was unheard of five or six years ago. Also, we now have a very good knowledge base in the Service Manager tool that we can use for our Level 1 engineers.

In terms of SLAs, we manage the availability of network devices. It gives us a lot more flexibility in how we give these availability metrics to the customers. Because our business depends on other third-party suppliers, we can hold them to their commitments and get service credits from them. We've also got a fully documented incident lifecycle. We can tell when the downtime has been on these services and give our suppliers a bit of an ear bashing about it, because we have this information to hand. We didn't have that five or six years ago.

Gardner: So, by having event correlation and data to back up your assertions, there's much less finger pointing. You know exactly who dropped the ball.

Jackson: Exactly. With event correlation, we've reduced our operations browsers down to just meaningful incidents. We filtered our events from over 100,000 a month to fewer than 20,000; many of these are duplicates that are correlated together. Most events are associated with knowledge base articles in Service Manager and contain instructions on how to escalate or resolve the event, increasingly by a Level 1 engineer.

We can also run automatic actions from these events, and we can send the information to the relevant parties, and also raise an incident and send it directly to the correct assignment groups or teams that are involved in looking after that.
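As an illustrative sketch of the correlation idea, not the actual HP event-handling engine, the deduplicate-then-route logic could be expressed roughly like this; the node names, conditions, and assignment groups are invented for the example:

```python
# Minimal sketch: collapse repeated events from the same node and condition
# into one candidate incident, then route it to an assignment group.
# All names here are illustrative assumptions, not product behavior.
from collections import defaultdict

def correlate(events):
    """Group raw events by (node, condition) so duplicates collapse."""
    grouped = defaultdict(list)
    for e in events:
        grouped[(e["node"], e["condition"])].append(e)
    return grouped

def raise_incidents(grouped, routing):
    """Turn each correlated group into a single incident with an owner."""
    incidents = []
    for (node, condition), hits in grouped.items():
        incidents.append({
            "node": node,
            "condition": condition,
            "event_count": len(hits),
            "assignment_group": routing.get(condition, "network-ops"),
        })
    return incidents

raw = [
    {"node": "sw-01", "condition": "link-down"},
    {"node": "sw-01", "condition": "link-down"},   # duplicate, correlated away
    {"node": "fw-02", "condition": "cpu-high"},
]
routing = {"link-down": "network-ops", "cpu-high": "security-ops"}
print(raise_incidents(correlate(raw), routing))
```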

Internal SLA

For Priority 1 incidents, for which an internal SLA gives us 15 minutes to communicate with the customer, we can now do that within two minutes, because the group that's been assigned the incident is on the ball straight away and can contact the customer and let them know of the potential or actual problem.

Contacting customers within agreed SLAs and being able to drive our suppliers to provide better service is fantastic, because of the information that is now available in the systems. It gives us a lot more of a heads-up on what's happening around the network.

Gardner: And now that you have had this in place, this integrated lifecycle, end-to-end approach, you've got your UCMDB, is there now, in hindsight, an opportunity to do some analytics, perhaps even refine what your requirements are, and therefore cut your total cost at some level?

Jackson: We're building a lot of information, taken from our financial systems and placing it into our UCMDB and CMDB databases to give us the breakdown of cost per device, cost per month, because now this information is available.

We have a couple of data centers. One of our biggest costs is power usage. Now, by collecting power information using NNMi, we can break down how much our power is costing per rack, in terms of how many amps have been used over a set period of time, say a week or a month. Previously, we had no way of determining where our power usage was going or how much it was actually costing us per rack or per unit.
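As a back-of-the-envelope illustration of that per-rack costing, here is a small calculation; the supply voltage and tariff figures are assumptions, not values from the discussion:

```python
# Rough per-rack power costing from polled current readings.
# Voltage and tariff are assumed figures for illustration only.
def rack_power_cost(avg_amps, hours, volts=230, price_per_kwh=0.12):
    """Estimate energy cost for one rack over a period.

    avg_amps       average current drawn by the rack (from polled readings)
    hours          length of the period in hours (e.g. 7 * 24 for a week)
    volts          supply voltage (assumed 230 V here)
    price_per_kwh  electricity tariff (assumed)
    """
    kwh = (avg_amps * volts / 1000.0) * hours
    return kwh * price_per_kwh

# Example: a rack averaging 10 A over one week
print(round(rack_power_cost(10, 7 * 24), 2))
```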

It's given us a massive information boost, and we can really utilize the information, especially in UCMDB. Because it's so flexible, we can tailor it to do pretty much whatever we want. From this performance information, we can also give our customers extra-value reports and statistics that we can charge for as a value-added managed solution.

Gardner: For the benefit of our listeners, now that you've gone through this process, are there any lessons learned, anything you could relay in terms of, "If I had to do this again, I might do blank?" What would you offer to those who would now be testing the waters and embarking on such a journey?

Jackson: One of the main things is to have a clear goal in mind before you start. Plan everything, get it all written down, and have the processes looked at before you start implementing, because it's fairly hard to re-engineer if you decide that one of the solutions or processes you have implemented isn't going to work. Because of the integration of all the systems, you may find that reverse engineering them is a difficult task.

As a company, we decided to go for a clean start and basically said we'd filter all the data, take only the data that we really required, and start from scratch. We found that by doing it that way, we didn't get any bad data in there. All the data that we have now has pretty much been cleansed and enriched by the information that we can get from our automated systems, and also by utilizing the extra data that people have put in.

Gardner: Thanks so much. You've been listening to a sponsored podcast discussion on a UK-based managed service provider, InTechnology, and their journey to provide better information and services for their voice, data, and storage customers. They've employed an automated lifecycle approach, and it has benefited them on a number of levels.

Thanks to Ed Jackson, the Operational System Support Manager at InTechnology. Ed, we really appreciated your input.

Jackson: Okay. No problem.

Gardner: And this is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast discussion on how InTechnology uses network management automation to improve delivery and service performance for network and communications services. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in: