Q&A: Imad Mouline, CTO of Compuware Division Gomez


August 13, 2010 — The service level agreement, and consequently performance, is an obvious and significant part of the hosting offering. Many providers operating in the more commoditized areas within hosting consider the SLA central to the product they are selling.

Performance monitoring, as a practice and as a technology, plays a fundamental role in evaluating and delivering on those SLAs, and is therefore a subject of ongoing investigation for hosting providers.

However, hosting providers do not as often regard performance monitoring and performance monitoring tools as an opportunity – a service feature or even a product to be sold. Imad Mouline, CTO of web performance monitoring technology provider Gomez (www.gomez.com), which was acquired last year by Compuware, says there are opportunities to provide hosting customers with performance monitoring that deserve consideration.

In an email Q&A with the WHIR, Mouline discusses the theory behind monitoring application performance from the end-user perspective; how the evolving influence of cloud computing technology is affecting application performance and the way it is measured; and how his company creates opportunities for web hosting providers to partner with it and distribute its products to their customers.

WHIR: From a hosting perspective, website or web application performance tends to come up in the context of an e-commerce operation and sales lost to downtime. Do you consider that the main selling point for performance monitoring? Are there other key functions that a tool like yours serves?

Imad Mouline: Although performance monitoring is important for customer-facing, revenue-generating e-commerce sites, every website has different objectives. Whether a site’s objective is conversions or page views, all sites are affected by performance. If end-users find a site slow or difficult to use, no matter what type of website it is, they’ll either find ways to work around it or leave for a competing site, which hurts the bottom line.

Consider this: a one-second delay in website response time can reduce online conversions by seven percent, and the average online shopper waits a scant four seconds before going to a competitor’s site. Today you have to consider everything that your end-user sees when visiting your website. Many websites are really “composites” that incorporate numerous third-party services, including external ad servers, news tickers and Web analytics engines. If any one of these components slows down, the entire end-user experience can suffer.

WHIR: You have a few hosting providers listed on your site as partners, and a couple of them are big colo operations, like Equinix and Savvis. You also have a few big CDN players. Can you explain in a bit more detail what characterizes a “partner” relationship? Are there different kinds of partnerships you pursue with hosts?

IM: Hosting service providers use Gomez services to maintain application delivery from their data centers all the way out to their end-users at the end of a long and complex Web application delivery chain (including third party services as well as network elements like ISPs, carriers, devices, browsers, etc., which can also impact the end-user experience). The key is regular monitoring of application performance from their end-users’ perspectives. Specifically, our hosting service provider customers can see which end-users may be experiencing a performance degradation, and then isolate and resolve the performance-impacting variable wherever it may be along the Web application delivery chain. And, in most cases, they’re able to do this before end-users are even aware an issue exists.
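To make that end-user-perspective measurement concrete, the sketch below times an HTTP fetch of a composite page and a few of its third-party components from wherever the script runs. It illustrates the concept only, not Gomez’s tooling; the URLs and the four-second threshold are placeholder assumptions.

```python
# Minimal sketch of end-user-perspective monitoring (illustrative only,
# not Gomez's implementation). Times an HTTP GET for a page and a few
# third-party components it depends on, and flags anything slow.
import time
import urllib.request

# Hypothetical composite page and third-party components (assumptions).
TARGETS = {
    "main page":        "https://www.example.com/",
    "ad server":        "https://ads.example.net/tag.js",
    "analytics engine": "https://analytics.example.org/collect.js",
}
THRESHOLD_SECONDS = 4.0  # illustrative "shopper patience" budget

def fetch_time(url: str, timeout: float = 10.0) -> float:
    """Return the seconds taken to fetch url (raises on network errors)."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read()  # include transfer time, not just time to first byte
    return time.monotonic() - start

if __name__ == "__main__":
    for name, url in TARGETS.items():
        try:
            elapsed = fetch_time(url)
            status = "SLOW" if elapsed > THRESHOLD_SECONDS else "ok"
            print(f"{name:18s} {elapsed:6.2f}s  {status}")
        except OSError as exc:
            print(f"{name:18s} FAILED ({exc})")
```

Running a script like this from several geographies approximates, at toy scale, what a distributed monitoring network does across the full Web application delivery chain.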

Our hosting service provider partners are also able to independently validate their own performance levels and their customers’ SLAs.

And, our partners resell Gomez solutions to enable their customers to monitor performance themselves. This gives our partners a distinct competitive advantage in an increasingly crowded market.

WHIR: From your point of view, should a service provider treat performance monitoring more as a tool for operating a reliable data center infrastructure – a quality of service thing – or can it also be treated more specifically as a feature, or even as a product?

IM: The short answer is both, and more. For a hosting service provider, proactively monitoring your own performance levels helps ensure that you’re the one finding and fixing problems, not your customers or their end-users. By identifying and resolving performance-impacting variables you can avoid reputation-damaging performance lapses.

Hosting service providers should consider offering performance monitoring directly as a service to their own customers, because it helps build out a more comprehensive suite of services while nurturing customer trust.

But perhaps the most important aspect of monitoring performance levels for service providers is that it provides a competitive differentiator: the ability to validate customers’ SLAs. Today, to be truly effective and acceptable to all parties involved, performance metrics should come from a reputable, trusted third-party monitoring source and look beyond data collected in the datacenter to measure what performance is like from the end user’s perspective. Fully documented and understood, SLAs backed by a credible source are more meaningful and provide a layer of transparency when presented to customers. Hosting service providers who are confident about their performance levels are typically receptive to the use of independent measurements.

WHIR: Is there a way for a hosting provider to – either through an established reseller system or maybe through something a little more home-grown – fairly easily turn performance monitoring or management into a product it can offer to its customers for a price?

IM: Today there are several hosting service providers offering Gomez’s SaaS-based solutions as part of their hosting platform for customers developing and deploying Web applications. Because Gomez is an independent, trusted third-party monitoring source, our partners integrate our measurement results into their dashboards using our APIs.
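As an illustration of that kind of integration, a partner dashboard might periodically pull measurement results from a monitoring API and roll them up into summary figures. The endpoint, token and JSON fields below are hypothetical placeholders, not Gomez’s actual API.

```python
# Hypothetical dashboard integration sketch: pull measurement results
# from a monitoring vendor's REST API and compute a simple summary.
# The endpoint, token and JSON fields are illustrative assumptions,
# not Gomez's actual API.
import json
import urllib.request

API_URL = "https://api.monitoring-vendor.example/v1/measurements?site=acme"
API_TOKEN = "REPLACE_WITH_REAL_TOKEN"

def fetch_measurements(url: str, token: str) -> list[dict]:
    """Fetch a JSON list of measurement records, e.g.
    [{"location": "London", "response_ms": 842}, ...]."""
    request = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response)

def summarize(records: list[dict]) -> dict:
    """Roll the raw records up into the numbers a dashboard widget needs."""
    times = [r["response_ms"] for r in records]
    return {
        "samples": len(times),
        "avg_ms": sum(times) / len(times),
        "worst_ms": max(times),
        "worst_location": max(records, key=lambda r: r["response_ms"])["location"],
    }

if __name__ == "__main__":
    print(summarize(fetch_measurements(API_URL, API_TOKEN)))
```

A production integration would add paging, error handling and caching, but the basic shape (fetch, summarize, render) stays the same.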

Our channel managers work closely with individual partners to help them develop a go-to-market plan for leveraging Gomez in their business and provide comprehensive technical and sales training.

WHIR: I’ve heard a little bit about how you’re developing performance monitoring tools tailored to the cloud. Could you expand a bit on what exactly you’re introducing, and maybe offer some insight into how the development of cloud technology might be impacting application performance in general?

IM: Cloud computing infrastructures can have an enormous impact on application performance. Most cloud customers don’t have much visibility into their cloud computing infrastructures, including their cloud service provider’s capacity management. For example, when running an application in the cloud, there’s little way to know how a traffic spike at an unrelated company in the same shared environment will affect a cloud customer’s own end-users and, ultimately, its business. For cloud customers, end-user performance is the best measure of overall system health.

Today’s cloud service providers tend to offer guarantees like 99.9 percent uptime, and while it’s great that their servers are up and running, this has virtually no correlation with a customer’s ongoing application performance levels. There is currently a glaring lack of SLAs that guarantee customer-specific application performance levels in the cloud. For these reasons, Web hosting service providers that are considering offering cloud services can stand apart from the competition by offering performance-focused SLAs in the cloud, independently validating those SLAs, and giving customers the tools they need to monitor cloud-based application performance themselves.

Gomez views creating and validating performance-focused SLAs as a three-step process that starts with the cloud customer:

1) Understand why you are using the cloud;

2) Evaluate service providers based on your reasons for using the cloud and your associated performance requirements/success criteria; and

3) Set SLAs based on these performance requirements and commit to ongoing performance monitoring in order to validate SLAs, ensure performance requirements are being met and verify that you’re getting what you’re paying for (a simplified check of this step is sketched after this list).
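As a simplified illustration of step 3, the check below compares a percentile of measured response times against an SLA target. The two-second target, the 95th percentile and the sample data are assumptions made for the sake of the example, not a real contract or real measurements.

```python
# Sketch of step 3: validating a performance-focused SLA against ongoing
# monitoring data. The SLA target, percentile and sample data below are
# illustrative assumptions, not a real contract or real measurements.
import math

SLA_TARGET_MS = 2000   # e.g. "95% of end-user responses under 2 seconds"
SLA_PERCENTILE = 95

# Response times (ms) as collected by end-user-perspective monitoring.
sample_response_times_ms = [640, 710, 820, 905, 980, 1200, 1450, 1800, 2300, 3100]

def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile: smallest value covering pct% of the samples."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def sla_met(values: list[float], target_ms: float, pct: float) -> bool:
    """True if the pct-th percentile response time is within the SLA target."""
    return percentile(values, pct) <= target_ms

if __name__ == "__main__":
    p = percentile(sample_response_times_ms, SLA_PERCENTILE)
    verdict = "met" if sla_met(sample_response_times_ms, SLA_TARGET_MS, SLA_PERCENTILE) else "missed"
    print(f"p{SLA_PERCENTILE} response time: {p:.0f} ms -> SLA {verdict}")
```

Run continuously against real monitoring data rather than a fixed sample, the same comparison is what turns an SLA from a marketing claim into something both parties can verify.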

Certain reasons for moving to the cloud, such as trading capex for opex, do not directly raise issues of performance. But other reasons bring critical performance issues to the fore and must be addressed in order for cloud investments to truly serve the needs of a business.

For example, companies using the cloud for elasticity purposes need to understand how much capacity must be available, and how quickly it needs to ramp up, in order to maintain a seamless end-user experience. Companies using a cloud-bursting model need assurances that complex configurations will work seamlessly and on time, again with minimal impact on the end-user experience. Finally, companies using the cloud to support business applications must consider, first and foremost, performance requirements for end-users so those requirements can be reflected in SLAs.

Cloud service providers that were recently tested, monitored and measured for performance and availability showed problems (e.g. slow or missing content and functionality) at the edge of the Internet – where many application end-users are. To address this, Compuware Gomez recently unveiled a free, first-of-its-kind portal that offers companies a first-hand look into the real-time, multi-geography performance of several leading IaaS offerings from Amazon, GoGrid, OpSource and Rackspace, as well as offerings from Google and Microsoft. The portal gives companies a helpful framework for making critical cloud decisions, such as determining which applications are suitable for migration to the public cloud. According to Ovum, it is “a timely offering that will help boost market awareness of public cloud QoS issues.”

WHIR: Gomez was acquired by Compuware in the later part of 2009. Has being a part of a larger organization changed the nature of what Gomez does at all? Are there opportunities to incorporate or integrate with other features or services from other parts of the organization?

IM: From the deal’s inception, Gomez and Compuware have had a great deal of synergy and are united behind a common goal – helping customers optimize application performance by finding and fixing the most comprehensive range of problems both within the datacenter and across the Internet. These capabilities are not matched by any of Compuware’s competitors.

By accelerating the identification, diagnosis and resolution of business-impacting issues, Compuware and Gomez are helping customers ensure superior Web performance that enhances and protects their most critical assets, including brand, customer satisfaction and revenues.

Being part of the larger Compuware organization has enabled Gomez to fill a much-needed role as application performance management evolves to fit the realities of the Web 2.0 world. For Gomez, however, what we do best remains the same – measuring and monitoring Web performance from the end-user’s perspective. We help our customers find and address problems across the complete Web application delivery chain in order to optimize their Web performance. Gomez’s capabilities are based on a real-world testing network comprising thousands of end-user desktops and devices, which gives companies a quick, easy view into true end-user Web experiences around the world.

 
