
Slow Page Loads Hurting Your Business? Try the Database Rx


Every second counts in today’s on-demand economy. And nowhere is this truer than in the online world, where page load speed can make or break businesses and their hosting providers.

It’s been reported that 40 percent of consumers abandon webpages that take more than three seconds to load. What does that do to revenue? The oft-cited rule of thumb is that each additional second of delay cuts conversions by about 7 percent, so an ecommerce site making $100,000 per day stands to lose roughly $2.5 million a year from a single second of added load time.

With so much on the line, hosters are expected to keep customer sites performing at optimal levels. However, many providers struggle mightily with this, especially during peak times. The platforms they use, such as WordPress, often can’t handle huge spikes in traffic and transactions that follow marketing blitzes, flash sales, holiday specials and even breaking news.

The Database as the Culprit

It all comes down to the database. Platforms like WordPress are backed by MySQL databases, which have inherent speed and scale limitations. Their underlying computer science dates to the 1970s and was designed for 20th-century data requirements and hardware capabilities. That legacy design (B-tree index structures and the algorithms built around them) can’t deliver the flexibility, scale and speed needed for the on-demand world hosters now operate in. As a result, many hosters are bogged down by concurrency, caching and calibration issues.

  • Concurrency and Noisy Neighbors

As hosters with their own cloud infrastructures know all too well, managing concurrency is a constant battle. The more simultaneous users and load you have on a server, the more they fight for CPU, disk I/O and memory resources.

Each CPU core executes just one thread at a time, so the faster it can complete one transaction, the sooner it can move on to the next. However, legacy database algorithms weren’t designed for concurrency. They require a tremendous amount of back-and-forth writing between disk and memory, which drives up I/O. With many noisy neighbors in multi-tenant environments, page load speeds become inconsistent and extremely slow, and you risk missing SLAs.
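As a rough way to see that contention in practice, the sketch below polls a few standard MySQL status counters (Threads_running and the InnoDB data read/write counters) during a spike. It assumes the PyMySQL driver is available; the host, credentials and polling interval are placeholders rather than recommendations.

```python
# Illustrative sketch only: poll a few MySQL status counters to see how much
# concurrent load and disk I/O a server is absorbing during a traffic spike.
# Host, credentials and thresholds are placeholders, not recommendations.
import time
import pymysql  # assumes the PyMySQL driver is installed

WATCHED = {"Threads_running", "Threads_connected",
           "Innodb_data_reads", "Innodb_data_writes"}

def snapshot(cursor):
    """Return the watched counters from SHOW GLOBAL STATUS as a dict of ints."""
    cursor.execute("SHOW GLOBAL STATUS")
    return {name: int(value) for name, value in cursor.fetchall() if name in WATCHED}

def watch(interval_seconds=5):
    conn = pymysql.connect(host="127.0.0.1", user="monitor",
                           password="secret", database="mysql")
    try:
        with conn.cursor() as cursor:
            previous = snapshot(cursor)
            while True:
                time.sleep(interval_seconds)
                current = snapshot(cursor)
                reads = current["Innodb_data_reads"] - previous["Innodb_data_reads"]
                writes = current["Innodb_data_writes"] - previous["Innodb_data_writes"]
                print(f"running={current['Threads_running']} "
                      f"connected={current['Threads_connected']} "
                      f"disk_reads/s={reads / interval_seconds:.1f} "
                      f"disk_writes/s={writes / interval_seconds:.1f}")
                previous = current
    finally:
        conn.close()

if __name__ == "__main__":
    watch()
```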

  • Caching Overload

Many hosters, including those running on third-party clouds like Google Compute Engine, Amazon Web Services and Microsoft Azure, use caching to get around the database bottleneck. They front-end the database with boatloads of memory. But what’s in memory isn’t transactional, and inconsistencies between cache and disk happen all the time. The net result: while caching helps with page load speed, it creates new problems, with hosters spending more money on infrastructure and more of their time managing caching-related complexity.
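The staleness problem is easy to reproduce with a toy version of the cache-aside pattern. In the sketch below, a plain Python dictionary stands in for an object cache such as memcached or Redis, and another stands in for the database; the only point is to show how a write that skips cache invalidation leaves readers seeing old data until the TTL expires.

```python
# Toy illustration of the cache-aside pattern and how stale reads creep in.
# A plain dict stands in for a real object cache such as memcached or Redis.
import time

CACHE_TTL_SECONDS = 60
cache = {}                        # key -> (value, expires_at)
database = {"product:42": 100}    # stand-in for a price column in a MySQL row

def get_price(key):
    """Read through the cache; fall back to the database on a miss."""
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                                   # served from cache
    value = database[key]                                 # "slow" database read
    cache[key] = (value, time.time() + CACHE_TTL_SECONDS) # populate the cache
    return value

def update_price_without_invalidation(key, price):
    """Write straight to the database, forgetting to evict the cached copy."""
    database[key] = price

print(get_price("product:42"))               # 100, and the value is now cached
update_price_without_invalidation("product:42", 80)
print(get_price("product:42"))               # still 100 until the TTL expires
```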

  • Calibration Treadmill

Server loads fluctuate along with customers’ website traffic, peaks and valleys in your business, and changes in infrastructure resources. To keep sites performing as well as possible, you need to recalibrate MySQL instances to utilize resources according to conditions at hand.

The problem is that manual trial-and-error calibration eats up database administrator time (for hosters who have DBAs) and requires taking servers offline, which burdens other servers and affects speed. And because you can’t manually recalibrate 24×7, your databases are almost never optimally tuned, and performance isn’t what it could and should be.
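For a flavor of what one manual calibration pass involves, the sketch below works out an InnoDB buffer pool size from available RAM using the common rule of thumb of giving the pool roughly three quarters of a dedicated database server’s memory. The numbers are illustrative, not tuning advice, and on older MySQL versions the new value only takes effect after a restart.

```python
# A sketch of the arithmetic behind one manual calibration pass: sizing the
# InnoDB buffer pool from available RAM. The 75 percent share is a common
# rule of thumb for dedicated database servers, not a universal setting.
def suggest_buffer_pool_bytes(total_ram_gb, reserved_for_os_gb=2, fraction=0.75):
    """Leave headroom for the OS, then give InnoDB a fixed share of the rest."""
    usable_gb = max(total_ram_gb - reserved_for_os_gb, 1)
    return int(usable_gb * fraction * 1024 ** 3)

pool = suggest_buffer_pool_bytes(total_ram_gb=32)
print(f"innodb_buffer_pool_size = {pool}")   # paste into my.cnf, then restart
```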

But what if your database weren’t a bottleneck? What if it didn’t cause page load speeds to plummet, or demand more money for infrastructure and more time spent managing caching and calibration?

The Adaptive Database as the Savior

Adaptive database engines, which hit the market about a year ago, eradicate these often-crippling legacy problems. Based on new science that leverages machine learning, they transform any MySQL instance into a self-configuring database—and make any WordPress-based site consistently high-performing, even in multi-tenant environments.

For instance, GEMServers, a high-performance WordPress hoster, upped its competitive advantage by using adaptive database technology. The company was able to accelerate customer page loads by 200 percent and handle transactions 39 times faster. This saved GEMServers from having to spin up more Google Compute Engine machines for performance purposes, and enabled it to pass 20 percent in GCE-related cost savings on to customers.

  • Online auto-tuning – no DBAs required

With adaptive technology, your database is always optimally tuned, and you don’t have to lift a finger (or take down servers) to make it happen. Adaptive technology adjusts database behavior on the fly according to observed changes in data type, volume and velocity, and in the operational capabilities of hardware resources. It continuously optimizes how data is organized in memory and retrieved from disk.
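To make the idea of feedback-driven tuning concrete, here is a deliberately simplified toy loop that watches one metric and nudges one knob toward a target. It is not a description of Deep’s engine or of any real tuner; it only illustrates the observe-and-adjust cycle that an adaptive database performs automatically instead of leaving to a DBA.

```python
# A toy feedback loop illustrating self-tuning in general: observe a workload
# metric, then nudge a knob toward a target. Deliberately simplified; it bears
# no relation to any vendor's actual implementation.
def retune(current_cache_mb, observed_hit_rate, target_hit_rate=0.95,
           step_mb=256, max_cache_mb=16384):
    """Grow the cache while the hit rate misses the target; shrink it when it overshoots."""
    if observed_hit_rate < target_hit_rate:
        return min(current_cache_mb + step_mb, max_cache_mb)
    return max(current_cache_mb - step_mb, step_mb)

cache_mb = 1024
for hit_rate in (0.88, 0.91, 0.97):          # metrics sampled over time
    cache_mb = retune(cache_mb, hit_rate)
    print(f"hit_rate={hit_rate:.2f} -> cache={cache_mb} MB")
```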

  • Extreme concurrency at scale – no extra hardware

Adaptive databases operate in append-only mode, minimizing back-and-forth reads and writes and reducing I/O by up to 80 percent. Your CPU gets more done with fewer cycles. You can handle unexpected surges with predictable performance, and you get orders of magnitude more headroom to take on more business with your existing hardware.
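Append-only storage is a general log-structured idea, and a toy version shows why it cuts random I/O: every update becomes a sequential append, reads resolve to the latest entry for a key, and cleanup happens in periodic compaction passes. The class below is a generic illustration of that idea, not a sketch of any particular engine.

```python
# Generic illustration of append-only storage: updates are written as new
# records at the tail of a log (sequential I/O) instead of rewriting rows in
# place (random I/O). Reads resolve a key to its most recent entry.
class AppendOnlyStore:
    def __init__(self):
        self.log = []      # append-only log: (key, value) pairs in write order
        self.index = {}    # key -> position of the latest entry in the log

    def put(self, key, value):
        self.index[key] = len(self.log)
        self.log.append((key, value))   # sequential append, never an in-place rewrite

    def get(self, key):
        return self.log[self.index[key]][1]

    def compact(self):
        """Periodic cleanup: keep only the latest entry for each key."""
        latest = {key: self.log[pos] for key, pos in self.index.items()}
        self.log = list(latest.values())
        self.index = {key: pos for pos, (key, _) in enumerate(self.log)}

store = AppendOnlyStore()
store.put("user:1", "alice")
store.put("user:1", "alice@example.com")    # an "update" is just another append
print(store.get("user:1"))                  # alice@example.com
print(len(store.log))                       # 2 entries until compaction runs
store.compact()
print(len(store.log))                       # 1
```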

  • Less caching – no hassle

Adaptive databases deliver optimal performance without boatloads of cache, and without cache-related infrastructure spend and management hassles.

Hosting is a competitive industry. If you don’t deliver the page load speeds your customers need, you risk losing business. If you load up on hardware and cache to circumvent database speed and scale issues, your bottom line and productivity may suffer. Why not address the issue at the source, with adaptive database technology that delivers faster page loads, happier and more loyal customers and cost-effective multi-tenancy?

About the Author

Deep Information Sciences’ VP of Technology Bob Buck is a technology innovator with over 25 years of database and telecommunications experience. His success stems from his customer-centric approach to product strategy and positioning, which drives customer adoption and business velocity. Prior to Deep, Buck was principal solutions architect at NuoDB, where his responsibilities spanned engineering and sales. Earlier, he held engineering and architect roles at Object Design, VeriSign, Iron Mountain and Red Hat. With a strong background in mathematics, Buck has been integral in establishing cross-functional organizational initiatives, bringing representative experts together to solve fundamental issues of scale, performance and concurrency in the database, cloud and telecommunications industries. He holds a B.S. in mechanical engineering from Northeastern University.

