The general narrative about the evolution of the public cloud infrastructure services market over the last several years has been that a handful of giants have used their scale and engineering resources to add features and lower prices to the point where most smaller providers can no longer compete. That dynamic has forced even big but late-arriving players like Dell and HP to exit the market altogether.
New York-based Infrastructure-as-a-Service provider DigitalOcean has been one of the exceptions. Having built a reputation and a following as the cloud infrastructure service for developers, the company has been growing fast, raising funds, and expanding its scale. Adrian Cockcroft, a well-known cloud infrastructure technologist and, more recently, a venture capitalist working as a technology fellow at Battery Ventures, singled out DigitalOcean as one of the leaders in his overview of the cloud market at last week's Structure conference in San Francisco, saying the company had been in “hyper growth” until about a year ago, although its growth has slowed recently.
Like the biggest players, such as Amazon and Microsoft, DigitalOcean is under pressure to expand its data center footprint, both by adding capacity in existing locations and by launching new ones. The company started in 2012 with two data centers, in New York and Amsterdam, and operates 11 today. It added three locations last year and two this year, and it plans to launch more in 2016, Luca Salvatore, network engineering manager at DigitalOcean, said in an interview.
Rather than following Amazon Web Services to Northern Virginia or Microsoft to Mumbai, DigitalOcean has its own site-selection drivers. Since its target audience consists of developers, it goes to places with strong developer communities, Salvatore said.
Last month, for example, on the heels of closing an $83 million Series B funding round, the company announced a new site in Toronto, citing about half a million software developers in the region. Its first Toronto data center reduces cloud latency for Toronto developers and makes it easier to comply with the data sovereignty rules some customers in Canada reportedly face.
Today, the company has two data center locations in eastern North America (Toronto and New York) and one on the West Coast (San Francisco). It also has data centers in London, Amsterdam, Frankfurt, and Singapore.
Its biggest footprint by capacity is in New York, where the company has three data centers, Salvatore said. It also has three in Amsterdam, one of Europe’s major connectivity hubs, but those house fewer servers than the New York facilities.
DigitalOcean uses commercial data center providers to house its infrastructure, including Equinix, Digital Realty Trust-owned Telx, TelecityGroup, and Interxion.
With that number of data centers and that rate of expansion, it’s not surprising that the most difficult part of managing the infrastructure, at least from the networking perspective, is deploying new racks of servers. “That comes down to how well your internal tools are, and how well you can automate things,” Salvatore said.
The company has built many automated systems in-house specifically for deploying new capacity. Once a physical rack of servers is installed and plugged into the network, all provisioning and configuration now happens automatically.
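DigitalOcean has not published the details of its provisioning tooling, but the workflow described above can be sketched as a simple state machine: each newly cabled server is walked through a fixed sequence of stages until it joins the serving pool. The stage names, `Node` structure, and functions below are hypothetical illustrations, not DigitalOcean's actual system.

```python
# Illustrative sketch of automated rack provisioning (hypothetical, not
# DigitalOcean's real tooling): every node in a new rack is driven through
# an ordered pipeline of stages until it is in service.
from dataclasses import dataclass, field

# Ordered provisioning stages; each node must pass through all of them.
STAGES = ["discovered", "network_configured", "os_installed",
          "validated", "in_service"]

@dataclass
class Node:
    serial: str
    state: str = "discovered"
    history: list = field(default_factory=list)  # stages already completed

def advance(node: Node) -> Node:
    """Move a node to the next provisioning stage, recording the transition."""
    i = STAGES.index(node.state)
    if i + 1 < len(STAGES):
        node.history.append(node.state)
        node.state = STAGES[i + 1]
    return node

def provision_rack(serials):
    """Drive every node in a newly installed rack through the full pipeline."""
    nodes = [Node(s) for s in serials]
    for node in nodes:
        while node.state != "in_service":
            advance(node)
    return nodes

rack = provision_rack(["SN-001", "SN-002"])
print([n.state for n in rack])  # ['in_service', 'in_service']
```

In a real pipeline each stage transition would trigger actual work (DHCP/PXE boot, switch-port configuration, image install, burn-in tests); the value of encoding it this way is that a fleet-wide dashboard can always answer "which stage is every new server in?"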
While it’s not at the scale where it needs 100 Gigabit networking, unlike Facebook, for example, DigitalOcean has recently upgraded from 10 Gig to 40 Gig switches throughout its topology: top-of-rack, aggregation, and core. The last three data centers it added to its roster use 40 Gig Juniper switches.
If it goes to 100 Gigs any time soon, it will be at “the edge,” or at peering points where its data centers connect to the internet, Salvatore said. Commenting on Facebook’s recent announcement that the company has designed 100 Gig top-of-rack switches for its data centers, he said, “That’s some cool stuff. We’re not there yet.”
DigitalOcean’s approach to networking resembles that of Facebook and other web-scale data center operators in one key respect: the separation of the data plane from the control plane. That separation is one of the key ways to implement Software-Defined Networking, and its main benefit is a centralized control plane that can manage the entire global network infrastructure.
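The control/data-plane split described above can be illustrated with a minimal sketch: a central controller holds the global topology and computes routes, while the switches merely hold the forwarding tables pushed to them. The class names and topology below are invented for illustration; this is a conceptual model of SDN, not DigitalOcean's implementation.

```python
# Conceptual sketch of control/data-plane separation (hypothetical example):
# the controller has the global view and computes all routes; switches only
# forward using tables they did not compute themselves.
from collections import deque

class Switch:
    """Data plane: forwards traffic using a table pushed by the controller."""
    def __init__(self, name):
        self.name = name
        self.table = {}  # destination switch -> next hop

    def next_hop(self, dst):
        return self.table.get(dst)

class Controller:
    """Control plane: holds the whole topology and computes every route."""
    def __init__(self, links):
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, set()).add(b)
            self.adj.setdefault(b, set()).add(a)

    def push_routes(self, switches):
        # BFS outward from each destination; the node a switch was reached
        # from is that switch's next hop toward the destination.
        for dst in self.adj:
            prev = {dst: None}
            queue = deque([dst])
            while queue:
                u = queue.popleft()
                for v in self.adj[u]:
                    if v not in prev:
                        prev[v] = u
                        queue.append(v)
            for sw in switches.values():
                if sw.name in prev and sw.name != dst:
                    sw.table[dst] = prev[sw.name]

# A toy five-switch path: tor1 - agg1 - core1 - agg2 - tor2.
switches = {n: Switch(n) for n in ["tor1", "agg1", "core1", "agg2", "tor2"]}
ctrl = Controller([("tor1", "agg1"), ("agg1", "core1"),
                   ("core1", "agg2"), ("agg2", "tor2")])
ctrl.push_routes(switches)
print(switches["tor1"].next_hop("tor2"))  # agg1
```

The payoff of this arrangement is the one the article names: because the controller sees everything, a single change (a new rack, a failed link) can be reflected consistently across the entire network rather than negotiated box-by-box.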
Another pressure DigitalOcean shares with its big competitors is companies’ distrust of the public internet for critical data and applications. Enterprises want to use public cloud services, but they don’t want to reach them over the internet. That’s why AWS, Azure, Google Cloud Platform, and IBM SoftLayer offer the option of connecting to their servers through direct private network connections, either in colocation data centers or directly from customers’ corporate data centers via carriers such as Level 3, AT&T, or Verizon.
The option is not available for DigitalOcean’s services at the moment, but the company is seeing demand for it from some of its largest customers. “There’s definitely demand for it. We do see that need,” Salvatore said. The company may add the option “at some point,” he said.
Original article appeared here: DigitalOcean: From Two Data Centers to 11 in Three Years