This week, DockerCon 2015 offered an opportunity to showcase many of the new initiatives happening around the Docker platform, which is quickly becoming a popular way to build, ship and run distributed applications.
Docker Addresses Networking Issues
Typically, network services such as network address translation (NAT), DHCP, and DNS run from a single node. In a multi-host setup, these services run on every compute node rather than on a single one, which means fewer points of failure and higher availability.
On Monday, Docker announced native multi-host software-defined networking (SDN), as well as new capabilities that strengthen the portability of multi-container distributed applications.
Docker networking and plugins are part of Docker Engine, and building on its initial networking work, Docker's new networking system allows containers to communicate with each other across hosts.
Configuring networking has also been made much more flexible by allowing users to plug in different networking drivers and third-party tools. This way, users can swap out “batteries” for third-party tools from providers such as Cisco, Microsoft, Midokura, Nuage Networks, Project Calico, VMware and Weave for SDN, and ClusterHQ for storage volumes.
Users can now create a network and attach containers to it with a single command, or use Docker Compose and Swarm to network a multi-container application whose containers communicate seamlessly across a cluster of machines.
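The workflow looks roughly like this (a sketch based on the `docker network` commands as previewed at the conference; the network, container, and driver names are illustrative and require a Docker daemon with multi-host networking enabled):

```shell
# Create a network using the overlay driver, which spans multiple hosts
docker network create -d overlay frontend

# Attach containers to the network when launching them
docker run -d --net=frontend --name web nginx
docker run -d --net=frontend --name cache redis

# Containers on the same network can reach each other by name,
# even if a scheduler such as Swarm places them on different hosts
docker exec web ping -c 1 cache
```

Because the driver is specified at network-creation time, the same `docker run` commands work unchanged whether the network is backed by the built-in overlay driver or a third-party plugin.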
Docker also announced a collaboration with Amazon Web Services and Amazon EC2 Container Service to optimize Dockerized application scheduling for Amazon Elastic Compute Cloud instances. It will also provide a native cluster management experience for Docker users, letting them launch tasks on Amazon ECS using the same Docker APIs they use in their local development environments.
(Developers can begin clustering their applications in Swarm and later plug in Mesosphere as a scheduling solution to scale their applications to hundreds of hosts.)
An “Open Container Project” for Container Format and Runtime Standards
While it might not be the most exciting announcement, Docker announced a collaboration with industry leaders this week to create software container standards, which will ultimately ensure that container-based solutions don’t get sidelined by industry fragmentation.
The Open Container Project (OCP) will be managed under a vendor-neutral, open source, open governance model and be housed under the Linux Foundation.
“With the Open Container Project, Docker is ensuring that fragmentation won’t destroy the promise of containers,” Linux Foundation executive director Jim Zemlin said in a statement. “Users, vendors and technologists of all kinds will now be able to collaborate and innovate with the assurance that neutral open governance provides.”
The OCP aims to build standards for container formats and runtimes built around openness, security, portability, composability, minimalism and backward compatibility.
Initial members include AWS, Apcera, Cisco, CoreOS, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Intel, Joyent, Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat and VMware.
Docker is contributing its existing code and draft specifications for its image format and container runtime to serve as cornerstone technologies of the OCP, essentially allowing the OCP to manage the transition of its technology from a de facto standard into an open industry standard guided by users and developers.
However, this standard container code represents only a small portion of the overall Docker project, and intentionally excludes rapidly evolving areas where standardization could stifle innovation.
Docker CEO Ben Golub wrote in a blog post, “[T]he low-level container ‘plumbing’ in what we are donating today represents about 5 percent of the total Docker code base. The Docker client, engine, daemon, orchestration tools, etc. will continue to live at Docker. We will continue to provide a well-integrated tool chain for developers. We are purposely not trying to standardize the many things which are in areas where there is still a diversity of opinions and approaches.”
Docker Commercial Solutions for Organizations Needing Enterprise Assurances
Docker announced Tuesday that it would provide professional services, including 24/7 support, certified Docker Engines and management tools, that enterprises have demanded for their business-critical Dockerized distributed applications.
Docker’s commercial solutions for production environments build on the experience of the Docker Hub Registry for sharing container images and on the beta launch of the Docker Trusted Registry in June 2014, which allows secure storage and sharing of Docker images on-premise.
DTR provides organizations with a unified developer experience, a uniform set of tooling and shared microservices content, as well as audit logs for authorization and compliance.
The solutions are immediately available directly from Docker starting at $150 per month, as well as from Amazon Web Services, IBM, and Microsoft.
For instance, IBM offers DTR software as part of its flagship DevOps and cloud offerings, including IBM UrbanCode and IBM PureApplication System. IBM also provides enterprise-class Docker containers for developers wanting to deploy production applications across hybrid cloud environments.
Microsoft has made the Docker Trusted Registry VM image available in the Azure Marketplace to provide organizations with a private repository of Docker images.
In related news, Google Container Registry, a service for privately storing Docker container images, came out of beta this week.
Other DockerCon News
Microsoft announced Visual Studio Online support for Docker, making it easier for customers to leverage Microsoft Azure to build distributed applications using Docker along with Microsoft’s Windows Server Containers and Hyper-V Container technologies.