The Internet is now synonymous with relentless growth. More users are choosing mobile as their primary means of access. Cloud infrastructure is becoming dramatically more popular among developers and operations engineers. More services are offering APIs to connect to other services. And IoT is projected to bring 50 billion devices online by 2020, according to Cisco. Just 10 years ago, the Internet had 1 billion users, according to an Internet Society report released last year. Today, the same report estimates the figure is closer to 3 billion.

And all of this growth is happening before the majority of the world’s population is even connected to the Internet. A McKinsey report estimates that as much as 60 percent of the world is not yet online, putting the figure at 4.2 billion people without Internet access.

As the Internet’s growth progresses, interconnectivity becomes even more important. Many Internet users still do not fully grasp that the network providing their favorite services — like Snapchat messages, YouTube videos, or access to their bank — is largely a physical thing. The Internet in its most modern form is still thousands of miles of cable converging at specific geographic points around the world, passing packets between companies, service providers, CDNs, and eventually consumers.

Getting Facebook, Pinterest, Instagram, and all the other tech giants connected physically is not an easy task. The same goes for providing an Internet-based security service.

In April of this year, OpenDNS opened a new data center in Johannesburg, South Africa. The new location gives companies and customers a connection point in Africa, rather than routing their traffic through Europe. As a result, OpenDNS engineers estimate latency improved from a few hundred milliseconds RTT to just a few milliseconds.
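The scale of that improvement tracks with simple physics: light in fiber covers roughly 200 km per millisecond, so a round trip detouring through Europe pays a distance penalty before any queuing or peering delay is even counted. A back-of-the-envelope sketch (the coordinates and fiber speed are rough approximations, not OpenDNS measurements):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in kilometers."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def min_rtt_ms(km):
    """Lower bound on round-trip time: light in fiber travels ~200 km/ms."""
    return 2 * km / 200.0

# Johannesburg to London is ~9,000 km, so even a perfect straight-line
# fiber path costs ~90 ms RTT; a resolver in the same city costs ~1 ms.
jnb_to_london = great_circle_km(-26.20, 28.05, 51.51, -0.13)
```

Real paths are longer than the great circle and add router and queuing delay, which is how "a few hundred milliseconds" happens in practice.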

Much like launching a new UX project or developing a new app, the launch was the culmination of immense pre-work. OpenDNS Engineering Manager Jennifer Basalone says that is always how it goes when establishing data centers. Having set up five data centers for OpenDNS, and building on experience doing the same work at Facebook, Basalone is well versed in the whole process.

Before a company can simply drop a router into an Internet exchange (IX) and start peering with other companies to deliver service, a lot of vetting, research, and negotiation needs to happen. Basalone broke the process into six major steps — each of which could, of course, be broken out into dozens of sub-steps.

1. Find a good IX

According to Basalone, it’s not quite like throwing a dart at a map of the globe. There are many considerations involved, like costs of operation, deployment issues, legal and tax considerations, allowance for private peering, security needs, data handling needs, the type of peers available in the IX, and quite a few others. Mainly, the goal is to find an IX with good neighbors so you can get customers closest to the providers of content they want.

A map of Internet exchange locations.

2. Find which data center hosts that IX

This step is fairly straightforward, and it sets the stage for the research that follows.

3. Research the costs

A country’s stability, tax laws, and data privacy laws — among others — are all under consideration. The goal here is to “make sure [the targeted country] is not a US embargoed country,” Basalone said. “Then you really want to make sure the country is stable. And then look at the taxes that are involved.”

Now is also a good time to start looking at arbitration and service level agreements from the various providers involved to answer fundamental questions. For instance, can the data center provide the amount of power required to host enough servers and routers? Can it handle the amount of estimated traffic? Is there staff ready and available to help with outages 24/7?
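Those capacity questions can be sanity-checked with rough arithmetic before any contract is signed. A minimal sketch, with per-device wattages invented for illustration (real figures come from vendor spec sheets and the data center's power contract):

```python
# Hypothetical per-device power draw, in watts; substitute real spec-sheet
# numbers for the gear actually being deployed.
def rack_power_kw(servers, routers, server_watts=450, router_watts=800):
    """Total steady-state draw for a planned deployment, in kilowatts."""
    return (servers * server_watts + routers * router_watts) / 1000.0

def fits_in_feed(servers, routers, feed_kw, headroom=0.8):
    """Size against ~80% of the feed so a spike or failover doesn't trip it."""
    return rack_power_kw(servers, routers) <= feed_kw * headroom
```

The same shape of calculation applies to estimated traffic versus port capacity: planned peak plus headroom, checked against what the provider's SLA actually guarantees.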

Each country is different, and each carries its own challenges. Brazil, for instance — a country OpenDNS does not currently operate in — reportedly has fairly challenging import laws and taxes, as a country that encourages as much local business as possible. Along with country-specific taxes and the cost of the equipment itself, there are import fees, monthly management costs, port and airport taxes, and regional taxes. The list is extensive, Basalone said.
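The effect of those stacked fees on a budget is easy to sketch. The cost categories below come from the article; the percentages and fee amounts are invented placeholders, since real rates vary widely by country:

```python
# Rough landed-cost estimate: hardware price plus ad valorem duties and
# taxes, plus flat port/handling fees. All rates here are illustrative.
def landed_cost(hardware_usd, import_duty=0.20, regional_tax=0.05,
                port_fees_usd=1500):
    return hardware_usd * (1 + import_duty + regional_tax) + port_fees_usd
```

In a high-duty country, the fees can rival the hardware cost itself, which is one reason the research step comes before any equipment is purchased.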

4. Pre-configure equipment and ship

Once all the costs are estimated, space is planned out, and a service agreement is vetted by legal and signed by both parties, it’s time to prep the equipment. Each router, switch, or other required box needs to be booted, configured, and made ready to ship out to the location.
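Pre-staging lends itself to templating: every device's configuration can be generated from a site plan before anything leaves the warehouse. A minimal sketch, in which the hostname scheme, addresses, and template fields are all invented for illustration (not OpenDNS's actual naming or tooling):

```python
# Hypothetical site plan for a new location.
SITE = {"code": "jnb1", "mgmt_net": "10.40.0", "ntp": "10.0.0.10"}

# Hypothetical minimal device config template.
TEMPLATE = """hostname {code}-r{n}
interface mgmt0
 ip address {mgmt_net}.{n}/24
ntp server {ntp}
"""

def render_configs(site, routers=2):
    """Render one config per router from the site plan."""
    return {f"{site['code']}-r{n}": TEMPLATE.format(n=n, **site)
            for n in range(1, routers + 1)}
```

Generating configs this way means a device that arrives after months in customs can still be racked and brought up from a known-good, version-controlled state.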

Once shipped, things can sometimes get complicated. The longer equipment takes to ship, or is stuck in customs, for example, the more security becomes a concern. “You can have equipment stuck in customs for 100 to 200 days, and not know when it will come out,” Basalone said. “Companies will sometimes even do three separate shipments and hope one makes it. The longer time it is en route, the higher chance it has of being stolen.”

5. Go live

Once all the equipment has (hopefully) arrived safely, having gotten through complex customs laws, it’s time to send someone out to stack, rack, and bring it all online. But even then, it’s still not entirely a sure thing that the process will go smoothly. “Even if all the shipping and equipment setup works, nothing is going to come online until the transit provider comes online,” Basalone said.

Even choosing a transit provider alone can come with many considerations and sticking points.

For Basalone and her team, a data center’s go-live date is a stressful and exciting time. It typically involves multiple engineers from different teams and locations, all working together to create an entirely new connection across the globe. “We all get together and sit in a chat room,” Basalone said. “And the network team lets us know our transit providers are online. Then we can bring our servers online.”

As far as the tools of the trade, Basalone’s team uses a host of them. “Our engineers use a wide variety of tools like Ansible, Puppet, NetHarbour, custom automation, and other provisioning support systems to automate as much of the build as possible,” she said.

6. Announce to the world

“Once everything is online, looks good, and everything’s a go, we request our awesome network engineering team enable anycast for the site,” she said. “At that point our anycast routes are announced in BGP from the new location and traffic will start flowing to our new data center.”
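The effect of that announcement can be illustrated with a toy model. Under anycast, every data center announces the same prefix, and each client's network independently prefers the announcement with the shortest AS path (BGP's main tie-breaker after local policy), so traffic naturally lands at the nearest site. The site names and path lengths below are made up for illustration:

```python
# AS-path lengths for the same anycast prefix, as seen from a client in
# South Africa once the new site is announced. Values are illustrative.
ANNOUNCEMENTS = {
    "palo_alto": 4,
    "amsterdam": 3,
    "johannesburg": 1,  # the newly announced site
}

def best_site(paths):
    """Pick the announcement with the shortest AS path, as BGP would."""
    return min(paths, key=paths.get)
```

Before Johannesburg is announced, that same client would have landed in Amsterdam; simply adding the new announcement is what redirects the traffic, with no client-side change at all.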

Along all six of these “simple” steps are minefields of issues that can halt the process, or at least delay it. Basalone said delays are the number one thing her team tries to avoid when bringing a data center live.

“Bringing up the site takes a couple of teams, so you don’t want to waste their time,” she said. “Having everything set up and well coordinated makes the process go a lot smoother.”

Sound pretty easy? Didn’t think so.

To learn more about OpenDNS’s engineering team and its progress, visit their blog.
