Microsoft's Applied Incubation team director Paul Slater on building a future-proof data center

Kareem Anderson

Image Credit: Microsoft Datacenter

We reported a few weeks ago on rumors that Microsoft may build its next data center in Phoenix, Arizona. While we have yet to hear any more rumblings, Microsoft's Applied Incubation team director Paul Slater is offering tips for any company planning its next data center.

Just to lay some groundwork, Slater believes that human work being replaced by robotics is not a novel idea, but a reality. He recommends that anyone looking to build a data center intended to last longer than ten years incorporate automated tasks into their blueprints. Robotics in manufacturing and computing are shifting away from simple tasks toward more complex rule sets. In doing so, these 'new' robots will give management new flexibility in designing the data centers of the future. As Slater explains, "we'd expect to see robots much more inside the data center."

At the Data Center World Conference in Las Vegas this past Monday, Slater presented a list of considerations and suggestions for anyone about to build a data center. Data Center Knowledge is reporting on Slater's speech in detail, but we thought we'd offer you the nuts and bolts of his presentation. Of the seven key features Slater discussed, we found location, standardization, and flexibility to be the ones any prospective builder should weigh before any materials are ordered. Designers should consider factors like taxes tied to the location, proximity to the power grid, and environmental costs before any designs are presented. Perhaps this is why open expanses of land in Idaho, Texas, and Arizona present ideal locations for so many data centers.

Next, designers should build a certain level of flexibility into their plans. Slater suggests that data centers should expect the unexpected, and in some cases expect that things will change very rapidly. Data centers built just a couple of years ago are already running into issues with SSD storage and heat management that they hadn't left any wiggle room for. Once the plans for the data center have been solidified, the software stack should be standardized and, on some level, automated. "We ruthlessly standardize, and we ruthlessly automate," Slater said. Slater believes a standardized software stack is much easier to maintain than a mess of separate environments. That last part just sounds like common sense: using a single ecosystem built to communicate efficiently with itself tends to be a better experience.
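To make the "ruthlessly standardize, ruthlessly automate" idea concrete, here is a minimal sketch in Python of what that pattern can look like in practice: one standard baseline is defined for every host, and a script audits each machine against it and remediates any drift. The baseline keys, values, and host inventory below are hypothetical illustrations, not anything from Slater's presentation or Microsoft's actual tooling.

```python
# Hypothetical sketch: enforce one standard configuration across all hosts.
# Real deployments would use dedicated configuration-management tooling;
# this only illustrates the audit-and-remediate loop.

STANDARD_BASELINE = {
    "os_image": "server-std-v42",          # assumed image name
    "agent_version": "3.1.7",              # assumed agent version
    "ntp_server": "time.internal.example", # assumed NTP endpoint
}

def audit(host_config: dict) -> dict:
    """Return the settings on a host that drift from the baseline."""
    return {
        key: expected
        for key, expected in STANDARD_BASELINE.items()
        if host_config.get(key) != expected
    }

def remediate(hostname: str, drift: dict) -> None:
    """Stand-in for whatever tooling would actually push the fix."""
    for key, expected in drift.items():
        print(f"{hostname}: resetting {key} -> {expected}")

# Hypothetical inventory: one drifted host, one compliant host.
inventory = {
    "rack01-node07": {
        "os_image": "server-std-v42",
        "agent_version": "3.0.9",  # out of date -> gets remediated
        "ntp_server": "time.internal.example",
    },
    "rack01-node08": dict(STANDARD_BASELINE),
}

for hostname, config in inventory.items():
    drift = audit(config)
    if drift:
        remediate(hostname, drift)
```

The appeal of this pattern is exactly what Slater describes: because every host is held to the same baseline, maintenance becomes one loop over an inventory rather than a separate procedure per environment.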

Slater wrapped up his presentation with more talk about integrated systems, flexible software, and quick-deploy operations. Slater has the resources and insider knowledge of future technologies behind him; for the mid-scale enterprise data center builder, many of these suggestions come with their own complications. The best bet for those looking to use this information is to take it as a framework and work within the realistic confines of their time, resources, and software availability.