The Evolution of White Box Gear, Open Compute and the Service Provider
There is a lot changing within the modern cloud and service provider world. Organizations are seeing the direct benefits of moving to the cloud and are now building budgets into their spending cycles to migrate their environments there. Trends around application delivery, data control, resource utilization, and even end-user performance are all driving more users toward cloud and service providers.
Consider this: according to Gartner, the worldwide public cloud services market is projected to grow 16.5 percent in 2016 to total $204 billion, up from $175 billion in 2015. The highest growth will come from cloud system infrastructure services (infrastructure as a service, or IaaS), which is projected to grow 38.4 percent in 2016.
“The market for public cloud services is continuing to demonstrate high rates of growth across all markets and Gartner expects this to continue through 2017,” said Sid Nag, research director at Gartner. “This strong growth continues to reflect a shift away from legacy IT services to cloud-based services, due to the increased trend of organizations pursuing a digital business strategy.”
The biggest reason for this growth is the clear flexibility that you gain from working with a cloud and service provider. Why is this the case? Because cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered “as a service” using Internet technologies.
This is where the modern service provider and the Open Compute Project (OCP) come in.
With all of these new demands around new kinds of services and delivery methodologies – service providers simply needed a new way to deliver and control resources. This means building an architecture capably of rapid scalability and follows efficient economies of scale for a business. To accomplish this, there needed to be a revolutionary new way to think about the service provider data center and the architecture that defines it. This kind of architecture would be built around open standards and open infrastructure designs.
With that in mind, we introduce three very important topics.
Understanding the Open Compute Project (OCP)
Founded in 2011, the Open Compute Project has been gaining attention from more and more organizations. So, who should be considering the Open Compute platform, and for what applications? The promise of lower cost and open standards for IT servers and other hardware seems like a worthwhile endeavor, one that should benefit all users of IT hardware while improving the energy efficiency of the entire data center ecosystem. The open source concept has proven itself successful for software, as witnessed by the widespread adoption and acceptance of Linux, despite early rejection from enterprise organizations.
The goal of Open Compute?
To develop and share designs for “vanity free” IT hardware that is energy efficient and less expensive.
OCP servers and other OCP hardware in development (such as storage and networking) are primarily designed for a single lowest common denominator: lowest cost and basic, generic functions that serve a specific purpose. One OCP design philosophy is a “vanity free,” no-frills design, which starts without an OEM-branded faceplate. In fact, the original OCP server had no faceplate at all. It used only the minimal components necessary for a dedicated function, such as running a massive web server farm (the server had no video chips or connectors).
Cloud Providers Are Now Using Servers Based on OCP Design
Open Compute servers are already generating a lot of interest and industry buzz. Imagine being able to architect completely optimized server technologies that deploy faster, are less expensive, and have just the right features you need for scale and efficiency.
This is where the new white box and OCP family of servers comes in. With an absolute focus on the key challenges and requirements of the industry’s fastest-growing segment, the service provider, these types of servers take the OCP conversation to a new level. Their level of customization allows you to design and deliver everything from stock offerings to custom systems, and even component-level designs. You also get system integration and data center support. The ultimate idea is to create economies of scale that drive TCO lower and ROI higher for organizations where “IT is the business.”
Clear Demand for OCP and “Vanity-Free” Server Architecture
According to IDC, service providers will continue to break new ground in search of both performance gains and cost reductions as they expand their cloud architecture implementations. Additionally, the hosting-as-a-service model will continue to transition away from traditional models toward cloud-based delivery mechanisms like infrastructure as a service, spurring hyperscale growth in servers used for hosting (a 15 to 20 percent CAGR from 2013 to 2018).
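To put that projection in perspective, here is a quick compounding check. The growth rates and the 2013 to 2018 window are IDC’s; the arithmetic is simply a worked illustration of what a compound annual growth rate (CAGR) in that band implies:

\[
\text{growth factor} = (1 + \text{CAGR})^{5}, \qquad 1.15^{5} \approx 2.0, \qquad 1.20^{5} \approx 2.5
\]

In other words, at the low end of the range the installed base of hosting servers roughly doubles over the five-year window, and at the high end it grows roughly two and a half times.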
At Data Center Knowledge, we conducted a survey, sponsored by HP, to find out what types of workloads are being deployed, what service providers value, and where the latest server technology can make a direct impact. The results, from about 200 respondents, showed us what the modern data center and service provider really need from a server architecture. They also showed clear demand for servers capable of more performance while carrying fewer “bells and whistles.”
- 51% of respondents said that they would rather have a server farm with critical hardware components and fewer software add-ons.
- When asked, “How much do server (hardware and software) add-on features impact your purchasing decision? (Easy-to-access drive holders, memory optimizations, easy upgradability, software management, etc.)”, 73% of the survey respondents indicated that this was either important or very important to them.
Here’s the reality: there is big industry adoption around OCP as well, and Facebook is one of those organizations. According to Facebook, a small team of their engineers spent the past two years tackling a big challenge: how to scale the company’s computing infrastructure in the most efficient and economical way possible.
The team first designed the data center in Palo Alto, before deploying it in Prineville, Oregon. The project resulted in Facebook building their own custom-designed servers, power supplies, server racks, and battery backup systems.
What did this mean for Facebook and their new data center?
- Using a 480-volt electrical distribution system to reduce energy loss.
- Removing anything in their servers that didn’t contribute to efficiency.
- Reusing hot-aisle air in winter to heat both the offices and the outside air flowing into the data center.
- Eliminating the need for a central uninterruptible power supply.
Ultimately, this design produced an environment capable of consuming 38 percent less energy to do the same work as Facebook’s existing facilities, while costing 24 percent less.
This is where you, as a cloud builder, service provider, or modern large enterprise, can really feel the impact. The concept of servers without all the add-ons, built around OCP design standards, has sparked interest in the market because this type of server architecture allows administrators to scale out with only the resources they need. This is why we are seeing vanity-free server solutions emerge as the service provider business model evolves.
Source: TheWHIR