CSC And IBM Expand Strategic Alliance


CSC and IBM have announced a collaboration in which IBM will provide its cloud managed services for z Systems — IBM Cloud for z — and associated mainframe hardware, software, monitoring, and governance support to CSC clients who are moving to the cloud and want a more secure, scalable, flexible information technology infrastructure at significantly reduced operational costs.

The expanded alliance further advances CSC’s vision of the “Service-Enabled Enterprise” and IBM’s “as-a-service” strategy, both designed to increase client choice and innovation in adopting emerging technologies. The as-a-service strategy provides consumption-based pricing for the IBM z Systems environment to give clients greater capital investment flexibility.

“Many clients are looking for ways to make historically fixed costs more variable as they migrate from legacy platforms to modernized applications and to a cloud-enabled infrastructure,” said Stephen Hilton, CSC’s executive vice president & general manager, Global Infrastructure Services. “As a solution-independent integrator, CSC is uniquely positioned to leverage transformative services like the IBM Cloud for z to benefit our clients. This new cloud-based service expands our cloud offering and is a natural evolution of our long-standing alliance with IBM. We are very excited to be providing our clients greater choice and capital flexibility when transforming legacy computing to a cloud-like service, and delighted that our close partnership with IBM has enabled us to bring this transformative offering to market.”

The agreement builds on an announcement earlier this year that the IBM Cloud is available to CSC clients as an option through CSC’s next generation IT services. Integration of the CSC Agility Platform™ with the IBM Cloud will enable clients across multiple industries — including healthcare, insurance and finance — to quickly leverage the benefits of the hybrid cloud and deploy some 10,000 applications that deliver rapid proof of value.

“IBM’s ability to bring its world-class strengths in customer service and information technology operations will enable CSC to maximize mainframe efficiencies now and in the future via the IBM Cloud for z,” said Philip Guido, GM of IBM Services in North America. “IBM’s leadership in next-generation cloud, security, big data, management processes and support aligns with CSC’s objectives of leading client digital transformation needs and providing long-term solutions to meet their ever-evolving requirements.”

IBM’s Cloud for z support of CSC will:

  • Enable the right level of computing resources and dynamically adjust capability levels as client needs change
  • Provide a predictable pricing model offering increased flexibility
  • Offer high levels of secure computing, coupled with configurable options of high availability that can help reduce business risk
  • Reduce total cost of computing and IT operations by using a shared infrastructure for software, server, disk, and tape needs

Resiliency services that will be provided to CSC include Disaster Recovery as a Service (DRaaS), which provides replication, backup and recovery of critical infrastructure, data, and systems to the cloud to enable a rapid recovery of applications and data.
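
To make the replication pattern concrete, here is a minimal sketch of the kind of job a DRaaS offering automates: copying timestamped backups into cloud object storage so they can be restored elsewhere. This is purely illustrative, using Python with boto3 against a hypothetical bucket; the actual IBM/CSC service covers infrastructure and system state, not just files.

```python
import datetime
import pathlib

import boto3  # AWS SDK for Python, used here only as a generic object-store client

def replicate_backup(local_dir: str, bucket: str, prefix: str = "dr-backups") -> None:
    """Copy every file under local_dir into cloud object storage,
    timestamped so point-in-time restores remain possible."""
    s3 = boto3.client("s3")
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    for path in pathlib.Path(local_dir).rglob("*"):
        if path.is_file():
            key = f"{prefix}/{stamp}/{path.relative_to(local_dir)}"
            s3.upload_file(str(path), bucket, key)

# replicate_backup("/var/backups/app", "my-dr-bucket")  # names are hypothetical
```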

Source: CloudStrategyMag

New York Philharmonic Selects Webair For Colocation Services


Webair has announced that the New York Philharmonic, the oldest symphony orchestra in the United States, is now a customer. Webair will provide the New York Philharmonic with managed cloud, cloud storage, backups, colocation, and network connectivity. The organization has also deployed a direct network tie-in from Webair’s NY1 data center facility on Long Island to its New York City premises, enabling it to leverage all of these managed services quickly, securely and with low latency. 

“The New York Philharmonic was looking for a provider with a state-of-the-art network architecture to service its Internet infrastructure requirements, and in selecting Webair gained a strategic end-to-end cloud and services solutions partner that exceeded our diverse technical and business needs,” stated Terri-Ann Feindt, director of information technology at the New York Philharmonic. “Relocating a colocation environment is a challenging undertaking under any circumstances. With only 90 days’ notice, Webair quickly developed a comprehensive plan in collaboration with my technical team to ensure a seamless migration that met the business requirement of no downtime and included a failsafe contingency plan.”

Webair will also host the New York Philharmonic Leon Levy Digital Archives, which serves as a repository for nearly 175 years of Philharmonic history, and is the oldest and most comprehensive collection of any symphony orchestra. The ever-expanding Leon Levy Digital Archives currently makes available to the public more than 1.3 million pages, including printed programs, marked conducting scores, business documents, and photographs, dating back to 1842. Upon its completion in 2018, the Digital Archives will contain more than 3 million pages — including correspondence, marked scores and parts, contracts, and minutes from Board of Directors meetings — as well as all public documents from 1970 through today.

“The New York Philharmonic is a world-class cultural, educational and entertainment institution with a unique and demanding set of business, technological and connectivity requirements,” comments Michael Orza, chief executive officer, Webair. “Within a tight timeframe, Webair was able to meet the organization’s demanding uptime, agility, security and scalability requirements. Reinforced by our high-touch approach to providing customized solutions from NY1, our flagship data center facility, we were able to deliver connectivity and a breadth of business-critical services to satisfy its growing and expanding digital environment.”

Source: CloudStrategyMag

Spark-powered Splice Machine goes open source


Splice Machine, the relational SQL database system that uses Hadoop and Spark to provide high-speed results, is now available in an open source edition.

Version 2.0 of Splice Machine added Spark to speed up OLAP-style workloads while still processing conventional OLTP workloads with HBase. The open source version, distributed under the Apache 2.0 license, supplies both engines and most of Splice Machine’s other features, including Apache Kafka streaming support. However, it omits a few enterprise-level options like encryption, Kerberos support, column-level access control, and backup/restore functionality.
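
Because both engines sit behind one ANSI SQL interface, the same connection can serve short transactional lookups and heavy aggregations. As a rough sketch of what that looks like from Python over JDBC: the `accounts` table is hypothetical, and the driver class, port, credentials, and jar path follow Splice Machine’s documented defaults at the time, so treat them as assumptions.

```python
import jaydebeapi  # generic Python-to-JDBC bridge

# Driver class, URL, credentials, and jar path are assumptions based on
# Splice Machine's documented defaults; adjust for your deployment.
conn = jaydebeapi.connect(
    "com.splicemachine.db.jdbc.ClientDriver",
    "jdbc:splice://localhost:1527/splicedb",
    ["splice", "admin"],
    "/path/to/db-client.jar",
)
cur = conn.cursor()

# OLTP-style point lookup: short, indexed reads are served by the HBase engine.
cur.execute("SELECT balance FROM accounts WHERE id = ?", (42,))
print(cur.fetchone())

# OLAP-style aggregation over the same table: large scans run on Spark.
cur.execute("SELECT region, SUM(balance) FROM accounts GROUP BY region")
print(cur.fetchall())

cur.close()
conn.close()
```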

Splice Machine is going open source for two reasons. First, to get the database into the hands of developers, letting them migrate data to it, test it on their own hardware or in the cloud, then upgrade to the full version if it fits the bill. Motive No. 2, as with any open source project, is to let those developers contribute back if they’re so inclined.

The first motive is more relevant here. Originally, Splice Machine was offered in a free-to-use edition minus some enterprise features. The open source version provides a less ambiguous way to offer a freebie, as there’s less fear a user will casually violate the license agreement by enabling the wrong item (see: Oracle). Going open source also helps deflect criticism of Splice Machine as a proprietary black box, which InfoWorld’s Andy Oliver hinted at in his original 2014 discussion of the database.

School's Out for Summer, But HostingCon Global 2016 Serves Up Ample Learning Opportunities


The countdown to HostingCon Global 2016 in New Orleans is on, with less than a week to go before the hosting and cloud industry touches down at the Ernest N. Morial Convention Center. Education is one of the defining aspects of the HostingCon conferences, and with so many excellent sessions and opportunities for learning, we wanted to spend the next week offering a preview for our readers who are attending HostingCon.

In advance of the conference, The WHIR talked to several HostingCon speakers to give you a taste of some of the sessions and what you may want to mark on your schedules. To kick things off we talked to our colleague Cheryl Kemp, Director of Community and Conference Content for Penton Technology and HostingCon Chair, to give us a sense of what’s new this year at HostingCon.

SEE ALSO: Making My Way to NOLA: HostingCon Global 2016

HostingCon Global starts on Sunday with in-depth educational workshops starting at 1 p.m. and ending just in time for the second annual HostingCon Game Show from 5 p.m. to 6:30 p.m. This year’s edition is hosted by cPanel chief business officer Aaron Phillips, and should be a fun way to start a busy week.

“I’m really excited about the program this year,” Cheryl says. “We’ve always had this big cloud ecosystem that includes a lot of infrastructure and peripheral software providers, but this year we put a bigger focus on offering some education that was specific to them; we’re bringing in a lot of new speakers this year and people who are involved in different aspects of the industry.”

This year the educational sessions offer a lot more opportunities for networking, based on attendee feedback, Cheryl tells us.

“Past attendees gave us feedback that they loved the networking aspect of HostingCon, but that they wished there was some more structure around it,” she says.

LOOKING FOR MORE? RSVP TODAY TO WHIR NEW ORLEANS NETWORKING EVENT

To deliver on that, this year HostingCon is offering speed roundtable sessions on Monday afternoon, a format introduced last year. With it, groups of attendees have 20 minutes of dedicated time with an expert in a small-group setting before they move on to the next expert. The small group allows attendees to speak up and ask questions. “It’s a way to incorporate networking into the education sessions,” Cheryl says.

On Wednesday afternoon HostingCon will offer new industry workgroups that will allow attendees to work in small groups discussing industry trends or problems and present to the larger group at the end. These workshops will allow attendees to work through challenging topics together and get to know their peers in a productive format.

Asking Cheryl to pick her favorite HostingCon session is like asking a parent to pick their favorite child, but she did tell us that she is really looking forward to the session in the Exhibit Hall on Tuesday with Chris Tarbell, who, before becoming Managing Director of FTI Consulting’s Global Risk & Investigations Practice, was an FBI special agent who took on Anonymous and Silk Road. It certainly sounds like a session that’s not to be missed.

Don’t forget that you can add sessions to your schedule on the HostingCon website by selecting the star next to the session name.

What are some of the HostingCon sessions you are most looking forward to? Let us know in the comments and follow along this week as we offer more previews of educational sessions at HostingCon. Please email nicole.henderson [at] penton.com if you want to meet up with the WHIR at the conference. I love to connect with our readers!

Source: TheWHIR

How to Create a Business Resiliency Strategy Using Data Center Partners and Cloud


Increased dependency on the data center also means that outages and downtime are much costlier. According to a new study by Ponemon Institute and Emerson Network Power, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (or a 38 percent net change).

In its study of 63 data center environments, the research team found that:

  • Downtime costs for the most data center-dependent businesses are rising faster than average.
  • Maximum downtime costs increased 32 percent since 2013 and 81 percent since 2010.
  • Maximum downtime costs for 2016 are $2,409,991.

With this in mind, let’s start with two important points.

  1. What is Business Resiliency? Business resiliency and its associated services revolve around the protection of your data. A proactive resiliency program would include high availability (HA), security, backup, and anything that affects the confidentiality of data or compromises compliance. The idea is to unify the entire resiliency strategy around all aspects of data protection.
  2. What is the business objective? To create a proactive approach in which the business can handle disruptions while still maintaining true resilience.

Today, organizations are leveraging data center providers and cloud services for their business resiliency and continuity planning. To create a good plan, there are several key steps that must be understood. Remember, business resiliency means protecting the entire business; even if some business units don’t need to be brought back immediately, there has to be a good plan around it all. In creating a resiliency strategy, start with these two concepts:

  • Use your BIA. Easily one of the most important steps in designing a resiliency plan, and something that helps you better understand your business. A business impact analysis (BIA) outlines specific functions for each business unit, application, workload, and user. Most of all, it identifies which applications and workloads are critical and how quickly they need to be brought back up. With such a document, an organization can eliminate numerous variables in selecting a partner capable of meeting its business resiliency needs, and can align specific resiliency services to applications, users, and entire business units (see the sketch after this list).
  • Understand Business Resiliency Components and Data Center Offerings. Prior to moving to a data center partner or cloud provider, establish your resiliency requirements, recovery time objectives, and future needs. Much of this can be accomplished with a BIA, but planning for the future will involve conversations with the IT team and other business stakeholders. Once those needs are established, communicate them to the hosting provider. The idea here is to align the thought process to ensure a streamlined DR environment.
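
As a toy illustration of how a BIA can drive these conversations, here is a sketch (every name and number is hypothetical) that maps applications to recovery tiers, where each tier carries a recovery time objective (RTO), a recovery point objective (RPO), and the matching partner service:

```python
# Hypothetical BIA excerpt expressed as data: recovery tiers carry the
# RTO/RPO targets (hours) and the partner service that satisfies them.
TIERS = {
    "tier-1": {"rto_h": 1,  "rpo_h": 0.25, "service": "active-active, multi-data center"},
    "tier-2": {"rto_h": 4,  "rpo_h": 1,    "service": "warm standby + cloud DRaaS"},
    "tier-3": {"rto_h": 24, "rpo_h": 12,   "service": "cloud backup and restore"},
}

APPLICATIONS = {  # application names are illustrative
    "order-processing": "tier-1",
    "payroll": "tier-2",
    "intranet-wiki": "tier-3",
}

for app, tier in APPLICATIONS.items():
    t = TIERS[tier]
    print(f"{app}: restore within {t['rto_h']}h, lose at most "
          f"{t['rpo_h']}h of data -> {t['service']}")
```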

Incorporating Resiliency Management Tools and Services

One of the most important management capabilities in any environment is clear visibility into your data center ecosystem. This means using not only native tools but also those provided by the data center partner. Management and monitoring tools are central to building business resiliency, and so is a good view into the physical infrastructure of the data center environment.

  • Uptime, Status Reports, and Logs. An aggregate report helps administrators better understand how their environment is performing, and managers can make efficiency modifications based on the status reports provided by a data center’s reporting system. A good log management system is also critical: it creates an effective paper trail and improves infrastructure efficiency. Log monitoring is one of the first steps in designing a proactive, rather than reactive, environment; logs often reveal an issue before it becomes a major problem, letting administrators act on alerts at a steady pace instead of reacting to an emergency (a minimal sketch follows this list).
  • Mitigating Risk and Protecting Data. Work with a comprehensive suite of standardized, customizable tiered offerings that can support all levels of business requirements. Good data center partners deliver a broad spectrum of resiliency solutions, ranging from multi-data center designs to cost-effective cloud-enabled DRaaS. You can also use professional services to assess, design, and implement disaster recovery environments, as well as managed services to help ensure business continuity in the event of a disruption. Partner offerings can be tailored to specific customer needs, and remain flexible, agile, and scalable to meet evolving requirements.
  • Meeting Regulations and Staying Compliant. Data center partners can provide structured DR and security methodologies, processes, procedures, and operating models on which to build your resiliency programs. Leading data center models are founded on industry best practices, methodologies, and frameworks including Lean Six Sigma, ITIL V3, ISO 27001, ISO 22301, and BS 25999. Data center partner consultants can also help organizations meet FISMA, HIPAA, FFIEC, FDIC, PCI, and SOX compliance requirements, and a partner’s DR audit and testing solutions help organizations meet corporate and regulatory audit requirements by demonstrating the maturity of a business resiliency program.
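
Here is the minimal sketch promised above: a toy proactive log check that counts errors per component and raises an alert before a threshold breach turns into an outage. The log format, path, and threshold are all assumptions for illustration.

```python
import re
from collections import Counter

THRESHOLD = 5  # errors per component before alerting; tune to your environment
PATTERN = re.compile(r"ERROR\s+\[(?P<component>[\w.-]+)\]")  # assumed log format

def scan(log_path: str) -> None:
    """Count ERROR lines per component and flag anything over the threshold."""
    errors = Counter()
    with open(log_path) as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                errors[match.group("component")] += 1
    for component, count in errors.items():
        if count > THRESHOLD:
            print(f"ALERT: {component} logged {count} errors -- investigate")

# scan("/var/log/app/server.log")  # path is hypothetical
```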

The process of selecting the right data center partner should include planning around contract creation and ensuring that the required management and business resiliency tools are in place. Remember, your data center is an important extension of your organization and must be properly managed and protected. Good data center providers will often offer tools for direct visibility into the infrastructure, meaning engineers have a clear understanding of how their racks are being utilized, powered, and monitored. These types of decisions make an infrastructure more efficient and much easier to manage in the long term.

Source: TheWHIR

Partner Compatibility Strategies for MSPs, CSPs and Hosting and Other Service Providers: Part Two


Best Practices for Enabling Your Go-to-Market Model – Part 2

Part one of this series explored the overall vendor mix and compatibility strategies in building an ecosystem, what that ecosystem might look like, and some of the challenges in building vendor/partner relationships. Although there is no guaranteed formula for creating and supporting these ecosystems, there are best practices for managing them. Building upon these best practices will help you manage the go-to-market strategy with the ecosystem and communicate with it successfully.

Once a few partners are chosen and the ecosystem is growing, the next step is defining a set of clear expectations for both vendor and partner. This is a critical step that is often overlooked. A partnership needs care and feeding, just like any other relationship. Setting things up clearly from the start is critical for success.

Defining the priorities and goals in the vendor/partner relationship establishes a framework to build upon. Partners should agree on what the priorities and goals are and who is responsible for them.

A good, basic joint business plan will bring all parties together to agree on priorities, goals, responsibilities and time frames.

It does not have to be anything fancy or lengthy, nor does it require weeks of effort. But it does need to document what the partnership is based on and why it’s being pursued.

Some critical elements of a business plan for a vendor/partner or partner/partner relationship are:

Clear, Objective, Unified Goals

This is the first step in creating a plan together. Each goal in the plan needs to clearly state what it is and how it benefits both parties. It is at this stage that partners can clearly define their value propositions to each other; if the value propositions and joint goals align, the rest of the plan will come together far more easily.

Stakeholders and Responsible Parties

Once the goals of the relationship are defined, stakeholders and responsible parties for each objective along the way are mapped. More than likely, this will be a combination of partner and vendor teams working to bring the goals over the finish line.

At this stage, it’s extremely important to define ownership: who is accountable for what.

Timeframes

Next up would be a best-case timeline. As with all goals, timelines toward success are needed. Drawing a line in the virtual sand helps prioritize actions and base other activities around the completed outcomes. Be cautious not to be too “set in stone” with them, though. Sometimes it’s necessary to reset expectations due to things outside the partners’ control. Revisit the plan and adjust as necessary.

Milestones

As with any good plan, milestones or markers of progress should be built into the planning and review process. These are the “mini goals” within a larger goal framework. Milestones ensure that the plan is on track and that the teams are moving forward on specific goals. Oftentimes, they are also good markers for other activities that can bolster goal-related efforts, like marketing or training.

Review Cadence

When the business plan is initially laid out, the partners should establish the cadence for checking in with one another to ensure everything stays on track. Best practice is a formalized quarterly business review (QBR) of some kind, quarterly or every other month, along with a weekly or bi-monthly status check covering documented accomplishments, next steps, and open issues.

The next critical step in any ecosystem is setting up the communication channels. Research shows that communication is one of the biggest stumbling blocks in partner/vendor relationships. It seems that we often struggle to get and keep this aspect working. Partners can establish guidelines for communication models that work most of the time.

People have different ways they take in information. Some like to read, some like to watch, some like to do. And in today’s world, there are many different ways to consume information. If we want to get communication to work, then we need to commit to a multi-channel, multi-format process for communicating with the teams.

Some of the most important factors to consider:

Be Consistent

The best way for us to establish communication channels with vendors and partners is to become consistent and predictable in the time and type of communications we push out to our ecosystem.

Communication channels need to operate at regular, predictable times and in consistent formats. For example, if communication is through a monthly newsletter, send it on the same day every month. This pattern sets an expectation for how and when partners and vendors will receive information.

Format

As many studies have shown, people need to see, hear, and talk about something seven times before it is committed to memory. Studies also show that people learn in a variety of ways, so we should consider at least three different delivery mechanisms for messages. As a best-case scenario, the top three should be email, video, and social. These three formats have been shown to be the most accepted and successful ways to communicate with vendors and partners.

Be Relevant

The biggest difference between successful and unsuccessful communication strategies comes down to the relevance of the material. The more relevant the message is to the audience receiving it, the more likely it is to be well received. Consider a newsletter packed with marketing information: a developer or other technical person is not going to read it because it is of no value to them. Likewise, a newsletter with only technical content would be glossed over by a salesperson. The closer you can align your content with the reader, the more effective your communications become.

Enable Communication at All Levels

Establishing communications using the best practices noted here will hit most of the communications channels we need to address: sales, technical, marketing, development, and executive. Depending on the vendor chosen and the strategic nature of the partnership, aligning at the executive level is a critical best practice often overlooked.

Again, like the business planning exercise, this does not have to be a big production; it is simply a connection between the executive levels of each organization to ensure they are aligned on each company’s strategic vision. As much time as we spend making sure product, sales, technical, and marketing teams are in sync, it can all be washed away if the strategic vision of either the partner or the vendor changes. Making sure communication is enabled at all levels of the organization is a best practice for success.

In conclusion, building out a partner ecosystem can be a powerful differentiator for your company, for how the market evaluates it, and as an enabler of your clients’ success. If the necessary time is taken to evaluate all aspects of a potential relationship, and then to manage the relationship with these industry best practices, your company will be on its way to a successful, vibrant ecosystem that positively impacts top and bottom lines.

Join me at HostingCon July 24-27 to explore partnerships in even more depth.

Source: TheWHIR

Data Center Customers Want More Clean Energy Options


Brought to you by Data Center Knowledge

Today, renewable energy as a core part of a company’s data center strategy makes more sense than ever, and not only because it looks good as part of a corporate sustainability strategy. The price of renewable energy has come down enough over the last several years to be competitive with energy generated by burning coal or natural gas, but there’s another business advantage to the way most large-scale renewable energy purchase deals are structured today.

Called Power Purchase Agreements (PPAs), these deals secure a fixed energy price for the buyer over long periods, often decades, giving the buyer an effective way to hedge against energy-market volatility. A 20-year PPA with a big wind-farm developer insures against sticker shock for a long time, which, for any major data center operator, for whom energy is one of the biggest operating costs, is a valuable proposition.
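
A back-of-the-envelope sketch shows why the fixed price matters even if the PPA rate starts above today’s market rate. Every number below is hypothetical: a 10 MW continuous load, a $45/MWh PPA, and a $40/MWh market price escalating 4% a year.

```python
# Hypothetical comparison of a fixed 20-year PPA against an escalating market rate.
load_mwh_per_year = 10 * 24 * 365            # 10 MW around the clock = 87,600 MWh/yr
ppa_rate = 45.0                              # $/MWh, fixed for the full term
market_rate, escalation = 40.0, 0.04         # $/MWh today, rising 4% per year

ppa_cost = load_mwh_per_year * ppa_rate * 20
market_cost = sum(load_mwh_per_year * market_rate * (1 + escalation) ** yr
                  for yr in range(20))
print(f"20-year PPA cost:    ${ppa_cost:,.0f}")     # ~$78.8M
print(f"20-year market cost: ${market_cost:,.0f}")  # ~$104.3M under these assumptions
```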

SEE ALSO: Report: US No Longer Lowest-Risk Country for Data Centers

Internet and cloud services giants, which operate some of the world’s largest data centers, are well aware of this, and so is the Pentagon. The US military is second only to Google in the amount of renewable energy generation capacity it has secured through long-term PPAs, according to a recent Bloomberg report.

Data Center Providers Enter Clean-Energy Space

Google’s giant peers, including Amazon Web Services, Facebook, Microsoft, and Apple, are also on Bloomberg’s list of 20 institutions that consume the most renewable energy through such agreements, and so are numerous US corporations in other lines of business, such as Wal-Mart Stores, Dow Chemical, and Target. There are two names on the list, however, that wouldn’t have ended up on it had Bloomberg compiled it before last year: Equinix and Switch SuperNAP.

Both are data center service providers: companies that provide data center space and power as a service to other companies, probably including all of the other organizations on the list. The main reason companies like Equinix and Switch wouldn’t have made the list in 2014 is that there wasn’t a strong enough business case for them to invest in renewable energy for their data centers. There was little interest from customers in data center services powered by renewable energy.

While still true to a large extent, this is changing. Some of the biggest and most coveted users of data center services are more interested than ever in powering as much of their infrastructure with renewable energy as possible, and being able to offer this service will continue growing in importance as a competitive strategy for data center providers.

Just last week, Digital Realty Trust, also one of the world’s largest data center providers, announced it had secured a wind power purchase agreement that would cover the energy consumption of all of its colocation data centers in the US.

More Interest from Data Center Customers

According to a recent survey of consumers of retail colocation and wholesale data center services by Data Center Knowledge, 70 percent of these users consider sustainability issues when selecting data center providers. About one-third of the ones that do said it was very important that their data center providers power their facilities with renewable energy, and 15 percent said it was critical.

Survey respondents are about equally split between wholesale data center and retail colocation users from companies of various sizes in a variety of industry verticals, with data center requirements ranging from less than 10 kW to over 100 MW. More than half are directly involved in data center selection within their organizations.

Most respondents (70 percent) said their interest in data centers powered by renewable energy would increase over the next five years. More than 60 percent have an official sustainability policy, while 25 percent are considering developing one within the next 18 months.

Download results of the Data Center Knowledge survey in full: Renewable Energy and Data Center Services in 2016

While competitive with fossil-fuel-based energy, renewable energy still often comes at a premium. The industry isn’t yet at the level of sophistication where a customer can choose between data center services powered by renewables as an option – and pay accordingly – or regular grid energy that’s theoretically cheaper. Even utilities, save for a few exceptions, don’t have a separate rate structure for renewable energy.

The options for bringing renewable energy directly to data centers today are extremely limited. Like internet giants, Equinix and Switch have committed to buying an amount of renewable energy that’s equivalent to the amount of regular grid energy their data centers in North America consume, but it doesn’t mean all that energy will go directly to their facilities. This is an effective way to bring more renewable generation capacity online, but it does little to reduce data center reliance on whatever fuel mix supplies the grids the facilities are on for both existing and future demand.

If, however, more utilities started offering renewable energy as a separate product, with its own rate – as Duke Energy has done in North Carolina after being lobbied by Google – data center providers would be able to offer the option to their customers, and it would probably be a popular option, even if it meant paying a premium. According to our survey, close to one-quarter of data center customers would “probably” be willing to pay a premium for such a service. Eight percent said they would “definitely” be willing to do so, and 37 percent said “possibly.”

At no additional cost, however, 40 percent said they would “definitely” be more interested in using data center services powered by renewable energy.

As the survey shows, interest in renewable energy among users of third-party data center services is on the rise, and if more utilities and data center providers can find effective ways to offer clean energy to their end users, they will find that there is not only an appetite for it in the market, but also that the appetite is growing.

Download results of the Data Center Knowledge survey in full: Renewable Energy and Data Center Services in 2016

Source: TheWHIR

OpenStack Fuels Surge of Regional Rivals to Top Cloud Providers


Brought to you by Data Center Knowledge

As the handful of top cloud providers expand around the world, battling it out in as many markets as they can get to, they are also increasingly competing with smaller regional players in addition to each other. One of the biggest reasons for this surge in regional cloud players is OpenStack.

The family of open source cloud infrastructure software has lowered the barrier to entry into the cloud service provider market. Combined with the rise of local regulatory and data sovereignty concerns and demand for alternatives to the top cloud providers, OpenStack has fueled the emergence of numerous regional cloud providers around the world over the last two years, according to the market research firm IDC.

Most of these regional players are using OpenStack, IDC said in a statement this week. The analysts expect growth in regional cloud providers to continue.

The announcement focuses on one major sector of the cloud market: Infrastructure-as-a-Service. Amazon Web Services continues to dominate it, “followed by a long tail of much smaller service providers.”

The firm forecasts the size of the global IaaS market to more than triple between 2015 and 2020, going from $12.6 billion last year to $43.6 billion four years from now.

This expansion is poised to ensure continued growth in demand for data center capacity around the world, as both top cloud providers and smaller regional players build out their infrastructure to support more and more users.

Read more: How Long Will the Cloud Data Center Land Grab Last?

Unlike the early years of cloud, when the majority of the growth was driven by born-on-the-web startups and individual developers, the next phase of growth will be fueled to a large extent by enterprises.

Almost two-thirds of respondents to a recent IDC survey of more than 6,000 IT organizations said they were already using public cloud IaaS or were planning to start using it by the end of this year.

Enterprises are increasingly looking to public cloud services to help them make their businesses more agile, Deepak Mohan, a research director at IDC, said in a statement:

“This is bringing about a shift in IT infrastructure spending, with implications for the incumbent leaders in enterprise infrastructure technologies. Growth of public cloud IaaS has also created new service opportunities around adoption and usage of public cloud resources. With changes at the infrastructure, architectural, and operational layers, public cloud IaaS is slowly transforming the enterprise IT value chain.”

See also: Top Cloud Providers Made $11B on IaaS in 2015, but It’s Only the Beginning

Source: TheWHIR

Friday's Five: A Handful of Tech Headlines You May Have Missed, July 15


As we head into the weekend there’s that nagging feeling that you may have missed something. You’re busy, and it’s hard to keep up with every piece of news that is important to your business. This weekly column aims to wrap up the news we didn’t get to this week (in no particular order), and that may have slipped under your radar, too. If you’ve got something to add, please chime in below in the comments section or on social media. We want to hear from you.

This week was Microsoft’s Worldwide Partner Conference in Toronto, and I was in attendance to bring you the latest news and coverage. Here’s what you may have missed:

  1. Microsoft Wins Big in Fight for User Privacy as Irish Search Warrant Found Invalid: In a huge victory for Microsoft, a US appeals court found a search warrant for customer data stored in Ireland invalid.
  2. Microsoft’s Brad Smith on Building a Cloud for Good, and How LinkedIn is Part of the Plan: What exactly will LinkedIn bring to Microsoft? Brad Smith gave us a bit of a hint in his keynote this week.
  3. Facebook Hits Like Button on Office 365, and Other Microsoft Cloud News from WPC 2016: Microsoft is relying on marquee customers like Facebook to convince others that Microsoft is not only “cool again”, but is also the biggest and best cloud for the enterprise user.
  4. Microsoft WPC 2016: Day 1 Keynote with Satya Nadella: Microsoft CEO Satya Nadella kicked off Microsoft’s Worldwide Partner Conference with some announcements around new partnerships.
  5. Microsoft Aims to Launch Azure Stack by Mid-2017: Dell, HPE and Lenovo will deliver pre-configured Azure Stack integrated systems to help speed implementation of Azure in data centers.

Elsewhere, here’s what we’re reading:

Source: TheWHIR

Report: Enterprise Adoption Driving Strong Growth Of Public Cloud IaaS


Public cloud infrastructure as a service (IaaS) offerings are rapidly gaining acceptance among enterprises as a viable alternative to on-premises hardware for IT infrastructure. A recent survey of over 6,000 IT organizations found that nearly two-thirds of the respondents are either already using or planning to use public cloud IaaS by the end of 2016. International Data Corporation (IDC) forecasts public cloud IaaS revenues to more than triple, from $12.6 billion in 2015 to $43.6 billion in 2020, with a compound annual growth rate (CAGR) of 28.2% over the five-year forecast period.
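
Those endpoints are consistent with the stated growth rate; a quick check recovers IDC’s 28.2% CAGR from the 2015 and 2020 revenue figures:

```python
# Recompute the CAGR from IDC's endpoints: $12.6B (2015) to $43.6B (2020).
start, end, years = 12.6, 43.6, 5
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # -> 28.2%, matching the figure IDC reports
```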

“Public cloud services are increasingly being seen as an enabler of business agility and speed,” said Deepak Mohan, research director, Public Cloud Storage and Infrastructure at IDC. “This is bringing about a shift in IT infrastructure spending, with implications for the incumbent leaders in enterprise infrastructure technologies. Growth of public cloud IaaS has also created new service opportunities around adoption and usage of public cloud resources. With changes at the infrastructure, architectural, and operational layers, public cloud IaaS is slowly transforming the enterprise IT value chain.”

The public cloud IaaS market grew 51% in 2015. IDC expects this high growth to continue through 2016 and 2017 with a CAGR of more than 41%. The growth rate is expected to slow after 2017 as enterprises shift from cloud exploration to cloud optimization. In addition, alternatives such as managed private cloud will grow in maturity and availability, providing IT organizations more options as they plan their infrastructure transformation.

For many enterprises, a hybrid infrastructure mixing existing IT infrastructure with cloud infrastructure represents the optimal path to public cloud IaaS adoption. In fact, hybrid cloud infrastructure is already a common pattern at several large enterprises and IDC predicts that 80% of IT organizations will be committed to hybrid architectures by 2018.

From a worldwide perspective, a number of regional public cloud services have emerged in the last two years. A majority of these are based on OpenStack, which has lowered the barrier for creation and set up of new cloud services. IDC expects to see continued growth in regional public cloud service providers, driven by regulatory and data sovereignty concerns, and increasing demand for local alternatives to the global public cloud service providers.

The public cloud IaaS market is currently dominated by a few large service providers, led by Amazon, followed by a long tail of much smaller service providers. In 2015, 56% of the revenue and 59% of the absolute growth went to the top 10 IaaS vendors. The dominance of the leading providers is expected to continue throughout the forecast period, as economies of scale and continued investment drive the cycle of adoption and growth.

The IDC report, Worldwide Public Cloud Infrastructure as a Service Forecast, 2016-2020 (IDC #US41556916), provides a detailed forecast for the public cloud IaaS market. Revenues and growth are provided for the storage and compute segments and for three geographic regions (the Americas, Europe, the Middle East and Africa, and Asia/Pacific). IDC defines public cloud infrastructure as a service as the aggregate of virtual server compute, the raw ephemeral and persistent storage capacity, and the associated network capability, delivered through a public cloud deployment model.

Source: CloudStrategyMag