IBM's Watson Goes to Cybersecurity School

IBM will address the cybersecurity skills gap by sending Watson to school, the company announced Tuesday. Watson for Cyber Security is part of a year-long research project in collaboration with eight universities in the US and Canada.

The cloud-based cognitive system has been “trained” in the language of security, and beginning this fall Watson will be scaled to receive training from the California State Polytechnic University, Pomona; Pennsylvania State University; the Massachusetts Institute of Technology; New York University; the University of Maryland, Baltimore County (UMBC); the University of New Brunswick; the University of Ottawa; and the University of Waterloo. IBM’s X-Force research library will also be used as training material for Watson.

IBM hopes Watson will discover patterns and evidence of otherwise-hidden cyber attacks, allowing IBM to improve security analysts’ capabilities. Cognitive systems could automate the connections between data, emerging threats, and remediation strategies, the company said. IBM plans to begin beta production deployments of Watson for Cyber Security this year.

Read more: Obama Names 12 Members to New Commission on Enhancing National Cybersecurity

Security analysts may need help, given the explosion in available data. IBM says the average organization sees over 200,000 pieces of security event data each day, and that enterprises spend $1.3 million and nearly 21,000 hours dealing with false positives alone. The company also notes that the 75,000 items in the National Vulnerability Database, the 10,000 security research papers published each year, and the 60,000 security blog posts published each month challenge analysts to move at the speed of information.
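
For a rough sense of scale, the back-of-the-envelope calculation below works only from the figures cited above; it assumes (the article does not say) that the false-positive spend and hours cover the same reporting period.

```python
# Back-of-the-envelope figures derived only from the statistics cited above.
# Assumption (not stated in the article): the $1.3M spend and ~21,000 hours
# refer to the same reporting period.
false_positive_cost = 1_300_000   # USD spent on false positives
false_positive_hours = 21_000     # analyst hours spent on false positives
daily_events = 200_000            # security events per day, per IBM

print(f"Implied cost per false-positive hour: "
      f"${false_positive_cost / false_positive_hours:,.0f}")
print(f"Events per year at the cited daily rate: {daily_events * 365:,}")
```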

The looming problem is that they may not have an easy time hiring help, as studies have indicated that cybersecurity skills are in short supply.

“Even if the industry was able to fill the estimated 1.5 million open cyber security jobs by 2020, we’d still have a skills crisis in security,” said Marc van Zadelhoff, General Manager, IBM Security. “The volume and velocity of data in security is one of our greatest challenges in dealing with cybercrime. By leveraging Watson’s ability to bring context to staggering amounts of unstructured data, impossible for people alone to process, we will bring new insights, recommendations, and knowledge to security professionals, bringing greater speed and precision to the most advanced cybersecurity analysts, and providing novice analysts with on-the-job training.”

Read more: IBM Cloud Unit CTO Retires, Watson Fails to Impress at CES

In addition to Watson’s training, UMBC announced it will create an Accelerated Cognitive Cybersecurity Laboratory in collaboration with IBM Research.

Cybersecurity may be the niche Watson needs to get out there and get a job in the real world, after failing to impress at this year’s Consumer Electronics Show.

Source: TheWHIR

Console Releases New Enterprise Cloud Access Platform

Console has announced that general availability of its new platform will commence in June 2016, with select customers active on the platform today.

With Console, companies can bypass the public Internet and connect directly to business-critical cloud services such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform. In addition, the Console platform enables direct connections with a variety of other cloud service providers, enterprises, and networks. Services accessible via the Console platform offer highly reliable, private, and flexible direct connections that avoid the risks associated with sending data over the public Internet. The Console platform abstracts away the Layer 2 and Layer 3 configuration complexity involved in connecting an enterprise network privately and directly to others.
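
Console’s own interface is not described in the announcement. As a rough analogy for what a dedicated, Internet-bypassing cloud on-ramp looks like when provisioned directly with a provider, the sketch below requests an AWS Direct Connect port via boto3; the location code is a placeholder, and this is illustrative only, not Console’s API.

```python
# Illustrative analogy only: ordering a dedicated AWS Direct Connect port via
# boto3. This is NOT Console's interface (which the announcement does not
# describe); the location code below is a placeholder.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

connection = dx.create_connection(
    location="EqDC2",          # placeholder Direct Connect location code
    bandwidth="1Gbps",         # requested port speed
    connectionName="example-private-cloud-onramp",
)

print(connection["connectionId"], connection["connectionState"])
```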

Building on its continued rapid growth in the U.S., Canada, Europe, Asia, and the Middle East, the Console platform is available across the company’s global footprint, which consists of more than 160 Points of Presence (PoPs) worldwide. Console is showcasing its new platform at ITW2016 in Chicago until May 11.

“Finding a better way to interconnect workloads, applications and data is essential in the digital era,” said Al Burgio, Founder and CEO of Console. “Console’s unique technology, along with our global network, enables enterprises to avoid any uncertainty over public Internet connectivity, reassuring them of their investment in the cloud and security of business-critical data.”

“Enterprises are increasingly consuming IT on demand, using a mix of public and private cloud infrastructure services, multiple software-as-a-service (SaaS) vendors as well as company owned resources,” according to Jim Davis, senior analyst, service providers with 451 Research. “Enterprises need to think about moving from basic connectivity to more sophisticated interconnection platforms and tools in order to take advantage of cloud and network ecosystems. Automation of secure, private access combined with simplicity of operation and maintenance will help enterprises adopt and consume more cloud services,” Davis said.

The value of the Console platform is enhanced by its social interface for IT professionals, which makes interconnecting with clouds, enterprises, and partners as easy as connecting on a social network. By providing this feature-rich social platform, Console simplifies the interconnection, operations, and management of network and cloud services for enterprise IT users. Additionally, the social feature allows companies to invite key partners and vendors in their supply chain onto the Console platform for private data connections that bypass the public Internet.

Console’s social functionality is currently available by invitation only, with general availability to follow in June 2016.

Source: CloudStrategyMag

Epsilon Delivers Intelligent Networking For The Daisy Group

Epsilon has been selected by Daisy Worldwide, the UK’s largest provider of international numbering solutions, to deliver its Epsilon Intelligent Network eXchange (eINX) solution. Daisy Worldwide will use eINX to interconnect with its customers and partners in their local regions while ensuring network performance on a global scale.

Customers and partners in the US, Europe, and Asia will benefit from intelligent routing delivered via eINX, which supports regional breakout. Regional breakout enables traffic to be routed in-region, creating new networking efficiencies and offering users the best possible quality of experience (QoE). eINX also supports real-time automated routing to steer around network faults or areas of diminished quality.
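
As a rough illustration of the idea behind regional breakout and automated fault avoidance (a generic sketch, not a description of eINX internals), route selection might prefer the lowest-latency healthy regional path:

```python
# Minimal sketch of latency- and health-aware path selection, assuming
# per-region probe data is available. It illustrates the general idea behind
# regional breakout and automatic fault avoidance, not Epsilon's eINX itself.
from dataclasses import dataclass

@dataclass
class RegionPath:
    region: str
    latency_ms: float   # measured round-trip time via the regional breakout
    healthy: bool       # result of the latest fault/quality check

def pick_breakout(paths: list[RegionPath]) -> RegionPath:
    """Prefer the lowest-latency healthy path; fall back to the
    lowest-latency path overall if every region is reporting faults."""
    healthy = [p for p in paths if p.healthy]
    candidates = healthy or paths
    return min(candidates, key=lambda p: p.latency_ms)

paths = [
    RegionPath("us-east", 12.4, True),
    RegionPath("eu-west", 18.9, True),
    RegionPath("ap-southeast", 7.1, False),  # degraded, avoided if possible
]
print(pick_breakout(paths).region)  # -> us-east
```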

“eINX enables service providers to differentiate their offerings with quality. It removes the limits on what the network can do and opens up the possibility of more complex, demanding and innovative services. We offer guaranteed QoS and QoE on IP that is supported with real-time network data and analytics,” said Jerzy Szlosarek, CEO at Epsilon. “eINX is supporting new innovation in cloud, content and applications because it offers them a reliable and resilient foundation in high-performance networking.”

Daisy Worldwide is part of Daisy Group, one of the UK’s largest independent providers of IT infrastructure and managed services, with 60,000 direct customers, 1,500 partners, 3,700 employees, and 30 locations nationwide. Daisy Worldwide supports some of the largest enterprises globally with end-to-end voice and managed services solutions.

“Epsilon offers a flexible opex-driven model as well as unique capabilities around the world. They have made it simple to adopt an IP-based solution and eINX accelerates our ability to deliver high quality services both locally and globally,” said Hayley Duckmanton, commercial director at Daisy Worldwide. “As we grow our business, eINX is ready to support us with scalability and intelligence.”

eINX combines Epsilon’s global network, which is supported by 500+ preconnected carriers in 170+ countries, with bespoke networking technology. It is powered by Cataleya’s next-generation session border controller (SBC) Orchid One.

Orchid One enables guaranteed QoS and QoE end-to-end on IP, with near real-time visibility from the transport layer to the application layer. It gives Epsilon real-time insight into each session, service, and application, including MOS/R-factor scores, network performance, and end-to-end SLAs. Cataleya is a wholly owned subsidiary of the Epsilon Global Communications group.
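
For context on the MOS and R-factor metrics mentioned above, the ITU-T G.107 E-model maps a transmission rating factor R to an estimated MOS with a standard formula; the sketch below shows that generic conversion. It is not specific to Epsilon or Cataleya’s Orchid One.

```python
# Standard ITU-T G.107 E-model conversion from transmission rating factor R
# to estimated Mean Opinion Score (MOS). Shown for context only; this is the
# generic formula, not anything specific to Orchid One.
def r_factor_to_mos(r: float) -> float:
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

for r in (50, 70, 80, 93):
    print(f"R = {r:3d} -> estimated MOS = {r_factor_to_mos(r):.2f}")
```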

Source: CloudStrategyMag

Interoute Launches New Virtual Data Centre Zone In Istanbul

Interoute has extended its networked cloud platform further into Asia with the opening of an Interoute Virtual Data Centre (VDC) zone in Istanbul. Interoute’s global integrated cloud and network platform is now available in 15 locations globally across three continents. The new zone in Istanbul further demonstrates Interoute’s ongoing expansion of its cloud platform and makes it the only cloud provider to offer local cloud in Turkey connected to global cloud zones across the rest of Europe, Asia, and North America, from the same platform.

This new Turkish zone creates an opportunity for European businesses to access new users and opportunities for growth in the dynamic business hub that is Istanbul, while businesses in the Middle East and southwest Asian markets can easily reach new customers through the low-latency, high-throughput Interoute networked cloud. The new Interoute VDC zone also offers local businesses in Istanbul an alternative to cloud providers whose zones are solely in the US or Western Europe. This proximity enables applications to be hosted much closer to users in the region, eliminating the need to send data thousands of kilometres back and forth for processing.

Matthew Finnie, Interoute CTO, commented, “Istanbul is a major market and a key strategic digital bridge between Europe, the Middle East and south west Asia. Six years ago, we expanded our pan-European fibre backbone into the heart of Istanbul’s dynamic business hub. This is the foundation we’ve built on to extend our global cloud platform into the region. Businesses, entrepreneurs and developers using Interoute VDC can enjoy the benefits of a high performance cloud close to their users, whilst having instant access, as part of Interoute’s globally networked cloud platform, to Europe, the USA, and Asia.”

Interoute VDC is a networked cloud offering both public and private cloud services built straight into Interoute’s vast next-generation network. Performance is optimised as data is routed across this global backbone, offering market-leading low latencies [1]. The platform supports customer data, software, and digital services alongside existing legacy customer IT hardware, enabling digital transformation and service creation for enterprises.

1. See http://www3.interoute.com/reports/cloudnetworkcomparison

Source: CloudStrategyMag

Huawei Unveils Cloud Transformation Solution At TM Forum

Huawei unveiled its Cloud Transformation solution at TM Forum, proposing that “clouds” will be the new engine driving carriers’ digital transformation. The cloud-and-network synergy solution brings carriers’ existing network advantages into full play to help them carry out digital transformation.

The Internet has become a massive part of people’s lives and work, giving today’s individual and enterprise users an urgent need for an Internet-style ROADS experience (Real-time, On-demand, All-online, DIY, Social). Carriers are under growing competitive pressure, yet they still lag in agile innovation and operational efficiency because of rigid, closed IT infrastructure. In contrast, Internet companies’ cloud-based IT architecture is more agile for service innovation.

Huawei believes carriers should prioritize cloudification during digital transformation. Cloudification empowers carriers in two ways:

  1. Expanding the B2B Cloud Service Market and Enabling Agile Service Innovation
    With the Internetization of B2B and the maturing of cloud technologies, many forward-looking carriers such as DT, Telefonica, and China Telecom have set their eyes on the government and enterprise cloud service market. A huge customer base, vast network resources, and localized services are just some of the advantages carriers can exploit to gain a head start in this blue-ocean cloud service market.
  2. Smooth Evolution to Cloudified IT Architecture Improves Operational Efficiency and Reduces Management Costs
    Huawei helps carriers evaluate the cloudification maturity of their operational systems in three phases. In the Cloud 1.0 phase (virtualization), software and hardware are decoupled. In the Cloud 2.0 phase (cloudification), virtualized resources are centrally managed and scheduled. In the Cloud 3.0 phase (native clouds), software can be independently developed, deployed, and managed on clouds. Most carriers are evolving from Cloud 1.0 to Cloud 2.0. Their greatest challenge is implementing unified management for heterogeneous virtual resources and geographically scattered data centers to enable agile service provisioning and boost resource utilization.

Based on OpenStack’s unified open architecture, the integrated resource pool supports unified management in four areas: multiple data centers, cloud and non-cloud resources, heterogeneous virtual platforms, and unified O&M. The solution supports smooth network evolution and allows customers to use a variety of devices from different vendors. It can be deployed in a “geographically distributed, logically centralized, resource-sharing, on-demand service” manner, dynamically meeting the customized service requirements of government and industry customers while adapting to their existing data centers and significantly boosting resource utilization.
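
Huawei’s own management layer is not shown here; as a generic illustration of gaining unified visibility across multiple OpenStack data centers from a single script (using the standard openstacksdk, with cloud names as placeholders for clouds.yaml entries):

```python
# Generic illustration of unified, multi-datacenter resource visibility on
# OpenStack using the standard openstacksdk. The cloud names below are
# placeholders for entries in clouds.yaml; this is not Huawei's product code.
import openstack

CLOUDS = ["dc-east", "dc-west"]  # placeholder clouds.yaml entries

for cloud_name in CLOUDS:
    conn = openstack.connect(cloud=cloud_name)
    servers = list(conn.compute.servers())
    print(f"{cloud_name}: {len(servers)} servers")
    for server in servers:
        print(f"  {server.name:30s} {server.status}")
```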

With this solution, networks are reconstructed around data centers and deployed in a hierarchical manner (within data centers, between data centers, and from data centers to end users). Carriers can offer customized networks to meet customer requirements. Network as a Service (NaaS) increases business revenue, enhances customer loyalty, and brings the value of “cloud-and-network synergy” into full play.

Huawei’s cloud transformation solution has been deployed successfully by global carriers in more than 200 cases. Huawei has also built more than 10 open labs worldwide for joint innovation and fast commercial deployment of new services, helping carriers excel through digital transformation.

Source: CloudStrategyMag

Report: 57% of Organizations Lack Cloud Strategy

While the common assumption is that the cloud means reduced costs and better application performance, many organizations will fail to realize those benefits, according to research by VMTurbo, the application performance control system for cloud and virtualized environments. A multi-cloud approach, where businesses operate a number of separate private and public clouds, is an essential precursor to a true hybrid cloud. Yet in the survey of 1,368 organizations, 57% had no multi-cloud strategy at all. Similarly, 35% had no private cloud strategy, and 28% had no public cloud strategy.

“A lack of cloud strategy doesn’t mean an organization has studied and rejected the idea of the cloud; it means it has given adoption little or no thought at all,” said Charles Crouchman, CTO of VMTurbo. “As organizations make the journey from on-premise IT, to public and private clouds, and finally to multi- and hybrid clouds, it’s essential that they address this. Having a cloud strategy means understanding the precise costs and challenges that the cloud will introduce, knowing how to make the cloud approach work for you, and choosing technologies that will supplement cloud adoption. For instance, by automating workload allocation so that services are always provided with the best performance for the best cost. Without a strategy, organizations will be condemning themselves to higher-than-expected costs, and a cloud that never performs to its full potential.”

Above and beyond this lack of strategy, SMEs in particular were shown to massively underestimate the cost of cloud implementation. While those planning private cloud builds gave an average estimated budget of $148,605, SMEs that had already completed builds reported an average cost of $898,508: more than six times the estimate.
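
The “more than six times” figure follows directly from the two numbers quoted; the quick check below also shows the absolute gap.

```python
# Quick check of the cost-underestimation figures quoted above.
estimated_budget = 148_605   # average estimated private cloud build budget (USD)
actual_cost = 898_508        # average reported cost for completed builds (USD)

print(f"Overrun ratio: {actual_cost / estimated_budget:.1f}x")   # ~6.0x
print(f"Absolute gap:  ${actual_cost - estimated_budget:,}")     # $749,903
```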

Other interesting statistics from the survey included:

  • Adopting cloud is not a quick, simple process: Even for those organizations with a cloud strategy, the majority (60%) take over a year to plan and build their multi-cloud infrastructure, with six percent taking over three years. Private and public cloud adoption is also relatively lengthy, with 66% of private cloud builds, and 51% of public cloud migrations, taking over a year.
  • Growth of virtualization is inevitable and exponential: The number of virtual machines in organizations is growing at a rate of 29% per year, compared to 13% for physical servers (a compound-growth sketch follows this list). With virtualization forming a crucial platform for cloud services, this suggests the technology will favor a cloud approach in the future.
  • Organizations’ priorities are split: When asked how they prioritize workloads in their multi-cloud infrastructure, organizations were split between workload-based residence policies (27% of respondents), performance-based (23%), user-based (22%) and cost-based (13%). Ten percent had no clearly-defined residence policies.
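
As referenced above, a minimal compound-growth sketch using the cited rates (29% per year for virtual machines, 13% for physical servers; starting fleet sizes are arbitrary placeholders) shows how quickly the two curves diverge:

```python
# Compound-growth sketch based on the growth rates cited above (29% per year
# for virtual machines vs. 13% for physical servers). Starting counts are
# arbitrary placeholders; only the relative divergence is the point.
VM_GROWTH = 0.29
PHYSICAL_GROWTH = 0.13

vms, physical = 1_000.0, 1_000.0  # placeholder starting fleet sizes

for year in range(1, 6):
    vms *= 1 + VM_GROWTH
    physical *= 1 + PHYSICAL_GROWTH
    print(f"Year {year}: {vms:7.0f} VMs vs {physical:7.0f} physical "
          f"(ratio {vms / physical:.2f}x)")
```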

“The cloud is the future of computing — increasingly, the question for organizations is when, not if, they make the move,” continued Charles Crouchman. “However, organizations need to understand that the cloud does not follow the same rules as a traditional IT infrastructure, and adapt their approach accordingly. For instance, workload priorities are still treated as static. Yet the infrastructure housing those workloads, and the ongoing needs of the business, are completely fluid. An organization using the cloud should be able to adapt its workloads dynamically so that they always meet the business’s priorities at that precise time. Without this change in outlook, organizations will soon find themselves squandering the potential the cloud provides.”

Source: CloudStrategyMag

The Evolution of White Box Gear, Open Compute and the Service Provider

There is a lot changing in the modern cloud and service provider world. Organizations are seeing the direct benefits of moving toward the cloud and are now allocating budget in their spending cycles to move their environments into the cloud. Trends around application delivery, data control, resource utilization, and even end-user performance are all driving more users toward cloud and service providers.

Consider this: according to Gartner, the worldwide public cloud services market is projected to grow 16.5 percent in 2016 to total $204 billion, up from $175 billion in 2015. The highest growth will come from cloud system infrastructure services (infrastructure as a service, or IaaS), which is projected to grow 38.4 percent in 2016.

“The market for public cloud services is continuing to demonstrate high rates of growth across all markets and Gartner expects this to continue through 2017,” said Sid Nag, research director at Gartner. “This strong growth continues to reflect a shift away from legacy IT services to cloud-based services, due to the increased trend of organizations pursuing a digital business strategy.”

The biggest reason for this growth is the flexibility that you get from working with a cloud and service provider. Why is this the case? Because cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered “as a service” using Internet technologies.

This is where the modern service provider and the Open Compute Project (OCP) come in.

With all of these new demands around new kinds of services and delivery methodologies, service providers simply needed a new way to deliver and control resources. This means building an architecture capable of rapid scalability that follows efficient economies of scale for the business. To accomplish this, there needed to be a new way to think about the service provider data center and the architecture that defines it: one built around open standards and open infrastructure designs.

With that in mind, we introduce three very important topics.

  1. Understanding the Open Compute Project (OCP)
    • Founded in 2011, the Open Compute Project has been gaining attention from more and more organizations. So, who should be considering the Open Compute platform, and for what applications? The promise of lower cost and open standards for IT servers and other hardware seems like a worthwhile endeavor; one that should benefit all users of IT hardware as well as improve the energy efficiency of the entire data center ecosystem. The open source concept has proven itself successful for software, as witnessed by the widespread adoption and acceptance of Linux, despite early rejection from enterprise organizations.

The goal of Open Compute?

  • To develop and share the design for “vanity free” IT hardware which is energy efficient and less expensive.
  • OCP servers and other OCP hardware (such as storage and networking) in development are primarily designed for a single lowest common denominator — lowest cost and basic generic functions to serve a specific purpose. One OCP design philosophy is a “vanity free,” no-frills design, which starts without an OEM-branded faceplate. In fact, the original OCP server had no faceplate at all. It used only the minimal components necessary for a dedicated function — such as a massive web server farm (the server had no video chips or connectors).
  2. Cloud Providers Are Now Using Servers Based on OCP Design
    • Open compute servers are already generating a lot of interest and industry buzz. Imagine being able to architect completely optimized server technologies which deploy faster, are less expensive, and have just the right features that you need for scale and efficiency.
    • This is where the new white-box and OCP family of servers comes in. With an absolute focus on the key challenges and requirements of the industry’s fastest-growing segment – the service provider – these types of servers take the OCP conversation to a new level. Their level of customization gives you the ability to design and deliver everything from stock offerings to custom systems, and even component-level designs. You also get system integration and data center support. The ultimate idea is to create economies of scale that drive TCO lower and ROI higher for those where “IT is the business.”
  3. Clear Demand for OCP and “Vanity-Free” Server Architecture
    • According to IDC, service providers will continue to break new ground in search of both performance gains and cost reductions as they expand their cloud architecture implementations. Additionally, the hosting-as-a-service model will continue to transition away from traditional models toward cloud-based delivery mechanisms like infrastructure as a service, spurring hyperscale growth in servers used for hosting (15% to 20% CAGR from 2013 to 2018).
    • At Data Center Knowledge, we conducted a survey, sponsored by HP, to find out what types of workloads are being deployed, what service providers value, and where the latest server technology can make a direct impact. The results, from about 200 respondents, showed us what the modern data center and service provider really need from a server architecture. They also showed clear demand for servers capable of more performance while carrying fewer “bells and whistles.”
      • 51% of respondents said that they would rather have a server farm with critical hardware components and fewer software add-ons.
      • When asked “How much do server (hardware and software) add-on features impact your purchasing decision? (Easy-to-access drive holders, memory optimizations, easy upgradability, software management, etc.)”, 73% of the survey respondents indicated that this was either important or very important to them.

Here’s the reality: there is significant industry adoption of OCP as well. Facebook is one of those organizations. According to Facebook, a small team of its engineers spent two years tackling a big challenge: how to scale the company’s computing infrastructure in the most efficient and economical way possible.

The team first designed the data center in Palo Alto, before deploying it in Prineville, Oregon. The project resulted in Facebook building their own custom-designed servers, power supplies, server racks, and battery backup systems.

What did this mean for Facebook and their new data center?

  • Use of a 480-volt electrical distribution system to reduce energy loss.
  • Removal of anything in the servers that didn’t contribute to efficiency.
  • Reuse of hot-aisle air in winter to heat both the offices and the outside air flowing into the data center.
  • Elimination of the need for a central uninterruptible power supply.

Ultimately, this design produced an environment capable of consuming 38 percent less energy to do the same work as Facebook’s existing facilities, while costing 24 percent less.

This is where, as a cloud builder, service provider, or modern large enterprise, you can really feel the impact. The concept of servers without all the add-ons, built around OCP design standards, has sparked interest in the market because this type of server architecture allows administrators to scale out with only the resources they need. This is why we are seeing vanity-free server solutions emerge as the service provider business model evolves.

Source: TheWHIR

Dutch Data Center Group Says Draft Privacy Shield Weak

Brought to you by Data Center Knowledge

An alliance of data center providers and data center equipment vendors in the Netherlands, whose members include some of the world’s biggest data center companies, has come out against the current draft of Privacy Shield, the set of rules proposed by the European Commission to replace Safe Harbor. Safe Harbor was the legal framework that governed data transfers between the US and Europe until it was struck down by the Court of Justice of the European Union last year.

The Dutch Datacenter Association issued a statement Monday saying Privacy Shield “currently offers none of the improvements necessary to better safeguard the privacy of European citizens.”

The list of nearly 30 association participants includes Equinix and Digital Realty, two of the world’s largest data center providers, as well as European data center sector heavyweights Colt, based in London, and Interxion, a Dutch company headquartered just outside of Amsterdam.

In issuing the statement, the association sided with the Article 29 Working Party, a regulatory group that consists of data protection officials from all EU member states. Article 29 doesn’t create or enforce laws, but data regulators in EU countries base their laws on its opinions, according to the Guardian.

Related: Safe Harbor Ruling Leaves Data Center Operators in Ambiguity

In April, the Working Party said it had “considerable reservations about certain provisions” in the draft Privacy Shield agreement. One reservation was that the proposed rules did not provide adequate privacy protections for European data. Another was that Privacy Shield wouldn’t fully protect Europeans from mass surveillance by US intelligence services, such as the surveillance the US National Security Agency has been conducting according to documents leaked by former NSA contractor Edward Snowden.

Amsterdam is one of the world’s biggest and most vibrant data center and network connectivity markets. Additionally, there are several smaller but active local data center markets in the Netherlands, such as Eindhoven, Groningen, and Rotterdam.

There are about 200 multi-tenant data centers in the country, according to a 2015 report by the Dutch Datacenter Association. Together, they house about 250,000 square meters of data center space.

The association has support from a US partner, called the Internet Infrastructure Coalition, which it referred to as its “sister organization.” David Snead, president of the I2Coalition, said his organization understood the concerns raised by Article 29.

“We believe that many of the concerns raised by the Working Party can be resolved with further discussions,” he said in a statement.

Original article appeared here: Dutch Data Center Group Says Draft Privacy Shield Weak

Source: TheWHIR

IBM's Watson is going to cybersecurity school

It’s no secret that much of the wisdom of the world lies in unstructured data, or the kind that’s not necessarily quantifiable and tidy. So it is in cybersecurity, and now IBM is putting Watson to work to make that knowledge more accessible.

Towards that end, IBM Security on Tuesday announced a new year-long research project through which it will collaborate with eight universities to help train its Watson artificial-intelligence system to tackle cybercrime.

Knowledge about threats is often hidden in unstructured sources such as blogs, research reports and documentation, said Kevin Skapinetz, director of strategy for IBM Security.

“Let’s say tomorrow there’s an article about a new type of malware, then a bunch of follow-up blogs,” Skapinetz explained. “Essentially what we’re doing is training Watson not just to understand that those documents exist but to add context and make connections between them.”
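
IBM has not published how Watson models these connections; as a toy illustration of the general idea of linking unstructured security write-ups that mention the same threat indicators (the document texts, malware name, and CVE ID below are made up):

```python
# Toy illustration of connecting unstructured security write-ups that mention
# the same indicators (e.g., a malware family or CVE ID). A greatly simplified
# stand-in for the context-building described above, not IBM's actual approach.
# Document texts, the malware name, and the CVE ID are made up.
import re
from collections import defaultdict

documents = {
    "vendor-advisory": "New malware family LockSpin exploits CVE-2016-0001 ...",
    "follow-up-blog":  "We observed LockSpin variants dropping a second stage ...",
    "research-paper":  "CVE-2016-0001 affects several embedded web servers ...",
}

# Indicators to look for: CVE identifiers plus a (made-up) malware name.
indicator_pattern = re.compile(r"CVE-\d{4}-\d{4,}|LockSpin")

links = defaultdict(set)
for doc_id, text in documents.items():
    for indicator in indicator_pattern.findall(text):
        links[indicator].add(doc_id)

for indicator, doc_ids in links.items():
    if len(doc_ids) > 1:
        print(f"{indicator}: connects {sorted(doc_ids)}")
```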

How the skills shortage is transforming big data

In the early days of computing, developers were often jacks of all trades, handling virtually any task needed for software to get made. As the field matured, jobs grew more specialized. Now we’re seeing a similar pattern in a brand-new domain: big data.

That’s according to P.K. Agarwal, regional dean and CEO of Northeastern University’s recently formed Silicon Valley campus, who says big data professionals so far have commonly handled everything from data cleaning to analytics, and from Hadoop to Apache Spark.

“It’s like medicine,” said Agarwal, who at one time was California’s CTO under former Governor Arnold Schwarzenegger. “You start to get specialties.”

That brings us to today’s data-scientist shortage. Highly trained data scientists are now in acute demand as organizations awash in data look for meaning in all those petabytes. In part as a response to this, other professionals are learning the skills to answer at least some of those questions for themselves, earning the informal title of citizen data scientist.