DDN Partners With Hewlett Packard Enterprise

DataDirect Networks (DDN®) has entered into a partnership with global high-performance computing (HPC) leader Hewlett Packard Enterprise (HPE) to integrate DDN’s parallel file system storage and flash storage cache technology with HPE’s HPC platforms. The focus of the partnership is to accelerate and simplify customers’ workflows in technical computing, artificial intelligence and machine learning environments.

Enhanced versions of the EXAScaler® and GRIDScaler® parallel file system solutions, along with the latest release of IME®, a software-defined, scale-out NVMe data caching solution, will be tightly integrated with HPE servers and the HPE Data Management Framework (DMF) software, enabling optimized workflow solutions with large-scale data management, protection and disaster recovery capabilities. They will also provide an ideal complement to HPE’s Apollo portfolio, aimed at high-performance computing workloads in complex computing environments.

“Congratulations to DDN and HPE on this expanded collaboration, which seeks to maximize data throughput and data protection,” said Rich Loft, director of the technology development division at the National Center for Atmospheric Research (NCAR) and a user of an integrated DDN and HPE solution as the foundation of the Cheyenne supercomputer. “Both of these characteristics are important to the data workflows of the atmospheric and related sciences.”

“With this partnership, two trusted leaders in the high-performance computing market have come together to deliver high-value solutions as well as a wealth of technical field expertise for customers with data-intensive needs,” said Paul Bloch, president, DDN. “In this hybrid age of hard drive-based parallel file systems, web/cloud and flash solutions, customers are demanding truly scalable storage systems that deliver deeper insight into their datasets. They want smarter productivity, better TCO, and best-in-class data management and protection. DDN’s and HPE’s integrated solutions will provide them with just that.”

DDN has been a trusted market leader for storage and parallel file system implementations at scale for nearly twenty years. The integrated offerings from DDN and HPE combine compute and storage in the fastest, most scalable and most reliable way possible.

“At HPE we’re committed to providing best practice options for our customers in the rapidly growing markets for high-performance computing, artificial intelligence and machine learning,” said Bill Mannel, vice president and general manager for HPC and AI Segment Solutions, HPE. “HPE and DDN have collaborated on many successful deployments in a variety of leading-edge HPC environments. Bringing these capabilities to the broader community of HPC users based on this partnership will accelerate the time to results and value that our customers see from their compute and storage investment.” 

Source: CloudStrategyMag

Review: H2O.ai automates machine learning

Machine learning, and especially deep learning, has turned out to be incredibly useful in the right hands, as well as incredibly demanding of computer hardware. The boom in availability of high-end GPGPUs (general-purpose graphics processing units), FPGAs (field-programmable gate arrays), and custom chips such as Google’s Tensor Processing Unit (TPU) isn’t an accident, nor is their appearance on cloud services.

But finding the right hands? There’s the rub—or is it? There is certainly a perceived dearth of qualified data scientists and machine learning programmers. Whether there’s a real lack or not depends on whether the typical corporate hiring process for data scientists and developers makes sense. I would argue that the hiring process is deeply flawed in most organizations.

If companies teamed up domain experts, statistics-literate analysts, SQL programmers, and machine learning programmers, rather than trying to find data scientists with Ph.D.s plus 20 years of experience who were under 39, they would be able to staff up. Further, if they made use of a tool such as H2O.ai’s Driverless AI, which automates a significant portion of the machine learning process, they could make these teams dramatically more efficient.

As we’ll see, Driverless AI is an automatically driven machine learning system that is able to create and train surprisingly good models in a surprisingly short time, without requiring data science expertise. However, while Driverless AI reduces the level of machine learning, feature engineering, and statistical expertise required, it doesn’t eliminate the need to understand your data and the statistical and machine learning algorithms you’re applying to it.  
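For a sense of what automated model building looks like in practice, here is a minimal sketch using the open-source H2O-3 AutoML Python API, a sibling of Driverless AI (Driverless AI itself is driven through its own interface and clients rather than this API). The dataset path and column names below are illustrative assumptions.

```python
# Minimal sketch of automated model training with open-source H2O-3 AutoML.
# "customers.csv" and the "churn" target column are hypothetical.
import h2o
from h2o.automl import H2OAutoML

h2o.init()  # start or connect to a local H2O cluster

frame = h2o.import_file("customers.csv")
train, test = frame.split_frame(ratios=[0.8], seed=42)

target = "churn"
features = [c for c in train.columns if c != target]
train[target] = train[target].asfactor()  # treat the target as categorical
test[target] = test[target].asfactor()

# Let AutoML search model families and hyperparameters for up to 10 minutes.
aml = H2OAutoML(max_runtime_secs=600, seed=42)
aml.train(x=features, y=target, training_frame=train)

print(aml.leaderboard.head())                         # ranked candidate models
print(aml.leader.model_performance(test_data=test))   # held-out performance of the best model
```

The leaderboard of ranked candidate models illustrates the kind of search over algorithms and hyperparameters that Driverless AI automates, alongside its automated feature engineering.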

Source: InfoWorld Big Data

Cray And Microsoft Bring Supercomputing To Microsoft Azure

Cray Inc. has announced an exclusive strategic alliance with Microsoft Corp. that gives enterprises the tools to enable a new era of discovery and insight, while broadening the availability of supercomputing to new markets and new customers. Under the partnership agreement, Microsoft and Cray will jointly engage with customers to offer dedicated Cray supercomputing systems in Microsoft Azure data centers to enable customers to run AI, advanced analytics, and modeling and simulation workloads at unprecedented scale, seamlessly connected to the Azure cloud.

Cray’s tightly coupled system architecture and Aries interconnect address the exponential demand for compute capability, real-time insights, and scalable performance needed by enterprises today. With this new partnership, Cray and Microsoft have made it easier for cloud customers to harness the power of supercomputing and multiply their problem-solving potential. Cray and Microsoft will also bring these advantages to a new set of customers who were previously unable to purchase or maintain an on-premise Cray system.

The availability of Cray supercomputers in Azure empowers researchers, analysts, and scientists with the ability to train AI deep learning models in fields such as medical imaging and autonomous vehicles in a fraction of the time. Pharmaceutical and biotech scientists driving precision medicine discovery can now perform whole genome sequencing, shortening the time from computation to cure. Automotive and aerospace product engineers can now conduct crash simulation, computational fluid dynamic simulations, or build digital twins for rapid and precise product development and optimized maintenance. Geophysicists in energy companies can accelerate oil field analysis and reduce exploration risk through superior seismic imaging fidelity and faster reservoir characterization. All performed in days and minutes, not months and weeks.

“Our partnership with Microsoft will introduce Cray supercomputers to a whole new class of customers that need the most advanced computing resources to expand their problem-solving capabilities, but want this new capability available to them in the cloud,” said Peter Ungaro, president and CEO of Cray. “Dedicated Cray supercomputers in Azure not only give customers all of the breadth of features and services from the leader in enterprise cloud, but also the advantages of running a wide array of workloads on a true supercomputer, the ability to scale applications to unprecedented levels, and the performance and capabilities previously only found in the largest on-premise supercomputing centers. The Cray and Microsoft partnership is expanding the accessibility of Cray supercomputers and gives customers the cloud-based supercomputing capabilities they need to increase their competitive advantage.”

“Using the enterprise-proven power of Microsoft Azure, customers are running their most strategic workloads in our cloud,” said Jason Zander, corporate vice president, Microsoft Azure, Microsoft Corp. “By working with Cray to provide dedicated supercomputers in Azure, we are offering customers uncompromising performance and scalability that enables a host of new previously unimaginable scenarios in the public cloud. More importantly, we’re moving customers into a new era by empowering them to use HPC and AI to drive breakthrough advances in science, engineering and health.”

As part of the partnership agreement, the Cray® XC™ and Cray CS™ supercomputers with attached Cray ClusterStor storage systems will be available for customer-specific provisioning in select Microsoft Azure data centers, directly connected to the Microsoft Azure network. The Cray systems easily integrate with Azure Virtual Machines, Azure Data Lake storage, the Microsoft AI platform, and Azure Machine Learning services. Customers can also leverage the Cray Urika®-XC analytics software suite and CycleCloud for hybrid HPC management.

Source: CloudStrategyMag

CloudJumper Powers WaaS Platform In Switch’s Tier 5® Data Centers

CloudJumper has announced the company has expanded its strategic relationship with ProfitBricks, a leading channel-focused cloud Infrastructure as a Service (IaaS) provider, to deploy nWorkSpace Workspace as a Service (WaaS) solutions within the Switch Tier 5® Data Center campus in Las Vegas. CloudJumper partners now have the ability to build highly competitive WaaS and cloud application delivery solutions on this premier brand of ProfitBricks infrastructure, which has been recognized by the Uptime Institute for “enhanced availability and reliability” [Switch, Switch Announces Its New Tier 5® Data Center Standard, June 8, 2017].

The Tier 5® Data Center Standard was introduced this year by Switch, a leader in data center design, development, and mission-critical operations. With ProfitBricks cloud infrastructure operations located within Switch’s Core Campus in Las Vegas, CloudJumper will host a growing number of nWorkSpace accounts in this environment. The Switch Tier 5® Data Center Standard not only encompasses the resiliency and redundancy of other data center rating systems, but also evaluates more than 30 additional key elements, such as long-term power system capabilities, the number of available carriers, zero roof penetrations, the location of cooling system lines in or above the data center, physical and network security, and 100% use of renewable energy.

ProfitBricks is a next-generation cloud computing IaaS hosting service, addressing the needs of solution providers for high-performance and dedicated-core IaaS options that can be quickly scaled to variable levels of compute power and storage capacity. The ProfitBricks system employs an intuitive, easy-to-use management interface to configure and manage services delivered with a predictable and affordable fee structure. The combination is a cloud platform on which VARs, systems integrators, and managed service providers can build cloud-based solutions for their customers, as well as managed cloud practices. The company’s competitive advantages have been enhanced through its partnership with Switch, a top-rated data center provider whose core business is the design, construction, and operation of ultra-advanced data center facilities.

“The migration of IT service providers to the nWorkSpace platform continues to accelerate as partners regularly applaud CloudJumper’s channel-friendly business model, excellent margins, unmatched support, and choice of data center partners,” said Max Pruger, chief sales officer, CloudJumper. “We are excited to expand our involvement with ProfitBricks in the Switch Tier 5® data center campus because of the exceptional opportunities that will be made available to our partners in the design of their service portfolios.”

“IT service providers interested in building customizable, reliable, flexible, and scalable solution portfolios are discovering the advantages of ProfitBricks,” said Aaron Garza, vice president of Business Development, ProfitBricks. “The alignment with CloudJumper and ProfitBricks takes WaaS and cloud application delivery to a whole new level, allowing mutual channel partners to design and deliver cloud-based IT solutions that are aligned with market demands and industry requirements.”

Source: CloudStrategyMag