IBM Opens Four New Cloud Data Centers

IBM has announced the opening of four new IBM Cloud data centers in the United States to support growing enterprise demand for cloud infrastructure that can provide access to services like IoT, blockchain, quantum computing, and cognitive computing.

IBM Cloud’s global network includes cloud data centers in key local markets around the world so clients can run their data and applications locally to meet performance and regulatory needs. With data centers across 19 countries and six continents, enterprises can provision cloud infrastructure when and where they need it. The new cloud data centers in the U.S. can provide clients with infrastructure to manage and gain insight from their data while also taking advantage of IBM’s advanced cognitive services with Watson on the IBM Cloud.

The opening of two new facilities in Dallas, Texas, and two new facilities in Washington, D.C., is a key part of IBM’s investment to expand its global cloud footprint in 2017.

As enterprises increasingly turn to AI to generate value from their data, demand for public and hybrid cloud infrastructure will continue to grow. According to IDC, worldwide spending on public cloud services and infrastructure will reach $203.4 billion by 2020, a 21.5% compound annual growth rate – nearly seven times the rate of overall IT spending growth.1

“IBM is making major investments to expand our global cloud data centers in 2017 and provide the infrastructure necessary for enterprises to run their cognitive, big data, blockchain and IoT workloads,” said John Considine, general manager for cloud infrastructure, IBM. “IBM’s growing global cloud footprint gives enterprises the flexibility and scale to run their most complex workloads when and where they need.”

The new U.S. facilities can help companies digitize their business and operations and drive cognitive innovation through the IBM Cloud. Clients of all sizes, including Bitly and Halliburton, are already taking advantage of the benefits of the IBM Cloud.

IBM’s Expanding Global Cloud Footprint
IBM now has more than 55 global cloud data centers in 19 countries spanning six continents to help enterprises manage and gain insight into their data no matter where it resides. The opening of the additional facilities in Dallas, Texas, and Washington, D.C., brings IBM to 22 data centers across the U.S.

The news reinforces IBM’s commitment to expand its cloud presence around the world in 2017 and builds on strong global momentum from 2016. In 2016, IBM opened the industry’s first cloud data center in the Nordics as well as a new cloud data center located outside of Seoul in South Korea. Additionally, IBM announced in November that it is tripling its cloud data center capacity in the U.K. with four new facilities.

Each of the four new facilities now open in the U.S. has the capacity for thousands of physical servers and offers a full range of cloud infrastructure services, including bare metal servers, virtual servers, storage, security services and networking. With services deployed on demand and full remote access and control, customers can create their ideal public, private or hybrid cloud environments.

IBM’s cloud infrastructure is cognitive at the core and geared for big data workloads. IBM operates a large fleet of bare metal servers, which are ideal for high-performance cloud applications. IBM also offers the latest NVIDIA® GPU accelerators – including the Tesla® P100, Tesla K80, and Tesla M60 – to help enterprises quickly and efficiently run compute-heavy workloads such as AI, deep learning, and high-performance data analytics.

 1. IDC: Worldwide Semiannual Public Cloud Services Spending Guide, February 20, 2017

Source: CloudStrategyMag

MapD SQL database gains enterprise-level scale-out, high availability

MapD, the SQL database and analytics platform that uses GPU acceleration for performance orders of magnitude ahead of CPU-based solutions, has been updated to version 3.0.

The update provides a mix of high-end and mundane additions. The high-end goodies consist of deep architectural changes that enable even greater performance gains in clustered environments. But the mundane things are no less important, as they’re aimed at making life easier for enterprise database developers—the audience most likely to use MapD.

Previous versions of MapD (not to be confused with Hadoop/Spark vendor MapR) were able to scale vertically but not horizontally. Users could add more GPUs to a given box, but they couldn’t scale MapD across multiple GPU-equipped servers. An online demo shows version 3 allowing users to explore in real time an 11-billion-row database of ship movements across the continental U.S. using MapD’s web-based graphical dashboard app.

A live demo of MapD 3.0 running on multiple nodes. An 11-billion-row database of ship movements throughout the continental U.S. can be explored and manipulated in real time, with both the graphical explorer and standard SQL commands.

Version 3 adds a native shared-nothing distributed architecture to the database—a natural extension of the existing shared-nothing architecture MapD used to split processing across GPUs. Data is automatically sharded in round-robin fashion between physical nodes. MapD founder Todd Mostak noted in a phone call that it ought to be possible in the future to manually adjust sharding based on a given database key.
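For intuition, here is a minimal Python sketch of round-robin sharding. It is purely illustrative and has nothing to do with MapD’s internals, but it shows the basic idea: rows are dealt out to nodes in rotation, so each node ends up holding roughly an equal share of the data.

```python
# Illustrative only: a toy round-robin sharder, not MapD's implementation.
# Each incoming row is assigned to the next node in rotation.

from collections import defaultdict
from itertools import cycle

def round_robin_shard(rows, node_names):
    """Deal rows out to nodes in rotation; returns {node: [rows]}."""
    assignment = defaultdict(list)
    nodes = cycle(node_names)
    for row in rows:
        assignment[next(nodes)].append(row)
    return assignment

rows = [{"ship_id": i, "lat": 29.7 + i, "lon": -95.3 - i} for i in range(10)]
shards = round_robin_shard(rows, ["gpu-node-1", "gpu-node-2", "gpu-node-3"])
for node, chunk in shards.items():
    print(node, "holds", len(chunk), "rows")
```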

The big advantage to using multiple shared-nothing nodes, according to Mostak, isn’t just a linear speed-up in processing—although that does happen. It also means a linear speed-up for ingesting data into the cluster, which is useful in lowering the barrier to entry for database developers who want to try their data out on MapD.

Other features in version 3.0, chief among them high availability, are what you’d expect from a database aimed at enterprise customers. Nodes can be clustered into HA groups, with data synchronized between them by way of a distributed file system (typically GlusterFS) and a distributed log (by way of an Apache Kafka record stream or “topic”).
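As a rough illustration of the distributed-log idea (not MapD’s actual replication code), one node can append each accepted write to a Kafka topic while a peer replays the log to stay in sync. The topic name, message format, and broker address below are invented for the example, which uses the kafka-python client.

```python
# Illustrative only: a distributed log (Kafka topic) carrying writes between
# HA group members. Topic name, message format, and broker are placeholders.

import json
from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

# One node appends each accepted write to the shared log...
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("mapd-ha-writes", {"stmt": "INSERT INTO ships VALUES (1, 40.7, -74.0)"})
producer.flush()

# ...and a peer node replays the log to stay in sync.
consumer = KafkaConsumer(
    "mapd-ha-writes",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    print("replaying:", record.value["stmt"])
    break  # demo: stop after the first message
```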

Another addition aimed at attracting a general database audience is a native ODBC driver. Third-party tools such as Tableau or Qlik Sense can now plug into MapD without the overhead of the previous JDBC-to-ODBC solution.
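For example (a hypothetical sketch, not vendor documentation): once an ODBC data source has been configured for MapD, any generic ODBC client, such as Python’s pyodbc, can query it directly. The DSN name, credentials, and table below are placeholders.

```python
# Hypothetical example: querying MapD through a configured ODBC data source.
# "MapD", the credentials, and the table name are placeholders, not defaults.

import pyodbc

conn = pyodbc.connect("DSN=MapD;UID=analyst;PWD=secret")
cursor = conn.cursor()
cursor.execute(
    "SELECT vessel_type, COUNT(*) AS n FROM ship_movements GROUP BY vessel_type"
)
for vessel_type, n in cursor.fetchall():
    print(vessel_type, n)
conn.close()
```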

A hybrid architecture is one thing that’s not yet possible with MapD’s scale-out system. MapD does have cloud instances available in Amazon Web Services, IBM SoftLayer, and Google Cloud, but Mostak pointed out that MapD doesn’t currently support a scenario where nodes in an on-prem installation of MapD can be mixed with nodes from a cloud instance.

Most of MapD’s customers, he explained, have “either-or” setups—either entirely on-prem or entirely in-cloud—with little to no demand to mix the two. At least, not yet.

Source: InfoWorld Big Data

Oracle Public Cloud Services Now Available At Fujitsu Data Center

Fujitsu Limited and Oracle Corporation Japan have announced that Oracle Cloud Platform services, including Oracle Database Cloud Service, are now available from Oracle’s public cloud services environment, Oracle Cloud, which is now hosted in a Fujitsu data center, a first for Japan.

A variety of Oracle Cloud Platform services were made available from March 27, 2017, followed by the April 20 release of Fujitsu Cloud Service K5 DB powered by Oracle Cloud (K5 DB (Oracle)), which adds Oracle Database Cloud Service to the lineup of database options in Fujitsu Cloud Service K5.

Oracle and Fujitsu have a long history of collaboration when it comes to processors, servers, and software. This synergy now extends to the datacenter, where Oracle’s cloud services will be available locally to Japanese customers backed by Fujitsu.

Fujitsu has the largest number of certified Oracle Cloud engineers in Japan and offers a coordinated portfolio of services to assist in the deployment and operation of Oracle Public Cloud, helping organizations build modern cloud-based solutions and transition their enterprise systems, including mission-critical operations, to the cloud.

Fujitsu and Oracle announced a strategic alliance on July 6, 2016, a collaboration to deliver enterprise-grade, world-class cloud services to customers in Japan and their subsidiaries around the world, and have now commenced sales of public cloud services from Japan. In addition to being available from Fujitsu’s robust and reliable datacenter in Japan, Oracle Public Cloud services can now be used as part of Fujitsu Cloud Service K5, Fujitsu’s public-cloud service.

“The Oracle Cloud Platform running in Fujitsu’s Japan datacenter alongside Fujitsu Cloud Service K5 DB powered by Oracle Cloud is a natural continuation of the three-decade history Oracle and Fujitsu have working together to help customers achieve competitive advantage,” said Edward Screven, chief corporate architect, Oracle. “By combining Fujitsu’s system integration expertise with Oracle’s cloud services, Fujitsu and Oracle will accelerate the transition of our joint customers’ enterprise systems to cloud.”

The Oracle Cloud Platform offered by Fujitsu and Oracle

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of public cloud services across SaaS, PaaS, and IaaS. Oracle Cloud Platform, which includes Oracle’s analytics, application development, data management, and integration services, has experienced steady growth, adding thousands of customers in fiscal 2017. Global enterprises, SMBs, and ISVs are turning to Oracle Cloud Platform to build and run modern Web, mobile, and cloud-native applications.

By delivering Oracle Public Cloud services, including Oracle Database Cloud Service with highly available, highly scalable features such as Oracle Real Application Clusters1, from Fujitsu’s robust, reliable datacenter, mission-critical systems can be used in a cloud environment with peace of mind and superlative performance. Because Fujitsu provides access, via a public cloud environment, to Oracle products and services such as Oracle Database, which are used in the enterprise systems of many of its customers, Fujitsu is able to meet its customers’ diverse needs for enterprise-grade cloud services, including support for private clouds.

About Fujitsu Cloud Service K5 DB powered by Oracle Cloud

Based on Oracle Database Cloud Service, this service incorporates Fujitsu’s systems-integration know-how and is delivered as the kind of distinctive database service that customers expect from Fujitsu. For example, it automates the settings needed when deploying and building a database, such as security settings, encryption, and operational monitoring. Customers need not learn any new cloud-specific technologies and can immediately start using Oracle Database Cloud Service. They can also use the service with peace of mind thanks to Fujitsu’s high-quality, one-stop support.

This service connects the Fujitsu K5 cloud service, which supports systems of record (SoR)2 and systems of engagement (SoE)3, to Oracle Cloud Services, extending K5’s database offerings with K5 DB (Oracle). This enables companies to move their existing ICT assets into the cloud and enhance their support for SoR.

Engineers on hand to assist with customer cloud-transition needs

As part of this service offering, Fujitsu maintains a network of engineers and services to facilitate Oracle Cloud deployment and operation. Fujitsu has the largest number of people with “ORACLE MASTER Cloud Oracle Database Cloud Service” certification in Japan (winner of the Oracle Certification Award 2016), with more than 100 engineers already on staff. Fujitsu has structured its “Cloud Solution for Oracle” as a service to aid in Oracle Cloud deployment and operation; delivered by certified engineers4, it enables a rapid, secure, and steady transition to Oracle Cloud, responsively meeting customer cloud-transition needs.

Fujitsu, a Diamond member of the Oracle PartnerNetwork5, and Oracle have a relationship that spans more than three decades. This joint project works to increase Fujitsu’s systems integration capabilities and further strengthen its Oracle technology expertise. By increasing the number of Oracle Cloud engineers in the Fujitsu Group and formalizing their place in the organization, Fujitsu and Oracle look forward to working together more closely.

  1. Oracle Real Application Clusters: Feature that increases the availability of database systems.
  2. Systems of Record (SoR): Existing systems that record company data and perform business processes.
  3. Systems of Engagement (SoE): Systems that implement digital transformations, including business-process transformation and new-business development.
  4. Certified engineers: Holders of “ORACLE MASTER Cloud Oracle Database Cloud Service” or “ORACLE MASTER Platinum Oracle Database 11g/12c” certification.
  5. Oracle PartnerNetwork

Source: CloudStrategyMag

Opus Interactive Chosen As Jelastic’s First West Coast Cloud Hosting Partner

Jelastic has chosen Opus Interactive as its first cloud hosting partner on the West Coast.

Opus Interactive is an Oregon-based cloud hosting and colocation company that specializes in delivery of customized IT solutions. The company will provide Jelastic cloud hosting services from its data center in Hillsboro. The Tier III facility is owned by Infomart and was recently selected as the flagship data center for LinkedIn based on its sustainability, security, and energy-savings advantages.

“We’re very selective about our hosting partners. Reliability, scalability, and security are critical to the needs of our users, who range in size from independent developers to large enterprise applications,” says Ruslan Synytsky, CEO and Co-founder at Jelastic. “We’re excited for the partnership with Opus Interactive. They’ve been in the industry since 1996 and have a proven track record for customer service that they deliver from world-class facilities.”

Opus Interactive’s Platform-as-a-Service (PaaS), powered by Jelastic, lets developers and companies benefit from the wide set of platform features, such as:

  • Vertical and horizontal auto-scaling
  • Intuitive application topology wizard
  • Zero downtime deployment with automated traffic distribution
  • One-click installation for popular applications
  • Easy creation of dev, test, production environments
  • Integrated CI and CD tools for automation

Opus Interactive will be Jelastic’s first datacenter partner in the Pacific Northwest, an area labeled “The Silicon Forest” whose datacenter draw includes the likes of Facebook, Google, and Microsoft. Low cost power, connectivity, and direct access to undersea cable links to the Asia Pacific market are huge attractors.

Connectivity between people, places, and things is quickly being adopted. Gartner predicts that by 2020, 21 billion Internet of Things (IoT) devices will be connected. In tandem, the demand for hosted apps that enable that connectivity and the development platforms they stem from is also rising. Developers want a reliable platform that gives them flexibility and visibility that they can manage from inception to end-user adoption. And they want it to be simple.

“Understanding the start to finish needs of agencies and developers was baked into our service offering from the beginning. We launched our company out of an IT department in a creative agency 20 years ago,” said Eric Hulbert, CEO of Opus Interactive. “Obviously, a lot’s changed in the way services are delivered, but what has stayed the same is – whether it’s Java, Ruby, Python, or something else, it still needs a reliable platform that will grow to support the uptake. The simpler the better: Jelastic is the all-in-one streamlined service the developer community has been waiting for.”

Opus Interactive’s platform, powered by Jelastic, provides highly available hosting on top of up-to-date blade servers from HPE. All customers are welcome to try easy installation, deployment, and management of applications with a free 14-day trial.

 

Source: CloudStrategyMag

TierPoint Achieves, Renews Key Industry Certifications

TierPoint LLC has announced that it has successfully completed its most recent round of HIPAA, PCI-DSS, GLBA, and SOC 2 Type II annual compliance audits for all data centers it operates. TierPoint also holds EU-US Privacy Shield certification and ITAR registration on a company-wide basis.

“In our industry, there is an increased focus on security, privacy protection and reliability standards,” said Paul Mazzucco, chief security officer, TierPoint. “Our commitment is to not only meet but exceed industry standards, with a year-round program of rigorous testing and analysis of our infrastructure.”

TierPoint is among a select few national infrastructure providers that have achieved compliance certifications for such a large national footprint of facilities, which (in TierPoint’s case) includes 40 data centers in 20 markets with 8 multi-tenant cloud pods. Compliance certifications are considered important by many organizations seeking a colocation, cloud or hybrid IT solution with uniform processes and practices that meet industry standards for physical and operational efficiency, privacy protection, and security.  

Source: CloudStrategyMag

SolarWinds Database Performance Analyzer Supports Azure SQL Database

SolarWinds has announced the availability of SolarWinds® Database Performance Analyzer with support for Microsoft® Azure® SQL Database in the Azure Marketplace.

SolarWinds Database Performance Analyzer delivers deep visibility into the performance of top database platforms, including Microsoft SQL Server® 2016, and provides advice for optimization and tuning to accelerate database performance. Using agentless architecture and unique Multi-Dimensional Performance Analysis™, it quickly finds the root cause of complex problems and improves the performance of on-premises, virtualized, cloud, or hybrid IT application environments. With its availability in the Azure Marketplace, the thousands of organizations running millions of Azure SQL Database instances can now benefit from these capabilities, with simplified deployment in minutes.

“We’re thrilled to offer SolarWinds Database Performance Analyzer with Azure SQL Database support in the Azure Marketplace,” said Gerardo Dada, vice president, product marketing, SolarWinds. “By helping to eliminate potential overprovisioning, slow end-user experience, and overspend, SolarWinds Database Performance Analyzer can help cloud developers achieve the ROI and cost efficiency they seek in the cloud.”

According to the SolarWinds IT Trends Report 2017: Portrait of a Hybrid IT Organization, databases are one of the top three infrastructure elements IT organizations are migrating to the cloud. Furthermore, the study found that, by weighted rank, the top reasons for prioritizing these areas of their IT environments for migration were the greatest potential for return on investment (ROI) and cost efficiency.

“We think customers will benefit from SolarWinds Database Performance Analyzer with support for Microsoft Azure SQL Database and are pleased to make it available for easy deployment through the Azure Marketplace,” said Andrea Carl, director, commercial communications at Microsoft Corp. “Now our customers running millions of Azure SQL Database instances benefit from having additional tools to quickly root out problems and improve overall performance.”

SolarWinds Database Performance Analyzer is part of the SolarWinds end-to-end hybrid IT performance management portfolio of products. The SolarWinds portfolio also includes SolarWinds Server & Application Monitor (SAM), which provides deep visibility into the performance of business-critical applications and the infrastructure that supports them on-premises and in the cloud, as well as SolarWinds Network Performance Monitor (NPM), which provides comprehensive network performance monitoring with the NetPath™ feature for critical path visualization on-premises and in the cloud.

Source: CloudStrategyMag

Fusion Secures $2.1 Million, Five Year Contract

Fusion has secured a $2.1 million, five-year cloud solutions contract with a leading health system. The health system cited Fusion’s productivity-enhancing cloud communications and collaboration services, integrated cloud connectivity with Quality of Service guarantees, experienced live technical support, and exemplary reputation for providing a superior customer experience.

Fusion’s team of experts will provide a comprehensive suite of cloud services including:

  • An advanced, fully integrated cloud services platform providing scalable, converged voice and data solutions to accommodate future growth
  • A single source solution for the cloud, with one integrated invoice and single point of contact
  • Advanced billing, reporting, monitoring and management systems
  • 24 x 7 network operations monitoring
  • 24 x 7 live maintenance, technical and customer support

The health system was impressed with Fusion’s robust, geographically diverse nationwide network, which in combination with its integrated, single source cloud solutions will enable the company to reliably and securely connect multiple managed branches of the health system across multiple states.

Fusion’s ability to provide a comprehensive suite of additional cloud services as part of its end-to-end managed network solution was also an important consideration for the health care institution as it looks to migrate more of its business to the cloud over time.

“For more than 30 years, this leading hospital system has established a tradition of excellence sponsoring community initiatives and collaborations that address the causes and consequences of poverty, including residential care for the homeless and low-income childcare as well as education-related efforts for childcare providers and disengaged parents. We are gratified to have earned the trust and confidence of this community-focused organization that shares Fusion’s dedication to providing an exceptional customer experience,” said Russell P. Markman, president of business services, Fusion.

Source: CloudStrategyMag

Data science could keep United out of more trouble

I’ve avoided flying United for many years. On my last trip to Japan about 10 years back, somewhere along the way an employee took my ticket and said I’d get another one in Japan. Wrong! On my return, United told me I had to buy a new ticket for around $7,000.

Anyhow, we’ve all heard about United’s overbooking disaster, where a passenger faced a lot worse abuse than I did. With the right data and analytics, another outcome could have been possible.

When the tickets were sold, United’s ticketing system could have seen there was a high probability that the other flight would arrive late and that crew members frequently bumped passengers. The ticketing system could have reserved a number of seats as standby or told the last four passengers booking them that they might be bumped. Then, when the other flight was coming in with the crew that needed to get back home, United simply could have avoided boarding the last four.
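As a back-of-the-envelope sketch (my own illustration, not anything United actually runs), the rule could be as simple as multiplying the inbound flight’s historical lateness rate by the rate at which a displaced crew needs seats, and holding seats back when that joint risk crosses a threshold. All numbers and names below are made up.

```python
# Illustrative sketch only: a toy rule for holding seats back when a
# late-arriving crew is likely to need them. All numbers are made up.

def seats_to_hold(p_inbound_late, p_crew_needs_seats, crew_size, threshold=0.25):
    """Hold crew_size seats as standby when the joint risk crosses a threshold."""
    risk = p_inbound_late * p_crew_needs_seats
    return crew_size if risk >= threshold else 0

# Historical data might say the inbound flight is late 60% of the time and a
# displaced crew needs seats on this leg 70% of the time.
held = seats_to_hold(p_inbound_late=0.6, p_crew_needs_seats=0.7, crew_size=4)
print(f"Hold {held} seats as standby")  # -> Hold 4 seats as standby
```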

In fact, flight data is a cornucopia of statistical information. You could learn a lot about the following:

  1. Weather patterns by season and even in unseasonable years. Sure, we have radar, but how do these patterns affect objects in the air?
  2. Flight delays (travel sites already report this).
  3. Domino effects, such as how a delayed flight or weather pattern impacts other flights.
  4. Maintenance issues, such as how frequently by plane type (or airline) parts have to be replaced or fail.

Also, you can glean a lot of customer and customer preference information. The company I work for calls these “signals,” which I like better than “events,” because they aren’t always events and “time series” is too generic. You could learn the following:

  1. Which customers will likely cancel if assigned a middle seat (my bladder is small in the air and I have broad shoulders). This goes beyond my profile preference for an aisle or a window to identify how much I prefer an aisle.
  2. Which customers are most price sensitive and influenced by cost.
  3. How frequently a customer flies your airline after being bumped or experiencing other customer service problems.

Using statistics, machine learning, and a simple rules engine, and by connecting some of these data sets, airlines could do the following (a toy sketch of items 4 and 5 appears after the list):

  1. Automatically offer discounts and other incentives to passengers with flexible schedules to fill empty seats.
  2. Offer status upgrades to passengers who are likely to be incentivized to fly your airline over others (American is doing this, but I don’t know how targeted it is).
  3. Detect probable weather problems, automatically hold seats, and start rebooking before the connection even lands. (Delta does this once the delay happens, but it does so poorly with suboptimal routes.)
  4. Avoid overbooking and simply offer preselected seats. Also, instead of “dumb bidding” in the open air, send a text message to passengers who are likely to take a lower offer. This prevents people from sitting around and waiting for higher compensation.
  5. When you have to select someone, choose the person least likely to care. You have the data.
  6. Detect problematic decision-making or identify employees who frequently do stupid things (like drag people off airplanes).
  7. Assuming there’s a connection between complaints and bad PR, detect when a policy or practice is likely to cause your stock to drop should it go viral on video.
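Here is a toy sketch of items 4 and 5: score each booked passenger for how badly a rebooking would inconvenience them, then make targeted offers to the lowest scorers instead of bidding in the gate area. The fields and weights are invented purely for illustration.

```python
# Toy illustration of items 4 and 5 above: score booked passengers and make
# targeted rebooking offers to those least likely to be inconvenienced.
# All fields and weights are invented for the example.

def inconvenience_score(passenger):
    """Lower score = better candidate for a voluntary rebooking offer."""
    score = 0.0
    score += 2.0 if passenger["has_tight_connection"] else 0.0
    score += 1.5 * passenger["loyalty_tier"]          # protect frequent flyers
    score -= 1.0 if passenger["flexible_schedule"] else 0.0
    return score

passengers = [
    {"name": "A", "has_tight_connection": True,  "loyalty_tier": 3, "flexible_schedule": False},
    {"name": "B", "has_tight_connection": False, "loyalty_tier": 0, "flexible_schedule": True},
    {"name": "C", "has_tight_connection": False, "loyalty_tier": 1, "flexible_schedule": False},
]

# Send targeted rebooking texts to the two lowest-scoring passengers.
for p in sorted(passengers, key=inconvenience_score)[:2]:
    print("Send rebooking offer to", p["name"])
```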

I realize that not every problem can be solved by search (full disclosure: I work for a search company) and math, but a lot of the dumbest stuff and everyday annoyances could. All it takes is motivation. Unfortunately, so far, U.S.-based airlines seem to lack a strong economic reason to care about customer service.

Source: InfoWorld Big Data

Report Shows Need For Enterprise-Wide Plans To Combat Network Intrusions

The BakerHostetler 2017 Data Security Incident Response Report highlights the critical need for senior executives in all industries to understand and be ready to tackle the legal and business risks associated with cyberthreats and to have enterprisewide tactics in place to address intrusions before they happen.

The report provides a broad range of lessons to help executives identify risks, appraise response metrics and apply company-specific risk mitigation strategies based on an analysis of more than 450 cyber incidents that BakerHostetler’s Privacy and Data Protection team handled last year. The firm’s experience shows that companies should be focused on the basics, such as education and awareness programs, data inventory efforts, risk assessments, and threat information sharing.

Theodore Kobus, leader of the Privacy and Data Protection team, said, “Like other material risks companies face, cybersecurity readiness requires an enterprisewide approach tailored to the culture and industry of the company. There is no one-size-fits-all approach.”

Why incidents occur

Phishing/hacking/malware incidents accounted for the plurality of incidents for the second year in a row, at 43 percent – a 12 percentage point jump from a year earlier. The only category for which phishing/hacking/malware was not the most common incident cause was finance and insurance, where employee action/mistake was the top reason.

Ransomware attacks — where malware prevents or limits users from accessing their system until a ransom is paid — have increased by 500% from a year earlier, according to industry research. The BakerHostetler report details the typical ransomware scenario and the challenges that such incidents present. “Having a regularly scheduled system backup and a bitcoin wallet to pay a ransom will help with operational resiliency. Ransomware is not likely to go away, and incidents will probably increase over the short term, so companies should be prepared,” added Kobus.

Included in the report is a checklist of actions companies can take to minimize their risk against these attacks and to respond promptly and thoroughly should a cyber breach occur. Topping the list is increasing awareness of cybersecurity issues through training and education. In addition, the report lists six other core steps most businesses should take to prepare for an incident and mitigate risk.

“It’s no longer a question of which industries are most at risk. All industries are faced with the task of managing dynamic data security risks. Even companies in the retail, restaurant and hospitality industries, while highly regulated, had the fourth-highest rate of data security incidents,” Kobus added.

Key statistics from BakerHostetler’s 2017 Data Security Incident Response Report:

  • Incident causes: Phishing/hacking/malware 43%, employee action/mistake 32%, lost/stolen device or records 18%, other criminal acts 4%, internal theft 3%.
  • Industries affected: Health care 35%, finance and insurance 16%, education 14%, retail/restaurant/hospitality 13%, other 9%, business and professional services 8%, and government 5%.
  • Company size by revenue: Less than $100 million 39%, between $100 million and $500 million 33%, $500 million to $1 billion 17%, and greater than $1 billion 11%.
  • Most breaches discovered internally: 64% of breaches were internally discovered (and self-reported) compared with 36% that were externally discovered. In 2015, only 52% of incidents were self-reported.
  • Incident response timeline: On average 61 days from occurrence to discovery; eight days from discovery to containment; 40 days from engagement of forensics until investigation is complete; 41 days from discovery to notification.
  • Notifications and lawsuits filed: In 257 incidents where notification to individuals was given, only nine lawsuits were filed. This is partially explained by companies being prepared to better manage incidents.
  • No notification required: 44% of incidents covered by the report required no notification to individuals — similar to 2015 results.
  • Average size of notification: Incidents in the retail/restaurant/hospitality industry had the highest average notification at 297,000, followed by government at 134,000 and healthcare at 61,000. All other industries had less than 10,000 notifications per incident.
  • Forensic investigation costs: The average total cost of forensic investigations in 2016 was $62,290, with the highest costs in excess of $750,000.
  • Health care: The number of incidents rose last year, but the average size of the incidents decreased. Of the incidents analyzed by the BakerHostetler report, 35% were in healthcare, yet the average size of the incident notification was 61,000 — only the third highest of all industries surveyed.
  • Triggering state breach notification laws: Just over half of cyber incidents last year (55%) were subject to state breach notification statutes, down slightly from the year prior. Of the incidents where notification was required, the highest percentages were those involving Social Security numbers (43%) and health care information (37%). Only 12% of cases involved payment card data.
  • Active state attorneys general: AGs made inquiries after notifications were made in 29% of incidents, although overall regulatory investigations and inquiries were down to 11% in 2016, from 24% in 2015, and litigation was down to 3% last year compared with 6% the prior year.

Back to the basics

The first line of defense in protecting a company’s data and reputation during a cybersecurity incident is to outfit the organization with baseline procedures and processes to reduce the company’s risk profile. By focusing on key areas like employee awareness and education, companies can help prevent incidents while laying the groundwork for a successful response and reducing the likelihood events will be severe should they happen.

“Employees are often cited as a company’s greatest asset. In the cybersecurity arena, they can also be a liability. The report’s numbers reinforce the ongoing need to focus on effective employee awareness and training. They also show that a defense-in-depth approach is necessary, because even well-trained employees can make mistakes or be tricked,” said Kobus.

The full 2017 BakerHostetler Data Security Incident Response Report can be found here. The Privacy and Data Protection team will host a webinar on the findings on May 9 at noon ET. Kobus also will be participating in a morning panel titled, “Shakedown Street: Cyber Extortion, Data Breach and the Dirty Business of Bitcoin” on April 20 at the Global Privacy Summit in Washington, D.C.

Source: CloudStrategyMag

Unitas Global And Canonical Partner

Unitas Global and Canonical have announced they will provide a new fully-managed and hosted OpenStack private cloud to enterprise clients around the world.

This partnership, developed in response to growing enterprise demand to consume open source infrastructure, OpenStack and Kubernetes without the need to build in-house development or operations capabilities, will enable enterprise organizations to focus on strategic Digital Transformation initiatives rather than day-to-day infrastructure management.

This partnership, along with Unitas Global’s large ecosystem of system integrators and partners, will enable customers to choose an end-to-end infrastructure solution to design, build, and integrate custom private cloud infrastructure based on OpenStack. It can then be delivered as a fully managed solution anywhere in the world, allowing organizations to easily consume the private cloud resources they need without building and operating the cloud themselves.

Private cloud solutions provide predictable performance, security and the ability to customize the underlying infrastructure. This new joint offering combines Canonical’s powerful automated deployment software and infrastructure operations with Unitas Global’s infrastructure and guest-level managed services in data centers globally.

“Canonical and Unitas Global combine automated, customizable OpenStack software alongside fully-managed private cloud infrastructure, providing enterprise clients with a simplified approach to cloud integration throughout their business environment,” explains Grant Kirkwood, CTO and founder, Unitas Global. “We are very excited to partner with Canonical to bring this much-needed solution to market, enabling enhanced growth and success for our clients around the world.”

“By partnering with Unitas Global, we are able to deliver a flexible and affordable solution for enterprise cloud integration utilizing cutting-edge software built on fully-managed infrastructure,” comments Arturo Suarez, BootStack product manager, Canonical. “At Canonical, it is our mission to drive technological innovation throughout the enterprise marketplace by making flexible, open source software available for simplified consumption wherever needed, and we are looking forward to working side-by-side with Unitas Global to deliver upon this promise.”

Source: CloudStrategyMag