Review: Domo is good BI, not great BI

In the last couple of years I have reviewed four of the leading business intelligence (BI) products: Tableau, Qlik Sense, Microsoft Power BI, and Amazon QuickSight. In general terms, Tableau sets the bar for ease of use, and Power BI sets the bar for low price.

Domo is an online BI tool that combines a large assortment of data connectors, an ETL system, a unified data store, a large selection of visualizations, integrated social media, and reporting. Domo claims to be more than a BI tool because its social media tool can lead to “actionable insights,” but in practice every BI tool either leads to actions that benefit the business or winds up tossed onto the rubbish heap.

Domo is a very good and capable BI system. It stands out with support for lots of data sources and lots of chart types, and the integrated social media feature is nice (if overblown). However, Domo is harder to learn and use than Tableau, Qlik Sense, and Power BI, and at $2,000 per user per year it is several times more expensive.

Depending on your needs, Tableau, Qlik Sense, or Power BI is highly likely to be a better choice than Domo.  

Source: InfoWorld Big Data

BMJ Relies On Datapipe For Chinese Expansion With Alibaba Cloud

Datapipe has announced that BMJ, a global health care knowledge provider, has used Datapipe’s expertise to launch its hybrid multi-cloud environment and enter the Chinese market using Alibaba Cloud. The announcement follows last year’s partnership in which BMJ implemented a DevOps culture and virtualized its IT infrastructure with Datapipe’s private cloud.

In 2016, Datapipe announced it was named a global managed service provider (MSP) partner of Alibaba Cloud. Later that year, it was named Asia Pacific Managed Cloud Company of the Year by Frost & Sullivan. BMJ opened a local Beijing office in 2015 and engaged Datapipe’s services at the outset because of Datapipe’s on-the-ground support in China and knowledge of the local Chinese market.

“We see the People’s Republic of China as a key part of our growing international network. Therefore, we needed the technical expertise to be able to expand our services into China, and a partner to help us navigate the complex frameworks required to build services there. Datapipe, with its on-the-ground support in China and knowledge of the market has delivered local public-cloud infrastructure utilizing Alibaba Cloud,” said Sharon Cooper, chief digital officer, BMJ.

Last year, BMJ used Datapipe’s expertise to move to a new, agile way of working. BMJ fully virtualized its infrastructure and automated its release cycle using Datapipe’s private cloud environment. Now, it has implemented a hybrid multi-cloud solution using both AWS and Alibaba Cloud, fully realizing the strategy it started working towards two years ago.

 “It is exciting to be working with a company that has both a long, distinguished history and is also forward-thinking in embracing the cloud,” said Tony Connor, head of EMEA Marketing, Datapipe. “Datapipe partnered with Alibaba Cloud last year in order to better support our clients’ global growth both in and out of China. We are delighted to continue to deliver for BMJ, building upon our private cloud foundations, and taking them to China with public cloud infrastructure through our relationship with AliCloud.”

“We have now fully realized the strategy that we first mapped out two years ago, when we started our cloud journey. In the first year, we were able to fully virtualise our infrastructure using Datapipe’s private cloud, and in the process, move to a new, agile way of working. In this second year, we have embraced public cloud and taken our services over to China,” said Alex Hooper, head of operations, BMJ.

“Previously, we could only offer stand-alone software products in China, which are delivered on physical media and require quarterly updates to be installed by the end-user. With Datapipe’s help, we now have the capability to offer BMJ’s cloud-based services to Chinese businesses,” Hooper added.

This has been made possible by utilizing data centers located in China, using Alibaba Cloud, which links to BMJ’s core infrastructure and gives BMJ all the benefits of public cloud infrastructure, but located within China to satisfy the requirements of the Chinese authorities.

“With Datapipe’s help, it was surprisingly easy to run our services in China and link them to our core infrastructure here in the UK; Datapipe has done an exemplary job,” Hooper said.



BMJ has seen extraordinary change in its time. Within its recent history, it has transitioned from traditional print media to becoming a digital content provider. With Datapipe’s help it now has the infrastructure and culture in place to cement its position as a premier global digital publisher and educator.

Source: CloudStrategyMag

ONNX makes machine learning models portable, shareable

Microsoft and Facebook have announced a joint project to make it easier for data analysts to exchange trained models between different machine learning frameworks.

The Open Neural Network Exchange (ONNX) format is meant to provide a common way to represent the data used by neural networks. Most frameworks have their own specific model format, so using a model built in another framework requires a conversion tool.

ONNX allows models to be swapped freely between frameworks without the conversion process. A model trained on one framework can be used for inference by another framework.

Microsoft claims the ONNX format provides advantages above and beyond not having to convert between model formats. For instance, it allows developers to choose frameworks that reflect the job and workflow at hand, since each framework tends to be optimized for different use cases: “fast training, supporting flexible network architectures, inferencing on mobile devices, etc.”

Facebook notes that a few key frameworks are already on board to start supporting ONNX. Caffe2, PyTorch (both Facebook’s projects), and Cognitive Toolkit (Microsoft’s project) will provide support sometime in September. This, according to Facebook, “will allow models trained in one of these frameworks to be exported to another for inference.”
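
To make the workflow concrete, here is a minimal, hypothetical sketch of exporting a small PyTorch model to an ONNX file and sanity-checking it with the onnx Python package. The model, input shape, and file name are invented for illustration; consult each framework’s ONNX documentation for the supported paths.

```python
# Hypothetical example: export a trained PyTorch model to ONNX, then
# load and validate the exported graph with the onnx package.
import torch
import torch.nn as nn
import onnx

# Stand-in for a model you have already trained.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

# Export works by tracing a forward pass with a representative dummy input.
dummy_input = torch.randn(1, 784)
torch.onnx.export(model, dummy_input, "classifier.onnx")

# Any ONNX-aware framework or tool can now read the same file.
exported = onnx.load("classifier.onnx")
onnx.checker.check_model(exported)              # validates the graph structure
print(onnx.helper.printable_graph(exported.graph))
```

Because the export traces a single forward pass, the resulting graph is static, a caveat that surfaces in the limitations noted below.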

The first wave of ONNX-supporting releases won’t cover everything out of the gate. In PyTorch’s case, Facebook notes that “some of the more advanced programs in PyTorch such as those with dynamic flow control” won’t benefit fully from ONNX support yet.

It’s not immediately clear how ONNX model sizes shape up against those already in common use. Apple’s Core ML format, for instance, was designed by Apple so that small but accurate models could be deployed to and served from end-user devices like the iPhone. But Core ML is proprietary. One of ONNX’s long-term goals is to make it easier to deliver models for inference to many kinds of targets.

Source: InfoWorld Big Data

ZeroStack Joins Remotely-Managed Private Cloud Marketplace

ZeroStack, Inc. has announced it has joined the OpenStack Marketplace for Remotely-Managed Private Clouds, a collection of leading private cloud solutions for enterprises that want the self-service and agility of on-premises cloud without the pain of configuring and managing complex cloud infrastructure. ZeroStack achieves this using Z-Brain — a SaaS-driven operations portal that leverages machine learning combined with on-premises infrastructure to “bring the cloud home.” Enterprise application developers can now rapidly build, test, and deploy on-premises, production-ready, distributed web applications, mobile applications, containerized applications, and big data analytics workloads.

“Remote IT management has been a rapidly-growing trend over the past two years, beginning with cloud-managed Wi-Fi systems and moving into other IT infrastructure areas like security and SD-WAN,” said Zeus Kerravala, principal analyst at ZK Research. “Remotely-managed clouds like the ZeroStack Intelligent Cloud Platform deliver the security, performance, and control of private cloud combined with the ease-of-use of public cloud, and they use a SaaS portal to automate provisioning, configuration, monitoring, and management.”

ZeroStack’s implementation of OpenStack leverages machine learning technology in the Z-Brain SaaS portal to eliminate capacity planning, VM sizing, software maintenance, patching, and performance monitoring so IT administrators and software development teams can focus on delivering cloud-native applications.

“ZeroStack’s unique SaaS-driven operational model securely automates remote management and delivers continuous innovation using AI and machine learning technologies, dramatically reducing dependence on human experts,” said Kamesh Pemmaraju, Vice President of Product Management at ZeroStack. “We invite enterprises to compare our self-driving cloud with any other remotely-managed private cloud solution on the market.”

Source: CloudStrategyMag

American Axess Selects Epsilon

Epsilon has announced a partnership with American Axess, Inc. This partnership provides American Axess customers with on-demand access to a suite of connectivity solutions via Epsilon’s Infiny platform.

In order to satisfy multinational corporations’ growing demand for direct connects into top Cloud Service Providers (CSPs), including Amazon Web Services (AWS), Google Cloud Platform and Microsoft Azure, American Axess has connected to Infiny in Miami, providing access to Epsilon’s North American footprint and interconnection options. By backhauling services to their network in Cali, Colombia, American Axess will facilitate superior options for interconnectivity between North and South America for its customers.

“Epsilon is a proven leader in the communication services industry, and we are delighted at the opportunity to utilize their platform to provide our customers with exceptional interconnectivity services,” says Lou W. Dabbe, owner and CEO, American Axess. “Infiny is a perfect fit for our business due to its global reach and ability to enhance the projection of our service across the world and to public cloud locations. Their team understands what it takes to be successful in the cloud, and Infiny is a platform that supports our customers’ growing needs.”

Infiny delivers a comprehensive set of enterprise, voice, local access, cloud and global connectivity services from a single, self-service platform. Utilizing this new and innovative technology, partners have the ability to procure and manage services via the Infiny web-based portal, APIs, and Android and iOS apps.

“Since launching Infiny in March, we have been dedicated to sharing this exceptional connectivity platform with international customers to enhance their experience when procuring and managing critical connectivity services,” shares Jerzy Szlosarek, CEO, Epsilon. “Our alliance with American Axess will build upon this mission, further validating our ability to deliver the highest quality connectivity services available to Latin America and the Caribbean.”

In phase two, Epsilon will explore building upon this partnership by potentially bringing its Infiny platform to Colombia over the American Axess international network.

Source: CloudStrategyMag

Qligent Named IABM Design & Innovation Awards 2017 Finalist

Qligent has been named as a finalist for the IABM Design & Innovation Awards 2017. The winners will be announced at a special IBC 2017 ceremony taking place on Saturday, September 16.

The IABM Design & Innovation Awards 2017 are spread across 10 categories with 40 finalists. Qligent was shortlisted in the Test, Quality, Control & Monitoring category for Match, the company’s new automated programmatic error detection software for its Vision cloud-based monitoring and analysis platform. Qligent will exhibit Match alongside other products at Stand 8.E47 throughout the IBC show, taking place September 15-19 at the RAI Amsterdam.

Qligent Match provides a real-time, automated, software-based solution for compressed and uncompressed signals on any delivery platform (OTT, IPTV, satellite, cable, terrestrial). The software spots programmatic errors and anomalies introduced by the repeated multiplexing, overlaying, embedding, inserting and other real-time processes as signals move downstream from the origination to the end-user.

Lightweight and affordable, Match arms users with a toolset to identify today’s most common media distribution errors, including the airing of incorrect programs, local ad splicing mistakes, and foreign-language audio assigned to the wrong track, among other errors. This approach efficiently addresses the growing problem of TV operations becoming more error-prone, driven by the rapid expansion of program streams along with the last-mile need for localization and personalization for the end user.

“With so much transition taking place today in TV operations and content delivery, there is a growing need for an automated verification and reporting solution that targets the presentation of content as it is being delivered,” said Ted Korte, COO, Qligent. “Unlike other QoS, QoE, and Compliance monitoring solutions, Match is unique in its ability to verify the dynamic changes of the program stream across large geographic deployments, affordably, and in real-time. We are thankful and excited to be recognized by IABM for our product development and problem-solving efforts associated with Match.”

Source: CloudStrategyMag

EdgeConneX® And Cedexis Publish New White Paper

EdgeConneX® and Cedexis have announced the availability of a new white paper titled, “Cloud, Content, Connectivity and the Evolving Internet Edge.” The study uses Cedexis’ Real User Measurements (RUM) to reveal why the superior performance of Edge content and connectivity is driving more and more industry deployments.

As noted in the study, CDNs have taken the first step towards the Network Edge, realizing up to a 47% better response time compared with regions without Edge deployments. Cloud providers, meanwhile, are lagging in edge deployments, and this is evident in higher latency — up to twice as high in regions without cloud-enabled data centers.

Cloud hyperscalers are bridging this gap using direct connections, including both physical and Software Defined Networks (SDN). The white paper compares Cedexis’ own panel of Internet measurements to services localized with EdgeConneX and Megaport, the world’s leading Network as a Service (NaaS) provider. Megaport’s global platform uses SDN to provide the most secure, seamless and on-demand way for enterprises, networks and services to interconnect. The response time improves 50% to 85% for users who bypass the public Internet, opting for direct cloud access via Megaport from an EdgeConneX Edge Data Center® (EDC).

“Megaport’s Software Defined Network is designed to enable superior performance for companies accessing public cloud services,” states Nicole Cooper, Executive Vice President, Americas, Megaport. “As the results in this whitepaper demonstrate, users at the network Edge can benefit from lower latency by using our SDN as part of their cloud deployments.”

Additionally, these changes in Internet architecture will result in a greater need for third-party load balancing services. As companies manage hybrid clouds, multiple CDNs and connectivity options, Global Server Load Balancing, offered by Cedexis, will be crucial to ensuring efficient content and application delivery.

“Cedexis is pleased to partner with EdgeConneX to further understand the impact of proximity and direct connectivity,” notes Simon Jones, head of marketing and evangelist, Cedexis. “By combining our data on billions of Real User Measurements with information on EdgeConneX local deployments, we are better able to understand the evolving Internet landscape. Customers needing to navigate the growth in hybrid cloud and content will be looking for solutions that manage multiple local and regional service providers.”

“The robust edge ecosystem within each of our Edge Data Centers is expanding each day as the Internet must support the nonstop content, cloud and application demand,” states Clint Heiden, chief commercial officer, EdgeConneX. “Along with Cedexis, we are pleased to validate and showcase how we are bringing the Internet community together to improve peering and connectivity globally.”

EdgeConneX specializes in providing purpose-built, power dense Edge Data Center solutions that enable the fastest delivery of data to end-users. EdgeConneX has created a new Edge of the Internet by designing and deploying facilities that are strategically positioned nearest to network provider aggregation points, ensuring the lowest latency data delivery with improved security and quality of service.

Source: CloudStrategyMag

Army Re-Ups With IBM For $135 Million In Cloud Services

IBM has announced that the U.S. Army’s Logistics Support Activity (LOGSA) awarded IBM a contract to continue providing cloud services, software development and cognitive computing, constituting the technical infrastructure for one of the U.S. federal government’s biggest logistics systems.

The 33-month, $135 million contract represents a successful re-compete of work that LOGSA signed with IBM in September 2012. Under that managed services agreement, the Army pays only for cloud services that it actually consumes. The efficiencies created by this arrangement have enabled the Army to avoid about $15 million per year in operational costs — a significant yield for the Army and taxpayers.

In addition to continuing to provide managed services as part of this new contract, IBM also will help the Army focus on:

  • Improving cybersecurity by applying the risk management framework (RMF) security controls to LOGSA’s IT enterprise. RMF is the unified information security framework for the entire U.S. federal government; it replaces legacy IT security standards
  • Incorporating cognitive computing that enhances readiness by anticipating needs
  • Speeding application modernization

As part of this new contract, IBM also will help the Army predict vehicle maintenance failures from more than 5 billion data points of on-board sensors that will be stored within this environment. In addition, the Army is adopting Watson IoT services and a new Watson IoT Equipment Advisor solution that analyzes unstructured, structured and sensor data directly from military assets.

The solution, part of the IBM Watson IoT for Manufacturing and Industrial Products product suite, includes IBM Predictive Maintenance and Quality System, an integrated solution that monitors, analyzes, and reports on information gathered from devices and equipment and recommends maintenance procedures. It also includes Watson Explorer, a cognitive exploration and content analysis platform that enables users to securely capture and analyze both structured and unstructured data. With the platform, the Army will look to extract enhanced insights from its vehicle data and recommend optimal repair methods and procedures. By combining tactical vehicle sensor and maintenance data, the Army better understands the health of its vehicles and can take proactive repair measures.

IBM recently completed a proof of concept that demonstrated the effectiveness of Watson cognitive computing for 10% of the Army’s Stryker vehicle fleet. Under this new contract, LOGSA will increase its ability to provide that predictive and prescriptive maintenance information to the Army.

LOGSA provides on-time integrated logistics support of worldwide Army operations, impacting every soldier, every day. As the Army’s authoritative source for logistics data, LOGSA provides logistics intelligence, life cycle support, technical advice, and assistance to the current and future force; integrates logistics information (force structure, readiness, and other logistics data) for worldwide equipment readiness and distribution analysis; and provides asset visibility for timely and proactive decision-making.

“LOGSA and the Army can now take advantage of the technological innovation that cloud offers — especially cognitive computing and analytics — so that the Army can continue to reap cost savings, further streamline its operations and deliver services to its clients,” said Lisa Mascolo, managing director, U.S. Public Service, IBM’s Global Business Services. “We’re pleased to continue our work with the Army to demonstrate the viability of cloud for mission applications and the promised benefits of efficiency and taxpayer savings.”

“Over the past four and a half years, LOGSA has benefitted from the business and technical advantages of the cloud,” said LOGSA Commander Col. John D. Kuenzli. “Now, we’re moving beyond infrastructure as-a-service and embracing both platform and software as-a-service, adopting commercial cloud capabilities to further enhance Army readiness.”

“When Gen. Perna took command of the Army Materiel Command, he said we cannot conduct tomorrow’s operations using yesterday’s processes and procedures,” Kuenzli added. “He has since emphasized understanding the leading indicators to readiness, and getting in front of the Army’s logistics challenges. The services we have received from IBM and the potential of IBM Watson IoT truly enable LOGSA to deliver cutting-edge business intelligence and tools to give the Army unprecedented logistics support at efficient and affordable means.”

In addition to private cloud deployments, IBM manages five dedicated federal cloud data centers, including a cloud environment accredited up to impact* level 5 (IL-5). These were built to meet Federal Risk and Authorization Management Program (FedRAMP) and Federal Information Security Management Act (FISMA) requirements for government workloads.

*The Defense Information System Agency’s (DISA’s) information impact levels consider the potential impact of information being compromised. IL-5 gives the cloud provider the authority to manage controlled, unclassified information. For IBM’s work with the Army’s private cloud at Redstone Arsenal in Huntsville, Ala., the Army expects the company to achieve DISA’s IL-6 – the agency’s highest level – by early 2018, which would certify IBM to work with classified information up to “secret.” Presently, IBM is the only company authorized at IL-5 to run IaaS solutions on government premises.

Source: CloudStrategyMag

What is big data? Everything you need to know

Every day human beings eat, sleep, work, play, and produce data—lots and lots of data. According to IBM, the human race generates 2.5 quintillion (2.5 billion billion) bytes of data every day. That’s the equivalent of a stack of DVDs reaching to the moon and back, and encompasses everything from the texts we send and photos we upload to industrial sensor metrics and machine-to-machine communications.

That’s a big reason why “big data” has become such a common catch phrase. Simply put, when people talk about big data, they mean the ability to take large portions of this data, analyze it, and turn it into something useful.

Exactly what is big data?

But big data is much more than that. It’s about:

  • taking vast quantities of data, often from multiple sources
  • and not just lots of data but different kinds of data—often multiple kinds of data at the same time, as well as data that changes over time—data that doesn’t need to be transformed into a specific format or made consistent first
  • and analyzing the data in a way that allows for ongoing analysis of the same data pools for different purposes
  • and doing all of that quickly, even in real time.

In the early days, the industry came up with an acronym to describe three of these four facets: VVV, for volume (the vast quantities), variety (the different kinds of data and the fact that data changes over time), and velocity (speed).

Big data vs. the data warehouse

What the VVV acronym missed was the key notion that data did not need to be permanently changed (transformed) to be analyzed. That nondestructive analysis meant that organizations could both analyze the same pools of data for different purposes and could analyze data from sources gathered for different purposes.

By contrast, the data warehouse was purpose-built to analyze specific data for specific purposes, and the data was structured and converted to specific formats, with the original data essentially destroyed in the process, for that specific purpose—and no other—in what was called extract, transform, and load (ETL). Data warehousing’s ETL approach limited analysis to specific data for specific analyses. That was fine when all your data existed in your transaction systems, but not so much in today’s internet-connected world with data from everywhere.
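
To make the contrast concrete, here is a small, invented Python sketch of the “analyze it as it comes” idea: one untouched pool of raw events is read twice for two unrelated questions, with each analysis applying its own interpretation at read time instead of transforming the data up front.

```python
# Illustrative only: the same raw event pool is analyzed twice, with the
# "schema" applied at read time (schema-on-read) rather than during an ETL step.
import json
from collections import Counter

raw_events = [  # imagine these arriving from logs, apps, and sensors
    '{"type": "click", "user": "u1", "page": "/pricing"}',
    '{"type": "temp", "sensor": "s7", "celsius": 21.4}',
    '{"type": "click", "user": "u2", "page": "/docs"}',
]

parsed = [json.loads(e) for e in raw_events]   # the originals stay untouched

# Analysis 1: which pages get clicked?
page_counts = Counter(e["page"] for e in parsed if e["type"] == "click")

# Analysis 2: what is the average sensor temperature?
temps = [e["celsius"] for e in parsed if e["type"] == "temp"]
avg_temp = sum(temps) / len(temps) if temps else None

print(page_counts, avg_temp)
# An ETL pipeline would instead have forced everything into one fixed schema
# up front, discarding fields that didn't fit the chosen purpose.
```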

However, don’t think for a moment that big data makes the data warehouse obsolete. Big data systems let you work with unstructured data largely as it comes, but the query results you get are nowhere near as sophisticated as those from a data warehouse. After all, the data warehouse is designed to get deep into data, and it can do that precisely because it has transformed all the data into a consistent format that lets you do things like build cubes for deep drilldown. Data warehousing vendors have spent many years optimizing their query engines to answer the queries typical of a business environment.

Big data lets you analyze much more data from more sources, but at lower resolution. Thus, we will be living with both traditional data warehouses and the new style for some time to come.

The technology breakthroughs behind big data

Accomplishing the four required facets of big data—volume, variety, nondestructive use, and speed—took several technology breakthroughs, including the development of a distributed file system (Hadoop), a method to make sense of disparate data on the fly (first Google’s MapReduce, and more recently Apache Spark), and a cloud/internet infrastructure for accessing and moving the data as needed.

Until about a dozen years ago, it wasn’t possible to manipulate more than a relatively small amount of data at any one time. (Well, we all thought our data warehouses were massive at the time. The context has shifted dramatically since then as the internet produced and connected data everywhere.) Limitations on the amount and location of data storage, computing power, and the ability to handle disparate data formats from multiple sources made the task all but impossible.

Then, sometime around 2003, researchers at Google developed MapReduce. This programming technique simplifies dealing with large data sets by first mapping the data to a series of key/value pairs, then performing calculations on similar keys to reduce them to a single value, processing each chunk of data in parallel on hundreds or thousands of low-cost machines. This massive parallelism allowed Google to generate faster search results from increasingly larger volumes of data.
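
As a rough, single-machine illustration of the technique (not Google’s implementation), the Python sketch below maps a handful of documents to key/value pairs, groups them by key, and reduces each group to a word count; a real MapReduce system would run the map and reduce steps in parallel across many machines.

```python
# Toy, single-process sketch of the MapReduce idea: map records to
# key/value pairs, group by key, then reduce each group to one value.
from collections import defaultdict
from functools import reduce

documents = ["big data is big", "data about data"]

# Map: each document becomes a list of (word, 1) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group values by key (a real framework does this for you).
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: collapse each key's values to a single count.
counts = {key: reduce(lambda a, b: a + b, values)
          for key, values in groups.items()}
print(counts)  # {'big': 2, 'data': 3, 'is': 1, 'about': 1}
```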

Google’s work inspired the second breakthrough: Hadoop, which consists of two key services:

  • reliable data storage using the Hadoop Distributed File System (HDFS)
  • high-performance parallel data processing using a technique called MapReduce.

Hadoop runs on a collection of commodity, shared-nothing servers. You can add or remove servers in a Hadoop cluster at will; the system detects and compensates for hardware or system problems on any server. Hadoop, in other words, is self-healing. It can deliver data—and run large-scale, high-performance processing jobs—in spite of system changes or failures.

Although Hadoop provides a platform for data storage and parallel processing, the real value comes from add-ons, cross-integration, and custom implementations of the technology. To that end, Hadoop offers subprojects, which add functionality and new capabilities to the platform:

  • Hadoop Common: The common utilities that support the other Hadoop subprojects.
  • Chukwa: A data collection system for managing large distributed systems.
  • HBase: A scalable, distributed database that supports structured data storage for large tables.
  • HDFS: A distributed file system that provides high-throughput access to application data.
  • Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
  • MapReduce: A software framework for distributed processing of large data sets on compute clusters.
  • Pig: A high-level data-flow language and execution framework for parallel computation.
  • ZooKeeper: A high-performance coordination service for distributed applications.

Most implementations of a Hadoop platform include at least some of these subprojects, as they are often necessary for exploiting big data. For example, most organizations choose to use HDFS as the primary distributed file system and HBase as a database, which can store billions of rows of data. And the use of MapReduce or the more recent Spark is almost a given since they bring speed and agility to the Hadoop platform.
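
As a small, hypothetical illustration of that pairing, the sketch below writes and reads one row in HBase using the happybase Python client; it assumes an existing HBase cluster with its Thrift gateway enabled and a table whose column family is named as shown, none of which comes from the article.

```python
# Assumes an HBase cluster reachable through its Thrift gateway and the
# happybase client (pip install happybase). Table and column names are invented.
import happybase

connection = happybase.Connection("hbase-thrift-host")
table = connection.table("sensor_readings")

# HBase stores sparse rows keyed by arbitrary byte strings; values are bytes too.
table.put(b"s7-2017-09-07T12:00", {b"reading:celsius": b"21.4"})

row = table.row(b"s7-2017-09-07T12:00")
print(row[b"reading:celsius"])  # b'21.4'

connection.close()
```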

With MapReduce, developers can create programs that process massive amounts of unstructured data in parallel across a distributed cluster of processors or stand-alone computers. The MapReduce framework is broken down into two functional areas:

  • Map, a function that parcels out work to different nodes in the distributed cluster.
  • Reduce, a function that collates the work and resolves the results into a single value.
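
Hadoop Streaming makes those two roles tangible: any executable that reads stdin and writes tab-separated key/value lines to stdout can serve as the mapper or reducer. The pair of scripts below is a hypothetical example that finds the maximum reading per sensor; the comma-separated input format is an assumption.

```python
# mapper.py — emits "sensor_id<TAB>reading" for each input line (format assumed).
import sys

for line in sys.stdin:
    parts = line.strip().split(",")
    if len(parts) == 2:          # e.g. "s7,21.4"
        sensor, reading = parts
        print(f"{sensor}\t{reading}")
```

```python
# reducer.py — Hadoop delivers lines sorted by key; track the max per sensor.
import sys

current_key, current_max = None, None
for line in sys.stdin:
    key, value = line.strip().split("\t")
    value = float(value)
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{current_max}")
        current_key, current_max = key, value
    else:
        current_max = max(current_max, value)
if current_key is not None:
    print(f"{current_key}\t{current_max}")
```

Hadoop wires the two together, handling the distribution of input splits and the sort that delivers each key’s values to the reducer in order.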

One of MapReduce’s primary advantages is that it is fault-tolerant, which it accomplishes by monitoring each node in the cluster; each node is expected to report back periodically with completed work and status updates. If a node remains silent for longer than the expected interval, a master node makes note and reassigns the work to other nodes.

Apache Hadoop, the open-source framework that uses MapReduce at its core, was developed two years later. Originally built to index the now-obscure Nutch search engine, Hadoop is now used in virtually every major industry for a wide range of big data jobs. Thanks to the Hadoop Distributed File System and YARN (Yet Another Resource Negotiator), the software lets users treat massive data sets spread across thousands of devices as if they were all on one enormous machine.

In 2009, University of California at Berkeley researchers developed Apache Spark as an alternative to MapReduce. Because Spark performs calculations in parallel using in-memory storage, it can be up to 100 times faster than MapReduce. Spark can work as a standalone framework or inside Hadoop.
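
For comparison with the earlier MapReduce sketch, here is a minimal PySpark word count, assuming a local Spark installation; the intermediate results stay in memory across the transformations, which is where the speedup comes from.

```python
# Minimal local PySpark example (assumes the pyspark package is installed).
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("wordcount").getOrCreate()
sc = spark.sparkContext

lines = sc.parallelize(["big data is big", "data about data"])
counts = (lines.flatMap(lambda line: line.split())   # map each line to words
               .map(lambda word: (word, 1))          # key/value pairs
               .reduceByKey(lambda a, b: a + b))     # reduce per key, in memory

print(counts.collect())  # e.g. [('big', 2), ('data', 3), ('is', 1), ('about', 1)]
spark.stop()
```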

Even with Hadoop, you still need a way to store and access the data. That’s typically done via a NoSQL database such as MongoDB, CouchDB, or Cassandra, which specialize in handling unstructured or semi-structured data distributed across multiple machines. Unlike in data warehousing, where massive amounts and types of data are converged into a unified format and stored in a single data store, these tools don’t change the underlying nature or location of the data—emails are still emails, sensor data is still sensor data—and can be stored virtually anywhere.
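
As a hedged illustration of that “store it as it comes” idea, the pymongo snippet below inserts documents with entirely different shapes into one collection and queries them without defining a schema first; it assumes a MongoDB server running locally and is not drawn from the article.

```python
# Assumes a local MongoDB instance and the pymongo driver (pip install pymongo).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["bigdata_demo"]["events"]

# Documents keep their native shape: an email stays an email,
# a sensor reading stays a sensor reading.
events.insert_many([
    {"kind": "email", "from": "a@example.com", "subject": "Q2 numbers"},
    {"kind": "sensor", "sensor_id": "s7", "celsius": 21.4},
])

# Query by whatever fields a given analysis cares about.
for doc in events.find({"kind": "sensor"}):
    print(doc["sensor_id"], doc["celsius"])
```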

Still, having massive amounts of data stored in a NoSQL database across clusters of machines isn’t much good until you do something with it. That’s where big data analytics comes in. Tools like Tableau, Splunk, and Jasper BI let you parse that data to identify patterns, extract meaning, and reveal new insights. What you do from there will vary depending on your needs.

InfoWorld Executive Editor Galen Gruman, InfoWorld Contributing Editor Steve Nunez, and freelance writers Frank Ohlhorst and Dan Tynan contributed to this story.

Source: InfoWorld Big Data

Microsoft Leads In SaaS Market

New Q2 data from Synergy Research Group shows that the enterprise SaaS market grew 31% year on year to reach almost $15 billion in quarterly revenues, with collaboration being the highest growth segment. Microsoft remains the clear leader in overall enterprise SaaS revenues, having overtaken long-time market leader Salesforce a year ago. Microsoft was already rapidly growing its SaaS revenues, but in Q2 its acquisition of LinkedIn gave its SaaS business a further boost. In terms of overall SaaS market rankings, Microsoft and Salesforce are followed by Adobe, Oracle, and SAP, with other leading companies including ADP, IBM, Workday, Intuit, Cisco, Google, and ServiceNow. It’s notable that the market remains quite fragmented, with different vendors leading each of the main market segments. Among the major SaaS vendors those with the highest overall growth rates are Oracle, Microsoft, and Google.

In many ways the enterprise SaaS market is now mature. However, spending on SaaS remains relatively small compared to on-premise software, meaning that SaaS growth will remain buoyant for many years. Synergy forecasts that the SaaS market will double in size over the next three years, with strong growth across all segments and all geographic regions.

“IaaS and PaaS markets tend to get more attention and are indeed growing more rapidly, but the SaaS market is substantially bigger and will remain so for many years,” said John Dinsdale, a chief analyst and research director at Synergy Research Group. “Traditional enterprise software vendors like Microsoft, SAP, Oracle and IBM still have a huge base of on-premise software customers and they are all now pushing to aggressively convert those customers to a SaaS-based consumption model. At the same time, born-in-the-cloud software vendors like Workday, Zendesk and ServiceNow continue to light a fire under the market and help to propel enterprise spending on SaaS.”

Source: CloudStrategyMag