ONNX makes machine learning models portable, shareable

Microsoft and Facebook have announced a joint project to make it easier for data analysts to exchange trained models between different machine learning frameworks.

The Open Neural Network Exchange (ONNX) format is meant to provide a common way to represent the data used by neural networks. Most frameworks have their own model format, so a model built in one framework can be used in another only by way of a conversion tool.

ONNX allows models to be swapped freely between frameworks without the conversion process. A model trained on one framework can be used for inference by another framework.

Microsoft claims the ONNX format provides advantages above and beyond not having to convert between model formats. For instance, it allows developers to choose frameworks that reflect the job and workflow at hand, since each framework tends to be optimized for different use cases: “fast training, supporting flexible network architectures, inferencing on mobile devices, etc.”

Facebook notes that a few key frameworks are already on board to start supporting ONNX. Caffe2, PyTorch (both Facebook’s projects), and Cognitive Toolkit (Microsoft’s project) will provide support sometime in September. This, according to Facebook, “will allow models trained in one of these frameworks to be exported to another for inference.”

The first wave of ONNX-supporting releases won’t cover everything out of the gate. In PyTorch’s case, Facebook notes that “some of the more advanced programs in PyTorch such as those with dynamic flow control” won’t benefit fully from ONNX support yet.
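To make the export path concrete, here is a minimal sketch of saving a small PyTorch model in the ONNX format via the torch.onnx module. The tiny network, input shape, and file name are illustrative only; in practice you would export a trained model of your own.

```python
# Minimal sketch: exporting a small PyTorch model to ONNX.
# The model, input shape, and output file name are illustrative.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)  # toy fully connected layer

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet()
model.eval()

# The exporter traces the model with a dummy input of the expected shape
# and writes a framework-neutral ONNX graph to disk.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "tinynet.onnx")
```

The exported tinynet.onnx file is what an ONNX-supporting framework on the other side, such as Caffe2 or Cognitive Toolkit, would then load for inference.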

It’s not immediately clear how ONNX model sizes shape up against those already in common use. Apple’s Core ML format, for instance, was designed so that small but accurate models could be deployed to and served from end-user devices like the iPhone. But Core ML is proprietary. One of ONNX’s long-term goals is to make it easier to deliver models for inference to many kinds of targets.

Source: InfoWorld Big Data

ZeroStack Joins Remotely-Managed Private Cloud Marketplace

ZeroStack, Inc. has announced it has joined the OpenStack Marketplace for Remotely-Managed Private Clouds, a collection of leading private cloud solutions for enterprises that want the self-service and agility of on-premises cloud without the pain of configuring and managing complex cloud infrastructure. ZeroStack achieves this using Z-Brain — a SaaS-driven operations portal that leverages machine learning combined with on-premises infrastructure to “bring the cloud home.” Enterprise application developers can now rapidly build, test, and deploy on-premises, production-ready, distributed web applications, mobile applications, containerized applications, and big data analytics workloads.

“Remote IT management has been a rapidly-growing trend over the past two years, beginning with cloud-managed Wi-Fi systems and moving into other IT infrastructure areas like security and SD-WAN,” said Zeus Kerravala, principal analyst at ZK Research. “Remotely-managed clouds like the ZeroStack Intelligent Cloud Platform deliver the security, performance, and control of private cloud combined with the ease-of-use of public cloud, and they use a SaaS portal to automate provisioning, configuration, monitoring, and management.”

ZeroStack’s implementation of OpenStack leverages machine learning technology in the Z-Brain SaaS portal to eliminate capacity planning, VM sizing, software maintenance, patching, and performance monitoring so IT administrators and software development teams can focus on delivering cloud-native applications.

“ZeroStack’s unique SaaS-driven operational model securely automates remote management and delivers continuous innovation using AI and machine learning technologies, dramatically reducing dependence on human experts,” said Kamesh Pemmaraju, Vice President of Product Management at ZeroStack. “We invite enterprises to compare our self-driving cloud with any other remotely-managed private cloud solution on the market.”

Source: CloudStrategyMag

American Axess Selects Epsilon

Epsilon has announced a partnership with American Axess, Inc. This partnership provides American Axess customers with on-demand access to a suite of connectivity solutions via Epsilon’s Infiny platform.

In order to satisfy multinational corporations’ growing demand for direct connects into top Cloud Service Providers (CSPs), including Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure, American Axess has connected to Infiny in Miami, providing access to Epsilon’s North American footprint and interconnection options. By backhauling services to its network in Cali, Colombia, American Axess will provide its customers with superior options for interconnectivity between North and South America.

“Epsilon is a proven leader in the communication services industry, and we are delighted at the opportunity to utilize their platform to provide our customers with exceptional interconnectivity services,” says Lou W. Dabbe, owner and CEO, American Axess. “Infiny is a perfect fit for our business due to its global reach and ability to enhance the projection of our service across the world and to public cloud locations. Their team understands what it takes to be successful in the cloud, and Infiny is a platform that supports our customers’ growing needs.”

Infiny delivers a comprehensive set of enterprise, voice, local access, cloud and global connectivity services from a single, self-service platform. Utilizing this new and innovative technology, partners have the ability to procure and manage services via the Infiny web-based portal, APIs, and Android and iOS apps.

“Since launching Infiny in March, we have been dedicated to sharing this exceptional connectivity platform with international customers to enhance their experience when procuring and managing critical connectivity services,” shares Jerzy Szlosarek, CEO, Epsilon. “Our alliance with American Axess will build upon this mission, further validating our ability to deliver the highest quality connectivity services available to Latin America and the Caribbean.”

In phase two, Epsilon will explore building upon this partnership by potentially bringing its Infiny platform to Colombia over the American Axess international network.

Source: CloudStrategyMag

Qligent Named IABM Design & Innovation Awards 2017 Finalist

Qligent has been named as a finalist for the IABM Design & Innovation Awards 2017. The winners will be announced at a special IBC 2017 ceremony taking place on Saturday, September 16.

The IABM Design & Innovation Awards 2017 are spread across 10 categories with 40 finalists. Qligent was shortlisted in the Test, Quality, Control & Monitoring category for Match, the company’s new automated programmatic error detection software for its Vision cloud-based monitoring and analysis platform. Qligent will exhibit Match alongside other products at Stand 8.E47 throughout the IBC show, taking place September 15-19 at the RAI Amsterdam.

Qligent Match provides a real-time, automated, software-based solution for compressed and uncompressed signals on any delivery platform (OTT, IPTV, satellite, cable, terrestrial). The software spots programmatic errors and anomalies introduced by the repeated multiplexing, overlaying, embedding, inserting and other real-time processes as signals move downstream from the origination to the end-user.

Lightweight and affordable, Match arms users with a toolset to identify today’s most common media distribution errors, including the airing of incorrect programs, local ad splicing errors, and foreign languages assigned to the wrong audio track, among others. This approach addresses the growing problem of TV operations becoming more error-prone, driven by the rapid expansion of program streams along with the last-mile need for localization and personalization for the end user.

“With so much transition taking place today in TV operations and content delivery, there is a growing need for an automated verification and reporting solution that targets the presentation of content as it is being delivered,” said Ted Korte, COO, Qligent. “Unlike other QoS, QoE, and Compliance monitoring solutions, Match is unique in its ability to verify the dynamic changes of the program stream across large geographic deployments, affordably, and in real-time. We are thankful and excited to be recognized by IABM for our product development and problem-solving efforts associated with Match.”

Source: CloudStrategyMag

EdgeConneX® And Cedexis Publish New White Paper

EdgeConneX® and Cedexis have announced the availability of a new white paper titled, “Cloud, Content, Connectivity and the Evolving Internet Edge.” The study uses Cedexis’ Real User Measurements (RUM) to reveal why the superior performance of Edge content and connectivity is driving more and more industry deployments.

As noted in the study, CDNs have taken the first step toward the Network Edge, realizing up to 47% better response times compared with regions without Edge deployments. Cloud providers, meanwhile, are lagging in Edge deployments, and this is evident in higher latency — up to twice as high in regions without cloud-enabled data centers.

Cloud hyperscalers are bridging this gap using direct connections, including both physical and Software Defined Networks (SDN). The white paper compares Cedexis’ own panel of Internet measurements to services localized with EdgeConneX and Megaport, the world’s leading Network as a Service (NaaS) provider. Megaport’s global platform uses SDN to provide the most secure, seamless and on-demand way for enterprises, networks and services to interconnect. The response time improves 50% to 85% for users who bypass the public Internet, opting for direct cloud access via Megaport from an EdgeConneX Edge Data Center® (EDC).

“Megaport’s Software Defined Network is designed to enable superior performance for companies accessing public cloud services,” states Nicole Cooper, Executive Vice President, Americas, Megaport. “As the results in this whitepaper demonstrate, users at the network Edge can benefit from lower latency by using our SDN as part of their cloud deployments.”

Additionally, these changes in Internet architecture will result in a greater need for third-party load balancing services. As companies manage hybrid clouds, multiple CDNs and connectivity options, Global Server Load Balancing, offered by Cedexis, will be crucial to ensuring efficient content and application delivery.

“Cedexis is pleased to partner with EdgeConneX to further understand the impact of proximity and direct connectivity,” notes Simon Jones, head of marketing and evangelist, Cedexis. “By combining our data on billions of Real User Measurements with information on EdgeConneX local deployments, we are better able to understand the evolving Internet landscape. Customers needing to navigate the growth in hybrid cloud and content will be looking for solutions that manage multiple local and regional service providers.”

“The robust edge ecosystem within each of our Edge Data Centers is expanding each day as the Internet must support the nonstop content, cloud and application demand,” states Clint Heiden, chief commercial officer, EdgeConneX. “Along with Cedexis, we are pleased to validate and showcase how we are bringing the Internet community together to improve peering and connectivity globally.”

EdgeConneX specializes in providing purpose-built, power dense Edge Data Center solutions that enable the fastest delivery of data to end-users. EdgeConneX has created a new Edge of the Internet by designing and deploying facilities that are strategically positioned nearest to network provider aggregation points, ensuring the lowest latency data delivery with improved security and quality of service.

Source: CloudStrategyMag

Army Re-Ups With IBM For $135 Million In Cloud Services

IBM has announced that the U.S. Army’s Logistics Support Activity (LOGSA) awarded IBM a contract to continue providing cloud services, software development and cognitive computing, constituting the technical infrastructure for one of the U.S. federal government’s biggest logistics systems.

The 33-month, $135 million contract represents a successful re-compete of work that LOGSA signed with IBM in September 2012. Under that managed services agreement, the Army pays only for cloud services that it actually consumes. The efficiencies created by this arrangement have enabled the Army to avoid about $15 million per year in operational costs — a significant yield for the Army and taxpayers.

In addition to continuing to provide managed services as part of this new contract, IBM also will help the Army focus on:

  • Improving cybersecurity by applying the risk management framework (RMF) security controls to LOGSA’s IT enterprise. RMF is the unified information security framework for the entire U.S. federal government; it replaces legacy IT security standards
  • Incorporating cognitive computing that enhances readiness by anticipating needs
  • Speeding application modernization

As part of this new contract, IBM also will help the Army predict vehicle maintenance failures from more than 5 billion data points of on-board sensors that will be stored within this environment. In addition, the Army is adopting Watson IoT services and a new Watson IoT Equipment Advisor solution that analyzes unstructured, structured and sensor data directly from military assets.

The solution, part of the IBM Watson IoT for Manufacturing and Industrial Products product suite, includes IBM Predictive Maintenance and Quality System, an integrated solution that monitors, analyzes, and reports on information gathered from devices and equipment and recommends maintenance procedures. It also includes Watson Explorer, a cognitive exploration and content analysis platform that enables users to securely capture and analyze both structured and unstructured data. With the platform, the Army will look to extract enhanced insights from its vehicle data and recommend optimal repair methods and procedures. By combining tactical vehicle sensor and maintenance data, the Army better understands the health of its vehicles and can take proactive repair measures.

IBM recently completed a proof of concept that demonstrated the effectiveness of Watson cognitive computing for 10% of the Army’s Stryker vehicle fleet. Under this new contract, LOGSA will increase its ability to provide that predictive and prescriptive maintenance information to the Army.

LOGSA provides on-time integrated logistics support of worldwide Army operations, impacting every soldier, every day. As the Army’s authoritative source for logistics data, LOGSA provides logistics intelligence, life cycle support, technical advice, and assistance to the current and future force; integrates logistics information (force structure, readiness, and other logistics data) for worldwide equipment readiness and distribution analysis; and provides asset visibility for timely and proactive decision-making.

“LOGSA and the Army can now take advantage of the technological innovation that cloud offers — especially cognitive computing and analytics — so that the Army can continue to reap cost savings, further streamline its operations and deliver services to its clients,” said Lisa Mascolo, managing director, U.S. Public Service, IBM’s Global Business Services. “We’re pleased to continue our work with the Army to demonstrate the viability of cloud for mission applications and the promised benefits of efficiency and taxpayer savings.”

“Over the past four and a half years, LOGSA has benefitted from the business and technical advantages of the cloud,” said LOGSA Commander Col. John D. Kuenzli. “Now, we’re moving beyond infrastructure as-a-service and embracing both platform and software as-a service, adopting commercial cloud capabilities to further enhance Army readiness.”

“When Gen. Perna took command of the Army Materiel Command, he said we cannot conduct tomorrow’s operations using yesterday’s processes and procedures,” Kuenzli added. “He has since emphasized understanding the leading indicators to readiness, and getting in front of the Army’s logistics challenges. The services we have received from IBM and the potential of IBM Watson IoT truly enable LOGSA to deliver cutting-edge business intelligence and tools to give the Army unprecedented logistics support at efficient and affordable means.”

In addition to private cloud deployments, IBM manages five dedicated federal cloud data centers, including a cloud environment accredited up to impact* level 5 (IL-5). These were built to meet Federal Risk and Authorization Management Program (FedRAMP) and Federal Information Security Management Act (FISMA) requirements for government workloads.

*The Defense Information System Agency’s (DISA’s) information impact levels consider the potential impact of information being compromised. IL-5 gives the cloud provider the authority to manage controlled, unclassified information. For IBM’s work with the Army’s private cloud at Redstone Arsenal in Huntsville, Ala., the Army expects the company to achieve DISA’s IL-6 – the agency’s highest level – by early 2018, which would certify IBM to work with classified information up to “secret.” Presently, IBM is the only company authorized at IL-5 to run IaaS solutions on government premises.

Source: CloudStrategyMag

What is big data? Everything you need to know

Every day human beings eat, sleep, work, play, and produce data—lots and lots of data. According to IBM, the human race generates 2.5 quintillion (2.5 billion billion) bytes of data every day. That’s the equivalent of a stack of DVDs reaching to the moon and back, and encompasses everything from the texts we send and photos we upload to industrial sensor metrics and machine-to-machine communications.

That’s a big reason why “big data” has become such a common catch phrase. Simply put, when people talk about big data, they mean the ability to take large portions of this data, analyze it, and turn it into something useful.

Exactly what is big data?

But big data is much more than that. It’s about:

  • taking vast quantities of data, often from multiple sources
  • and not just lots of data but different kinds of data—often multiple kinds of data at the same time, as well as data that changes over time—without having to first transform it into a specific format or make it consistent
  • and analyzing the data in a way that allows for ongoing analysis of the same data pools for different purposes
  • and doing all of that quickly, even in real time.

In the early days, the industry came up with an acronym to describe three of these four facets: VVV, for volume (the vast quantities), variety (the different kinds of data and the fact that data changes over time), and velocity (speed).

Big data vs. the data warehouse

What the VVV acronym missed was the key notion that data did not need to be permanently changed (transformed) to be analyzed. That nondestructive analysis meant that organizations could both analyze the same pools of data for different purposes and analyze data from sources gathered for different purposes.

By contrast, the data warehouse was purpose-built to analyze specific data for specific purposes, and the data was structured and converted to specific formats, with the original data essentially destroyed in the process, for that specific purpose—and no other—in what was called extract, transform, and load (ETL). Data warehousing’s ETL approach limited analysis to specific data for specific analyses. That was fine when all your data existed in your transaction systems, but not so much in today’s internet-connected world with data from everywhere.

However, don’t think for a moment that big data makes the data warehouse obsolete. Big data systems let you work with unstructured data largely as it comes, but the type of query results you get is nowhere near the sophistication of the data warehouse. After all, the data warehouse is designed to get deep into data, and it can do that precisely because it has transformed all the data into a consistent format that lets you do things like build cubes for deep drilldown. Data warehousing vendors have spent many years optimizing their query engines to answer the queries typical of a business environment.

Big data lets you analyze much more data from more sources, but at lower resolution. Thus, we will be living with both traditional data warehouses and the new style for some time to come.

The technology breakthroughs behind big data

Accomplishing the four facets of big data—volume, variety, nondestructive use, and speed—required several technology breakthroughs, including the development of a distributed file system (Hadoop), a method to make sense of disparate data on the fly (first Google’s MapReduce, and more recently Apache Spark), and a cloud/internet infrastructure for accessing and moving the data as needed.

Until about a dozen years ago, it wasn’t possible to manipulate more than a relatively small amount of data at any one time. (Well, we all thought our data warehouses were massive at the time. The context has shifted dramatically since then as the internet produced and connected data everywhere.) Limitations on the amount and location of data storage, computing power, and the ability to handle disparate data formats from multiple sources made the task all but impossible.

Then, sometime around 2003, researchers at Google developed MapReduce. This programming technique simplifies dealing with large data sets by first mapping the data to a series of key/value pairs, then performing calculations on similar keys to reduce them to a single value, processing each chunk of data in parallel on hundreds or thousands of low-cost machines. This massive parallelism allowed Google to generate faster search results from increasingly larger volumes of data.

MapReduce later became the core of the open source Hadoop framework, which consists of two key services:

  • reliable data storage using the Hadoop Distributed File System (HDFS)
  • high-performance parallel data processing using a technique called MapReduce.

Hadoop runs on a collection of commodity, shared-nothing servers. You can add or remove servers in a Hadoop cluster at will; the system detects and compensates for hardware or system problems on any server. Hadoop, in other words, is self-healing. It can deliver data—and run large-scale, high-performance processing jobs—in spite of system changes or failures.

Although Hadoop provides a platform for data storage and parallel processing, the real value comes from add-ons, cross-integration, and custom implementations of the technology. To that end, Hadoop offers subprojects, which add functionality and new capabilities to the platform:

  • Hadoop Common: The common utilities that support the other Hadoop subprojects.
  • Chukwa: A data collection system for managing large distributed systems.
  • HBase: A scalable, distributed database that supports structured data storage for large tables.
  • HDFS: A distributed file system that provides high-throughput access to application data.
  • Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
  • MapReduce: A software framework for distributed processing of large data sets on compute clusters.
  • Pig: A high-level data-flow language and execution framework for parallel computation.
  • ZooKeeper: A high-performance coordination service for distributed applications.

Most implementations of a Hadoop platform include at least some of these subprojects, as they are often necessary for exploiting big data. For example, most organizations choose to use HDFS as the primary distributed file system and HBase as a database, which can store billions of rows of data. And the use of MapReduce or the more recent Spark is almost a given since they bring speed and agility to the Hadoop platform.

With MapReduce, developers can create programs that process massive amounts of unstructured data in parallel across a distributed cluster of processors or stand-alone computers. The MapReduce framework is broken down into two functional areas:

  • Map, a function that parcels out work to different nodes in the distributed cluster.
  • Reduce, a function that collates the work and resolves the results into a single value.

One of MapReduce’s primary advantages is that it is fault-tolerant, which it accomplishes by monitoring each node in the cluster; each node is expected to report back periodically with completed work and status updates. If a node remains silent for longer than the expected interval, a master node makes note and reassigns the work to other nodes.
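As a concrete illustration of this split, here is a minimal, single-process sketch of the MapReduce pattern in Python, counting words across a handful of documents. It only shows the shape of the map, shuffle, and reduce steps; a real framework such as Hadoop distributes the map calls and the per-key reduce calls across many machines and handles the fault tolerance described above.

```python
# Single-process sketch of the MapReduce pattern (word count).
# A real framework runs map() on many nodes in parallel, shuffles the
# emitted key/value pairs by key, and runs reduce() on each group.
from collections import defaultdict
from functools import reduce

def map_phase(document):
    # Map: emit a (key, value) pair for every word in the document.
    return [(word, 1) for word in document.split()]

def reduce_phase(key, values):
    # Reduce: collapse all values for one key into a single value.
    return key, reduce(lambda a, b: a + b, values)

documents = ["big data is big", "data about data"]

# Shuffle: group the emitted pairs by key.
grouped = defaultdict(list)
for doc in documents:
    for key, value in map_phase(doc):
        grouped[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in grouped.items())
print(counts)  # {'big': 2, 'data': 3, 'is': 1, 'about': 1}
```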

Apache Hadoop, an open-source framework that uses MapReduce at its core, was developed two years later. Originally built to index the now-obscure Nutch search engine, Hadoop is now used in virtually every major industry for a wide range of big data jobs. Thanks to Hadoop’s Distributed File System and YARN (Yet Another Resource Negotiator), the software lets users treat massive data sets spread across thousands of devices as if they were all on one enormous machine.

In 2009, University of California at Berkeley researchers developed Apache Spark as an alternative to MapReduce. Because Spark performs calculations in parallel using in-memory storage, it can be up to 100 times faster than MapReduce. Spark can work as a standalone framework or inside Hadoop.
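For comparison, the same word count expressed against Spark’s Python API (PySpark) looks like the following sketch. It assumes the pyspark package and a local Spark runtime are available, and the input path is purely illustrative.

```python
# Sketch: word count with PySpark. Assumes pyspark is installed and a
# Spark runtime is available; the input path is illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

lines = spark.sparkContext.textFile("hdfs:///data/sample.txt")  # or a local file path
counts = (lines.flatMap(lambda line: line.split())   # map each line to words
               .map(lambda word: (word, 1))          # emit (word, 1) pairs
               .reduceByKey(lambda a, b: a + b))     # combine counts per word

print(counts.take(10))
spark.stop()
```

Because intermediate results stay in memory rather than being written back to disk between stages, chains of transformations like this are where Spark’s speed advantage over classic MapReduce shows up.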

Even with Hadoop, you still need a way to store and access the data. That’s typically done via a NoSQL database like MongoDB, CouchDB, or Cassandra, which specialize in handling unstructured or semi-structured data distributed across multiple machines. Unlike in data warehousing, where massive amounts and types of data are converged into a unified format and stored in a single data store, these tools don’t change the underlying nature or location of the data—emails are still emails, sensor data is still sensor data—and the data can be stored virtually anywhere.
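As a small illustration of that store-it-as-it-comes approach, the sketch below inserts two differently shaped records into MongoDB through the pymongo driver and queries one of them back. It assumes pymongo is installed and a MongoDB server is reachable on localhost; the database, collection, and field names are all illustrative.

```python
# Sketch: storing semi-structured records as-is in a NoSQL document store.
# Assumes pymongo is installed and MongoDB is running locally; the
# database, collection, and field names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
collection = client["bigdata_demo"]["events"]

# Documents keep their original shape; no up-front schema or ETL step.
collection.insert_many([
    {"type": "email", "from": "ops@example.com", "subject": "Disk alert"},
    {"type": "sensor", "device_id": 42, "temperature_c": 71.5},
])

# Query across whatever fields happen to exist on matching documents.
for doc in collection.find({"type": "sensor"}):
    print(doc["device_id"], doc["temperature_c"])
```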

Still, having massive amounts of data stored in a NoSQL database across clusters of machines isn’t much good until you do something with it. That’s where big data analytics comes in. Tools like Tableau, Splunk, and Jasper BI let you parse that data to identify patterns, extract meaning, and reveal new insights. What you do from there will vary depending on your needs.

InfoWorld Executive Editor Galen Gruman, InfoWorld Contributing Editor Steve Nunez, and freelance writers Frank Ohlhorst and Dan Tynan contributed to this story.

Source: InfoWorld Big Data

Microsoft Leads In SaaS Market

New Q2 data from Synergy Research Group shows that the enterprise SaaS market grew 31% year on year to reach almost $15 billion in quarterly revenues, with collaboration being the highest growth segment. Microsoft remains the clear leader in overall enterprise SaaS revenues, having overtaken long-time market leader Salesforce a year ago. Microsoft was already rapidly growing its SaaS revenues, but in Q2 its acquisition of LinkedIn gave its SaaS business a further boost. In terms of overall SaaS market rankings, Microsoft and Salesforce are followed by Adobe, Oracle, and SAP, with other leading companies including ADP, IBM, Workday, Intuit, Cisco, Google, and ServiceNow. It’s notable that the market remains quite fragmented, with different vendors leading each of the main market segments. Among the major SaaS vendors those with the highest overall growth rates are Oracle, Microsoft, and Google.

In many ways the enterprise SaaS market is now mature. However, spending on SaaS remains relatively small compared to on-premise software, meaning that SaaS growth will remain buoyant for many years. Synergy forecasts that the SaaS market will double in size over the next three years, with strong growth across all segments and all geographic regions.

“IaaS and PaaS markets tend to get more attention and are indeed growing more rapidly, but the SaaS market is substantially bigger and will remain so for many years,” said John Dinsdale, a chief analyst and research director at Synergy Research Group. “Traditional enterprise software vendors like Microsoft, SAP, Oracle and IBM still have a huge base of on-premise software customers and they are all now pushing to aggressively convert those customers to a SaaS-based consumption model. At the same time, born-in-the-cloud software vendors like Workday, Zendesk and ServiceNow continue to light a fire under the market and help to propel enterprise spending on SaaS.”

Source: CloudStrategyMag

CloudJumper Named A 2017 Gartner ‘Cool Vendor’ In Unified Workspaces

CloudJumper has announced that the company has been included in the Gartner report titled Cool Vendors in Unified Workspaces, 2017, by Michael A. Silver, Nathan Hill, Federica Troni, Manjunath Bhat, and Stephen Kleynhans. This is the first Cool Vendor report for unified workspaces by Gartner, Inc. and notes the disruptive nature of this technology for IT service providers and the clients they serve.

According to the report [1], “The traditional desktop environment that organizations are deploying to users is old and tired. It prevents organizations from being agile, and limits the creativity and innovation of users. In the digital workplace, users need to be able to consume IT-provided applications from different sources, on whatever device they choose. They need to have the ability to innovate using devices and applications that the organization’s IT department has not done a full analysis and regression test on, yet IT must ensure that new ways of working won’t prevent the legacy and well-tested applications they use from being deployed. In our inquiries with clients, organizations are asking how to solve this problem. Unified workspaces allow the mix of IT-provided and user selected technologies to coexist in peace, harmony and productivity.”

nWorkSpace is CloudJumper’s Unified Workspace or Workspace as a Service (WaaS) platform which includes everything required for highly reliable, scalable and efficient service delivery. The solution is widely deployed by channel partners worldwide and allows for the management and monitoring of all client accounts from a single interface. CloudJumper provides IT service providers with all of the software, infrastructure and services necessary to quickly and easily deliver WaaS to end-customers of every configuration.

nWorkSpace allows partners to scale their services based on the client employee count or the unique operational requirements presented to them across customer locations. The high level of workflow automation in nWorkSpace allows CloudJumper delivery partners to shift their resources from the management of the platform to more important strategic activities, including business development and revenue generation.

“Vendors in the unified workspaces market can help organizations provision applications and data across multiple devices in a user-centric manner. I&O leaders responsible for mobile and endpoint strategies should identify innovative products that will help realize their unified workspaces vision,” stated the report [2].

“CloudJumper is proud to be named as one of Gartner’s ‘Cool Vendors’ for 2017,” said Max Pruger, chief sales officer, CloudJumper. “We believe this acknowledgment recognizes our dedication to IT service provider-focused WaaS solutions and reinforces our vision of cloud-based workspaces for organizations across every sector. With the upward trajectory of unified workspaces and our industry-proven platform, we look forward to building upon this achievement in 2018 and beyond.”

 

1. Gartner, Cool Vendors in Unified Workspaces, 2017, May 25, 2017, ID: G00327743, https://www.gartner.com/doc/3729317?ref=SiteSearch&sthkw=CloudJumper&fnl=search&srcId=1-3478922254

2. Ibid.

Source: CloudStrategyMag

Users review the top cloud data integration tools

As the world of cloud computing becomes more globalized, IT professionals need multiple levels of security and transparency to manage cloud relationships. Using a cloud data integration solution, an enterprise can configure a number of disparate application programs sharing data in a diverse network, including cloud-based data repositories. This allows enterprise tech professionals to manage, monitor and cleanse data from various web-based and mobile applications more effectively.

IT Central Station users have identified agile data transformation, a clear, customizable dashboard, and efficient data replication as valuable features when looking for a cloud data integration solution. According to their reviews, the IT Central Station community has ranked Informatica Cloud Data Integration, Dell Boomi AtomSphere, IBM API Connect, and SnapLogic as leading cloud data integration solutions in the market.

Here is what our users have to say about working with these solutions, describing which features they find most valuable and offering insight on where they see room for improvement.

Editor’s Note: These reviews of select cloud data integration tools come from the IT Central Station community. They are the opinions of the users and are based on their own experiences.

Informatica Cloud Data Integration

Valuable Features

Data Replication and Data Sync

Hardik P., an Architect at a pharma/biotech company, writes about how Informatica’s data replication and data sync capabilities impact his company:

“I particularly value data replication and data sync jobs. Replication allows us to fully replicate all objects from Shop Floor Data Collection (SFDC) to in-house/on-premises database in one job. I also appreciate the flexibility of the reset target option to reflect the source object structural changes to be implemented on the target database table side.”

Flexible Integration

For this Director, Informatica’s most valuable feature is how it can integrate different applications in a flexible way:

“With recent versions of cloud-based products in use for all applications, it provides flexibility in integrating the different applications. With AWS S3 and Informatica, integration is flexible and loosely coupled. It is quite useful and flexible compared to other vendors in terms of cost of implementation and use case.”

Cloud Mapping Designer

Nick J., a Solution Architect at a software R&D company, highlights Informatica’s Cloud Mapping Designer as particularly useful for his company:

“With my firm, our use of Informatica Cloud is primarily to implement a set of financial integrations from various accounting systems into a Salesforce environment. By leveraging Informatica Cloud Mapping Designer, we were able to create sets of reusable templates that were source agnostic. This made supporting the integration for hundreds of customers feasible with just a small team of integration specialists.”

Room for Improvement

This Oracle Applications Project Manager at a tech services company finds that Informatica’s error reporting and debugging have room for improvement:

“Error reporting and debugging need improvement. They need to improve on the upgrade testing process from their end so that it does not cause any issue to existing functionality/setup.”

Read more Informatica Cloud Data Integration reviews on IT Central Station.

Dell Boomi AtomSphere

Valuable Features

Easy Workflow Creation

In his review, Kevin O., a System Analyst / Programmer at a logistics company, describes different ways Dell Boomi makes it easier to create workflows:

“It is easy to create workflows from one system to another, drawing on multiple systems at the same time. For example:

  • Creating a process from EDI transmissions to WMS
  • Creating marriages of data from multiple systems (time clock/payroll/HR/WMS/financial) to create reports
  • Taking information from one system to update it to another system (legacy EDI to WMS, payroll to financial, time clock to payroll, WMS to Legacy)”

Great Alternative to ESB

Aman S., an Enterprise Integration Specialist at a tech services company, writes about how Dell Boomi is a good alternative to Enterprise Service Bus (ESB) solutions:

“I have worked with ESBs, such as MuleSoft. However, based on the usage and the end-user requests, we moved to Dell Boomi. It is mainly a carrier and it provides an integration platform as a service. This, in itself, provides the solution for an easy and mature way to communicate.”

Room for Improvement

He also points out how Dell Boomi can benefit from custom connector options:

“They should create a custom connector option. With this, they could improve where the user can create the connector, based on their usage.”

Read more Dell Boomi AtomSphere reviews on IT Central Station.

IBM API Connect

Valuable Features

Graphical Developer Interface

This System Engineer at a financial services firm discusses the value of IBM API Connect’s graphical developer interface:

“Because it has a graphical developer interface, we can quickly develop solutions that are connecting to anything on the cloud, without having to build those connectors ourselves.”

Reliability and Scalability

Deb W., a Development Manager, IT Business Applications at a tech company, writes about IBM API Connect’s reliability and scalability:

“Its reliability and the large number of endpoints for connectivity are valuable features. It scales well. We are considering moving to the cloud version, as only one of our endpoints is local and all the others are SaaS endpoints.”

Room for Improvement

She also points out where IBM API Connect can improve:

“I would like the ability for more than one developer to work on the same project (source control/branch merge). If the project has more than one orchestration, you should be able to have different people working on each.”

Read more IBM API Connect reviews on IT Central Station.

SnapLogic

Valuable Features

Faster Connections

Evan H., Director – Digital Media and Data Products at a retailer, points out how SnapLogic’s connectors impact his company’s productivity:

“My teams can connect to new partners and data sources in hours, not days. We complete more work and integrate faster, allowing us to prove ROI quicker on new ideas.”

Automatic Contracts

This Business Systems & Operations Manager at a manufacturing company highlights SnapLogic’s ability to send contracts automatically as particularly valuable:

“It automatically sends contracts from Salesforce to Workday, so there was no need for manual data entry. By the end of the year, I had created integrations that saved multiple roles worth of time. This allowed the employees to be effective in other areas.”

Room for Improvement

He also suggests that SnapLogic add more canned integrations:

“The product can include more canned integrations that can be used. In the field of integration apps, I see a spectrum of apps where one side is point-and-click with zero technical ability needed, and the other side is a platform where you basically write code. SnapLogic sits somewhere in the middle. It doesn’t offer enough easy canned integrations for its users like some of the easier to use integration apps.”

Read more SnapLogic reviews on IT Central Station.

To learn more from what real users have to say about other leading solutions in the market, you can read additional cloud data integration reviews by IT Central Station users.

Source: InfoWorld Big Data