A Look Into IBM’s OpenStack Meritocracy

VIDEO: Angel Diaz, IBM vice president of Cloud Architecture and Technology, discusses how Big Blue has earned its place in the OpenStack community.

AUSTIN, Texas–IBM is one of the biggest contributors to the open-source OpenStack platform, which serves as a core component of IBM’s cloud efforts. Helping to lead those efforts is Angel Diaz, vice president of Cloud Architecture and Technology, who has taken a hands-on, developer-focused approach to make sure IBM leads by example.

In a video interview with eWEEK, Diaz discusses the role that IBM plays in the OpenStack community and why it matters to IBM’s overall business. Diaz said that IBM has over 200 developers working upstream in OpenStack. He also noted that IBM helped to start the OpenStack Foundation back in 2012, shaping the organizational and governance structure to help the group succeed.

“We helped to start the foundation, but when we did that we weren’t given a single committer,” Diaz told eWEEK. “We had to earn our right in the community through the meritocracy.”

In open-source communities, a code committer is a valued and trusted position based on an individual’s ability to write and contribute code, as well as the person’s commitment to a given project. Diaz noted that IBM now employs multiple project technical leaders (PTLs) of OpenStack projects, and that those individuals, and IBM itself, have earned the respect of the community.

“It’s how we do business–we contribute,” Diaz said. “We’re not open-source leeches.”

Watch the full video interview with Angel Diaz below:

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
Source: eWeek

Movidius Announces Fathom Deep Learning Accelerator Compute Stick

Movidius, a leader in low-power machine vision technology, today announced the Fathom Neural Compute Stick, the world’s first deep learning acceleration module, and the Fathom deep learning software framework. Together, the two allow powerful neural networks to be moved out of the cloud and deployed natively in end-user devices.

The new Fathom Neural Compute Stick is the world’s first embedded neural network accelerator. With the company’s ultra-low power, high performance Myriad 2 processor inside, the Fathom Neural Compute Stick can run fully-trained neural networks at under 1 Watt of power. Thanks to standard USB connectivity, the Fathom Neural Compute Stick can be connected to a range of devices and enhance their neural compute capabilities by orders of magnitude.

Neural Networks are used in many revolutionary applications such as object recognition, natural speech understanding, and autonomous navigation for cars. Rather than engineers programming explicit rules for machines to follow, vast amounts of data are processed offline in self-teaching systems that generate their own rule-sets. Neural networks significantly outperform traditional approaches in tasks such as language comprehension, image recognition and pattern detection.

When connected to a PC, the Fathom Neural Compute Stick behaves as a neural network profiling and evaluation tool, meaning companies will be able to prototype faster and more efficiently, reducing time to market for products requiring cutting edge artificial intelligence.

“As a participant in the deep learning ecosystem, I have been hoping for a long time that something like Fathom would become available,” said Dr. Yann LeCun, founding director of the New York University Data Science Center. “The Fathom Neural Compute Stick is a compact, low-power convolutional net accelerator for embedded applications that is quite unique. As a tinkerer and builder of various robots and flying contraptions, I’ve been dreaming of getting my hands on something like the Fathom Neural Compute Stick for a long time. With Fathom, every robot, big and small, can now have state-of-the-art vision capabilities.”

Fathom allows developers to take their trained neural networks out of the PC-training phase and automatically deploy a low-power optimized version to devices containing a Myriad 2 processor. Fathom supports the major deep learning frameworks in use today, including Caffe and TensorFlow.
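
Movidius’ own conversion pipeline is not detailed in the article, so the following is only a minimal sketch of the PC-side step it describes: training a small model with TensorFlow’s 1.x-style API and saving it, producing an artifact that deployment tooling could then convert for a Myriad 2 target. The network, training data, and file names are illustrative assumptions.

```python
# A minimal sketch (not Movidius' actual toolchain): train a tiny network on the
# PC with TensorFlow 1.x, then save a checkpoint as the hand-off point for
# whatever converter targets the embedded device.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4], name="input")
y = tf.placeholder(tf.float32, [None, 2], name="label")
w = tf.Variable(tf.random_normal([4, 2]))
b = tf.Variable(tf.zeros([2]))
logits = tf.matmul(x, w) + b
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    data = np.random.rand(32, 4).astype(np.float32)          # toy inputs
    labels = np.eye(2)[np.random.randint(0, 2, 32)].astype(np.float32)
    for _ in range(100):
        sess.run(train_op, feed_dict={x: data, y: labels})
    # Saved checkpoint is what a deployment converter would consume next.
    tf.train.Saver().save(sess, "./trained_model.ckpt")
```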

“Deep learning has tremendous potential — it’s exciting to see this kind of intelligence working directly in the low-power mobile environment of consumer devices,” said Pete Warden, Google’s AI technical lead. “With TensorFlow supported from the outset, Fathom goes a long way towards helping tune and run these complex neural networks inside devices.”

Fathom Features

  • Plugged into existing systems (ARM host + USB port), Fathom can accelerate performance on deep learning tasks by 20x to 30x; for example, plug it into a “dumb” drone and it can then run neural network applications.
  • It contains the latest Myriad 2 MA2450 chip – the same one Google is using in its undisclosed next-generation deep learning devices.
  • Its ultra-low power draw (under 1.2W) is ideal for many mobile and smart devices, roughly one-tenth of what competitors can achieve today.
  • It can take TensorFlow and Caffe networks trained on a PC and run them in embedded silicon at under 1W; Fathom’s images-per-second-per-watt is roughly 2x Nvidia’s on similar tests.
  • Fathom takes machine intelligence out of the cloud and into actual devices. It can run deep neural networks in real time on the device itself.
  • With Fathom, developers can finally bridge the gap between training (i.e. server side on GPU blades) and inference (running without a cloud connection, on users’ devices). Customers can rapidly convert a PC-trained network and deploy it to an embedded environment, meaning they can put deep learning into end-user products far faster and more easily than before.
  • Application example: plug Fathom into a GoPro and turn it into a camera with deep learning capabilities.

Availability

General availability will be in Q4 of this year, with pricing at sub-$100 per unit.

Source: insideBigData

A Brief History of Kafka, LinkedIn’s Messaging Platform

Apache Kafka is a highly scalable messaging system that plays a critical role as LinkedIn’s central data pipeline. But it was not always this way. Over the years, we have had to make hard architecture decisions to arrive at the point where developing Kafka was the right decision for LinkedIn to make. We also had to solve some basic issues to turn this project into something that can support the more than 1.4 trillion messages that pass through the Kafka infrastructure at LinkedIn each day. What follows is a brief history of Kafka development at LinkedIn and an explanation of how we’ve integrated Kafka into virtually everything we do. Hopefully, this will help others who are making similar technology decisions as their companies grow and scale.

Why did we develop Kafka?

Over six years ago, our engineering team needed to completely redesign LinkedIn’s infrastructure. To accommodate our growing membership and increasing site complexity, we had already migrated from a monolithic application infrastructure to one based on microservices. This change allowed our search, profile, communications, and other platforms to scale more efficiently. It also led to the creation of a second set of mid-tier services to provide API access to data models and back-end services to provide consistent access to our databases.

We initially developed several different custom data pipelines for our various streaming and queuing data. The use cases for these platforms ranged from tracking site events like page views to gathering aggregated logs from other services. Other pipelines provided queuing functionality for our InMail messaging system, etc. These needed to scale along with the site. Rather than maintaining and scaling each pipeline individually, we invested in the development of a single, distributed pub-sub platform. Thus, Kafka was born.

Kafka was built with a few key design principles in mind: a simple API for both producers and consumers, a design optimized for high throughput, and a scaled-out architecture from the beginning.
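
As a rough illustration of that producer/consumer simplicity, here is a minimal sketch using the third-party kafka-python client (not LinkedIn’s internal tooling); the broker address, topic name, and payload are assumptions.

```python
# Minimal kafka-python sketch: publish a message, then read it back from the log.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("page-views", b'{"member": 42, "page": "/feed"}')  # async publish
producer.flush()                                                  # wait for delivery

consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",   # start from the beginning of the commit log
    consumer_timeout_ms=5000,       # stop iterating when no new messages arrive
)
for record in consumer:
    print(record.offset, record.value)  # each record carries its offset in the log
```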

What is Kafka today at LinkedIn?

Kafka became a universal pipeline, built around the concept of a commit log and designed with speed and scalability in mind. Our early Kafka use cases encompassed both the online and offline worlds, feeding both systems that consume events in real time and those that perform batch analysis. Some common ways we used Kafka included traditional messaging (publishing data from our content feeds and relevance systems to our online serving stores), providing metrics for system health (used in dashboards and alerts), and better understanding how members use our products (user activity tracking and feeding data to our Hadoop grid for analysis and report generation). In 2011 we open-sourced Kafka via the Apache Software Foundation, providing the world with a powerful open source solution for managing streams of information.

Today we run several clusters of Kafka brokers for different purposes in each data center. We generally run off the open source Apache Kafka trunk and put out a new internal release a few times a year. However, as our Kafka usage continued to rapidly grow, we had to solve some significant problems to make all of this happen at scale. In the years since we released Kafka as open source, the Engineering team at LinkedIn has developed an entire ecosystem around Kafka.

As pointed out in this blog post by Todd Palino, a key problem for an operation as big as LinkedIn’s is the need for message consistency across multiple datacenters. Many applications, such as those maintaining the indices that enable search, need a view of what is going on in all of our datacenters around the world. At LinkedIn, we use Kafka MirrorMaker to make copies of our clusters. There are multiple mirroring pipelines that run both within and across data centers, laid out to keep network costs and latency to a minimum.

The Kafka ecosystem

A key innovation that has allowed Kafka to maintain a mostly self-service model has been our integration with Nuage, the self-service portal for online data-infrastructure resources at LinkedIn. This service offers a convenient place for users to manage their topics and associated metadata, abstracting some of the nuances of Kafka’s administrative utilities and making the process easier for topic owners.

Another open source project, Burrow, is our answer to the tricky problem of monitoring Kafka consumer health. It provides a comprehensive view of consumer status and offers consumer lag checking as a service, without the need to specify thresholds. It monitors committed offsets for all consumers at topic-partition granularity and calculates the status of those consumers on demand.
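
For a feel of how Burrow is consumed, here is a hedged sketch that polls a Burrow instance over HTTP for a consumer group’s lag status. The host, port, cluster, and group names are assumptions, and the URL path and response fields follow Burrow’s documented v2 HTTP API of that era, so they may differ across versions.

```python
# Sketch: ask Burrow for the computed status of one consumer group.
# Endpoint path and JSON field names are assumptions; check your Burrow version.
import requests

BURROW = "http://localhost:8000"
resp = requests.get(BURROW + "/v2/kafka/local/consumer/my-group/lag")
resp.raise_for_status()
status = resp.json().get("status", {})

print(status.get("status"))  # e.g. "OK" or "WARN", as evaluated by Burrow
for partition in status.get("partitions", []):
    print(partition.get("topic"),
          partition.get("partition"),
          partition.get("end", {}).get("lag"))  # current lag per partition
```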

Scaling Kafka in a time of rapid growth

The scale of Kafka at LinkedIn continues to grow in terms of data transferred, the number of clusters and the number of applications it powers. As a result, we face unique challenges around the reliability, availability and cost of our heavily multi-tenant clusters. In this blog post, Kartik Paramasivam explains the various things that we have improved in Kafka and its ecosystem at LinkedIn to address these issues.

Samza is LinkedIn’s stream processing platform that empowers users to get their stream processing jobs up and running in production as quickly as possible. Unlike other stream processing systems that focus on a very broad feature set, we concentrated on making Samza reliable, performant and operable at the scale of LinkedIn. Now that we have a lot of production workloads up and running, we can turn our attention to broadening the feature set. You can read about our use-cases for relevance, analytics, site-monitoring, security, etc., here.

Kafka’s strong durability, low latency, and recently improved security have enabled us to use Kafka to power a number of newer mission-critical use cases. These include replacing MySQL replication with Kafka-based replication in Espresso, our distributed document store. We also plan to support the next generation of Databus, our source-agnostic distributed change data capture system, using Kafka. We are continuing to invest in Kafka to ensure that our messaging backbone stays healthy as we ask more and more from it.

The Kafka Summit was recently held in San Francisco on April 26.

Contributed by: Joel Koshy, a member of the Kafka team within the Data Infrastructure group at LinkedIn, who has worked on distributed systems infrastructure and applications for the past eight years. He is also a PMC member and committer for the Apache Kafka project. Prior to LinkedIn, he was with the Yahoo! search team, where he worked on web crawlers. Joel received his PhD in Computer Science from UC Davis and his bachelor’s in Computer Science from IIT Madras.

Source: insideBigData

Tech Complexity Giving IT Professionals Headaches

The management challenges IT teams are most worried about include mobile devices and wireless networks, cloud apps and virtualization, according to an Ipswitch survey.

Two-thirds of IT professionals believe increasingly complex technology is making it more difficult for them to do their jobs successfully, according to a global Ipswitch survey.

The goal of the research, in which more than 1,300 respondents were surveyed, was to gain insight into the current IT management challenges facing today’s IT teams, specifically regarding what they need to monitor, how they accomplish it and where they believe improvements could be made.

“What we found most surprising was that 88 percent of respondents reported that they want IT management software that offers more flexibility with fewer licensing restrictions,” Jeff Loeb, chief marketing officer for Ipswitch, told eWEEK. “It is surprising because with such a high level of dissatisfaction, we think that vendors would have offered alternative solutions earlier. Also, while vendors have focused on single-pane-of-glass solutions that allow you to visualize complex problems, the underlying software license model has not evolved.”

The IT management challenges teams report being most worried about include mobile devices and wireless networks (55 percent), cloud applications (50 percent), virtualization (49 percent), bring your own device (BYOD) (43 percent) and high-bandwidth applications (41 percent), such as video or streaming.

“Business needs are constantly changing, so monitoring needs to be flexible to adapt to these changing business priorities,” Loeb said. “IT teams often have one-off challenges, like troubleshooting a unique problem, so having the flexibility to adapt to these one-offs without having to buy new tools is crucial.”

He noted monitoring flexibility is important so businesses can see where software licenses can be fully utilized without becoming shelfware, and unused capacity can be split across technology silos, avoiding waste.

IT teams reported they were not monitoring everything that they would like to in order to ensure control. Top reasons for this include budget (28 percent), lack of staff (18 percent) and the complexity of the IT environment they have to deal with (15 percent). Finally, 54 percent said IT management software licensing models are too expensive, inflexible and complicated to deal with.

Overall, the research found IT teams are concerned about losing control of their company’s IT environment as new technologies, devices and requirements are added on a regular basis.

“BYOD devices consume bandwidth on networks, which can tremendously slow down performance of business applications and introduce security vulnerabilities,” Loeb noted. “It’s much harder for IT teams to enforce security policies for devices they do not control.”
Source: eWeek

Consumer Software Deals Power Tech M&A Market

The industry’s largest transaction to date this year is Cisco Systems’ acquisition of Jasper Technologies for $1.4 billion, according to Berkery Noyes.

The software industry merger and acquisition (M&A) deal volume increased 7 percent, with a total of 523 transactions, over the past three months; however, overall value decreased 81 percent to $21.6 billion from $111.5 billion, according to independent mid-market investment bank Berkery Noyes’ “Q1 2016 Software Industry M&A Report.”

Of note, the industry’s largest transaction to date this year is Cisco Systems’ acquisition of Jasper Technologies for $1.4 billion. Aggregate value declined 9 percent on a year-over-year basis, and in the past five quarters, deal volume reached its peak in Q3 2015. Deal value reached its peak in Q4 2015.

“In general there have been fewer megadeals, but middle-market transaction volume should continue at a steady pace as acquirers look for innovative technologies to help expand their product offerings,” Mary Jo Zandy, managing director at Berkery Noyes, told eWEEK.

Most notable in Q4 was Dell’s announced acquisition of EMC Corp. for $67.5 billion. If these four deals are excluded, deal value would have only decreased by 15 percent.

“Much of the activity in the consumer software sector was driven by mobile application deals,” Zandy said. “High-profile, mobile-based transactions in Q1 included Microsoft’s announced acquisition of Swiftkey, which provides predictive keyboard technology for Android and iOS devices, with a reported purchase price of approximately $250 million; GoPro’s announced acquisition of video editing apps Replay and Splice for $105 million; and Spotify’s acquisitions of Soundwave and Cord Project, as the digital music service looks to bolster its social and messaging capabilities.”

Other notable acquirers were Snapchat, with the announced acquisition of Bitstrips, which allows users to create personalized emojis and cartoon avatars, for a reported $100 million, and Facebook, with its announced acquisition of Masquerade, a face-swapping application.

The infrastructure software segment’s deal volume decreased 21 percent in Q1 2016. One noteworthy deal was Micro Focus’ announced acquisition of Serena Software for $540 million. The consumer software segment’s deal volume increased 22 percent in Q1 2016 for its third consecutive quarterly rise, while the business software segment’s deal volume increased 18 percent in Q1 2016.

“We expect this momentum to carry on throughout the rest of the year and into 2017, with a focus on companies that can provide new customers, new technologies or access to new markets,” Zandy said.
Source: eWeek

Microsoft is making big data really small using DNA

Microsoft has partnered with a San Francisco-based company to encode information on synthetic DNA to test its potential as a new medium for data storage. 

Twist Bioscience will provide Microsoft with 10 million DNA strands for the purpose of encoding digital data. In other words, Microsoft is trying to figure out how the same molecules that make up humans’ genetic code can be used to encode digital information. 

While a commercial product is still years away, initial tests have shown that it’s possible to encode and recover 100 percent of digital data from synthetic DNA, said Doug Carmean, a Microsoft partner architect, in a statement.
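
Neither Microsoft nor Twist details its coding scheme in this article, but the basic idea of representing bits as bases can be illustrated with a toy mapping of two bits per nucleotide; real systems add addressing, redundancy, and error correction on top of this, so the sketch below is purely illustrative.

```python
# Toy illustration only: map each pair of bits to one of the four DNA bases
# (00->A, 01->C, 10->G, 11->T) and back. Not the scheme Microsoft/Twist use.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(format(byte, "08b") for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
assert decode(strand) == b"hello"   # lossless round trip, as in Microsoft's tests
print(strand)                       # "CGGACGCCCGTACGTACGTT"
```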

Using DNA could allow massive amounts of data to be stored in a tiny physical footprint. Twist claims a gram of DNA could store almost a trillion gigabytes of data.

Facebook Rides Mobile Ads to 52 Percent Revenue Surge

Q1 net income was $1.51 billion, nearly triple the $512 million the company earned in Q1 last year.

Facebook continues its winning ways, reporting a whopping 52 percent surge in revenue in its Q1 2016 earnings report to the U.S. Securities and Exchange Commission April 27.

The bottom-line numbers for the social network spoke for themselves: revenue of $5.38 billion, up 52 percent over $3.54 billion a year ago, and net income of $1.51 billion, nearly tripling the $512 million it earned last year. Shares of Facebook stock, which have risen 33 percent during the past year, spiked up 9.5 percent to $119.28 in after-hours trading following the earnings release.

The results were in stark contrast to those of fellow Silicon Valley superstar companies such as Apple, which reported its first quarterly drop in revenue in 13 years; Yahoo, which lost $99 million last quarter; Twitter, which missed first-quarter revenue expectations; and Google parent Alphabet Inc., which also missed analysts’ projections.

Facebook, which now has 1.65 billion monthly users, continues to ride the strength of its mobile ad sales—which it started selling in earnest in 2012—and the rising popularity of its video ads to the new profitability. The rapidly expanding development of its Messenger platform to connect users with businesses also is gaining traction and is expected to start contributing to the bottom line soon.

Video ads are selling as advertisers channel funds from print and television budgets. Video ads on Facebook cost about $4 per 1,000 views during the first quarter, up from $3.44 in 2015 and higher than the $3.14 average across Facebook, according to marketing technology company Kenshoo.

The company also announced it is proposing to create a new class of nonvoting capital stock, known as the Class C capital stock. The proposal is designed to create a capital structure that will, among other things, maintain 31-year-old CEO and co-founder Mark Zuckerberg’s leadership role at the company for years to come, according to the company.

If the Class C proposal is OK’d by shareholders, the company said it would issue two shares of Class C capital stock as a one-time dividend for each share of Class A and Class B stock.

Facebook’s success isn’t just attributable to the social network. In fact, analysts were extremely impressed with the company’s other platforms. In particular, they were pleased to see Facebook is starting to make money from its 410 million Instagram users, and argued it could help the company generate an additional $4 billion to $5 billion in the next two years. WhatsApp and Facebook Messenger also are growing rapidly, which analysts say will only add to the revenue the company generates.

Source: eWeek

Qubole and Looker Join Forces to Empower Business Users to Make Data-Driven Decisions

Qubole, the big data-as-a-service company, and Looker, the company that is powering data-driven businesses, today announced that they are integrating Looker’s business analytics with Qubole’s cloud-based big data platform, giving line of business users across organizations access to powerful, yet easy-to-use big data analytics.

Business units face an uphill battle when it comes to gleaning information from vast and disparate sources. Line of business users find it challenging to extract, shape and present the variety and volume of data to executives to help make informed business decisions. As a result, data scientists are overwhelmed with requests to access data or provide fixed reports to line of business users, diverting their attention from gathering data insights through statistics and modeling techniques. Furthermore, line of business users become frustrated when they are forced to decipher the output of SQL aggregations created by data scientists.

Qubole and Looker are addressing this issue by integrating the Qubole Data Service (QDS) and Looker’s analytics data platform. The combination gives line of business users instant access to automated, scalable, self-service data analytics without having to rely on or overburden the data science team — and without having to build and maintain on-premises infrastructure.

“Data has become essential for every business function across the enterprise, but most big data offerings are still too complicated for line of business users to use, substantially reducing the business impact data can have,” said Ashish Thusoo, co-founder and CEO of Qubole. “Qubole and Looker have similar philosophies that it is essential for businesses to make insights accessible to as many people in an organization as possible to stay competitive. The integration of our offerings serves that very purpose.”

QDS is a self-service platform for big data analytics that runs on the three major public clouds: Amazon AWS, Google Compute Engine and Microsoft Azure. QDS automatically provisions, manages and scales up clusters to match the needs of a particular job, and then winds down nodes when they’re no longer needed. QDS is a fully managed big data offering that leverages the latest open source technologies, such as Apache Hadoop, Hive, Presto, Pig, Oozie, Sqoop and Spark, to provide the only comprehensive, “everything-as-a-service” data analytics platform, complete with enterprise security features, an easy-to-use UI and built-in data governance.
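
To give a rough sense of what self-service access to QDS looks like programmatically, here is a hedged sketch using Qubole’s open-source qds-sdk Python client to submit a Hive query; the API token, table, and query are placeholders, and exact call signatures may vary across SDK versions.

```python
# Sketch using Qubole's qds-sdk client; token, query and table are placeholders.
from qds_sdk.qubole import Qubole
from qds_sdk.commands import HiveCommand

Qubole.configure(api_token="YOUR_QDS_API_TOKEN")  # placeholder credential

# Submit a Hive query; QDS brings up (or reuses) a cluster sized for the job
# and tears nodes down when they are no longer needed.
cmd = HiveCommand.run(
    query="SELECT country, COUNT(*) FROM page_views GROUP BY country")
print(cmd.status)  # "done" once the command has finished
```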

“Our customers are using Looker every day to operationalize their data and make better business decisions,” said Keenan Rice, vice president of alliances, Looker. “Now with our support for Qubole’s automated, scalable, big data platform, businesses have greater access to their cloud-based data. At the same time, Qubole’s rapidly growing list of customers utilize our data platform to find, explore and understand the data that runs their business.”

Source: insideBigData

Redis Collaborates with Samsung Electronics to Achieve Groundbreaking Database Performance

Redis today announced the general availability of Redis on Flash on standard x86 servers, including standard SATA-based SSD instances available on public clouds and more advanced NVMe-based SSDs like the Samsung PM1725. Running Redis, the world’s most popular in-memory data structure store, on cost-effective persistent memory options enables customers to process and analyze large datasets at near real-time speeds at 70% lower cost.

The Redis on Flash offering has been optimized to run Redis with flash memory used as a RAM extender. Operational processing and analysis of very large datasets in-memory is often limited by the cost of dynamic random access memory (DRAM). By running a combination of Redis on Flash and DRAM, datacenter managers benefit from leveraging the high throughput and low latency characteristics of Redis while achieving substantial cost savings.

New, next-generation persistent memory technology like Samsung’s NVMe SSD delivers orders of magnitude higher performance at only an incremental added cost compared to standard flash memory. Redis collaborated with Samsung to demonstrate 2 million ops/second with sub-millisecond latency and over 1GB/s of disk bandwidth on a single standard Dell Xeon server, placing 80 percent of the dataset on the NVMe SSD and only 20 percent of it on DRAM.
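
The published figures come from Samsung’s NVMe hardware and Redis’ enterprise software and cannot be reproduced with a toy script, but the sketch below shows how one might measure raw client-side SET throughput against any Redis endpoint using redis-py pipelines; the host, port, value size, and batch size are arbitrary assumptions.

```python
# Rough client-side microbenchmark sketch with redis-py pipelines. It illustrates
# how to measure ops/sec, not how the Samsung/Redis numbers above were produced.
import time
import redis

r = redis.StrictRedis(host="localhost", port=6379)
N, BATCH = 100000, 100

start = time.time()
for i in range(0, N, BATCH):
    pipe = r.pipeline(transaction=False)   # pipeline to amortize round trips
    for j in range(i, i + BATCH):
        pipe.set("key:%d" % j, "x" * 100)  # 100-byte values
    pipe.execute()
elapsed = time.time() - start

print("%.0f SET ops/sec over %.1fs" % (N / elapsed, elapsed))
```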

“We are happy to contribute to a new solution for our customers, one that shows a 40X improvement in throughput at sub-millisecond latencies compared to standard SATA-based SSDs,” stated Mike Williams, vice president, product planning, Samsung Device Solutions Americas. “This solution – using our next generation NVMe SSD technology and Redis in-memory processing – can play a key role in the advancement of high performance computing technology for the analysis of extremely large data sets.”

Spot.IM, a next-generation on-demand social network that powers social conversations on leading entertainment and media websites such as Entertainment Weekly and CG Media, is already reaping the benefits of deploying Redis on Flash. Spot.IM’s cutting-edge architecture seeks minimal latency, so the transition from webpage viewing to interactive dialog appears seamless. With Redis’ automatically scaling, highly responsive database, the service is able to easily handle 400,000 to one million user requests a day, to and from third-party websites, at sub-millisecond latencies. As Spot.IM scaled out its architecture in an AWS Virtual Private Cloud (VPC) environment, the company turned to Redis on Flash, delivered as Redis Enterprise Cluster (RLEC), to help optimize the costs of running an extremely demanding, high-performance, low-latency application without compromising on responsiveness. With RLEC Flash, Spot.IM maintains extremely high throughput (processing several hundred thousand requests per second) at significantly lower costs compared to a pure RAM solution.

“Redis is our main database and a critical component of our highly demanding application because our architecture needs to handle extremely high-speed operations with very little complexity and at minimal cost,” said Ishay Green, CTO, Spot.IM. “Redis technology satisfies all our requirements around high availability, seamless scalability, high performance and now at a very attractive price point with Redis on Flash.”

Redis on Flash is now available as RLEC (Redis Enterprise Cluster) over standard x86 servers, including SSD backed cloud instances and IBM POWER8 platforms. It is also available to Redis Cloud customers running on a dedicated Virtual Private Cloud environment.

Source: insideBigData

BackOffice Associates Releases Data Stewardship Platform 6.5 and dspConduct for Information Stewardship

BackOffice Associates, a leader in information governance and data modernization solutions, today announced Version 6.5 of its flagship Data Stewardship Platform (DSP) and debuted its newest dspConduct application for comprehensive business process governance and application data management across all data in all systems.

“Next-generation information governance is necessary to maximize the value of an enterprise’s data assets, improve the efficiency of business processes and increase the overall value of the organization,” said David Booth, chairman and CEO, BackOffice Associates. “Our continued vision and offerings are designed to help organizations embrace the next wave in data stewardship.”

dspConduct is built on DSP 6.5 – the company’s most powerful data stewardship platform to date. With this latest release, the DSP continues to drive the consumption and adoption of data stewardship by linking business users and technical experts through the business processes of data. By introducing new user experience paradigms, executive and management reporting, extended data source connectivity, and improved performance and scale, the 6.5 release continues to expand the platform’s capabilities and reach.

dspConduct helps Global 2000 organizations proactively set and enforce strategic data policies across the enterprise. The solution complements master data management (MDM) strategies by ensuring transactions run as planned in critical business systems such as ERP, CRM, PLM, and others.

“We designed dspConduct to extend beyond the traditional capabilities of master data management—bringing today’s business users a single platform that addresses their complex application data landscape with the tools needed to conduct world-class business process governance and achieve measurable business results,” added Rex Ahlstrom, Chief Strategy Officer, BackOffice Associates.

dspConduct helps business users achieve business process governance across all application data found in their organization’s enterprise architecture. The solution empowers users to plan and analyze specific policies for various types of enterprise data—whether customer, supplier, financial, human resources, manufacturing—and then execute and enforce those policies across the organization’s heterogeneous IT system landscape.  Built on BackOffice Associates’ more than 20 years of real-world experience meeting the most complex and critical data challenges, dspConduct and the DSP bring to the market a proven solution to maximize the business value of data.

Additional enhancements available in DSP 6.5 include:

  • Highest performance platform for data stewardship to date
  • Native Excel interoperability through the DSP for a simpler business-user experience
  • Native SAP HANA® connectivity and support for migrations to SAP® Business Suite 4 SAP HANA (SAP S/4HANA)
  • Generic interface layer for complete enterprise architecture interconnectivity
  • Native SAP Fiori® apps for migration and data quality metrics accessible by all stakeholders

BackOffice Associates was recently named a Strong Performer by Forrester Research in its independent report, “The Forrester Wave™: Data Governance Stewardship Applications, Q1 2016.”

Source: insideBigData