IBM Expands Certified Public Cloud Infrastructure For SAP HANA®

IBM has announced the availability of new SAP-certified bare metal servers for SAP HANA® platform deployments in the IBM Cloud and new configurations for VMware environments. The new cloud infrastructure-as-a-service solutions are designed to give enterprises the power and performance they need to manage their mission-critical applications running on SAP HANA.

Each day more than 2.5 quintillion bytes of data are created. Enterprises are rapidly adopting cloud so they can turn this increasing volume of structured and unstructured data into valuable business intelligence and leverage AI and analytics technology to gain new insights from their core business applications.

IBM’s newest cloud infrastructure for SAP HANA on the IBM Cloud is designed to give clients larger and more powerful systems for running memory-intensive SAP HANA applications. Built specifically for larger SAP HANA workloads, the new solutions are certified by SAP to reassure clients that their production and test systems run optimally in the IBM Cloud, and they are backed by both IBM and SAP support teams. The new solutions include:

New SAP-Certified Bare Metal Servers for SAP HANA deployment on IBM Cloud: SAP has tested and certified new IBM Cloud bare metal servers with 4- and 8-socket Intel® Xeon® processors and up to 8 TB of memory to help enterprises efficiently run larger and more data-intensive in-memory workloads such as those in SAP S/4HANA®. The solutions will be available globally in the IBM Cloud.

SAP-Certified Solutions for Deploying SAP HANA on VMware on IBM Cloud: Enterprises can provision an SAP-certified bare metal server configured with VMware. This helps clients currently running application workloads on SAP HANA in a private VMware environment to more easily transition to the public cloud without having to retool or refactor their application. The solutions will be available globally in the IBM Cloud.

“To win in this arena, enterprises need a cloud platform that can help them maximize their core business applications and gain new insights from increasing volumes of data,” said John Considine, general manager for cloud infrastructure services, IBM. “IBM is building more powerful and cost-effective solutions for SAP HANA applications in the cloud so that enterprises can focus on business innovation instead of underlying infrastructure.”

With IBM’s global network of nearly 60 cloud data centers across 19 countries and six continents, clients can scale globally and run their SAP HANA workloads on IBM’s public cloud infrastructure when and where they need them. IBM Cloud also provides access to more than 150 APIs and services, including IBM Watson and analytics. The new solutions are designed to help clients co-locate their development, test, and production environments for applications running on SAP HANA, speed deployments, and increase utilization.

IBM provides a full spectrum of cloud solutions to support SAP HANA applications, including fully managed services and infrastructure-as-a-service certified by SAP. SAP HANA solutions are rapidly gaining momentum on the IBM Cloud with enterprises in a variety of industries around the world, such as PeroxyChem.

PeroxyChem had only 12 months after divesting from its parent company to migrate its mission-critical SAP business systems to a new platform — or face significant out-of-contract hosting fees. By working with IBM to set up and host the company’s new cloud environment for SAP and non-SAP business applications, PeroxyChem is realizing significant cost savings and global growth opportunities.

Source: CloudStrategyMag

Dell EMC And IBM To Offer VMware Solutions On The IBM Cloud

IBM and Dell EMC have announced an agreement to help accelerate cloud adoption by providing Dell EMC’s commercial customers access to VMware solutions on the IBM Cloud. As part of the relationship, IBM has added Dell EMC infrastructure products to the IBM Cloud and expanded its VMware solutions. Customers can now move workloads to the cloud while continuing to leverage the benefits of Dell EMC infrastructure and the VMware platform.

This news builds on IBM’s long-term partnership with VMware, which helps organizations extend existing workloads to the cloud in hours rather than weeks or months. The partnership is seeing rapid adoption, with more than 1,400 clients such as Telstra already leveraging IBM Cloud for VMware Solutions, which are available across IBM’s global network of nearly 60 cloud data centers in 19 countries.

As part of the agreement, Dell EMC will begin by offering VMware vCenter Server® on the IBM Cloud to its customers later this year, which will make it possible for them to rapidly extend on-premises data center capacity into the public cloud while taking advantage of their existing investments in VMware tooling, scripts, and training. This will give VMware users a familiar, predictable, and valuable experience so they can move their on-premises and public cloud workloads to the IBM Cloud with ease.

“Dell EMC is laser-focused on helping customers quickly and successfully embrace their digital transformations. Dell EMC and IBM realize many customers will approach the challenge differently, but all are interested in realizing the benefits of cloud-like efficiencies. Our relationship with IBM allows Dell’s commercial customers the ability to quickly and easily extend their VMware workloads to the IBM Cloud,” said Armughan Ahmad, senior vice president & general manager, Hybrid Cloud and Ready Solutions, Dell EMC. “Customers will now be able to easily migrate enterprise applications to the single-tenant, IBM-hosted environment, and maintain the same level of control and visibility as if it were part of their own data center.”

“Through the resale of IBM Cloud for VMware Solutions, Dell EMC can offer their commercial customers a hybrid cloud choice with VMware offerings ‘as-a-Service’ as they make the cloud transition,” said Ajay Patel, senior vice president and general manager, cloud provider software business unit, VMware. “VMware Cloud Infrastructure enables customers to run, manage, connect and secure their applications across a common operating environment, empowering them with a new level of IT agility and better economics.”

“Companies in every industry need fast and easy ways to deploy and move workloads to the cloud while not compromising security,” said Faiyaz Shahpurwala, general manager, IBM Cloud. “This agreement makes it easier than ever for Dell’s thriving channel of commercial customers to access VMware’s capabilities on IBM Cloud. Now, these organizations can rapidly deploy and scale pre-configured solutions that optimize their existing IT investments, while using the public cloud to extract new insights and value.”

IBM Cloud for VMware solutions targets commercial customers looking for enhanced security. Customers will be able to choose to run VMware solutions on dedicated bare metal servers, encrypt data at rest on attached storage and connect on-premises environments to IBM’s global network of cloud data centers across 19 countries.

Dell EMC will offer its customers choice for their VMware environments — on-premises or in the IBM Cloud — to help central IT more quickly meet the increasing demands from their businesses and access IBM’s cloud-native services including cognitive and analytics.

The speed of deployment and capacity expansion in the IBM Cloud helps IT deliver new services to lines of business more quickly and drive their company’s growth. Customers will be positioned to easily manage workloads in the cloud using the same familiar VMware-compatible tools while maintaining control with full administrative access to the VMware ESXi™ hosts. In addition to the expanded relationship with Dell EMC, IBM and VMware today announced momentum for their strategic partnership, which began in 2016 and aims to accelerate enterprise cloud adoption.

Dell EMC will begin reselling VMware solutions on IBM Cloud to commercial customers beginning in Q4 2017.

Source: CloudStrategyMag

CloudGenix Partners With Telarus

CloudGenix has announced that it has become a supplier partner of Telarus, a leading Utah-based, technology-driven master agent. Telarus is well known for its comprehensive portfolio of leading products and technologies that enable partners to provide a wealth of powerful and flexible services to their customers. Adding SD-WAN to that portfolio further extends Telarus’s value proposition to its partners and, in turn, to their customers.

CloudGenix SD-WAN evolves the customer WAN to securely take advantage of any transport, including broadband internet; improve application performance and user experience; reduce dependency on expensive Multiprotocol Label Switching (MPLS) private WANs; and reduce remote-office infrastructure and cost. CloudGenix SD-WAN is powered by CloudGenix AppFabric, which continually monitors granular, transaction-level performance metrics for each application and WAN link, and enforces policies for those applications based on user-defined requirements for performance, security, and compliance. By focusing on application-session metrics, CloudGenix can understand and enforce policy based on actual application performance, user experience, and business requirements.

“Telarus is very selective about products and technologies we provide to our partners. We do this to ensure that they are able to provide their customers with the best possible experience and the most robust telecom solutions,” said Amy Bailey, Telarus vice president of marketing. “By partnering with CloudGenix, our partners can now provide their customers with a leading SD-WAN solution to help customers realize the benefits of SD-WAN. AppFabric allows for use of business terms when defining WAN policy and reduces the burden of configuring a buffet of low-level networking rules. With CloudGenix, our partners and their customers can get out of the weeds and focus on their business goals.”

Regarding the partnership with Telarus, Robert Sexton, vice president of channels at CloudGenix said, “We are thrilled to work with Telarus. Adding AppFabric SD-WAN to their portfolio of services and technologies that they provide their partners allows us to collectively transform the way businesses design, implement, and manage their WAN. With CloudGenix, Telarus partners will be able to enable their customers to think about their WAN using business policies rather than technical policies, integrate cloud and SaaS seamlessly, and reduce costs across the board.”

The SD-WAN movement has forced the partner community to come up to speed quickly with this technology. CloudGenix is perfectly positioned to help Telarus partners bring SD-WAN to their customers.

Source: CloudStrategyMag

IDG Contributor Network: Using big data to improve customer experience and financial results

Technology provides businesses across all industries with tools and capabilities that optimize operations in order to maximize revenue and minimize waste. Many industries now have access to up-to-date information on customer experience, product acceptance, and financial transactions. Organizations no longer have to wait in the dark until the end of the week, month, or quarter, to find out whether marketing campaigns are producing intended results. Instead, decision makers can view, analyze, and respond to results in real time.

According to a survey from Dell, companies that have added big data strategies have grown by 50 percent. Although adding troves of digital information alone is not likely to be the primary cause of that growth, the correlation appears significant enough to demonstrate that thriving businesses see data as an integral part of their growth-oriented operational asset mix. Because so many successful organizations recognize this, data science—the mining and utilization of that information—has become a new economic cornerstone of enterprise success.

As data science has become a strategic necessity, information has made the corporate landscape more competitive than ever. Businesses of all sizes must find innovative ways to put data to use in order to remain relevant. What follows is a short list of companies that are gaining attention for their data-driven efforts.

MedAware

In the U.S. alone, prescription errors cost thousands of lives a year, and MedAware aims to change that. The company’s data-driven solution helps eliminate those errors by using algorithms that analyze electronic medical records; if a prescription deviates from standard treatment, it is flagged for review. MedAware offers several solutions that medical facilities and practices can put in place to catch costly errors, including seemingly simple features like spell check and reporting. Though often taken for granted, these features play a significant role in the safety and accuracy of medication administration and can ultimately make the difference between life and death. They can also be used to highlight trending issues in certain areas of operations and drive continuous quality improvement.

AnyRoad

Successful businesses know that one of the best ways to build customer engagement today is through in-person events and experiential marketing programs. AnyRoad helps those businesses measure the return on investment of each event with specialized data software designed to collect detailed information on participants and quantify their loyalty. That information can then be put to use in follow-up emails and surveys that convert attendees into long-term customers. Beyond relationship management, AnyRoad also offers deep insights that help brands A/B test different experiences and learn from past data to create better experiences in the future.

NBA 

Sports organizations rely on deeply engaged fans for success. The NBA now uses analytics as part of its drafting process, paying close attention to individual players’ stats as well as stats for entire teams. Data has become such an important part of professional sports that the NBA regularly hosts hackathons, inviting students and developers to spend 24 solid hours crunching numbers and solving problems. Winners get prizes like lunch with the NBA commissioner and a trip to the NBA All-Star Game.

Thinknear

Whole Foods (now part of Amazon) is well aware of its specialized demographic, which skews toward affluent shoppers on the younger end of middle age. The company goes a step further in monitoring and winning customers by partnering with Thinknear, a location marketing firm, to deliver advertising to anyone searching for information nearby. The same technology can be used for “geo-conquesting,” which targets customers at nearby stores with attractive offers designed to lure them to a competing store for a better deal. As a result of implementing Thinknear technology, Whole Foods saw an immediate increase in its conversion rates, which translated into both higher revenue and higher profit.

Amazon 

Amazon has long used analytics and artificial intelligence to better serve its large customer base. The company’s new Echo Look device is only the latest example: it gathers data as it provides a service. The camera-equipped style assistant not only offers advice from fashion specialists and machine-learning algorithms, it also collects information that can be used to serve customers better. By using Echo Look, customers opt into a system in which they agree to share their buying-preference information with Amazon. That information then lets Amazon deliver more accurate marketing messages and product offers, the results of which can be measured and analyzed to confirm targeting accuracy as well as the resulting sales and revenue.

Duetto 

Hotels are constantly looking for new ways to win over travelers in search of a place to stay. Duetto specializes in helping hotels maximize profit and minimize waste, using data to determine the ideal timing for the price surges and discounts that are a natural part of hotel finance. Each price takes dates, channels, room types, and other factors into account, ensuring each rate quote gives the customer the best deal while also maximizing profitability for the hotel. Because the software uses machine learning, its algorithms are constantly updated to provide the best results for each hotel using the service. This kind of intelligent price and offer optimization is a transformative breakthrough for an industry constantly seeking to refine operational efficiency.

RetailNext

Retailers of all sizes have learned the benefits of tracking consumer behavior in their stores. One tech company behind this tracking is RetailNext, a solutions provider that specializes in monitoring customer traffic. By gathering data on how often a customer visits, how long each stay is, and which displays he or she interacted with, stores can pinpoint areas that need improvement for a better customer experience. These traffic measurements also give store managers data they can use to improve employee scheduling. As a result, retailers can optimize staffing-related operating expenses to maximize profit and minimize waste.

As the examples above demonstrate, data powers many of the operations businesses conduct each day. For enterprises both large and small, it’s important to have a strategy in place to better serve customers using collected information. Once that strategy is implemented, decision makers can lead their organization more effectively through innovative approaches to customer experience optimization and improvement of financial results.

This article is published as part of the IDG Contributor Network. Want to Join?

Source: InfoWorld Big Data

IDG Contributor Network: Ensuring big data and fast data performance with in-memory computing

In-memory computing (IMC) technologies have been available for years. However, until recently, the cost of memory made IMC impractical for all but the most performance-critical, high value applications.

Over the last few years, however, with memory prices falling and demand for high performance increasing in just about every area of computing, I’ve watched IMC discussions go from causing glazed eyes to generating mild interest, to eliciting genuine excitement: “Please! I need to understand how this technology can help me!”

Why all the excitement? Because companies that understand the technology also understand that if they don’t incorporate it into their architectures, they won’t be able to deliver the applications and the performance their customers demand today and will need tomorrow. In-memory data grids and in-memory databases, both key elements of an in-memory computing platform, have gained recognition and mindshare as more and more companies have deployed them successfully.

All the new developments around in-memory computing shouldn’t fool you into thinking it’s unproven. It’s a mature, mainstream technology that’s been used for more than a decade in applications including fraud detection, high-speed trading and high performance computing.

Consider the challenges caused by the explosion in data being collected and processed as part of the digital transformation. As you go through your day, almost everything you do intersects with some form of data production, collection or processing: text messaging, emailing, social media interaction, event planning, research, digital payments, video streaming, interacting with a digital voice assistant…. Every department in your company relies on more sophisticated, web-scale applications (such as ERP, CRM and HRM), which themselves have ever more sophisticated demands for data and analytics.

Now add in the growing range of consumer IoT applications: smart refrigerators, watches and security systems—with nonstop monitoring and data collection—and connected vehicles with constant data exchange related to traffic and road conditions, power consumption and the health of the car. Industrial IoT is potentially even bigger. I recently read that to improve braking efficiency, a train manufacturer is putting 400 sensors in each train, with plans to increase that number to 4,000 over the next five years. And data from all of these applications must be collected and often analyzed in real time.

That’s where in-memory computing comes in. An IMC platform offers a way to transact and analyze data that resides completely in RAM, instead of continually retrieving data from disk-based databases into RAM before processing. In addition, in-memory computing solutions are built on distributed architectures, so they can use parallel processing to further speed the platform compared with single-node, disk-based database alternatives. These benefits can be gained by simply inserting an in-memory computing layer between existing application and database layers. Taken together, the performance gains can be 1,000x or more.
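The caching pattern behind that layer is easy to sketch. The following Python snippet is a hedged, single-process illustration of a read-through/write-through cache sitting between an application and a database; the class name is invented and SQLite merely stands in for the disk-based backend, so it shows the pattern rather than any vendor’s API.

```python
# Minimal sketch of an in-memory layer between an application and a database.
# A plain dict stands in for the RAM store that a real IMC platform would
# distribute across a cluster; SQLite stands in for the disk-based database.
import sqlite3

class InMemoryCacheLayer:
    def __init__(self):
        self._ram_store = {}                                   # data held entirely in RAM
        self._db = sqlite3.connect(":memory:")                 # stand-in for the backing database
        self._db.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")

    def get(self, key):
        # Serve from RAM when possible; fall back to the database on a miss.
        if key in self._ram_store:
            return self._ram_store[key]
        row = self._db.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
        if row is not None:
            self._ram_store[key] = row[0]                      # warm the cache for next time
            return row[0]
        return None

    def put(self, key, value):
        # Write-through: update RAM and the backing database together.
        self._ram_store[key] = value
        self._db.execute("INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)", (key, value))
        self._db.commit()

cache = InMemoryCacheLayer()
cache.put("customer:42", "Acme Corp")
print(cache.get("customer:42"))                                # repeat reads are served from RAM
```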

Also, because in-memory computing solutions are distributed systems, it is easy to increase the RAM pool and the processing power of the system by adding nodes to the cluster. The systems will automatically recognize the new node and rebalance data between the nodes.
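To illustrate the rebalancing idea, here is a hedged toy sketch of how a distributed store might recompute key ownership when a node joins. Real data grids typically use consistent hashing or partition maps to minimize data movement; this modulo scheme is only meant to show which keys would migrate to the new node.

```python
# Illustrative sketch (not any real IMC product's API) of assigning keys to
# nodes by hash and computing which keys move when a node is added.
import hashlib

def node_for_key(key, nodes):
    # Hash the key and map it to one of the current nodes.
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

def rebalance(keys, old_nodes, new_nodes):
    # Recompute the owning node for every key; return only the keys that move.
    moves = {}
    for key in keys:
        old_owner = node_for_key(key, old_nodes)
        new_owner = node_for_key(key, new_nodes)
        if old_owner != new_owner:
            moves[key] = (old_owner, new_owner)
    return moves

nodes = ["node-1", "node-2"]
keys = [f"key-{i}" for i in range(10)]
print(rebalance(keys, nodes, nodes + ["node-3"]))  # keys that migrate to the new node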

Today, IMC use cases continue to expand. Companies are accelerating their operational and customer-facing applications by deploying in-memory data grids between the application and database layers of their systems to cache the data and enable distributed parallel processing across the cluster nodes. Some are using IMC technology for event and stream processing to rapidly ingest, analyze, and filter data on the fly before sending the data elsewhere.
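The ingest-analyze-filter pattern mentioned above can be sketched independently of any particular product. The toy Python generator pipeline below, with made-up sensor events and a made-up threshold, only illustrates the shape of filtering a stream on the fly before forwarding it elsewhere.

```python
# Toy sketch of ingest -> analyze -> filter for streaming events; in a real
# system the source would be sensors or a message bus, not a hard-coded list.
def ingest():
    events = [{"sensor": "brake-1", "temp_c": 80}, {"sensor": "brake-2", "temp_c": 121}]
    for event in events:
        yield event

def analyze_and_filter(stream, threshold_c=100):
    # Keep only the events that need attention before forwarding them downstream.
    for event in stream:
        if event["temp_c"] > threshold_c:
            yield {**event, "alert": True}

for alert in analyze_and_filter(ingest()):
    print(alert)   # forward to downstream storage or alerting in a real pipeline
```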

Many large analytic databases and data warehouses are using IMC technology to accelerate complicated queries on large data sets. And companies are beginning to deploy hybrid transactional/analytical processing (HTAP) models which allow them to transact and run queries on the same operational data set, reducing the complexity and cost of their computing infrastructure in use cases such as IoT.

The importance of IMC will continue to increase over the coming years as development continues and new technologies become available, including:

First-class support for distributed SQL

Strong support for SQL will extend the life of this industry standard, eliminating the need for SQL professionals to learn proprietary languages to create queries—something they can do with a single line of SQL code. Leading in-memory data grids already include ANSI SQL-99 support.
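As a single-node stand-in for that idea, the snippet below uses SQLite’s in-memory mode to run plain SQL against data held entirely in RAM. A distributed in-memory data grid exposes a similar ANSI SQL surface across a whole cluster, which this small example does not attempt to show.

```python
# Single-node illustration: standard SQL over data that lives entirely in RAM.
import sqlite3

conn = sqlite3.connect(":memory:")                 # the whole table resides in memory
conn.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("IBM", 100, 144.5), ("SAP", 50, 98.1), ("IBM", 25, 145.0)],
)

# One line of standard SQL answers the business question.
for row in conn.execute("SELECT symbol, SUM(qty * price) AS notional FROM trades GROUP BY symbol"):
    print(row)
```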

Non-volatile memory (NVM)

NVM retains data during a power loss, eliminating the need for software-based fault-tolerance. A decade from now, NVM will likely be the predominant computing storage model, enabling large-scale, in-memory systems which only use hard disks or flash drives for archival purposes.

Hybrid storage models for large datasets

By supporting a universal interface to all storage media—RAM, flash, disk, and NVM—IMC platforms will give businesses the flexibility to easily adjust storage strategy and processing performance to meet budget requirements without changing data-access mechanisms. 

IMC as a system of record

IMC platforms will increasingly be used by businesses as authoritative data sources for business-critical records. This will in part be driven by IMC support for highly efficient hybrid transactional and analytical processing (HTAP) on the same database as well as the introduction of disk-based persistence layers for high availability and disaster recovery.

Artificial intelligence

Machine learning on small, dense datasets is easily accomplished today, but machine learning on large, sparse data sets requires a data management system that can store terabytes of data and perform fast parallel computations, a perfect IMC use case.

In-memory computing is now more affordable than ever, and vendors are making their IMC platforms easier to use and applicable to more use cases. The sooner you begin exploring IMC, the sooner your company can benefit from it.

This article is published as part of the IDG Contributor Network. Want to Join?

Source: InfoWorld Big Data

Machine learning skills for software engineers

A long time ago, in the mid-1950s, Robert Heinlein wrote a story called “The Door into Summer,” in which a competent mechanical engineer hooked up some “Thorsen tubes” for pattern-matching memory and some “side circuits to add judgment” and spawned an entire industry of intelligent robots. To make the story more plausible, it was set well into the future, in 1970. These robots could have a task like dishwashing demonstrated to them and then replicate it flawlessly.

I don’t think I have to tell you, but it didn’t turn out that way. It may have seemed plausible in 1956, but by 1969 it was clear it wouldn’t happen in 1970. And then a bit later it was clear that it wouldn’t happen in 1980, either, nor in 1990 or 2000. Every 10 years, the ability for a normal engineer to build an artificially intelligent machine seemed to retreat at least as fast as time passed. As technology improved, the enormous difficulty of the problem became clear as layer after layer of difficulties were found.

It wasn’t that machine learning wasn’t solving important problems; it was. For example, by the mid-’90s essentially all credit card transactions were being scanned for fraud using neural networks. By the late ’90s Google was analyzing the web for advanced signals to aid in search. But your day-to-day software engineer didn’t have a chance of building such a system unless they went back to school for a Ph.D. and found a gaggle of like-minded friends who would do the same thing. Machine learning was hard, and each new domain required breaking a significant amount of new ground. Even the best researchers couldn’t crack hard problems like image recognition in the real world.

I am happy to say that this situation has changed dramatically. I don’t think that any of us is about to found a Heinlein-style, auto-magical, all-robotic engineering company in the near future, but it is now possible for a software engineer without any particularly advanced training to make systems that do really amazing stuff. The surprising part is not that computers could do these things. (It has been known since 1956 that this would be possible any day now!) What is surprising is how far we’ve come in the last decade. What would have made really good Ph.D. research 10 years ago is now a cool project for a weekend.

Machine learning is getting easier (or at least more accessible)

In our forthcoming book “Machine Learning Logistics” (coming in late September 2017 from O’Reilly), Ellen Friedman and I describe a system known as TensorChicken that our friend and software engineer Ian Downard has built as a fun home project. The problem to be solved was that blue jays were getting into his chicken coop and pecking the eggs. He wanted to build a computer vision system that could recognize a blue jay so that some kind of action could be taken to stop the pecking.

After seeing a deep learning presentation by Google engineers from the TensorFlow team, Ian got cracking and built just such a system. He was able to do this by starting with a partial model known as Inception-v3 and training it to the task of blue jay spotting with a few thousand new images taken by a webcam in his chicken coop. The result could be deployed on a Raspberry Pi, but plausibly fast response time requires something a bit beefier, such as an Intel Core i7 processor.
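For readers who want to try the same idea, here is a hedged Keras sketch of transfer learning from Inception-v3, with placeholder paths and hyperparameters. It is not Ian’s actual retraining script, just the general recipe of freezing a pretrained base and training a small classification head on new images.

```python
# Hedged sketch of transfer learning for a binary "blue jay / not blue jay"
# classifier. Directory path, batch size, and epoch count are placeholders.
import tensorflow as tf

IMG_SIZE = (299, 299)                       # Inception-v3's expected input resolution

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "chicken_coop_images/",                 # hypothetical folder: one subfolder per class
    image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False                      # keep the pretrained features frozen

model = tf.keras.Sequential([
    tf.keras.layers.Lambda(tf.keras.applications.inception_v3.preprocess_input),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # blue jay vs. everything else
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```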

And Ian isn’t alone. There are all sorts of people, many of them not trained as data scientists, building cool bots to do all kinds of things. And an increasing number of developers are beginning to work on a variety of different, serious machine learning projects as they recognize that machine learning and even deep learning have become more accessible. Developers are beginning to fill roles as data engineers in a “data ops” style of work, where data-focused skills (data engineering, architecture, data science) are combined with a devops approach to build things such as machine learning systems.

It’s impressive that a computer can fairly easily be trained to spot a blue jay, using an image recognition model. In many cases, ordinary folks can sit down and just do this and a whole lot more besides. All you need is a few pointers to useful techniques, and a bit of a reset in your frame of mind, particularly if you’re mainly used to doing software development.

Building models is different from building ordinary software in that it is data-driven instead of design-driven. You have to look at the system from an empirical point of view and rely a bit more than you might like on experimental proofs of function rather than careful implementation of a good design accompanied with unit and integration tests. Also keep in mind that in problem domains where machine learning has become easy, it can be stupidly easy. Right next door, however, are problems that are still very hard and that do require more sophisticated data science skills, including more math. So prototype your solution. Test it. Don’t bet the farm (or the hen house) until you know your problem is in the easy category, or at least in the not-quite-bleeding-edge category. Don’t even bet the farm after it seems to work for the first time. Be suspicious of good looking results just like any good data scientist.

Essential data skills for machine learning beginners

The rest of this article describes some of the skills and tactics that developers need in order to use machine learning effectively.

Let the data speak

In good software engineering, you can often reason out a design, write your software, and validate the correctness of your solution directly and independently. In some cases, you can even mathematically prove that your software is correct. The real world does intrude a bit, especially when humans are involved, but if you have good specifications, you can implement a correct solution.

With machine learning, you generally don’t have a tight specification. You have data that represents the past experience with a system, and you have to build a system that will work in the future. To tell if your system is really working, you have to measure performance in realistic situations. Switching to this data-driven, specification-poor style of development can be hard, but it is a critical step if you want to build systems with machine learning inside. 

Learn to spot the better model

Comparing two numbers is easy. Assuming they are both valid values (not NaNs), you check which is bigger, and you are done. When it comes to the accuracy of a machine learning model, however, it isn’t so simple. You have lots of outcomes for the models you are comparing, and there usually isn’t a clear-cut answer. Pretty much the most basic skill in building machine learning systems is the ability to look at the history of decisions that two models have made and determine which model is better for your situation. This judgment requires basic techniques for reasoning about quantities described by an entire cloud of values rather than a single number. It also typically requires that you be able to visualize data well; histograms, scatter plots, and many related techniques will be required.
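One simple way to handle that cloud of values is to bootstrap the accuracy difference between the two decision histories. The sketch below uses synthetic predictions purely for illustration; with real models you would substitute their recorded decisions on the same held-out examples.

```python
# Sketch: compare two models' decision histories on the same held-out examples,
# using a bootstrap to see whether the accuracy difference is more than noise.
import numpy as np

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)                            # placeholder labels
model_a = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)   # ~85% accurate
model_b = np.where(rng.random(1000) < 0.83, y_true, 1 - y_true)   # ~83% accurate

correct_a = (model_a == y_true).astype(float)
correct_b = (model_b == y_true).astype(float)

diffs = []
for _ in range(5000):
    idx = rng.integers(0, len(y_true), size=len(y_true))          # resample with replacement
    diffs.append(correct_a[idx].mean() - correct_b[idx].mean())

low, high = np.percentile(diffs, [2.5, 97.5])
print(f"accuracy difference 95% interval: [{low:.3f}, {high:.3f}]")
# If the interval straddles zero, you don't have evidence that model A beats model B.
```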

Be suspicious of your conclusions

Along with the ability to determine which variant of a system is doing a better job, it is really important to be suspicious of your conclusions. Are your results a statistical fluke that will go the other way with more data? Has the world changed since your evaluation, thus changing which system is better? Building a system with machine learning inside means that you have to keep an eye on the system to make sure that it is still doing what you thought it was doing to start with. This suspicious nature is required when dealing with fuzzy comparisons in a changing world.

Build many models to throw away

It is a well-worn maxim in software development that you will need to build one version of your system just to throw away. The idea is that until you actually build a working system, you won’t really understand the problem well enough to build that system well. So you build one version in order to learn and then use that learning to design and build the real system.

With machine learning, the situation is the same, but more so. Instead of building just one disposable system, you must be prepared to build dozens or hundreds of variants. Some of these variants might use different learning technologies or even just different settings for the learning engine. Other variants might be completely different restatements of the problem or the data that you use to train the models. For instance, you might determine that there is a surrogate signal that you could use to train the models even if that signal isn’t really what you want to predict. That might give you 10 times more data to train with. Or you might be able to restate the problem in a way that makes it simpler to solve.
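A hedged sketch of this throwaway-model workflow, using scikit-learn on synthetic data, might look like the following. The candidate estimators and parameter grids are arbitrary placeholders; the point is the loop over variants, not any particular winner.

```python
# Sketch of treating models as disposable: sweep several learning technologies
# and settings, keep whatever wins today, and expect to redo this later.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200],
                                              "max_depth": [5, None]}),
]

best = None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5).fit(X_train, y_train)
    score = search.score(X_test, y_test)                 # evaluate on held-out data
    print(type(estimator).__name__, search.best_params_, f"holdout={score:.3f}")
    if best is None or score > best[1]:
        best = (search.best_estimator_, score)
print("current winner:", best)
```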

The world may well change. This is particularly true, for instance, when you are building models to try to catch fraud. Even after you build a successful system, you will need to change in the future. The fraudsters will spot your countermeasures, and they will change their behavior. You will have to respond with new countermeasures.

So for successful machine learning, plan to build a bunch of models to throw away. Don’t expect to find a golden model that is the answer forever.

Don’t be afraid to change the game

The first question that you try to solve with machine learning is usually not quite the right one. Often it is dramatically the wrong one. The result of asking the wrong question can be a model that is nearly impossible to train, or training data that is impossible to collect. Or it may be a situation where a model that finds the best answer still has little value.

Recasting the problem can sometimes give you a situation where a very simple model delivers very high value. I once had a problem that was supposedly about recommending sale items. It was really hard to get even trivial gains, even with some pretty heavy techniques. As it turned out, the high-value problem was to determine when good items went on sale. Once you knew when, the problem of which products to recommend became trivial because there were many good products to recommend. At the wrong times, there was nothing worth recommending anyway. Changing the question made the problem vastly easier.

Start small

It is extremely valuable to be able to deploy your original system to just a few cases or to just a single sub-problem. This allows you to focus your effort, build expertise in your problem domain, and win support in your company as you build models.

Start big

Make sure that you get enough training data. In fact, if you can, make sure that you get 10 times more than you think you need.

Domain knowledge still matters

In machine learning, figuring out how a model can make a decision or a prediction is one thing. Figuring out what really are the important questions is much more important. As such, if you already have a lot of domain knowledge, you are much more likely to ask the appropriate questions and to be able to incorporate machine learning into a viable product. Domain knowledge is critical to figuring out where a sense of judgment needs to be added and where it might plausibly be added.

Coding skills still matter

There are a number of tools out there that purport to let you build machine learning models using nothing but drag-and-drop tooling. The fact is, most of the work in building a machine learning system has nothing to do with machine learning or models and has everything to do with gathering training data and building a system to use the output of the models. This makes good coding skills extremely valuable. There is a different flavor to code that is written to manipulate data, but that isn’t hard to pick up. So the basic skills of a developer turn out to be useful skills in many varieties of machine learning.

Many tools and new techniques are becoming available that allow practically any software engineer to build systems that use machine learning to do some amazing things. Basic software engineering skills are highly valuable in building these systems, but you need to augment them with a bit of data focus. The best way to pick up these new skills is to start today in building something fun.

Ted Dunning is chief applications architect at MapR Technologies and a board member for the Apache Software Foundation. He is a PMC member and committer of the Apache Mahout, Apache Zookeeper, and Apache Drill projects and a mentor for various incubator projects. He was chief architect behind the MusicMatch (now Yahoo Music) and Veoh recommendation systems and built fraud detection systems for ID Analytics (LifeLock). He has a Ph.D. in computing science from the University of Sheffield and 24 issued patents to date. He has co-authored a number of books on big data topics, including several published by O’Reilly related to machine learning. Find him on Twitter as @ted_dunning.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Source: InfoWorld Big Data

Review: Domo is good BI, not great BI

In the last couple of years I have reviewed four of the leading business intelligence (BI) products: Tableau, Qlik Sense, Microsoft Power BI, and Amazon QuickSight. In general terms, Tableau sets the bar for ease of use, and Power BI sets the bar for low price.

Domo is an online BI tool that combines a large assortment of data connectors, an ETL system, a unified data store, a large selection of visualizations, integrated social media, and reporting. Domo claims to be more than a BI tool because its social media tool can lead to “actionable insights,” but in practice every BI tool either leads to actions that benefit the business or winds up tossed onto the rubbish heap.

Domo is a very good and capable BI system. It stands out with support for lots of data sources and lots of chart types, and the integrated social media feature is nice (if overblown). However, Domo is harder to learn and use than Tableau, Qlik Sense, and Power BI, and at $2,000 per user per year it is several times more expensive.

Depending on your needs, Tableau, Qlik Sense, or Power BI is highly likely to be a better choice than Domo.  

Source: InfoWorld Big Data

BMJ Relies On Datapipe For Chinese Expansion With Alibaba Cloud

Datapipe has announced that BMJ, a global health care knowledge provider, has used Datapipe’s expertise to launch its hybrid multi-cloud environment and enter the Chinese market using Alibaba Cloud. The announcement follows last year’s partnership in which BMJ implemented a DevOps culture and virtualized its IT infrastructure with Datapipe’s private cloud.

In 2016, Datapipe announced it was named a global managed service provider (MSP) partner of Alibaba Cloud. Later that year, it was named Asia Pacific Managed Cloud Company of the Year by Frost & Sullivan. BMJ opened a local Beijing office in 2015, and its attraction to engaging Datapipe’s services at the outset was due to Datapipe’s on-the-ground support in China and knowledge of the local Chinese market.

“We see the People’s Republic of China as a key part of our growing international network. Therefore, we needed the technical expertise to be able to expand our services into China, and a partner to help us navigate the complex frameworks required to build services there. Datapipe, with its on-the-ground support in China and knowledge of the market, has delivered local public-cloud infrastructure utilizing Alibaba Cloud,” said Sharon Cooper, chief digital officer, BMJ.

Last year, BMJ used Datapipe’s expertise to move to a new, agile way of working. BMJ fully virtualized its infrastructure and automated its release cycle using Datapipe’s private cloud environment. Now, it has implemented a hybrid multi-cloud solution using both AWS and Alibaba Cloud, fully realizing the strategy it started working towards two years ago.

 “It is exciting to be working with a company that has both a long, distinguished history and is also forward-thinking in embracing the cloud,” said Tony Connor, head of EMEA Marketing, Datapipe. “Datapipe partnered with Alibaba Cloud last year in order to better support our clients’ global growth both in and out of China. We are delighted to continue to deliver for BMJ, building upon our private cloud foundations, and taking them to China with public cloud infrastructure through our relationship with AliCloud.”

“We have now fully realized the strategy that we first mapped out two years ago, when we started our cloud journey. In the first year, we were able to fully virtualise our infrastructure using Datapipe’s private cloud, and in the process, move to a new, agile way of working. In this second year, we have embraced public cloud and taken our services over to China,” said Alex Hooper, head of operations, BMJ.

“Previously, we could only offer stand-alone software products in China, which are delivered on physical media and require quarterly updates to be installed by the end-user. With Datapipe’s help, we now have the capability to offer BMJ’s cloud-based services to Chinese businesses,” Hooper added.

This has been made possible by using Alibaba Cloud data centers located in China, which link to BMJ’s core infrastructure and give BMJ all the benefits of public cloud infrastructure while keeping its services within China to satisfy the requirements of the Chinese authorities.

“With Datapipe’s help, it was surprisingly easy to run our services in China and link them to our core infrastructure here in the UK; Datapipe has done an exemplary job,” Hooper said.

BMJ has seen extraordinary change in its time. Within its recent history, it has transitioned from traditional print media to becoming a digital content provider. With Datapipe’s help it now has the infrastructure and culture in place to cement its position as a premier global digital publisher and educator.

Source: CloudStrategyMag