IDG Contributor Network: Big data + AI: Context, trust, and other key secrets to success
When Target deduced that a teenager from Minnesota was pregnant—and told her father about it before she’d broken the news herself—it was a reminder of just how powerful data analytics can be, and how companies must wield that power carefully.
Several years later, big data and machine learning are increasingly being used together in business, providing a powerful engine for personalized marketing, fraud detection, cybersecurity, and many other applications. A 2017 survey from Accenture found that 85 percent of executives plan to invest extensively in AI-related technologies over the next three years.
But machine learning doesn’t merely take existing problems and solve them more quickly. It’s an entirely new model that can address new types of problems, spur innovation, and uncover opportunities. To take advantage of it, businesses and users need to rethink some of their approaches to analytics and be aware of AI’s strengths and weaknesses.
Machine learning today, like AI in general, is both incredibly smart and incredibly dumb. It can comb through vast amounts of data with great speed and accuracy, identifying patterns and connections that might otherwise have gone unnoticed, but it does so without the broader context and awareness that people take for granted. Thus, it can divine that a girl is pregnant, but it has no idea how to act on that information appropriately.
As you embed machine learning into your own analytics processes, here are four areas you should be aware of to maximize the opportunities it presents.
Context is king
Machine learning can yield compelling insights within the scope of the information it has, but it lacks the wider context to know which results are truly useful. A query about what clothes a retailer should stock in its Manhattan store, for example, might return ten suggestions based on historic sales and demographic data, but some of those suggestions may be entirely impractical or things you’ve tried before. In addition, machines need people to tell them which data sets will be useful to analyze; if AI isn’t programmed to take a variable into account, it won’t see it. Business users must sometimes provide the context—as well as plain common sense—to know which data to look at and which recommendations are useful.
Cast a wide net
Machine learning can uncover precisely the answer you’re looking for—but it’s far more powerful when it uncovers something you didn’t know to ask. Imagine you’re a bank, trying to identify the right incentives to persuade your customers to take out another loan. Machine learning can crunch the numbers and provide an answer—but is securing more loans really the goal? If your objective is to increase revenue, your AI program might have even better suggestions, like opening a new branch, but you won’t know unless you define your query in a broad enough way to encompass other responses. AI needs latitude to do its best work, so don’t limit your queries based on assumptions.
Trust the process
One of the marvels of AI is that it can figure things out without our ever fully understanding how. When Google built a neural network and showed it YouTube videos for a week, the program learned to identify cats even though it had never been trained to do so. That kind of mystery is fine for an experiment like Google's, but what about in business?
One of the biggest challenges for AI is trust. People are less likely to trust a new technology if they don't know how it arrived at an answer, and with machine learning that's sometimes the case. The insights may be valuable, but business users need to trust the program before they can act on them. That doesn't mean accepting every result at face value (see "context" above), but users want the ability to see how a result was reached. As with people, trust takes time to build, and it usually forms after we've seen repeated good results. At first, we feel the need to verify the output, but once an algorithm has proved itself reliable, the trust becomes implicit.
Act responsibly
Target isn’t the only company that failed to see how the power of data analytics could backfire. After Facebook failed to predict how its data could be used by a bad actor like Cambridge Analytica, the best excuse it could muster was that it didn’t see it coming. “I was maybe too idealistic,” Mark Zuckerberg said. For all the good it brings, machine learning is a powerful capability and companies must be aware of potential consequences of its use. This can include how analytics results are used by employees, as in Target’s case, and also how data might be used by a third party when it’s shared. Naivety is rarely a good look, especially in business.
The use of AI is expanding as companies seek new opportunities for growth and efficiency, but technologies like machine learning need to be used thoughtfully. Sometimes the technology is embedded deep within applications, and not every employee needs to know that AI is at work behind the scenes. But for some uses, results need to be assessed critically to ensure they make good business sense. Despite its intelligence, artificial intelligence is still just that—artificial—and it takes people to maximize its value. Keeping the above recommendations in mind will help you do just that.
This article is published as part of the IDG Contributor Network.
Source: InfoWorld Big Data