IDG Contributor Network: Letting go: trusting AI to do its thing while humans do theirs
Not even a year ago, businesses across industries were still fairly united in their skepticism of artificial intelligence. It took Salesforce announcing its AI platform Einstein for them to really take note and accept that AI was not only officially here, but here to stay.
Businesses that have since adopted AI quickly realized that part of the unwritten contract is surrendering control of data-related tasks and decision-making to a machine—either at the tactical or entire process level. They’re also learning to let go of their need to interpret and act on data insights. As a result, the decisions they’re making post-AI adoption are drastically different in nature than the ones they were making pre-AI. But somewhere in between AI adoption and full AI integration, these companies are having to cope with their fear of relinquishing control to a machine.
This fear has been here since day one, but as businesses become more educated about how AI works, the “letting go” narrative is now more prolific than ever—even among those who are already using AI to unprecedented success.
The fact of the matter is that humans, by our very nature, want to be in control. I’m sure there’s some Darwinian self-preservationist reasoning behind our need to carefully choreograph the elements around us; something to do with eliminating threats and carrying on as a species. But when it comes to surrendering control to AI—specifically as it relates to machines that automate overwhelmingly complex, data-oriented business processes—it’s likely that our willingness to let go will drive our ongoing success rather than prevent it.
In business, AI’s role is to maximize our productivity by taking away the minutiae created by data, freeing humans to work on higher-level strategic tasks, thereby making businesses and the individuals behind them “fitter”—in the Darwinian sense—over time.
It’s important to recognize that many people’s resistance to giving up control isn’t just a garden-variety power struggle or even a need to micromanage a given situation. It’s a matter of being cautious and establishing a foundation of trust, rather than going in blindly. Having worked with dozens of brands at varying phases of the AI adoption process, I’ve seen a few common themes invariably unfold among them, offering insight into how man and machine can work together effectively, and how to make the “letting go” process easier.
Humans don’t want to do robots’ jobs—or for robots to do theirs
To opponents of AI, the machine is the competition. From this perspective, AI is either going to take man’s jobs outright, or it is a literal competitor that must be beaten. The current trajectory of AI adoption in business reveals neither to be true.
Businesses that take a hybrid approach to the division of responsibilities are seeing better results from AI and humans working together than from either working independently. When it comes to data gathering, analysis and insights, man trying to keep up with robots is a losing proposition. Man can attempt to become robot, or robot can be forced to adopt the unique characteristics of man: creativity, reasoning, emotion and intuition. But the better approach is for them to work together and produce the best outcome possible as a result.
Humans need proof, and quickly
In my experience working specifically with marketers, they are very interested in AI when they’re introduced to it through the lens of the day-to-day tasks it will alleviate for them. Their enthusiasm dissipates, however, when they realize that the AI tool taking over these tasks is going to do so in a way they don’t understand. They don’t want to actively manage and implement the nitty-gritty tasks, but in order to give them up they need to know that the technology will execute those tasks better than they or their teams can.
One way of combating their fears here is to introduce highly targeted, quick-turnaround programs as trials that allow the AI to show what it’s made of. (Depending on the solution, “quick” could be a weekend or six months.) The sooner the machine can demonstrate its ability not only to understand what seems to be a complex problem, but also to out-produce man’s ability to solve it manually, the sooner humans will be able to relinquish control.
Transparency into what the machine is doing is imperative
Giving up day-to-day execution is a lot easier for humans than giving up their decision-making privileges. For marketers, the prospect of a wholly autonomous AI processing data and then acting on its insights without asking its human colleagues for their thoughts on the approach is the biggest obstacle to letting go of control.
This is important for AI providers to consider as they design the outputs shared with businesses. As it turns out, humans don’t necessarily need or want to be involved in every decision their AI is making, but they do want to understand how and why the AI made them. It would, of course, be impossible for a human to keep up with the pace of a machine’s decision-making, but making humans privy to key insights goes a long way toward creating trust between robot and man. It also creates a foundation for collaboration and idea sharing, as the human learns from the AI’s insights and is able to complement the AI’s work as a result.
Ultimately, it is this type of collaboration that makes man and machine allies rather than competitors. It’s also what establishes trust, so that humans can rest assured they’re evolving, rather than endangered.
This article is published as part of the IDG Contributor Network.