Artificial Intelligence (AI) may be easy to dismiss given that its portrayal in popular culture—and even in business—seemed futuristic even up to a couple of years ago. For boards, dismissing AI’s near-, mid- and long-term influence on their companies would be a misstep. While we may not be colonized by an army of Terminators any time soon, AI is already here in various business applications and is becoming more sophisticated by the day.
In particular, the use of machine learning to create efficiencies in everyday tasks will directly affect many corporate businesses. The question for boards is how these changes will influence a company’s workforce, both in terms of what is needed to train the AI infrastructure itself and how the technology will replace or complement the existing employee base.
AI is off to the races in terms of technological development, and while lip service is paid to understanding the opportunities and potential benefits, there is still little practical discussion at the highest corporate levels about what this will mean two, five or 10 years down the road.
“AI is a funny creature—it’s been around in tech as a concept and has even had a home in business, like automatic check reading and things like that,” said Vivienne Ming, Founder and Executive Chair of Socos Labs and a theoretical neuroscientist. “It’s famous for having boom and bust qualities, and recently it has seen a dramatic upswing and is in a boom season.”
The AI evolution—I hesitate to call it a “revolution” at this point, given its infancy, if not outright absence, at most companies—is a natural progression from the current technology environment. It’s tempting to start from Hollywood or science fiction and work backwards: with personified robots played by Arnold Schwarzenegger or Haley Joel Osment in mind, it’s hard to envision the jump from now to then. But more simply, and more immediately, business technology applications have given us more data, and the ability to analyze more information, than there are human resources to handle. The promise that automation can provide better customer insights and more efficient workflows is a tangible outcome for virtually any business.
“Every business today is a tech business, because everything is delivered via IT interfaces,” said Anastassia Lauterbach, Founder & CEO of Lauterbach Ventures and a board member at Dun & Bradstreet. “There is a huge gap in education in the U.S. for top leaders, and this status quo has to change.”
“If a company can leverage data better, they will be more competitive, and there should be an AI plan in every company,” Lauterbach added. “Boards have to ask ‘What is our data, how valuable is it and what could we do with it?’ When that data is clear, then you can say what you would change where data might be helpful.”
“AI makes things faster, better and cheaper. And sometimes, rather than just enhancing the ability to execute, it changes the strategy itself.”
- Ajay Agrawal, Founder of the Creative Destruction Lab
As an example, a 2017 Accenture study of how AI is being implemented at more than 1,000 global companies found that many firms are at least experimenting with AI in critical corporate functions, including customer service, marketing and sales, and management of “non-customer external relationships” (Graph 1).
A Harvard Business Review article authored by H. James Wilson, Managing Director of Information Technology & Business Research at Accenture Research; Paul Daugherty, Chief Technology Officer of Accenture; and Nicola Morini Bianzino, Global Lead of the Artificial Intelligence practice at Accenture, summed up the imperative for companies. In particular, the article implies why it is critical that boards treat oversight of emerging technologies and associated processes as a fiduciary duty.
“With many new innovations, the technology often gets ahead of businesses’ ability to address the various ethical, societal and legal concerns involved,” the authors wrote. “With AI, any issues become all the more pressing as those systems increasingly become the face of many company brands.”
Ajay Agrawal, a professor of entrepreneurship and strategic management at the University of Toronto, founder of the Creative Destruction Lab and the co-founder of The Next 36, Next AI and Kindred, gave a keynote at the recent National Association of Corporate Directors (NACD) Conference, underscoring the salience of this topic at the highest corporate levels. In front of more than 1,500 board members, he noted that “everyone who serves on a corporate board in this room needs to understand how fast the knob is turning in your market [with respect to AI]. And don’t assume it’s going to turn at a linear pace.”
Over time, the cost of AI will drop as the technology becomes more sophisticated. What AI does best, Agrawal noted, is predict, and when the cost of prediction falls, two things will happen: first, tasks already framed as prediction will become cheaper and more accurate; second, problems never before treated as prediction problems will be recast as such.
Number one is easy to understand. Businesses that rely on demand forecasting, inventory modeling or other predictive measures critical to cost containment or revenue forecasting will see simpler, more cost-effective modeling.
AI works very straightforwardly here: a human provides the necessary inputs, teaches the machine to understand those inputs, and the machine learns to produce outputs with much greater efficiency—and arguably, accuracy—than a human could manually. It’s not unlike what we’ve already been doing for 40-plus years with personal computers.
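That input-to-output loop is ordinary supervised learning. As a minimal sketch (the spend-versus-demand numbers below are invented purely for illustration), a handful of historical input/outcome pairs can fit a simple model, which then predicts a case it has never seen:

```python
# Minimal supervised-learning sketch: fit a line to historical data,
# then predict an unseen case. All figures are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical history: marketing spend (input) vs. units sold (output).
spend = [10, 20, 30, 40, 50]
units = [120, 190, 310, 390, 510]

a, b = fit_line(spend, units)
predicted = a * 60 + b   # forecast demand at a spend level never observed
print(round(predicted))  # → 598
```

Real systems swap the hand-rolled line fit for far richer models, but the division of labor stays the same: humans supply the labeled history, the machine supplies the prediction.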
The second outcome, while not necessarily difficult to understand, is a little further off because it requires more investment in a number of ways. Agrawal provided an example that we’re all familiar with conceptually, but perhaps not practically. Autonomous vehicles have been around for a while, but initially their “training grounds” were factory floors rather than the real world. There they could learn to dodge a shelf, make a left turn, or stop and go in a controlled environment where the variables were known.
But those vehicles could never operate on a city street, where the “if this, then that” scenarios are effectively infinite. That is why experts called integrating autonomous vehicles into society an intractable problem and predicted there wouldn’t be one on the street for 30 years.
When autonomous vehicle engineers recast driving as a prediction problem, the equation changed. The prediction became “What would a human do?”—and not just any human, but the best possible human.
So they put humans in the car, told them to drive, and mounted an AI camera alongside them. As they drove, the engineers collected data: what the drivers saw and heard, and which of a handful of actions they took. We really only make a few choices while driving: accelerate, slow down, turn right, turn left, reverse and so on. At first, the AI makes many mistakes in guessing the right action for a given stimulus, but as it watches, its predictions get better and better, and confidence grows. Eventually, it can predict what the best human drivers would do, and it is not subject to fatigue, distraction, inebriation or any of the other factors that degrade driver performance.
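The “What would a human do?” reframing is, at its core, imitation learning: log pairs of what the driver saw and what the driver did, then predict the majority human action for each new stimulus. A toy sketch, with invented observation and action labels:

```python
from collections import Counter, defaultdict

# Toy imitation-learning sketch: for each observation, remember what
# human drivers did, then predict the majority action. Labels invented.

class HumanActionPredictor:
    def __init__(self):
        self.log = defaultdict(Counter)  # observation -> action counts

    def observe(self, observation, action):
        """Record one (what the driver saw, what the driver did) pair."""
        self.log[observation][action] += 1

    def predict(self, observation):
        """Best guess at what a human would do; None if never seen."""
        counts = self.log.get(observation)
        return counts.most_common(1)[0][0] if counts else None

model = HumanActionPredictor()
# Simulated drive logs: the same stimulus, mostly answered the same way.
for obs, act in [("red_light", "brake"), ("red_light", "brake"),
                 ("red_light", "accelerate"),   # one bad driver
                 ("green_light", "accelerate"),
                 ("pedestrian_ahead", "brake")]:
    model.observe(obs, act)

print(model.predict("red_light"))    # prints "brake" (majority vote)
print(model.predict("green_light"))  # prints "accelerate"
```

With only a handful of logged drives the guesses are crude, but as the log grows, the majority vote converges on what good human drivers reliably do, which is the confidence-building process Agrawal describes.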
“AI makes things faster, better and cheaper,” Agrawal said. “And sometimes, rather than just enhancing the ability to execute, it changes the strategy itself.”
Prediction is valuable because it is a critical input into decision-making, and AI will bring down the cost of prediction as well as make predictions more efficient. As AI takes over the human role in prediction, judgment and taking action—the things humans do and will continue to do—will become more valuable, Agrawal said.
“There is not enough conversation in the boardroom and among top leadership teams. There is a very large gap between what they should know and what they actually know.”
- Anastassia Lauterbach, Board Member, Dun & Bradstreet
Theoretically, the concept of artificial intelligence offering a brave new world of flawless predictions in controlled environments sounds fine and dandy. And realistically, if that were the final outcome, boards could feel the same way about the implications of AI for their companies. Unfortunately, it’s not that simple.
“There is not enough conversation in the boardroom and among top leadership teams. There is a very large gap between what they should know and what they actually know,” said Lauterbach. “Everyone who is leading a large division has to understand financials. That doesn’t mean everyone is a certified accountant, but they have a working knowledge. The imperative to understand certain concepts in technology is not the same, but it should be.”
Of the many concerns around AI, one stands out that ties directly to boards’ role in oversight of risk: safety.
According to Ilya Sutskever, co-founder and director of research at OpenAI, and his colleague Dario Amodei, who leads safety research at the firm, there is a lot more work ahead before AI should be unleashed on the business world at large. In an article penned for The Wall Street Journal’s special publication, “The Future of Everything” (November/December 2017), the two outlined major risks facing corporations.
For example, an AI tasked with optimizing profits could purposefully manipulate information to try to benefit from a market shift—think automated “fake news.” While the people who designed the system may not have intended that outcome, the speed at which these systems learn could exceed the pace at which humans can monitor them.
“By the time we notice something troubling, it could be too late,” Sutskever and Amodei wrote. “The world today would be a much safer place if the internet had been designed with security in mind—but it wasn’t.”
In theory, these outcomes are avoidable, and it is the board’s job to provide oversight so that they are avoided. Asking the right questions of the right people in the organization, and ensuring that the right people are in the right roles to handle emerging technologies and risks, are among a board’s key responsibilities.
“This is as much a human story as it is a tech story. The technology itself isn’t magic, and won’t literally change everything overnight. The limitations are how we adapt it.”
- Vivienne Ming, Founder and Executive Chair of Socos Labs
“Most innovations make problems they are trying to solve worse. It’s brutally difficult, and it’s understandable that boards are skeptical of how these systems might perform business practices,” said Ming. “Some of this requires good governance. The board and the C-suite are responsible to make ethical judgments on how systems are used.”
Within companies, Lauterbach says, there needs to be direct coordination among the CEO, COO, chief strategy officer and other leadership. Everything is interdependent, she said, and a holistic framework is required to integrate new systems across teams.
“Corporate governance is very backward-looking, analyzing what happened and what we learned and what we need to do now,” she said. “Meanwhile, risk is very much forward-looking—the future is more important than the past because you can do something about the future.”
Ming offered a similar perspective, noting that if all AI oversight falls to the chief information officer alone, it’s likely to fail. That isn’t intended as an indictment of the CIO; rather, she stressed that AI systems should not be viewed only in the context of their danger and risk—and they certainly should not be siloed. Creative teams need access to AI tools in order to foster technological innovation, with an understanding of the risks that come along with it, of course. Hence the holistic perspective Lauterbach described. And that’s where the board comes in.
“All of these things are tools, and a good tool can make a good idea scale in a way that can be transformative,” Ming said. “But AI is as much a human story as it is a tech story. The technology itself isn’t magic, and won’t literally change everything overnight. The limitations are how we adapt it.”
Dan Marcec is Director of Content at Equilar and the Editor-in-Chief of C-Suite. He can be reached at dmarcec@equilar.com.