Business schools have been accelerating their push to harness artificial intelligence (AI) tools and technologies. Executive education programs in particular have been introducing new shorter-term programs that focus specifically on AI. HBS, MIT Sloan, and Oxford’s Saïd Business School have all created specialized AI courses that typically span six to eight weeks. Stanford GSB’s Golub Capital Social Impact Lab has initiated several research projects that apply AI and machine learning tools, with a focus on health- and education-related programs.
Despite this emphasis, Harvard Business Review recently noted that rising investment in artificial intelligence has not produced greater strategic insight for executive teams; the gut check still predominates in corporate strategic decision-making. That investment has yielded some benefits, but mostly in tactical, lower-level decisions for more routine tasks such as credit scoring, upselling recommendations, chatbots, and managing machine performance.
A Deloitte survey reported that more than two in three executives say they are “not comfortable” accessing or using data from advanced analytics systems; even in companies with strong data-driven cultures, 37% of respondents still express discomfort. A similar KPMG survey found that 67% of CEOs often prefer to make decisions based on their own intuition and experience rather than on insights generated through data analytics. Finally, a SAS survey found that 42% of data scientists said their results were not used by business decision-makers.
On a brighter note, HBR provides four actions that could spur greater executive confidence in making AI-assisted decisions:
- Create reliable models – in the old enterprise model, structured data predominated and was classified in an organized, digestible way at the source. Today, almost every failed AI project shares a common denominator: poor data quality. AI typically builds machine learning models from vast amounts of unstructured data, which is easy to collect but unusable until it is properly classified, labeled, and cleansed. Data fed into AI systems may therefore be outdated, redundant, limited, or inaccurate. For AI to earn a seat at the executive table, the data needs to be in context and consistently reliable. (A minimal data-quality gate is sketched in the first code example after this list.)
- Avoid data biases – executive hesitancy also stems from AI results that lead to discrimination within organizations or harm customers. AI models, and the decisions they inform, can only be as good as the data is unbiased. Data used in higher-level decision-making needs to be vetted to reassure executives that it comes from authenticated, reliable sources. (The second sketch below shows one simple bias check.)
- Make decisions that are ethical and moral – businesses are under unprecedented pressure to operate morally and ethically, and AI-assisted decisions need to reflect those values. Wrong decisions that can be challenged in court pose a major legal liability, especially if the decision was AI-made or AI-assisted. As an example of ongoing work to apply human values to AI systems, Stuart Russell, professor of computer science at UC Berkeley, pioneered an idea known as the Value Alignment Principle, which “rewards” AI systems for more acceptable behavior. (The third sketch below gestures at this reward-shaping idea in toy form.)
- Be able to explain AI decisions – most AI systems do not have explanations built in, and when an action could put millions of dollars at risk, “the AI decided” is not a good enough answer. Emerging methods such as blockchain could provide immutable, auditable storage of decisions, and a third-party governance framework could ensure that AI decisions are not only explainable but also grounded in facts and data. It should be possible to show that a human expert, given the same data set, would have arrived at the same result as the AI. (The final sketch below shows one simple form of per-feature explanation.)
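To make the first point concrete, here is a minimal sketch in Python of the kind of data-quality gate a team might run before records ever reach a model. The record fields, the one-year freshness threshold, and the drop rules are all illustrative assumptions, not anything prescribed by the HBR article.

```python
from datetime import datetime, timedelta

# Illustrative records; in practice these would arrive from a data pipeline.
records = [
    {"id": 1, "label": "approved", "updated": datetime(2024, 1, 10)},
    {"id": 2, "label": None,       "updated": datetime(2024, 1, 12)},  # unlabeled
    {"id": 1, "label": "approved", "updated": datetime(2024, 1, 10)},  # duplicate
    {"id": 3, "label": "denied",   "updated": datetime(2019, 6, 1)},   # outdated
]

MAX_AGE = timedelta(days=365)  # assumed freshness threshold

def quality_gate(rows, now):
    """Keep only labeled, fresh, de-duplicated records; report what was dropped."""
    seen_ids, clean, dropped = set(), [], []
    for row in rows:
        if row["label"] is None:
            dropped.append((row["id"], "missing label"))
        elif now - row["updated"] > MAX_AGE:
            dropped.append((row["id"], "stale"))
        elif row["id"] in seen_ids:
            dropped.append((row["id"], "duplicate"))
        else:
            seen_ids.add(row["id"])
            clean.append(row)
    return clean, dropped

clean, dropped = quality_gate(records, now=datetime(2024, 2, 1))
print(f"kept {len(clean)} records; dropped: {dropped}")
```

The point is not the specific rules but that the gate runs, and logs its drops, before any model sees the data.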
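One simple way to vet data for the second point is to compare outcome rates across groups before training, a rough “demographic parity” check. The groups, outcomes, and the 10-point tolerance below are invented for illustration; real thresholds are a policy choice.

```python
# Hypothetical historical decisions: (group, approved?) pairs.
history = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rates(rows):
    """Approval rate per group."""
    totals, approved = {}, {}
    for group, ok in rows:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(history)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")
if gap > 0.10:  # assumed tolerance
    print("Warning: approval rates differ across groups; audit before training.")
```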
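The Value Alignment Principle mentioned in the third point is a deep research agenda; the toy sketch below only gestures at its reward-shaping flavor: candidate actions are scored by profit, and a heavy penalty for value violations is folded into the reward so that unacceptable behavior never wins. The actions, profit figures, and penalty size are invented for illustration and are not Russell’s actual formulation.

```python
# Candidate actions with (expected_profit, violates_values?) -- invented numbers.
actions = {
    "mislead_customer": (120.0, True),   # most profitable, but unacceptable
    "honest_offer":     (80.0,  False),
    "do_nothing":       (0.0,   False),
}

VIOLATION_PENALTY = 1_000.0  # assumed: large enough to dominate any profit

def shaped_reward(profit, violates):
    """Reward = profit minus a heavy penalty for violating stated values."""
    return profit - (VIOLATION_PENALTY if violates else 0.0)

best = max(actions, key=lambda a: shaped_reward(*actions[a]))
print("chosen action:", best)  # -> honest_offer
```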
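For the last point, an “explanation” can be as simple as decomposing a score into per-feature contributions so an auditor can see which inputs drove a decision and by how much. The sketch below does this for a hypothetical linear credit-style score; the features and weights are made up, and production systems would use richer attribution methods.

```python
# Hypothetical linear scoring model: score = bias + sum(weight * feature).
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
bias = 0.1

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}

# Each feature's contribution to the final score.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())

print(f"score = {score:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature:>15}: {value:+.2f}")
```

Pairing a breakdown like this with an immutable decision log is one way to show that a human expert, given the same data, could check the AI’s work.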
For the full HBR article, click here. If you’re inspired to begin your MBA applications, contact Admitify today!