AI is being adopted quickly. It’s still early, but Gen AI is finding its way into every aspect of business. We will experience an epoch-level change in how we get things done. Individuals will have the capability of teams, small companies will be able to compete with large ones, and new discoveries will be uncovered. McKinsey & Co. just released a paper based on a survey that I took part in (April 2023). Having spent much time working with AI since then, I notice that data from mid-April already seems a bit dusty and antiquated. I am exaggerating, of course, but there is some truth to it: things are moving that fast.
The survey shows that Gen AI has captured interest across the business population: regardless of region, industry, or seniority level, AI adoption is occurring. 79% of all respondents say they’ve had at least some exposure to Gen AI, either in or out of work, and 22% report using it regularly at work. As expected, those in the technology sector are the early adopters with the highest usage.
“Organizations, too, are now commonly using Gen AI. One-third of all respondents say their organizations are already regularly using generative AI in at least one function”
That’s a 60% AI adoption rate. The 40% who haven’t adopted AI yet plan to do so soon. The future got here quickly. Of the 60% who are working with AI right now, almost half plan on doubling down with greater levels of investment, and 28% of board members polled said AI is on their agenda.
No other business function has embraced AI with as much gusto as marketing and sales, followed closely by product development, service development, and service operations. Even customer care and back-office support are on the list. There is still a great deal of room for applying AI, but adoption is highest in companies where AI’s value is understood and can be put to work.
Here is what I found a bit troubling: many organizations appear to see only upside and are overlooking the potential risks that come with widespread adoption of AI. You may have heard of the lawyer who used ChatGPT to write a legal brief that cited bogus precedents. He was sanctioned for presenting fictitious legal research in his argument. In Argentina we would say to the lawyer who didn’t check his work, “jodete” (very loosely translated as “too bad”).
Back to the survey: just 21% of respondents said their companies have a formal plan or process to govern the use of AI at work. To me that implies scattered AI efforts proceeding without company guidelines. When the survey drilled deeper, it found that only a few respondents said their companies are actively trying to mitigate the risk of inaccuracy.
Now here is what is exciting about all this. The leading companies in the survey are using AI and attribute 20% of their EBIT (earnings before interest and taxes) to it. They are the high performers, in the adoption vanguard of this new technology in terms of both Gen AI and conventional AI capabilities. These high performers seem to have a deeper understanding of what is possible when integrating artificial intelligence into a broader range of business functions, especially product and service development, where speed to market can mean success or failure. The high-performing firms that use AI are less focused on cost cutting (that will come later, to be sure) and more on leveraging AI to speed up the product-development cycle.
Many are just jumping into the AI pool, hoping not to miss this new wave of technology and thinking they will figure it out as they go. Alexander Sukharevsky, a senior partner at McKinsey & Co., believes there is broad awareness of the risks associated with AI. While only 20% of companies have a risk policy in place for AI, those policies tend to focus on protecting IP and proprietary information such as data and other company knowledge. He states that it is indeed important to address these risks, but that this can be done by making changes to the business’s tech architecture based on policies that already exist. The point he is making, and one I agree with, is that the prevailing definition of risk is far too narrow. Social, humanitarian, and sustainability concerns, along with a portfolio of other areas, need to be included in the risk assessment of AI use. This is admittedly new ground that crept up on us all very quickly.
“Being deliberate, structured, and holistic about understanding the nature of the new risks—and opportunities—emerging is crucial to the responsible and productive growth of generative AI.”
I couldn’t have said it better.