Generative AI has been making headlines frequently in recent months, but how much are people really using the technology in their work?
A fair bit, as it turns out: 46% of all employees have “experimented” with generative AI at least once, according to a new survey of 12,800 employed people in 18 countries conducted by Boston Consulting Group (BCG), one of the “big three” global consulting firms alongside McKinsey & Company and Bain & Company.
“It’s a really big number, but I’m not totally surprised either, because of how we’ve seen the user numbers of these generative AI products take off like nothing before,” Steve Mills, a managing director, partner and chief AI ethics officer at BCG, said in an interview with VentureBeat.
More impressively, 26% of respondents reported using generative AI several times a week.
AI optimism on the rise
And in perhaps the best news yet for AI programmers, companies and tool vendors — the BCG report found that the percentage of respondents who viewed AI optimistically grew as they used the technology more, while their concerns about the tech dropped.
More than half (62%) of survey respondents who said they were regular users of generative AI ranked optimism as one of their top two sentiments toward it, compared to just 36% of non-users. Overall, respondents’ optimism toward AI rose 17 percentage points since BCG last surveyed people about the technology in 2018, while concern dropped from 40% in 2018 to just 30% this year.
As the report puts it: “Optimism grows with familiarity, and respondents who use generative AI regularly are far more bullish than those who have never tried it.”
Leaders favor AI more than front-line workers, so far
There are stark differences in uptake and in attitudes toward the technology, depending on the employee’s level within their organizational hierarchy.
BCG broke down the 12,800 survey respondents into three main categories: front-line employees, managers and leaders. While the report does not specify how many respondents fall into each category, it does say they were selected to mirror a typical organizational split of roughly 85% front-line employees, 10% managers and 5% leaders, indicating that most respondents were front-line employees.
The majority (80%) of leaders said they were using generative AI tools regularly, compared to just 20% of front-line employees.
Nearly two-thirds (62%) of leaders expressed optimism about AI, but only 42% of front-line employees shared this sentiment, revealing a significant disconnect between leadership and their staff. With reports of AI already replacing some jobs, it shouldn’t come as a big shock that front-line employees are the most concerned and least optimistic about the tech.
Broad support for AI regulations
The survey also revealed broad support across all employee groups for AI-specific regulations. The majority (79%) of respondents said such regulations are necessary, with the Middle East expressing the highest demand for regulation at 89%, and Germany the lowest at 64%.
The report concludes with three key recommendations for leaders. Firstly, it encourages organizations to create spaces for responsible experimentation with AI. Secondly, it emphasizes the need for continuous upskilling to help employees adapt to the ways AI will change their jobs. Lastly, it underscores the importance of building a responsible AI program, as employees seek guidance and reassurance that their organizations are approaching AI ethically.
As AI continues to evolve at a rapid pace, this report underscores the need for businesses to not only embrace the technology but also ensure its ethical and responsible use. It’s a call to action for leaders to bridge the gap in AI sentiment and understanding within their organizations, and to actively participate in shaping the emerging AI regulations.