Boston Consulting Group: To unlock enterprise AI value, start with the data you’ve been ignoring
When building enterprise AI, companies often find the hardest part is deciding what to build and how to rework the processes involved.
At VentureBeat Transform 2025, data quality and governance were front and center as companies look beyond the experimental phase of AI and explore ways to productize and scale agents and other applications.
Organizations are dealing with the pain of thinking through how tech intersects with people, processes and design, said Braden Holstege, managing director and partner at Boston Consulting Group. He added that companies need to think about a range of complexities related to data exposure, per-person AI budgets, access permissions and how to manage external and internal risks.
Sometimes, new solutions involve putting previously unusable data to work. Speaking onstage Tuesday afternoon, Holstege gave the example of a client that used large language models (LLMs) to analyze millions of data points on churn, product complaints and positive feedback, surfacing insights that weren't possible a few years ago with earlier natural language processing (NLP) techniques.
“The broader lesson here is that data are not monolithic,” Holstege said. “You have everything from transaction records to documents to customer feedback to trace data which is produced in the course of application development and a million other types of data.”
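As a rough sketch of the kind of workload Holstege describes, the hypothetical Python snippet below uses an LLM to tag free-text customer feedback with churn and complaint labels. The model name, prompt and label set are illustrative assumptions, not details from the BCG engagement.

```python
# Hypothetical sketch: tagging raw customer feedback with an LLM.
# Model name, prompt and labels are illustrative assumptions, not
# details from the BCG engagement described above.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["churn_risk", "product_complaint", "positive_feedback", "other"]

def classify_feedback(text: str) -> dict:
    """Ask the model to label one piece of free-text feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the customer feedback into exactly one of "
                    f"these labels: {', '.join(LABELS)}. "
                    'Reply as JSON: {"label": ..., "reason": ...}'
                ),
            },
            {"role": "user", "content": text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(classify_feedback(
        "The new billing flow is confusing; I'm considering canceling."
    ))
```

Run across millions of records, the same pattern turns unstructured feedback into queryable labels, which is what made this data effectively unusable before LLMs.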
Some of these new possibilities are thanks to improvements in AI-ready data, said Susan Etlinger, Microsoft's senior director of strategy and thought leadership for Azure AI.
“Once you’re in it, you start getting that sense of the art of the possible,” Etlinger said. “It’s a balancing act between that and coming in with a clear sense of what you’re trying to solve for. Let’s say you’re trying to solve for customer experience. This isn’t an appropriate case, but you don’t always know. You may find something else in the process.”
Why AI-ready data is critical for enterprise adoption
AI-ready data is a critical prerequisite for AI projects. In a Gartner survey of 500 midsize enterprise CIOs and tech leaders, more than half said they expect adoption of AI-ready infrastructure to deliver faster, more flexible data processes.
Getting there could be slow. Through 2026, Gartner predicts organizations will abandon 60% of AI projects that aren't supported by AI-ready data. And when the research firm surveyed data management leaders last summer, 63% of respondents said their organizations either didn't have the right data management practices in place or weren't sure whether they did.
As deployments become more mature, it’s important to consider ways to address ongoing challenges like AI model drift over time, said Awais Sher Bajwa, head of data and AI banking at Bank of America. He added that enterprises don’t always need to rush something to end users who are already fairly advanced in how they think about the potential of chat-based applications.
“We all in our daily lives are users of chat applications out there,” said Sher Bajwa. “Users have become quite sophisticated. In terms of training, you don’t need to push it to the end users, but it also means it becomes a very collaborative process. You need to figure out the elements of implementation and scaling, which become the challenge.”
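Monitoring for the model drift Sher Bajwa mentions can start with simple distribution checks. The sketch below compares a deployed model's score distribution against a reference window using a population stability index (PSI); the bucket count and alert threshold are conventional rules of thumb, not Bank of America practice.

```python
# Minimal sketch of drift detection via Population Stability Index (PSI).
# Bucket count and alert threshold are common rules of thumb, not any
# particular bank's practice.
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, buckets: int = 10) -> float:
    """Compare two score distributions; a higher PSI means more drift."""
    edges = np.quantile(reference, np.linspace(0, 1, buckets + 1))
    # Clip live scores into the reference range so every value lands in a bucket.
    live = np.clip(live, edges[0], edges[-1])
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    ref_pct = np.clip(ref_counts / reference.size, 1e-6, None)
    live_pct = np.clip(live_counts / live.size, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.50, 0.10, 10_000)  # scores at deployment
    live = rng.normal(0.58, 0.12, 10_000)       # scores this week
    score = psi(reference, live)
    # A common heuristic: PSI above 0.2 suggests the model needs review.
    print(f"PSI = {score:.3f}" + (" (review model)" if score > 0.2 else ""))
```

Checks like this are cheap to run on a schedule, which makes drift detection one of the easier pieces of the ongoing maintenance Sher Bajwa describes.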
The growing pains and complexities of AI compute
Companies also need to consider the opportunities and challenges of cloud-based, on-prem and hybrid applications. Cloud-enabled AI applications allow for testing different technologies and scaling in a more abstracted way, said Sher Bajwa. However, he added that companies need to weigh infrastructure issues like security and cost, and that vendors like Nvidia and AMD are making it easier for companies to test different models and deployment modalities.
Decisions around cloud providers have become more complex than they were a few years ago, said Holstege. While newer options like neoclouds (providers offering GPU-backed servers and virtual machines) can sometimes undercut traditional hyperscalers on price, he noted that many clients will likely deploy AI where their data already reside, making major infrastructure shifts less likely. Even with cheaper alternatives, Holstege sees a trade-off among compute, cost and optimization: open-source models like Llama and Mistral, for example, can have higher computing demands.
“Does the compute cost make it worth it to you to incur the headache of using open-source models and of migrating your data?” Holstege asked. “Just the frontier of choices that people confront now is a lot wider than it was three years ago.”
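To make Holstege's question concrete, here is a hypothetical back-of-the-envelope comparison of hosted-API versus self-hosted open-model costs. Every figure is an assumption for illustration, and the sketch deliberately omits the migration and engineering "headache" costs he warns about.

```python
# Back-of-the-envelope cost comparison: hosted API vs. self-hosted open model.
# Every figure below is a hypothetical assumption for illustration only.
API_PRICE_PER_1M_TOKENS = 5.00         # assumed blended $/1M tokens
GPU_HOURLY_COST = 2.50                 # assumed $/hour for one rented GPU
GPU_THROUGHPUT_TOKENS_PER_SEC = 1_000  # assumed tokens/sec for an open model

def monthly_cost(tokens_per_month: float) -> tuple[float, float]:
    """Return (API cost, self-hosted GPU cost) in dollars per month."""
    api = tokens_per_month / 1e6 * API_PRICE_PER_1M_TOKENS
    gpu_hours = tokens_per_month / GPU_THROUGHPUT_TOKENS_PER_SEC / 3600
    return api, gpu_hours * GPU_HOURLY_COST

for tokens in (1e8, 1e9, 1e10):
    api, hosted = monthly_cost(tokens)
    print(f"{tokens:.0e} tokens/month: API ${api:,.0f} vs self-hosted ${hosted:,.0f}")
```

At high volumes the raw compute math can favor self-hosting, which is exactly why the hidden costs of migration and operations, not the per-token price, tend to dominate the decision.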