The surge in digital transformation initiatives across businesses and the heightened need for real-time insights have led to an explosion in data creation. But few organisations have a clear picture of where all their data actually resides. Every company has siloed data sets running on-premises and across multiple public and private clouds and various servers.
A recent global survey commissioned by IBM with Morning Consult found that 9 out of 10 IT professionals in India report their company draws from 20 or more different data sources to inform its AI, BI, and analytics systems. “This has led to data silos and complexity and as a result most data remains unanalysed, inaccessible or untrusted,” says Siddhesh Naik, Data, AI & Automation sales leader, IBM Technology Sales, IBM India/South Asia.
A quick look at the global picture is instructive. Global AI adoption, as per the IBM study, is growing steadily, and most companies already use or plan to use AI: 35% of them reported using AI to further their business plans. Compared with 2021, organisations were 13% more likely to have adopted AI in 2022.
Additionally, 42% of companies reported exploring the use of AI. Large companies are more likely than smaller companies to use AI. Chinese and Indian companies are leading the way, with nearly 60% of IT professionals in the two Asian countries saying their organisation actively uses AI, compared with lagging markets like South Korea (22%), Australia (24%), the US (25%), and the UK (26%). IT professionals in the financial services, media, energy, automotive, oil, and aerospace industries are most likely to report active deployment of AI by their company, while organisations in industries like retail, travel, healthcare, and government/federal services are the least likely.
Decoding the pain points, Naik reveals that many AI projects languish after a promising proof of concept and prove difficult to scale; about half of them fail. The main reason is data, whether data complexity, data quality, or data variety. “To get the most value from AI, a robust data strategy is recommended that includes identifying multiple data types required to tackle the business problem and enrich the solution – structured and unstructured, internal and external, qualitative and quantitative data. This should be followed by permission-based governance that establishes data provenance to build trust in the data and AI insights. And lastly, plan for the challenges of rigorous data preparation and the complexities of merging disparate data sources and adopt the right tools,” he adds.
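The data-preparation challenge Naik describes often comes down to mundane mismatches between silos. A minimal sketch of the idea, with invented field names and records (this is an illustration, not IBM tooling):

```python
# Illustrative sketch: merging records from two hypothetical silos (a CRM and
# a billing system) whose join keys don't line up until they are standardised.

def normalise_key(email: str) -> str:
    """Standardise the join key so records from different silos match."""
    return email.strip().lower()

def merge_sources(crm_rows, billing_rows):
    """Join CRM and billing records on a normalised email key."""
    billing_by_key = {normalise_key(r["email"]): r for r in billing_rows}
    merged = []
    for row in crm_rows:
        key = normalise_key(row["email"])
        billing = billing_by_key.get(key, {})
        merged.append({
            "email": key,
            "name": row.get("name"),
            "plan": billing.get("plan"),  # None if unmatched: flag for review
        })
    return merged

crm = [{"email": " Asha@Example.com ", "name": "Asha"}]
billing = [{"email": "asha@example.com", "plan": "pro"}]
print(merge_sources(crm, billing))
```

Without the key normalisation step, the two records would fail to join even though they describe the same customer, which is exactly the kind of quiet data-quality failure that stalls AI projects at scale.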
To help organisations address challenges related to data complexity, IBM proposes an approach called a data fabric. “A data fabric is a strategy and architectural approach that allows businesses to use the disparate data sources and storage repositories (databases, data lakes, data warehouses) and simplifies data access,” says Naik. IBM Cloud Pak for Data delivers a data fabric architecture that allows an enterprise to connect to and access siloed data across distributed environments without ever having to copy or move it, with embedded governance and privacy.
Naik reckons that difficulties in AI deployment arise when businesses don't have the data, when their employees don't have the technical skills, and when they cannot trust, or understand, the decisions AI makes. “We see three trends clearly emerging from the study’s findings: First, automation use cases are at the forefront of AI adoption as businesses are using AI to stay competitive and operate more efficiently. Second, effective data management and AI deployment go hand in hand because without the right tools, it is difficult to leverage data across the business. And third, it is critical to ensure trust in AI by explaining how AI arrived at a decision.”