Industry insights
Why AI is changing the sequencing of Supply Chain transformation

Key insights from a discussion with 25 supply chain and logistics leaders hosted by OMMAX and Felgendreher & Friends in Munich.
For decades, supply chain transformation initiatives have often been delayed by the assumption that master data must first be fully standardized and harmonized. During a recent OMMAX and Felgendreher & Friends podcast discussion in Munich with more than 25 supply chain and logistics leaders, this sequencing was repeatedly challenged. Not because the importance of data quality was disputed, but because practical AI use cases demonstrated that meaningful transformation can already begin in parallel with ongoing data improvement.
Across the discussion, two themes emerged consistently: the growing role of AI-assisted application development (“vibe coding”) and the evolving role of master data in transformation initiatives.
Why AI transformation keeps getting postponed
The discussion opened with a widely shared challenge in supply chain transformation: the assumption that data quality must first be fully improved before AI initiatives can be implemented at scale. The concern itself is well-founded. Inaccurate master data can lead to planning inefficiencies, incorrect lead times can contribute to stockouts, and inconsistent product classifications can distort replenishment and forecasting logic.
However, many organizations continue to treat data harmonization as a prerequisite for transformation rather than as part of the transformation process itself. As a result, AI initiatives are frequently delayed while large-scale data cleanup programs remain ongoing.
What has changed in the last six months is that the technology cycle has compressed, challenging this traditional sequencing. Traditional planning systems typically cover only 50 to 70% of real-world operational requirements. Many remaining processes continue to be managed through Excel-based workarounds, fragmented planning approaches, manual coordination loops, and highly specialized operational processes. This is precisely where vibe coding is beginning to create significant value.
Vibe coding and AI-assisted application development
“Vibe coding” refers to a new form of AI-assisted application development in which users describe operational problems and desired outputs in natural language, while generative AI models such as Claude, ChatGPT, or Gemini generate the underlying code. Throughout the process, the domain expert remains actively involved: refining requirements, reviewing outputs, validating assumptions, identifying inconsistencies, and iterating on analyses or visualizations in real time. The underlying code generation, often in Python, takes place largely in the background.
A recurring theme in the discussion was the accessibility of this interaction model for business users. Rather than relying on traditional development cycles, supply chain experts can work interactively with AI systems to explore scenarios, test hypotheses, generate visualizations, and develop lightweight operational tools with significantly shorter iteration cycles.
The practical implication is a substantial acceleration in the feedback loop between identifying an operational challenge and developing a usable solution. In many cases, this enables the individuals closest to the operational problem to play a much more direct role in shaping and refining applications aligned with business requirements.
Use cases: Early operational applications in supply chain
The most relevant insights emerged when the conversation shifted from what AI could enable to what organizations had already built and operationalized.
One example discussed involved a planner at a manufacturing site who developed an AI-assisted Python application to optimize setup times on highly automated production equipment with constrained tool availability. Following initial success, the solution was subsequently rolled out across the company’s global production network.
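The discussion did not cover implementation details, but a minimal sketch of the kind of heuristic such a tool might start from, using hypothetical job and tool data and an assumed fixed changeover time, could look like this:

```python
from itertools import permutations

# Hypothetical jobs: (job_id, required_tool). A changeover is incurred whenever
# two consecutive jobs on the same machine require different tools.
jobs = [("J1", "T-A"), ("J2", "T-B"), ("J3", "T-A"), ("J4", "T-C"), ("J5", "T-B")]
SETUP_MINUTES = 45  # assumed fixed changeover time per tool switch

def total_setup_time(sequence):
    """Count tool changeovers in a job sequence and convert them to minutes."""
    switches = sum(1 for (_, t1), (_, t2) in zip(sequence, sequence[1:]) if t1 != t2)
    return switches * SETUP_MINUTES

# Exhaustive search is fine for a handful of jobs; a real application would use
# a greedy or solver-based approach for realistic problem sizes.
best = min(permutations(jobs), key=total_setup_time)
print([job_id for job_id, _ in best], "->", total_setup_time(best), "minutes of setup")
```

The value of such prototypes lies less in the optimization logic itself than in how quickly a planner can iterate on it against real constraints.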
Another production planning team developed a solution within three weeks that addressed a substantial portion of recurring planning challenges previously managed manually in Excel. In parallel, the initiative generated a clearly structured functional specification grounded in real operational requirements, enabling more efficient collaboration with external software providers and internal IT teams.
A further example focused on a long-standing master data harmonization challenge: identifying physically identical components across acquired entities despite differing product descriptions and item codes. Using large language models, the organization was able to generate similarity assessments supported by contextual reasoning, enabling domain experts to validate and prioritize potential matches more efficiently.
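The approach described relied on LLM-generated similarity assessments with contextual reasoning. As a simplified illustration of the shortlisting step that typically precedes such an assessment, a lexical similarity pass over hypothetical item descriptions might look like the sketch below; the LLM reasoning and expert validation steps are deliberately left out:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical item descriptions from two acquired entities.
entity_a = {"A-1001": "Hex bolt M8x40 zinc plated steel",
            "A-1002": "Ball bearing 6204 2RS sealed"}
entity_b = {"B-7730": "M8 x 40 hex head bolt, galvanized",
            "B-8812": "Sealed deep groove bearing 6204-2RS"}

# Cheap character-level similarity to shortlist candidate pairs; the contextual
# LLM assessment described above would run only on this shortlist.
codes_a, texts_a = zip(*entity_a.items())
codes_b, texts_b = zip(*entity_b.items())
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit(texts_a + texts_b)
scores = cosine_similarity(vectorizer.transform(texts_a), vectorizer.transform(texts_b))

for i, code_a in enumerate(codes_a):
    j = scores[i].argmax()
    print(f"{code_a} <-> {codes_b[j]}  similarity={scores[i, j]:.2f}  (candidate for expert review)")
```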
A repeated observation across these examples was that none of the initiatives depended on fully standardized or harmonized data environments from the outset. In several cases, the applications themselves helped surface inconsistencies and improve underlying data quality over time.
The fix-first assumption and why it no longer holds
The “fix-first” approach is based on the assumption that organizations can clearly identify data issues in advance and resolve them before beginning broader transformation initiatives. In practice, however, many data quality problems remain hidden within operational workarounds and legacy processes for extended periods. Lead times may no longer reflect actual supplier performance, while outdated ABC classifications can persist despite significant shifts in demand behavior and inventory dynamics.
Vibe coding and the broader class of agentic AI tools invert the sequence. Rather than waiting for fully harmonized data environments, organizations can begin by developing targeted operational applications on top of existing datasets. In many cases, these tools help surface inconsistencies, outliers, and classification issues directly within day-to-day workflows, enabling domain experts to validate and improve data iteratively as part of the operational process.
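As an illustrative sketch of what such in-workflow checks can look like, the following compares hypothetical master-data lead times against observed receipt lead times and flags items whose gap exceeds an assumed threshold; column names and the five-day cutoff are placeholders, not a reference implementation:

```python
import pandas as pd

# Hypothetical extract: master-data lead time vs. actual receipt lead times per item.
receipts = pd.DataFrame({
    "item":           ["1001", "1001", "1001", "2002", "2002"],
    "master_lt_days": [14, 14, 14, 21, 21],
    "actual_lt_days": [22, 25, 19, 20, 23],
})

# Flag items where observed lead times have drifted well beyond the master value.
drift = (receipts.groupby(["item", "master_lt_days"])["actual_lt_days"]
                 .mean()
                 .reset_index(name="avg_actual_lt_days"))
drift["gap_days"] = drift["avg_actual_lt_days"] - drift["master_lt_days"]
print(drift[drift["gap_days"].abs() > 5])  # candidates for planner review, not automatic correction
```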
As a result, data quality improvement increasingly becomes embedded within the transformation itself, rather than existing solely as a separate prerequisite initiative.
From data prerequisite to continuous improvement
This shift extends beyond incremental productivity gains. AI-enabled systems are not simply accelerating existing data quality processes but enabling work that would previously have been impractical at scale, such as similarity matching across hundreds of thousands of items, long-term drift detection within demand histories, and reconciliation of unstructured supplier or product descriptions across fragmented datasets.
The implications also extend to ongoing data governance and maintenance models. Traditional master data initiatives have often relied on large-scale, time-bound cleanup programs that improve data quality temporarily without fundamentally addressing the operational processes that continue to generate inconsistencies over time.
Agentic AI introduces a more continuous governance approach. AI-enabled systems can identify discrepancies, propose corrections, and route recommendations for human validation. In this model, data quality improvement becomes increasingly embedded within operational workflows rather than isolated within standalone remediation initiatives.
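A minimal sketch of this routing pattern, using a hypothetical proposal record and review status, might look like the following; the specific fields are assumptions rather than a reference design:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CorrectionProposal:
    """An AI-generated master data correction awaiting human sign-off."""
    item: str
    field_name: str
    current_value: str
    proposed_value: str
    rationale: str
    status: str = "pending_review"   # pending_review -> approved / rejected
    created: date = field(default_factory=date.today)

queue = [CorrectionProposal("1001", "lead_time_days", "14", "22",
                            "Average of last 12 receipts is 22 days")]

# A domain expert, not the agent, decides whether the change reaches the system of record.
for proposal in queue:
    print(f"{proposal.item}: {proposal.field_name} {proposal.current_value} -> "
          f"{proposal.proposed_value} [{proposal.status}] ({proposal.rationale})")
```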
As a result, iterative improvement models are becoming a more viable and scalable approach to transformation than traditional sequential “fix-first” programs.
Four principles for scaling AI-enabled operational applications
None of these developments eliminate the role of enterprise systems. ERP and APS platforms continue to address a significant share of core planning and operational requirements. However, many organizations still rely on spreadsheets, manual workflows, and localized process adaptations to manage operational scenarios that remain outside standardized system capabilities. This is where AI-assisted application development is increasingly creating value: enabling business teams to address highly specific operational use cases that are often difficult to prioritize within traditional IT roadmaps.
Future architectures are likely to evolve toward a layered model: enterprise systems as the system of record, complemented by generative AI interfaces that support translation, interaction, and decision support, alongside targeted applications developed collaboratively between domain experts and IT teams.
Four principles emerged as critical success factors:
- Use enterprise-grade LLM environments: AI experimentation should take place exclusively within approved enterprise environments. Confidential company data should not be processed through personal accounts or unmanaged public tools.
- Read access is easy; write access is not: The cleanest pattern is to read from a consolidated reporting layer rather than directly from live systems. When write-back is needed, an intermediate service performing standard transactions is safer than direct writes from a prototype (a sketch of this pattern follows the list).
- Combine business-led prototyping with IT-led scaling: Domain experts are often best positioned to develop and validate early-stage operational applications. However, once use cases demonstrate value, IT functions play a critical role in ensuring scalability, security, monitoring, governance, and lifecycle management.
- Maintain a human-in-the-loop governance model: Despite rapid advances in generative AI capabilities, domain expertise remains essential. Developed applications continue to require human validation and quality assurance to ensure operational reliability and business relevance.
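As a hedged sketch of the read/write principle above, the following assumes a hypothetical reporting view and an internal write-back service; the connection string, view name, and endpoint are placeholders:

```python
import pandas as pd
import requests
from sqlalchemy import create_engine

# Read side: query a consolidated reporting layer (hypothetical view), never the live ERP tables.
engine = create_engine("postgresql://reporting-db/warehouse")  # placeholder connection string
open_orders = pd.read_sql("SELECT * FROM reporting.v_open_purchase_orders", engine)

# ... prototype logic operates on the DataFrame ...

# Write side: hand proposed changes to an intermediate service that performs the
# standard ERP transaction (with its own validation), instead of writing tables directly.
payload = {"order_id": "4500012345", "new_delivery_date": "2025-02-14", "source": "planner_prototype"}
response = requests.post("https://internal.example.com/api/po-reschedule", json=payload, timeout=10)
response.raise_for_status()
```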
The cultural shift is top-down; execution is bottom-up
An important insight was that successful adoption depends as much on organizational mindset as on technology itself. A strong indicator of whether organizations can effectively scale AI initiatives is the extent to which leadership teams understand the practical capabilities and limitations of these technologies. Even limited hands-on exposure can significantly improve leadership teams’ ability to prioritize use cases, establish governance boundaries, and support targeted experimentation initiatives.
Several participants also highlighted the importance of establishing small cross-functional AI champion groups within the business. Rather than attempting large-scale transformation programs from the outset, organizations are seeing greater success through focused experimentation, iterative learning, and incremental scaling of proven use cases.
At the same time, the discussion emphasized that widespread decentralization of AI application development is unlikely to be effective without clear operating models. The emerging pattern described was a layered approach in which business domain experts prototype operational applications, IT functions ensure scalability and governance, and leadership teams provide strategic direction and sponsorship.
Ultimately, one of the clearest conclusions from the discussion was that organizations should avoid treating perfect data readiness as a prerequisite for beginning targeted AI transformation initiatives. In many cases, meaningful operational value and improvements in the underlying data quality can emerge through iterative implementation and continuous refinement over time.
Do you want to learn more about our supply chain expertise and services? Contact the authors of this article.