Informatica’s Last, Big Solo Dance
A reporter's notebook on Informatica World 2025 in the days before the company's $8 billion acquisition deal with Salesforce
Welcome everyone! I’m John Foley, a long-time tech journalist who has also worked in strategic comms at Oracle, IBM, and MongoDB. Now I’m an independent tech writer. I attended Informatica World on assignment for Method Communications.
What happens in Vegas stays in Vegas — unless you’re there to write about it.
I attended Informatica World 2025 two weeks ago at the Mandalay Bay Convention Center. That was 12 days before Salesforce disclosed an agreement to acquire Informatica for $8 billion. Now, everything Informatica revealed about its product direction pre-Salesforce is cast in a new light.
I wrote about Informatica World 2025 in an earlier blog post, shared here as Part 1 for easy reference.
What follows is Part 2, a reporter’s notebook with more from my conversations and observations from the event.
Agents, integrations, source of truth
First thing to know is that Informatica made a handful of announcements at Informatica World, including plans to develop AI agents for data management, and product advances with AWS, Databricks, Microsoft, Nvidia, and Oracle. You can get the details in the Informatica newsroom.
Notably, Informatica is also teaming with Salesforce to bring Informatica’s master data management (MDM) SaaS “actions” to Salesforce’s AgentExchange marketplace later this year. This is a way to create single-source-of-the-truth customer records for use by AI agents, an important step forward for reasons explained below.
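The "single source of truth" idea behind MDM can be illustrated with a minimal, hypothetical sketch: duplicate customer records are matched (here, naively, by email) and merged into one golden record using simple survivorship rules. Real MDM systems use fuzzy matching and configurable rules; the field names and logic below are my own illustration, not Informatica's implementation.

```python
# Hypothetical sketch of MDM "golden record" consolidation.
# Matching is done on a normalized email; survivorship keeps the
# most recently updated non-empty value for each field.

def consolidate(records):
    golden = {}
    for rec in records:
        key = rec["email"].strip().lower()   # naive match key
        if key not in golden:
            golden[key] = dict(rec)
        else:
            g = golden[key]
            # Survivorship rule: newer non-empty values win
            for field, value in rec.items():
                if value and rec["updated"] >= g["updated"]:
                    g[field] = value
    return list(golden.values())

records = [
    {"email": "Ana@example.com", "name": "Ana", "phone": "", "updated": 1},
    {"email": "ana@example.com", "name": "Ana Diaz", "phone": "555-0101", "updated": 2},
]
print(consolidate(records))  # one merged customer record
```

An agent grounded on the merged record sees one complete customer, rather than two conflicting partial ones, which is the point of putting MDM in front of agentic AI.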
My overarching takeaway at Informatica World was this: Agentic AI is coming fast, but enterprise data isn’t ready.
Here are a couple of data points that illustrate the challenge:
67% of data leaders are struggling to transition GenAI pilots to production, according to Informatica’s CDO Insights report
43% say data quality, completeness, and readiness are among the biggest obstacles
With this in mind, I took the long escalator up to the convention center ballroom. Here’s what I heard.
The foundation isn’t ready
Speaking at the Partners Summit, Richard Ganley, Informatica SVP of Global Partners, said, “Of course everybody wants AI today. Two things are holding people back. One, they have a museum of technology they need to modernize. And two, the data foundation isn’t ready, one they can trust and rely on for use cases.”
Ganley suggested some businesses will get a wake-up call when they see competitors getting ahead with AI. “We’re getting closer and closer,” he said. The laggards “will start to panic.”
I talked to Ganley afterwards. He said the hurdle many organizations face with GenAI is the leap from small, internal projects to full production using enterprise data. “Now with agents, it’s the same thing,” he said. “The data is the underlying thing that has to be right, otherwise sketchy things are going to happen.”
Ganley compared the idea of using AI agents with low-quality data to bringing untrained, unskilled people off the street to run your business.
Data management is complex and getting more so, he added. The only way to keep up is with a platform that automates as much as possible.
The next speaker was Rik Tamm-Daniels, GVP of Technology Alliances. Underscoring the power of Informatica’s partner relationships, Tamm-Daniels said the company is 2.7 times more likely to win an opportunity working with a partner or GSI/SI, and 5 times more likely when both are involved.
He highlighted a key point about the Informatica-Salesforce product integration that seems prescient in hindsight, given the $8 billion acquisition agreement that followed. “Master data is a really good source of grounding data for AI and agentic AI,” Tamm-Daniels said. And a bullseye for Salesforce’s Agentforce platform.
We talked about Informatica’s Blueprints, which are handy architectural guides for AWS, Databricks, Google Cloud, Microsoft, Oracle, and Snowflake. Originally created for retrieval augmented generation (RAG), the Blueprints now also take into account agents and interoperability protocols such as MCP. “It’s a reference architecture to solve some very specific problems,” he explained.
I asked about the challenge of data-readiness for AI. “Data quality is still a massive issue for many enterprises,” Tamm-Daniels said. “Bad data slips in; it gets in there one way or another.”
The solution? “Data foundation is a critical enabler.”
A wider ‘blast radius’
No one was more convincing about the urgency around data quality for AI than Jason du Preez, Informatica VP and an expert in data privacy and safety. “If GenAI fails, we see potential operational failure,” he told the audience. “The blast radius is much wider.”
“The reality is AI can’t be fully trusted today. We see examples every week,” said du Preez, pointing to the example of Cursor’s AI coding assistant refusing to complete a task, resulting in drama for the company. The Achilles’ heel for many is a lack of robust governance.
In other words, it’s not enough that data be clean and accurate; it also needs to be protected by policy and compliance guardrails. “Everybody has access to the same models and infrastructure,” du Preez said. “Data is the competitive advantage.”
Must-haves of AI architecture
Next I grabbed a seat at the Data & AI Architecture session, where Kevin Petrie, a research VP with analyst firm BARC, said 30% of organizations are already involved with agents.
In a BARC survey, data quality/lineage was the #1 obstacle to success, cited by 45% of respondents. The other challenges/obstacles were data privacy and regulatory concerns (43%); lack of AI skills (43%); incompatible tools or systems (27%); and data access/availability (27%).
To get AI right, Petrie said, you need MDM, cataloging, data quality observability, data pipeline performance, and governance. The must-haves of modern architecture include the ability to integrate old and new technologies, modular design, open APIs/formats, and AI-assisted data tasks.
How to get there? Start small and scale, Petrie said. Automate data management. Adopt a factory design (standardized, modular). Invest in AI talent. Evolve governance to manage AI risks and regulatory requirements.
The next speaker was Suresh Gandhirajan, Senior Director of Enterprise Data Transformation with GE Aerospace, who shared his experience deploying an enterprise data fabric in support of AI programs. “This is a hard one to crack,” he said. “It’s not simple.”
After six months of development and trial and error, GE Aerospace was able to support five data products with its data fabric, and it aims to add 10 more by year’s end. The framework allows for role-based access to certified data products. “Keep the architecture as modular as possible,” Gandhirajan advised.
10x more complexity
Now it was time for the opening keynote at this 25th annual Informatica World: “Ready, set, AI.”
On the main stage, CEO Amit Walia started by reminiscing about Informatica’s early days when “the only thing we did was a thing called ETL.” Now, GenAI is coming on fast. Even so, “a lot of those projects are not there in terms of completion,” he said. “Your model is only as good as the data you put into it.”
There was a short interruption as performers outfitted in AI Agent t-shirts jumped on stage in a choreographed dance with fog machines. “That’s how excited we are,” said Walia. Little did the thousands of data engineers and architects in the audience know it would be Informatica’s last, big solo dance before joining with Salesforce.
Next Walia introduced AI Agent Engineering, where automated agents help with the workflow of data management, quality, governance, etc. “The use case is very simple,” he said. “Your world will be 10x more complex. AI agents will help you manage the complexity of the data workflow.”
Gaurav Pathak, VP of Product Management, showed how agent engineering will work: The agents collected data from Oracle and SAP apps, automatically cataloged the data, performed a quality check on the staged data, then created and applied data-quality rules to increase the quality score on the data set.
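The demoed workflow can be sketched in miniature: profile a staged data set, score its quality, then apply data-quality rules and re-score. The scoring and the specific rules below (trim whitespace, normalize email case, default a missing country) are my own illustration, assumed for the example, not Informatica's actual agents.

```python
# Hypothetical sketch of the demoed pipeline: score staged data,
# apply simple data-quality rules, then score again.

def quality_score(rows):
    """Fraction of non-empty field values across all rows."""
    total = valid = 0
    for row in rows:
        for field, value in row.items():
            total += 1
            if value not in (None, ""):
                valid += 1
    return valid / total if total else 0.0

def apply_rules(rows):
    """Illustrative rules: trim whitespace, lowercase emails, default country."""
    cleaned = []
    for row in rows:
        row = {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}
        row["email"] = (row.get("email") or "").lower()
        row["country"] = row.get("country") or "US"   # assumed default rule
        cleaned.append(row)
    return cleaned

staged = [
    {"email": "  JO@EXAMPLE.COM ", "country": ""},
    {"email": "kim@example.com", "country": "DE"},
]
before = quality_score(staged)
after = quality_score(apply_rules(staged))
print(f"quality score: {before:.2f} -> {after:.2f}")  # score rises after rules run
```

In the real product, agents would generate and apply such rules automatically across cataloged sources; the sketch only shows the shape of the check-fix-recheck loop.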
“It’s so dramatically simplified,” said Walia. Yes, but it was just a preview. Informatica’s first agents are due later this year.
Sumeet Agrawal, VP of Product Management, gave a second demo, which included a “delivery delay agent” to solve a supply chain issue. The point was to show how to build more accurate agents, connect agents across an enterprise, and manage them. All done with no-code development. “We’re starting with three agents,” Walia said, “but there will be many more.”
Excitement & worry
Informatica CIO Graeme Thompson hosted a customer panel with Proofpoint, Independent Financial, and Sutter Health.
The execs talked about the challenge of complexity, the need for data quality, the impact AI will have on workplace roles (including doctors), cost and ROI, metadata, interoperability, and security. In other words, a lot of heavy lifting goes into building enterprise-class AI.
Thompson asked the panelists to rate their excitement about AI on a scale of 1 to 10. Each said 10. Then he asked about their level of worry. They upped it to 11.
Data expanding, outcomes shrinking
Informatica hired a new EVP/Chief Product Officer, Krish Vitaldevara, in March. Vitaldevara has an impressive background with Microsoft, Google, NetApp, and, all the way back, Loudcloud.
I asked him about the issues around data complexity that I was hearing, and he explained data silos continue to be a reality in many places, which is a challenge when agents need access to holistic data. Unstructured data is another piece of it. “You need to unify data across the silos,” he said. “The data management layer becomes incredibly important.”
I wondered: Why is data quality such a challenge for CIOs when they have had years to figure it out? Vitaldevara offered a plausible explanation. It’s that traditional data warehouses were built with a small subset of corporate data, and it was mostly structured data. Now they’re dealing with 100% of enterprise data of all types. “The job has become an order of magnitude harder.”
Today, data management requires both connectivity and data hygiene, Vitaldevara said. Then he added this insight: “The data is expanding, but the path to outcomes is shrinking.”
Finally, we talked about measuring data quality and how precise data really needs to be for enterprise AI. Is 100% data accuracy, even if it was obtainable, the objective? Vitaldevara made three points:
Data lineage is hugely important. “Any quality problems get multiplied as they go downstream.”
The required level of data quality depends on the app. For example, there’s no room for error with patient records.
Some use cases don’t require 100% completeness. He pointed to Google Maps, a product he has experience with. “We don’t need 100% of the data. If we lost 2%, we could get you to the right place anyway.”
Lessons learned
The good news, and the most important takeaway from Informatica World, is that some early adopters are having success with their GenAI and agentic AI strategies. Tech leaders from Citizens Bank, Royal Caribbean, Indeed, Gilead, Liberty Mutual, JetBlue, Nutanix, and Sutter Health were among those sharing what they have learned and accomplished.
The task at hand — cleaning up and prepping terabytes and petabytes of data for AI — may seem overwhelming, but it can be done. “You don’t have to do everything in one day; start with MDM or data quality,” Vitaldevara said. “It’s a journey.”
Informatica and Salesforce will now be joined at the hip on that journey.