
Artificial intelligence has moved quickly from novelty to infrastructure. In a matter of years, it has embedded itself inside research workflows, reporting processes, customer communication, and strategic decision-making across industries. In M&A, it is increasingly present in everything from market mapping to diligence preparation.
But as AI becomes more capable, a quieter issue is beginning to surface – one that has less to do with performance and more to do with credibility.
A recent study by PAN Communications, reported via Business Wire, found that approximately 30 percent of AI-generated citations were either misattributed or entirely fabricated. In other words, nearly one in three cited sources did not reliably point to the material they claimed to reference.
That finding should give pause to any professional operating in a high-trust environment.
Large language models are designed to generate language that is statistically plausible. They predict the next most likely sequence of words based on patterns in data. Most of the time, this produces remarkably coherent answers. Often, it feels indistinguishable from expert writing.
But coherence is not the same as verification.
When asked to provide citations, models may confidently produce links that look legitimate but do not correspond to real sources. They may blend elements from multiple articles into a single, fabricated reference. They may misattribute research to credible organizations in ways that are difficult to detect without manual review.
In casual contexts, this may be inconvenient. In complex B2B environments, it becomes consequential.
Mergers and acquisitions operate on precision. A single data point can influence valuation, negotiation posture, or investment committee confidence. Advisors are trusted not simply for access or execution capability, but for judgment and accuracy.
If AI-generated research introduces subtle errors into a market overview, a buyer list, or an industry analysis, the damage may not be immediately obvious. The output may look polished. The language may be persuasive. The formatting may feel authoritative.
The risk is that the underlying source integrity is compromised.
In M&A, credibility compounds. A reputation is built over years and can be weakened by a single instance where information does not withstand scrutiny. Clients rarely blame the software. They question the advisor.
What makes this issue particularly important is not just the existence of citation errors. It is the amplification effect.
As AI tools are embedded deeper into workflows, their outputs increasingly influence client-facing materials and strategic recommendations. If flawed sourcing enters a confidential information memorandum, a diligence summary, or a strategic briefing, the reputational impact extends beyond the immediate mistake.
Trust-sensitive workflows require more than speed. They require auditability.
The PAN study does not suggest that AI is unusable. It highlights that AI systems, left unchecked, can introduce credibility risk in subtle ways. And subtle risks are often the hardest to manage because they are invisible until tested.
The conversation around AI in M&A often focuses on capability. Can it draft a CIM? Can it summarize financial statements? Can it identify acquisition targets? Can it surface strategic insights faster than a human team?
Those are important questions. But they are no longer sufficient.
A more durable question is this: how does the system verify what it produces?
Does it clearly link to traceable sources? Does it handle uncertainty transparently? Is there a mechanism for human review before information is relied upon externally? Does the workflow assume fallibility, or does it assume perfection?
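One lightweight way to build that auditability into a workflow is to check AI-supplied citations against a registry of sources the team has already vetted, and route anything that fails to human review. The sketch below is a minimal illustration under assumed structures, not a production tool: the `Citation` shape and the `VETTED_SOURCES` registry are hypothetical, and a real pipeline would also confirm that each URL resolves to the claimed document.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Citation:
    title: str
    source: str  # publishing organization claimed by the model
    url: str

# Hypothetical registry of manually vetted sources:
# maps a domain to the organization known to publish there.
VETTED_SOURCES = {
    "businesswire.com": "Business Wire",
    "sec.gov": "U.S. Securities and Exchange Commission",
}

def audit_citation(c: Citation) -> list[str]:
    """Return a list of problems; an empty list means no flags were raised."""
    problems = []
    domain = urlparse(c.url).netloc.removeprefix("www.")
    if not domain:
        problems.append("URL is malformed or missing a domain")
    elif domain not in VETTED_SOURCES:
        problems.append(f"domain '{domain}' is not in the vetted-source registry")
    elif VETTED_SOURCES[domain] != c.source:
        # e.g. research attributed to one organization but hosted by another:
        # a possible misattribution of the kind the PAN study describes.
        problems.append(
            f"claimed source '{c.source}' does not match "
            f"'{VETTED_SOURCES[domain]}' for domain '{domain}'"
        )
    return problems

# Citations that raise flags go to a human reviewer rather than being discarded.
flags = audit_citation(
    Citation("AI citation study", "Business Wire",
             "https://www.businesswire.com/news/example")
)
print(flags)  # → []
```

The design choice here matters more than the code: the audit assumes fallibility by default, so an unrecognized source is a prompt for review rather than a silent pass.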
As regulators globally begin paying closer attention to how AI is marketed and deployed in business settings, scrutiny will likely expand beyond performance claims to include reliability and governance. Firms that treat AI as a black box may find themselves exposed in ways they did not anticipate.
The findings from PAN Communications are less a condemnation of AI and more a reminder of its nature. These systems are powerful pattern engines; without guardrails, they are not grounded in fact.
In high-stakes industries like M&A, that distinction matters.
AI can meaningfully increase productivity. It can surface insights that would otherwise require weeks of manual research. It can augment professional judgment in powerful ways.
But credibility remains a human asset.
As AI continues to take hold inside advisory firms, private equity groups, and corporate development teams, the firms that succeed will not simply be those that adopt the technology fastest. They will be those that integrate it with discipline – ensuring that speed does not come at the expense of trust.
For professionals whose currency is reputation, that balance is not optional.