
The Industrialization of Bullshit: Why AI is No Excuse for Professional Negligence

February 12, 2026

5 min read

I recently reviewed a strategy document that looked, at first glance, like a slam dunk: a feature proposal backed by a section of vivid user quotes, specific and emotional feedback that perfectly justified the build.

There was just one problem.

When I started checking the source links to understand the full context of these conversations, the trail went cold. Most of the quotes were misinterpreted. Some were misattributed. And a few were outright hallucinations: text between quotation marks attributed to people who do not exist, or pointing to documents that contained no such words.

This wasn't a draft by a nervous intern. This was the work of a senior professional with a solid track record. They had likely asked an LLM to "summarize user sentiment," pasted the output into a doc, and signed their name to it.

This incident highlights a growing crisis in our industry. The problem isn't that LLMs hallucinate; we know they do. The problem is that professionals are beginning to treat these hallucinations as "good enough," and in doing so, they are crossing the line from efficiency into negligence.

Amplifying Brandolini's Law

There is a concept known as Brandolini's Law (or the Bullshit Asymmetry Principle), which states:

"The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it."

Generative AI has dramatically amplified this asymmetry. In the past, if you wanted to fabricate user insights or fake data, you had to manually invent it. It took time, creativity, and effort to lie. Because the friction was high, it was rare.

Today, the cost of generating plausible-sounding nonsense has dropped to zero.

The author of that doc likely spent seconds prompting a bot. I, however, spent 30 minutes verifying sources just to prove it was wrong. This is the hidden tax of AI in the workplace: the cost of generating bad requirements has undercut the cost of detecting them.

The Rise of "Synthetic Insights"

The cost asymmetry is one problem. But there's a subtler issue: AI doesn't just make bad work cheaper; it makes bad work look good.

We are witnessing the rise of what I call "Synthetic Insights": statements that are statistically plausible but empirically baseless.

When you ask an LLM to "tell me what users hate about login flows," it doesn't survey your actual customers. It predicts what a complaint about login flows usually sounds like based on its training data.

The danger is that these outputs look and feel like real research. They confirm our biases. But if you build product strategy on them, you aren't building for your users; you are building for the average of the internet.

Is This Just "Sloppy Work"?

You could argue that sloppy work isn't new, but AI slop at work is. People have always skimmed documents or cherry-picked data; AI changes both the scale and the detectability of the sloppiness.

In the past, a lazy analysis was usually vague or thin. Today, a lazy analysis looks robust, detailed, and authoritative. It comes with (hallucinated) citations and (invented) quotes. It masquerades as high-effort work, which makes it much harder to catch during a standard review.

Verification Is the Job

In the era of Generative AI, the definition of "professional work" has shifted.

The value you bring to your organization is no longer just the ability to generate text or summarize documents; a machine can do that now. Your value is your ability to vouch for that text.

When you submit a document, you are acting as the Editor-in-Chief. Your signature is a certification that claims: "I have reviewed this, and I stand behind its accuracy."

Using "The AI got it wrong" as a defense is no longer valid. We saw this legally in the case of Mata v. Avianca, where a lawyer was sanctioned for submitting a brief full of hallucinated case law. The court's ruling was clear: the error wasn't using the tool; the error was failing to verify the output.

The AI Hygiene Standard

We can't just ban these tools, but we need a framework for using them responsibly. If you are using LLMs for product work, here are three rules to live by:

  1. Zero-Trust Drafting. Treat every LLM output as a hostile draft. It is a starting point, not a deliverable.
  2. The CTRL+F Rule. If an LLM gives you a quote or a specific fact, you must locate that exact string in the source material before using it. If you can't find it, you can't use it. (See the sketch after this list.)
  3. Synthesis, Not Discovery. Use LLMs to summarize data you have already collected and vetted. Do not use them to generate the data itself.
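
To make rule 2 concrete, here is a minimal sketch in Python of what an automated CTRL+F check might look like: it pulls every quoted span out of a draft and flags any that cannot be found, verbatim, in a folder of source documents. The file names, folder layout, and the 15-character minimum quote length are placeholder assumptions for illustration, not a prescription.

import re
from pathlib import Path

def normalize(text: str) -> str:
    """Collapse whitespace and straighten curly quotes so trivial
    formatting differences don't mask a genuine match."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip().lower()

def extract_quotes(draft: str) -> list[str]:
    """Pull every double-quoted span (15+ characters) out of the draft."""
    return re.findall(r'"([^"]{15,})"', draft)

def verify_quotes(draft_path: Path, source_dir: Path) -> list[str]:
    """Return the quotes that cannot be found in any source document."""
    draft = draft_path.read_text(encoding="utf-8")
    sources = [normalize(p.read_text(encoding="utf-8"))
               for p in source_dir.glob("*.txt")]
    return [q for q in extract_quotes(draft)
            if not any(normalize(q) in src for src in sources)]

if __name__ == "__main__":
    # Hypothetical layout: the draft proposal plus a folder of interview notes.
    missing = verify_quotes(Path("feature_proposal.txt"), Path("interview_notes/"))
    for quote in missing:
        print(f'UNVERIFIED: "{quote}"')
    print(f"{len(missing)} quote(s) could not be traced to a source.")

Note what a check like this does and does not prove: it confirms a quote exists somewhere in your sources, but it cannot tell you whether the quote is attributed correctly or interpreted in context. That judgment is still the human's job, which is exactly the point of the next section.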

The Case for Consequences

Finally, we need to talk about policy. As an industry, we are currently too lenient with AI-generated negligence.

If a salesperson fabricated a client conversation to close a deal, it would be treated as fraud. If a Product Manager allows an LLM to fabricate user quotes to get a feature built, it should be treated with similar severity.

Organizations need to set clear boundaries:

  • Disclosure: Significant AI contributions to internal documents should be cited, just as we cite any other source.
  • Liability: Policies should state clearly that the human submitter retains full liability for accuracy. "The bot did it" is not a mitigating factor.
  • Accountability: Submitting fabricated data, whether AI-generated or not, should be treated as a performance issue, not a learning moment. Repeated negligence should carry the same career consequences as any other integrity violation.

The efficiency gains of AI are real, but they cannot come at the cost of our integrity. I'm genuinely curious how other leaders are handling this. Has your organization implemented specific policies for AI-generated negligence? Or are we all still figuring this out as we go?