Arguing in the Unknown: What the Meta AI Copyright Case Teaches Us About Strategic Blind Spots in the AI Age

Early last week, Meta was granted partial summary judgment in a copyright lawsuit, with the court concluding that the plaintiffs failed to present evidence of a functioning licensing market disrupted by Meta’s use of their books for AI training. But the ruling wasn’t just about what evidence was missing—it highlighted a more profound issue: the plaintiffs failed to shape the legal terrain strategically.

The ruling’s treatment of both fair use and market harm is what initially caught my attention. However, upon reading the judgment, what stood out even more was the judge’s implicit acknowledgment of strategic gaps and deeper blind spots in the plaintiffs’ case. Not legal errors, but missed opportunities to assert the viability or emerging shape of a licensing market, to quantify the nature of harm, and to confront the known unknowns of AI development and regulation.

This reflects a critical lesson for leaders navigating complex, rapidly evolving landscapes: we must learn to think both simply and multidimensionally. This is why executives, technologists, and legal teams must master both sides of emerging AI debates and confront the uncomfortable knowledge they’ve ignored.

At its most basic level, Meta needed to satisfy the requirements of fair use, particularly the criterion of whether the use was transformative, and demonstrate that their use did not cause substantial harm to the original authors. That harm must be rooted in how the outputs derived from the training of LLaMA either replicated or interfered with the authors’ ability to benefit from their work.

Conversely, the plaintiffs needed to challenge the claim of transformative use and present concrete evidence of economic harm.

The Known Unknowns of AI Law

In 2002, U.S. Secretary of Defense Donald Rumsfeld famously described uncertainty using a now-iconic framework:

“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.”

The quote was mocked at the time for its rhetorical contortions, yet it has since gained recognition for its conceptual clarity. Sociologist Steve Rayner later proposed a critical addition: the “unknown knowns,” those truths we tacitly understand but refuse to acknowledge. These are the inconvenient realities buried within our institutions, strategies, and, often, legal arguments.

We are now facing a Rumsfeldian dilemma in the age of generative AI. The recent partial summary judgment in Kadrey et al. v. Meta isn’t just about copyright. It’s a warning shot for organizations that haven’t trained themselves to think through all four corners of uncertainty.

The Case at a Glance: Kadrey v. Meta

Thirteen authors, including Richard Kadrey, sued Meta for allegedly using their copyrighted books to train the LLaMA large language models. The key claims centered on:

  • Whether Meta’s use constituted fair use under copyright law
  • Whether the authors had a viable market to license their content to AI models

The court granted Meta partial summary judgment on the issue of market harm, stating that the plaintiffs failed to present evidence of a functioning licensing market disrupted by Meta’s use.

Both sides faced a similar core requirement: to move beyond surface-level assertions and dissect the relationship between the original work and the AI’s output. Meta had to prove that the outputs were transformative, not merely by intent, but in nature and market effect. The plaintiffs, in turn, needed to argue that any such transformation was insufficient and that their creative works had been used in a manner that directly impeded their ability to derive future benefits. 

This interplay between use, output, and economic consequence is the heart of the fair use argument, and it demands both simplicity in logic and breadth in anticipation. That’s where the known unknowns emerge: each side can predict the categories of argument the other might use, but not which specific levers they’ll pull or how credibly they’ll land.

Here is a reframed version of Rumsfeld and Rayner’s framework, adapted for AI litigation:

  • Known Knowns. AI law example: Meta used copyrighted data for training. Strategic imperative: document use, secure permissions, log provenance.
  • Known Unknowns. AI law example: uncertainty of how courts will interpret fair use. Strategic imperative: scenario planning across jurisdictions.
  • Unknown Unknowns. AI law example: future global regulation on AI attribution and data rights. Strategic imperative: build flexible governance and legal contingencies.
  • Unknown Knowns. AI law example: the Warhol precedent, and the unspoken assumption of market harm that was never argued. Strategic imperative: surface inconvenient truths before the opposing party exploits them.

Simplified and Multidimensional Reasoning

It is not enough to argue the merits of a case in isolation; we must also model the consequences of its possible interpretations. You can be sure that those with trillions of dollars at stake will do just that, anticipate every angle, every precedent, and every potential vulnerability long before they step into court.

That is why you need to use Rumsfeld’s framework to ensure that you have not only identified the unknowns but have also taken steps to surface and test them. You must ask: What is the probability that the opposing side will exploit an overlooked angle? What happens if they do?
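Those two questions can be made concrete with a simple probability-times-impact triage. The sketch below, in Python, scores hypothetical strategic gaps across the four Rumsfeld quadrants; the scenarios, probabilities, and impact scores are purely illustrative assumptions for this article, not data from the case.

```python
# Toy risk triage: rank strategic gaps by expected impact
# (probability the opposing side exploits the gap x impact if they do).
# All entries below are illustrative assumptions, not case data.

risks = [
    # (description, probability of exploitation, impact on a 0-10 scale)
    ("Opponent invokes Warhol to frame transformativeness", 0.9, 9),
    ("Court demands evidence of a functioning licensing market", 0.8, 10),
    ("New AI regulation shifts the legal landscape mid-case", 0.3, 7),
    ("Unstated assumption of market harm goes unargued", 0.7, 8),
]

# Sort by expected impact, highest exposure first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for description, probability, impact in ranked:
    print(f"{probability * impact:5.1f}  {description}")
```

Even a crude ranking like this forces the uncomfortable conversation: the highest-exposure gaps (here, the Warhol framing and the missing licensing-market evidence) are exactly the ones a well-resourced opponent will probe first.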

On the one hand, the basics of a sound legal strategy (evidence, precedent, and market impact) must be clearly articulated. On the other, multidimensional thinking requires more than simply recognizing that there are multiple angles; it requires structuring those angles through a legal and strategic framework. Was the use of the work covered under various interpretations of fair use? Was it truly transformative? Did it harm the creator’s ability to benefit economically from their work in the future?


This is the simplicity: a direct comparison between inputs, outputs, and economic consequences.

However, the multidimensional strategy stems from recognizing that both sides will seek, and often weaponize, their own interpretations of transformation and harm. This creates a landscape of known unknowns: we can anticipate that opposing counsel will explore certain categories of argument, and we might even estimate the probability of each being deployed, but until they are, these lines of attack remain strategically undefined.

What the Plaintiffs Missed

In my intellectual property law courses, I learned that to prevail in a copyright lawsuit, particularly when fair use is invoked as a defense, the plaintiff must do more than show unauthorized use. They must offer compelling, concrete evidence that the use causes or poses a credible threat of market harm to the original work.

In simpler terms, the plaintiffs needed to show: “Our works were consumed without permission or compensation, and used in a way that undermines our ability to monetize them.”

Without both conditions satisfied, as the Meta case demonstrates, courts are likely to rule in favor of defendants, particularly when a fair use argument is strategically constructed and factually narrow.

Their silence created precedent. By failing to argue that AI training requires a licensing framework, the plaintiffs left a dangerous vacuum.

They didn’t argue that Meta’s actions prevented them from monetizing their works. They didn’t offer data showing dilution, substitution, or reduced downstream licensing. Nor did they point to a functional or emerging market that Meta disrupted.

Transformative Use: Warhol Was Right There, but Only Meta Used It

Another missed opportunity came in the form of legal precedent.

The Supreme Court’s 2023 decision in Warhol v. Goldsmith significantly narrowed the definition of “transformative use” under the fair use doctrine. This ruling clarified that simply changing the purpose of a work is not enough; courts must consider whether the new use is meaningfully distinct in character and does not substitute for the original.

Yet in Kadrey v. Meta, it was not the plaintiffs who invoked Warhol; it was Meta.

Meta’s legal team leveraged Warhol to carefully argue that their use of copyrighted material was sufficiently transformative, as LLaMA produces varied and functional outputs, such as summaries, translations, and ideation, rather than derivative books.

The judge cited Warhol at least 14 times in the opinion, almost entirely based on Meta’s framing. The plaintiffs could have used it to contest the idea that ingesting an entire book to extract patterns for predictive output is genuinely transformative. But they didn’t. In a legal battle where the boundaries of AI, copyright, and originality are still being drawn, omitting a recent Supreme Court case directly relevant to the central issue is more than an oversight; it’s a strategic failure.

In the Rumsfeld model, this is an “unknown known”: the plaintiffs and their counsel likely understood Warhol but failed to operationalize it in their arguments.

The Corporate Parallel: Thinking in Tension

Too often, engineering teams race ahead with what’s possible, while legal teams play defense after launch. This leads to the risk of whiplash and reputational damage. The companies that will win in the AI era will:

  • Assemble red teams to argue the counterpoint.
  • Create scenario decks for unknown rulings.
  • Invest in legal R&D, not just compliance.

Your legal strategy is no longer a fence. It’s a chessboard.

I recall documenting a process our team had developed with one of IBM’s patent attorneys when she asked me: “If you were aware of this patent, how would you design around it?”

She explained that her role was not just to protect an invention through a patent, but also to anticipate how others might circumvent it. She introduced me to the concept of “bracketing,” where competitors file adjacent or incremental patents, hoping to negotiate royalty-free cross-licensing instead of paying to use the original innovation.

This kind of anticipatory legal strategy is precisely what was missing in the plaintiffs’ approach. They focused on defending what was taken but failed to proactively shape the rules of engagement for how their work should be accessed, licensed, or compensated in the age of AI.

Strategic Uncertainty and Cognitive Range

We are not just in an age of innovation. We are in an age of strategic uncertainty. The Meta case is not just a copyright issue; it is a referendum on how prepared organizations are to argue in the unknown.

To quote Harvard Professor Constance Bagley:

“Law is not a dry set of rules. It is a dynamic tool for shaping competitive advantage.”

To prevail in a copyright lawsuit, especially against technology companies using works for AI training, plaintiffs must do more than show unauthorized use. They must provide convincing, concrete evidence that such use causes or threatens real market harm to their works. Without this, as seen in the Meta case, courts are likely to rule in favor of defendants, especially when a fair use argument is credible and strategically positioned.

So ask yourself:
Does your organization only argue its side?
Or is it building the capacity to anticipate, articulate, and even strengthen the other side’s position before it’s forced to?

That, in the AI era, is not just a matter of legal foresight. It’s strategic survival.

Optional Reading:  Constance Bagley and the Strategic Use of Law

A great book recommended for my International IP Law course was Winning Legally by Harvard Professor Constance Bagley. She argues that the law should not be seen merely as a constraint but as a tool to:

  • Create value
  • Marshal resources
  • Manage risk

Applying Bagley’s lens to this case:

  • Could the authors or their publishers have developed a pilot licensing framework with AI labs to demonstrate market demand preemptively?
  • Could they have marshaled broader industry support to demonstrate commercial intent, rather than just grievance?
  • Could they have modeled the risk of silence on attribution, economic harm, and long-term erosion of rights?

Disclaimer:
The opinions expressed in this article are my own and are based on public case information, strategic analysis, and personal experience. I am not an attorney, and this article does not constitute legal advice. Readers should consult legal counsel for guidance on specific legal matters.