compounding pharmacies lose a round with Lilly on personalized medicine and GLP-1 comparison claims

Eli Lilly & Co. v. Mochi Health Corp., 2026 WL 1076831,
No. 25-cv-03534-JSC (N.D. Cal. Apr. 20, 2026)

Eli
Lilly’s claims were previously dismissed,
and Lilly tried again with claims
under California’s UCL, Lanham Act false advertising, and civil conspiracy.
Civil conspiracy failed but Lilly was allowed to proceed with the advertising
claims.

Lilly makes two FDA-approved weight-loss medications
containing tirzepatide. “Mochi Health is a telehealth company that connects
consumers with physicians who can prescribe weight-loss medications, including
compounded versions of tirzepatide.”

Lilly’s first UCL claim arose from Mochi Health’s alleged
corporate practice of medicine. It allegedly changed patient doses en masse
without consulting patients or receiving a clinical indication from a physician—several
times over the course of a year. The changes were allegedly based on Mochi’s
developing business relationships with various pharmacies: whether compounded
medications included niacinamide, glycine, and pyridoxine depended on the pharmacy.
Lilly alleged that these additives were not meant to achieve a therapeutic
effect, but rather reflected Mochi’s financial considerations. Thus, Mochi allegedly
made medical decisions for patients based on profit motives rather than
clinical need. It also allegedly “steer[s] its patients to compounded products
over Lilly’s FDA-approved tirzepatide medicines” through its hiring of Mochi
physicians, its development of obesity treatment protocols, and its training of
Mochi medical staff.

Lanham Act: Lilly alleged that Mochi misrepresented its compounded
tirzepatide medications as safe and effective based on studies conducted of
Lilly’s products; misrepresented its products as FDA-approved; and misrepresented
its tirzepatide drugs as “personalized.”

Along with lost sales, Lilly alleged reputational harm
because Mochi compared an inferior, compounded product to Lilly’s FDA-approved
medicine, causing consumers to conflate the higher incidence of adverse events
found in compounded medications with Lilly’s drugs. Lilly cited studies
indicating a higher risk of adverse events from utilizing compounded versions
of tirzepatide, such as “abdominal pain, diarrhea, nausea, suicidality, and
cholecystitis.”

Mochi once again challenged Article III standing. But this
time Lilly successfully alleged both sales diversion and reputational harm. “Coupled
with Mochi Health’s alleged unilateral ability to modify existing compounded
medication doses for customers, Lilly asserts Mochi Health exercises control
over the Mochi Medical practice to reduce patients’ ability to choose MOUNJARO®
or ZEPBOUND® over a compounded option.” Its ads about the safety and
personalization of compounded tirzepatide also plausibly steered consumers in
the market for weight-loss medication away from Lilly’s products.

As for reputational injury, Lilly connected its allegation
of higher rates of side effects for compounded medications to research findings from
the National Consumers League that show consumer confusion about the difference
between compounded medications and FDA-approved medications, and conflation of
the two. “Combined, these allegations permit a reasonable inference of harm to
Lilly’s reputation through public perception that FDA-approved tirzepatide
medications have similar rates of adverse side effects compared to compounded
medications.”

Defendants argued that consumers of compounded tirzepatide were
different from consumers of MOUNJARO or ZEPBOUND, relying on statements Lilly
made in a separate case involving the FDA’s determination that there was no
longer a nation-wide “shortage” of tirzepatide-based drugs, where Lilly said
that “there were good reasons to think much of the market for compounded
tirzepatide would not translate to future demand for Lilly’s FDA-approved
products. Compounded products are often promoted for uses different from the
indications FDA has approved, including by affiliated telehealth providers, so
patients may be less likely to get a prescription from a physician for
FDA-approved medicine. There also might not be insurance coverage for those
off-label uses, and some compounded products use a different formulation than
Lilly’s products.”

This didn’t estop Lilly from alleging harm here. Lilly did
not make any claims about Mochi Health’s marketing and customer base. And its prior
statement that “much of the market for compounded tirzepatide” may not overlap was
consistent with its allegations in this case of some consumers being
diverted.

Even though they operate in different market strata and
Mochi doesn’t prescribe, manufacture, or sell the compounded tirzepatide
medications, Lilly still plausibly alleged that misleading advertisements about
the safety and personalization of Mochi’s medicines attracted customers in the
market for weight-loss medication who may otherwise have purchased a Lilly
medication, and that Mochi patients were steered away from Lilly’s products. “It
is not necessary that Mochi Health personally profited from the diverted sales;
the relevant inquiry is whether Lilly has plausibly alleged it suffered an
economic injury caused by Mochi Health’s conduct. Accordingly, Lilly’s and
Mochi Health’s relative positions in the market are not dispositive of the
economic injury question here.”

Mochi further argued that the causal chain was interrupted
by the requirement that any consumer receive a valid prescription from a
treating physician before purchasing compounded tirzepatide. But a single
third party’s actions do not necessarily upend traceability, given that the
requirement is “less demanding than proximate causation.” And Lilly alleged
that Mochi influenced the prescription process, including by changing the
formulation of compounded tirzepatide medications for all patients en masse
without advance notice or a clinical indication. That was plausible
traceability.

As for redressability, Mochi argued that an injunction could
not force physicians—who are not parties to this case—to prescribe Lilly’s
products instead of a compounded drug. But damages are available, and any
equitable relief would redress Mochi’s alleged false advertising practices and
corporate intervention in the practice of medicine.

UCL claim: Lilly plausibly alleged that it was injured as a
result of the allegedly unlawful corporate practice of medicine. The California
Medical Practice Act is violated if a “non-physician exercises ‘control or
discretion’ over a medical practice.” And that was sufficiently alleged.

Lanham Act: Statutory standing was present both through
sales diversion and reputational damage.

Indeed, Mochi Health allegedly
deployed search-engine optimization to show Mochi Health’s compounded
tirzepatide medication advertisements to consumers searching for Lilly
products. Moreover, Mochi Health directly compares its own compounded
medications to Lilly’s products in social media advertising. These allegations
permit a reasonable inference that any alleged misrepresentations by Mochi
Health put Lilly at a competitive disadvantage in the market—either by losing
customers or suffering damage to its reputation. So, Lilly’s allegations permit
a reasonable inference that any misrepresentation by Mochi Health proximately
caused its injuries.

Reputational injury doesn’t require direct competition, and
diverted sales also counted even without a supposed 1:1 relationship of lost
sales. Lexmark found the 1:1 relationship important because “Lexmark’s
anticompetitive actions primarily targeted remanufacturers, not [plaintiff]
Static Control.” Here, Mochi allegedly operates in the weight-loss market by
advertising directly to those consumers. “The relevant allegations here permit
a plausible inference that any false or misleading statements issued by Mochi
Health injured Lilly because they targeted the same segment of the market from
which Lilly stood to profit.”

What about the intervening cause of a doctor’s prescription?
Not intervening enough to defeat proximate cause. Eli
Lilly & Co. v. Willow Health Servs., Inc., No. 2:25-CV-03570-AB-MAR,
2025 WL 2631620, at *6 (C.D. Cal. Aug. 29, 2025), held that the prescriber’s
conduct defeated proximate cause. The court here disagreed. First, Lilly
alleged direct interference with patient prescriptions. “Second, drawing
inferences in Lilly’s favor, that a medication requires a prescription does not
prevent a consumer from relying on advertising to request one product over
another from their physician. Since both products at issue contain tirzepatide,
it is a reasonable inference that a consumer would have some basis for asking
her physician to prescribe a specific medication.”

Falsity: Mochi allegedly misled consumers by (1) citing to
Lilly’s clinical trials to support its claims and (2) advertising that
“tirzepatide is a safe medication that has been approved by FDA.” The court
agreed that these were plausibly misleading, accepting Lilly’s allegation that “the
FDA does not approve an active pharmaceutical ingredient for treatment of
patients, but rather approves specific formulations of that ingredient that
have been subjected to rigorous study.” Mochi cited the Lilly studies to tout “tirzepatide,”
then connected that to “compounded tirzepatide,” and didn’t mention the
difference between compounded and FDA-approved formulations, but instead
suggested the medicines were interchangeable.

Mochi argued that was a mere lack of substantiation theory.
While some district courts have agreed, the court reasoned that it was plausible that
Mochi’s statements misled consumers into believing that the Lilly studies
actually considered compounded medication. “The issue is not whether Mochi
Health had a basis for its statements, but rather, whether Mochi Health
misrepresented the contents of the studies.” That’s a workable theory.

Likewise, Mochi’s statements could be reasonably understood
to indicate that compounded tirzepatide medications are FDA-approved:
“Tirzepatide is a safe medication that has been approved by the FDA” followed by
a representation that Mochi’s compounded medication is “safe,” citing only the
Lilly studies and the FDA approval of Lilly’s drugs.

“Personalized” medicine claims: Mochi offered “much more
accessible alternatives to brand-name medications that are customized to the
medical needs of the patient” and claimed that “[c]ompounded medications are
custom-prepared to meet an individual patient’s specific needs.” But Lilly
alleged that’s not what happened. If Mochi changes the formulation and dosage
of its compounded medication en masse based on its business relationships with
pharmacies, not medical indication, that would directly contradict the ad
claims. Mochi’s interpretation that all it advertised was “customized” or
“personalized” care plans was meritless.

Nor did the court apply FDCA preclusion. The “personalized”
theory didn’t conflict with the FDCA’s regulatory scheme. Mochi argued that the
FDCA allows compounding; that compounded medications are “personalized” by
definition; and that Lilly’s theory contradicts a permissible practice of
creating “batches of compounded medications for subsequent dispensing.” But Lilly’s
falsity theory was about advertising that Mochi “personalized” medications but
then did not tailor changes in dosage or formulation of the compounded drug to
individual patients’ medical needs. “Whether Mochi Health or Aequita Pharmacy
prepared the medication in “batches” is ultimately beside the point: the
falsity derives from Lilly’s allegations that Mochi Health changed the
formulation of patients’ medications based on business interests and evolving
relationships with certain pharmacies rather than patient needs. Defendants
have not identified any FDCA provision or FDA policy directly in conflict with
this misrepresentation theory.”

What about safety claims: better left to the FDA? The court
won’t have to determine the scientific validity of citing the Lilly studies to
support safety claims about compounded medications. It would only have to
determine whether Mochi misled consumers into believing that the Lilly studies tested
the effects of compounded tirzepatide medications. “This misrepresentation
theory presents a binary question of whether the studies considered any
compounded tirzepatide formulation.” Nor would resolving the claim about misrepresentation
of FDA approval impinge on the FDA’s policy choices. Defendants could renew
their preclusion argument if discovery warranted it.

from Blogger https://tushnet.blogspot.com/2026/04/compounding-pharmacies-lose-round-with.html


Bayer can’t enjoin J&J’s cancer superiority claims by showing methodological disputes

Bayer Healthcare LLC v. Johnson & Johnson, Inc., 2026 WL
1045917, No. 26 Civ. 1479 (DEH) (S.D.N.Y. Apr. 17, 2026)

The court denied Bayer’s request for a preliminary
injunction against its competitor J&J’s advertising of a drug used in the
treatment of metastatic castration-sensitive prostate cancer. In a presentation
and a press release, J&J described a retrospective observational study that
purportedly showed a roughly 50% reduction in the risk of death for patients
prescribed its drug, apalutamide (ERLEADA), compared to Bayer’s drug,
darolutamide (NUBEQA). Bayer alleged severe methodological flaws rendering J&J’s
claims literally false or false by necessary implication in violation of the
Lanham Act and NY state law.

The court found that Bayer failed to show methodological
errors substantial enough to render J&J’s claims literally false or even
misleading. Instead, J&J accurately described the results, the methodology,
and the study’s limitations.

Super interesting methodological questions (but possibly
much more appropriate for doctors to debate than courts): Bayer argued that studied
patients receiving its drug were mostly prescribed it off-label (given the
study period); that such patients would generally only get an off-label
prescription when patient-specific issues warranted avoidance of the on-label
options (J&J’s) already on the market; and that J&J’s product’s side
effects made it risky for patients with seizure history, fall and fracture risk,
independent treatment with anticoagulants, general frailty, or other
comorbidities, whereas Bayer’s product wasn’t associated with those side
effects, so the uncertainty of off-label use was justified for those patients. Thus,
patients prescribed Bayer’s drug would disproportionately have these other conditions,
which were already associated with higher mortality, confounding any
association based on the drugs themselves.

Likewise, Bayer offered testimony that its drug was
prescribed to patients who were seen as possibly needing chemotherapy at some
point because at least some doctors thought it was the better treatment option
for patients receiving chemotherapy. But, Bayer argued, such patients were
likely to be suffering from a more advanced disease or otherwise more frail,
thus introducing further bias in the respective study populations.

J&J had responses, including that the off-label prescription
of Bayer’s drug was “ubiquitous[],” in Bayer’s own words, at the relevant time;
and that patients must have a baseline level of health to receive chemotherapy,
so possible chemotherapy was not a sign of significant frailty. J&J also presented
testimony that its statistical controls adequately accounted for any potential
bias from differences in the treatment cohorts by controlling for age and other
comorbidities. “Bayer’s experts admitted that their criticisms regarding the
treatment cohorts were essentially hypothetical, because they had no empirical
data showing that off-label darolutamide doublet patients were sicker, more
frail, or more likely to have non-cancer comorbidities than on-label
apalutamide patients.” At this stage, Bayer failed to show that study patients
who received its drug were sicker than patients who received J&J’s.

Bayer’s attacks on the control methodology also failed. J&J’s
expert testified that the necessary magnitude of an unmeasured confounder “to
explain away the [51%] observed difference found in the study” would be
“enormous”: to “explain away” the observed difference across cohorts, unmeasured
confounders would have to simultaneously make a patient 350% more likely to
receive darolutamide and 350% more likely to die. That would be a stronger relationship
than that between heart disease and smoking. Bayer didn’t rebut this.
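The opinion doesn’t name the technique, but those figures track the standard “E-value” sensitivity analysis (VanderWeele & Ding); assuming that’s what the expert ran, the arithmetic is a one-liner. Invert the protective hazard ratio so the effect sits above 1, then:

\[
RR^{*} = \frac{1}{0.49} \approx 2.04, \qquad
E = RR^{*} + \sqrt{RR^{*}\,(RR^{*} - 1)} \approx 2.04 + \sqrt{2.04 \times 1.04} \approx 3.5.
\]

That is, a lurking confounder would need roughly a 3.5-fold association with both treatment choice and death—the testimony’s “350%”—to reduce the observed effect to zero.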

Bayer also criticized the underlying data sources of the
study. “For example, in one Bayer study, as many as 40% of patients that
initially appeared to be eligible to be included in the study based on [the
data source used] were, in reality, ineligible once researchers examined the
patients’ underlying charts.” But Bayer had used the same datasets in the same
way in its own retrospective studies on multiple occasions. In addition, both
the conclusions slide of the PowerPoint and the overview slide of J&J’s
presentation acknowledged the possibility of data errors, including
possible “misclassification bias,” “that not all death or treatment data
[were] captured,” and that, because “the study used clinical records, some
information may be missing or incorrect.”

Nor did Bayer’s attack on the “overall hazard ratio” reported
by J&J succeed. “A hazard ratio is generally accepted as the standard
method of reporting comparative survival results for oncology studies. The
measured ratio here is 0.49, meaning a patient being treated with apalutamide
was 0.49x as likely to die during the observed period as a patient receiving
darolutamide. Thus, the Study’s top line result stated a 51% reduction in the
risk of death between the cohorts, ‘another way of saying the same thing.’”
Bayer argued that it was inappropriate to calculate a single hazard ratio
over the entire 24-month study period. “Because a hazard ratio presents a single
measurement for the entire period, where outcomes may differ over time, a
hazard ratio may over- or understate the likelihood of an event at a given
moment.” But this was “a generally-accepted method for reporting retrospective
comparative study,” and Bayer had used the same reporting methodology in its own
research. Bayer presented no statistical analysis to estimate varying hazard
ratios using different time periods.
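To spell out the conversion (my gloss, not the court’s): writing h(t) for each cohort’s instantaneous death rate, a hazard ratio and the headline percentage are the same number in two outfits:

\[
HR = \frac{h_{\text{apalutamide}}(t)}{h_{\text{darolutamide}}(t)} = 0.49, \qquad
\text{relative risk reduction} = (1 - HR) \times 100\% = 51\%.
\]

The single HR assumes the two hazards stay roughly proportional across the whole follow-up window—which is exactly the assumption Bayer attacked, and exactly what it offered no competing analysis on.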

So much for the challenges to the study itself. Did J&J’s
statements misrepresent the methodology and results? There were no consumer-facing
advertisements at issue, but Bayer argued that the press release was picked up
by search engines and AI-generated results in answer to general public questions,
and offered evidence that patients can often influence prescribing decisions.

But J&J’s evidence suggested that only doctors, not
patients, were the target audience for the challenged communications. Two
treating physicians testified that they were not aware of a single instance of
a patient identifying either drug during an appointment, and in this particular
context, it was highly unlikely that a patient would be driving a treatment
decision.

51% risk of death reduction: Study patients receiving
Bayer’s product had a roughly 86% survival rate, while those receiving
J&J’s apalutamide had a roughly 92% survival rate—statistics that are
disclosed in the overview slide. Bayer argued that the public seeing “92.1
percent for J&J’s product and a ‘51 percent reduction in risk of death’ would
plausibly infer that Bayer’s product has a survival rate of approximately 60
percent.” (Why not 46%?) But failure to include the 86% absolute survival
measure didn’t misrepresent the results, and J&J used sufficient disclaimers.
“It would be obvious to any medical practitioner that a hazard ratio reflects a
relative, rather than absolute, difference.”
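To unpack the arithmetic (mine, not the court’s): the 51% is relative to mortality, not survival, and on that reading the disclosed figures roughly cohere:

\[
\text{darolutamide mortality} \approx 1 - 0.86 = 0.14; \qquad
0.14 \times 0.49 \approx 0.069 \;\Rightarrow\; \text{survival} \approx 93\%,
\]

close to the disclosed 92.1%. Treating the 51% as a cut to survival itself would instead give 92.1% × 0.49 ≈ 45–46%—hence my parenthetical—and it’s not obvious what arithmetic produces Bayer’s hypothesized 60%.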

Bayer also challenged the use of the claim that J&J’s
product “reduces” mortality risk rather than merely being “associated with”
decreased mortality. This was closer: “associated with a reduction in X” would
be a more apt description of the results of a retrospective, observational
study like the one here, whereas the causation implied by “reduces” generally
can be shown only through a randomized trial. But the word wasn’t literally
false for the target audience. “Bayer failed to present any evidence that
doctors would not understand the press release’s headline claim in light of the
release’s repeated references to the real-world and observational nature of the
Study.” And J&J’s witnesses “repeatedly emphasized that doctors would look
closely at the underlying study rather than relying just on one word in a
headline.”

Bayer also challenged the use of the phrase “through 24
months.” Many patients were “in” the study for only a portion of that time and
therefore were tracked for a shorter duration. But “through 24 months”
accurately (and literally) describes the period in which patients were included
in the study, and there was testimony that a reasonable doctor would recognize
that it was impossible that every patient in the study was followed for a full
24-month period. For example, patients died during the period. “Readers
familiar with health outcomes studies understand that the stated follow-up
period is not universal.”

Bayer also challenged a press release’s statement that the
study “replicat[ed] the conditions of a randomized clinical trial.” True, retrospective
observational studies are generally inferior to randomized trials. In isolation,
this statement could be misleading, but not in the full context. Disclosure of
the underlying methodological approach, including noting that the study was a
“real-world” study rather than a randomized clinical trial at least 14 times
throughout the press release sufficed. While “no observational study can
actually duplicate the effect of a randomized trial,” “the audience of medical
professionals to whom the communications were targeted would know that.”

The court also referred to the Second Circuit’s decision in ONY,
which held on relevantly similar facts that, “to the extent a speaker or author
draws conclusions from non-fraudulent data, based on accurate descriptions of
the data and methodology underlying those conclusions, on subjects about which
there is legitimate ongoing scientific disagreement, those statements are not
grounds for a claim of false advertising under the Lanham Act.”

J&J argued that, under ONY, Bayer had to prove
that the study was based on fraudulent or false data, or that J&J had
falsely described the underlying methodology, but the court wasn’t quite
willing to go that far. ONY dealt with statements made “in a scientific
article reporting research results,” and also in “a press release touting [the
article’s] conclusions.” Other courts have declined to grant broad immunity to “statements
made outside of an academic context.” The court also pointed to a series of
opinions standing for the proposition that “statements about a study’s results
may still be challenged as false under the Lanham Act if the underlying study
can be shown to suffer from severe methodological defects such that the study
cannot be said to support the statements in question.”

The court didn’t need to resolve the issue, because Bayer
couldn’t win under either standard: fraud or showing that the study compared
apples to oranges. (It did comment that ONY involved not just a paper
but a press release, and that it wasn’t clear that “the extent of First
Amendment protections for statements of scientific research deemed applicable
by the Second Circuit in ONY could properly be limited to academic fora.”)

from Blogger https://tushnet.blogspot.com/2026/04/bayer-cant-enjoin-j-cancer-superiority.html


“higher standard of safety” is puffery even as to child car seats

ElSayed v. Columbus Trading Partners USA Inc., No.
25-cv-01347 (FB) (TAM), 2026 WL 1042209 (E.D.N.Y. Apr. 17, 2026)

ElSayed alleged that CTP’s infant car seat was faulty and
defective in violation of NY consumer protection law. The court dismissed the
complaint because “safety” claims were too vague to be actionable.

CTP advertised that its car seat conforms to a “higher
standard of safety” because it was “engineered in Germany—where safety
standards are among the highest in the world,” among other claims. But it was
voluntarily recalled because one of the harness system anchor pins tended to
break. CTP also offered a free remedy kit, though none was available when the
complaint was filed; at that time, CTP advised consumers to check
the anchor pins for damage before every use until the remedy kits became
available.

CTP argued that “New York law requires a manifested defect
for a plaintiff to recover on any claim.” But unlike the products described in
the cited cases, the car seat didn’t perform satisfactorily:

The recall explicitly instructs
caregivers to check the Aton G’s harness pins before every use, because they
were prone to bend or break. This is not a situation of theoretical harm caused
by a potential defect; at issue here is an actual defect manifested in every
Aton G subject to the recall. Accordingly, Plaintiff did not get the benefit of
her bargain, instead finding herself saddled with a faulty and dangerous CRS
which she could not use as expected and which she had to manually examine
before every use. This is not how a car seat is supposed to be used, and it is
therefore defective by definition.

However, the false advertising claims failed because they
were too vague. Along with the phrases above, CTP also said that the car seat
had “advanced safety features;” “combines advanced technologies with luxurious
details to deliver an exceptional first car seat for your child”; “marries the
highest standard of safety with a focus on child comfort”; and “[o]ffer[s]
maximum convenience and safety without comprising on design.”

But general statements about a product’s safety “do not
create an enforceable promise.” The court pointed to judicial divisions over
whether Uber’s claims to have “the strictest safety standards possible” and
“the safest rides on the road” were puffery—some said they were actionable
because superiority over other methods was verifiable and others said they were
“too boastful, self-congratulatory, aspirational, or vague to amount to
misrepresentation.” Under this “vague and inexact” standard, the plaintiff failed
to state a claim. CTP’s “highest standards of safety” claim was not paired with
any superlative statements and stayed general and vague statements. The court
also found a “meaningful difference between a company claiming that they offer
the safest product and claiming that they set the highest safety standards.
Standards in the abstract are necessarily aspirational, as they describe a
policy or plan and not the actual outcome or product.” [Requiring consumers to
read like lawyers always goes well!]

from Blogger https://tushnet.blogspot.com/2026/04/higher-standard-of-safety-is-puffery.html


phthalates could be “ingredient” for purposes of falsifying “only natural ingredients”

Wysocki v. Chobani, LLC, — F.Supp.3d —-, No. 25-cv-00907-JES-VET,
2026 WL 926713 (S.D. Cal. Apr. 6, 2026)

Wysocki alleged that Chobani’s Greek Yogurt had dangerous phthalates
in it. Phthalates are “a group of chemicals [the U.S. Food and Drug
Administration (“FDA”) has deemed to be used safely] in hundreds of products,
such as … food packaging, pharmaceuticals, blood bags and tubing, and
personal care products.”  But plaintiffs
alleged that they were bad for people.

The court rejected various challenges to the pleadings,
including that the cited testing didn’t show that the actual product Wysocki
purchased actually contained phthalates because the tested products differed in
size (32 oz vs. 5.3 oz), which could reasonably affect phthalate levels, as
each size container calls for a different amount of #5 plastic. That is, under Wysocki’s
leaching theory, phthalate levels in the 5.3 oz product would likely be lower
than those detected in the 32 oz product. Moreover, half of the cited tests
detected no phthalates and the testing entity’s own caveat was that results
“may not be representative of actual product contents.” These were all factual
disputes, and plaintiff pled enough to get past Rule 9(b), with the exception
of one phthalate that was not specifically mentioned in the allegations about
testing. Allegations that phthalates readily leach into surrounding surfaces
and food and are commonly used as a catalyst to make the # 5 plastic container
that Chobani predominately uses for its products also helped.

The court rejected the argument that Chobani’s “only natural
ingredients” claims weren’t misleading because there was no allegation that
phthalates are used, or act, as ingredients in the products. But Wysocki
plausibly alleged that the claim of “only natural ingredients,” while
affirmatively disclaiming the presence of any “artificial flavors,” “artificial
sweeteners,” or “preservatives,” represented to her and other reasonable
consumers that the product is free of unsafe, unnatural, toxic substances, such
as phthalates. At the motion to dismiss stage, a reasonable consumer could
understand representations that use terms such as “100% natural” or “natural,”
modified by other terms connoting that it is “all natural,” to mean “that a
product does not contain any non-natural ingredients.” And “only” was just such
a modifier.

A reasonable consumer was also likely to interpret the
meaning of the term, “ingredient,” by its ordinary definition: “something that
enters into a compound or is a component part of any combination or mixture.” If
phthalates’ presence in the yogurt was shown, that would plausibly lead a
reasonable consumer to find that the yogurt’s ingredients include phthalates,
rendering “only natural ingredients” false.

It didn’t matter that phthalates aren’t on the ingredient
list; reasonable consumers don’t have to cross-check the ingredients list when
a claim is clear on the face of the product. (And here, the ingredient list
wouldn’t help!) Given the “only” representation, “even trace amounts of a
non-natural substance, like phthalates, would exponentially alter the
previously stated percentages, which in turn results in a misleading ‘natural’
claim.”

Chobani also argued that Wysocki failed to allege that the
levels of phthalates in the products render them unhealthy or unsafe to
consume. While some courts have required plaintiffs to allege the presence of
the alleged harmful substance, at a particular level, to support a
misrepresentation claim, that was a question of fact. Wysocki alleged that “natural
ingredients are one of the most important aspects of healthy food,” and that,
when food packaging does not contain the word “natural,” over half of reasonable
consumers assume the product must contain chemicals. And she alleged a risk of
“unsafe levels” of phthalates, and that disruptions of the endocrine,
respiratory, and nervous systems can result from both high and low dose
exposure.

However, Wysocki’s partial omission theory failed: she
alleged literal falsity, not that a representation was misleading absent
further disclosure.

Chobani’s argument that it was insulated by Proposition 65’s
warning thresholds was premature. Prop. 65 provides that “no person in the
course of doing business shall knowingly and intentionally expose any
individual to a chemical known to the state to cause cancer or reproductive
toxicity without first giving clear and reasonable warning to such individual
where the amount exceeds the [agency-established] no significant risk level.” But,
pursuant to a statutory safe harbor, this duty to warn does not apply to
business operators when exposure levels of Prop. 65-regulated chemicals are equal
to or less than the “no significant risk level.” And private plaintiffs who sue
to enforce its private right of action have to give pre-suit notice, an
unwaivable requirement.

But Wysocki argued that she wasn’t bringing claims under
Prop. 65, even though two of the alleged phthalates in the products are on the
Prop. 65 chemical list. Though Prop. 65 is concerned with cancer or
“reproductive toxicity,” she alleged endocrine disruption, developmental harm,
immunological and renal harm, and hormone disruption, “outside the scope of
Proposition 65.” Resolving this would require more factfinding than appropriate
at this stage.

However, equitable relief and express warranty claims were
dismissed.

from Blogger https://tushnet.blogspot.com/2026/04/phthalates-could-be-ingredient-for.html


Brita’s clearly qualified filtration claims couldn’t mislead reasonable consumers as to lack of qualification

Brown v. Brita Products Company, — F.4th —-, 2026 WL
1028347, No. 24-6678 (9th Cir. Apr. 16, 2026)

Unlike 800-thread count sheets (see previous post), a reasonable
consumer would not expect a fifteen-dollar water filter to “remove or reduce to
below lab detectable limits common contaminants hazardous to health” in tap
water, particularly given clear disclosures to the contrary. Brown brought the usual
California claims
against Brita.

The Standard Filter, Brita’s lowest cost filter, is
certified to reduce five contaminants—copper, mercury, cadmium, chlorine, and
zinc—to below the levels recommended by the NSF and EPA. [At least, for now; I
assume those recommendations will soon be lifted.] The Elite Filter, a more
expensive model, reduces more than a dozen other contaminants to less than or
equal to NSF/EPA recommended levels.

The package advertises that the filter “reduces” certain
harmful contaminants. The Brita Everyday Water Pitcher, which includes the
Standard Filter, claims: “Reduces Chlorine (taste & odor), Mercury, Copper
and more” and directs consumers to “see back panel for details.” The back label
likewise claims to “reduce” “Copper,” “Mercury,” “Cadmium,” “Chlorine (taste
and odor),” and “Zinc (metallic taste).” The product labels offer links to
additional sources of information known as “Performance Data Sheets,” which
detail exactly which contaminants are filtered by Brita’s Products, and to what
extent. For example, the Standard Filter’s Performance Data Sheet discloses the
following information: [data sheet image omitted]

Brown bought the Brita Everyday Water Pitcher with the
Standard Filter and alleged that he received the misleading message that the product
“removes or reduce[s] common contaminants hazardous to health … to below lab
detectable limits.” He pointed to the claims: “BRITA WATER FILTRATION SYSTEM”; “Cleaner,
Great-Tasting Water”; “Healthier, Great-Tasting Water”; “The #1 FILTER”; “REDUCES
Chlorine (taste and odor) and more!”; “REDUCES Chlorine (taste and odor),
Mercury, Copper and more”; and “Reduces 3X Contaminants.” He alleged that the
filter didn’t reduce to below lab detectable levels various hazardous
contaminants, including arsenic, chromium-6, nitrate and nitrites,
perfluorooctanoic acid (PFOA), perfluorooctane sulfonate (PFOS), radium, total
trihalomethanes (TTHMs), and uranium.

Material omission claims: Absent a contrary
misrepresentation, a duty to disclose arises under California law if either (1)
a product contains a defect that poses an unreasonable safety risk; or (2) a
product contains a defect that defeats its central function. The omission must
also be material. The reasonable consumer standard is not satisfied where
plaintiffs allege only “a mere possibility that [the] label might conceivably
be misunderstood by some few consumers viewing it in an unreasonable manner.” Even
if there was an unreasonable safety hazard or defect in central function, Brita
lacked a duty to disclose that its filters didn’t completely remove or reduce
to below lab detectable levels all of the alleged contaminants. “Such a
disclosure would not be important to a reasonable consumer in light of Brita’s
other disclosures on its Products’ packaging and the objective unreasonableness
of such an expectation.”

“As a matter of law, no reasonable consumer would expect
Brita’s low-cost filters to completely remove or reduce to below lab detectable
levels all contaminants present in tap water, particularly in light of Brita’s
extensive disclosures to the contrary.” Brita discloses that its filters
“reduce” contaminants from tap water, not that they remove contaminants
entirely, and specifically discloses the contaminants that are reduced. It also
provided “easily accessible information” (the Performance Data Sheets) about
the extent of the reductions. Thus, “[b]ecause a reasonable consumer has been
made aware of the Products’ limitations, we cannot say that a reasonable
consumer would have been misled by Brita’s omission of these limitations on its
Products’ packaging.”

from Blogger https://tushnet.blogspot.com/2026/04/britas-clearly-qualified-filtration.html


an impossible claim is literally false and actionable if believing it is reasonable

Panelli v. Target Corp., — F.4th —-, 2026 WL 1042441,
No. 24-6640 (9th Cir. Apr. 17, 2026)

Something that I don’t yet have a full handle on is
happening in 9th Circuit consumer protection cases around literal
falsity v. ambiguity. It could be good, but I’m nervous about the potential for
weird Lanham Act interactions since “literal falsity” and “ambiguity” sound
like the Lanham Act concepts but currently have important differences. FWIW,
the emerging consumer protection approach has some things going for it—and if
Lanham Act cases started to recognize that consumer surveys shouldn’t rigidly
be required in cases of “ambiguity,” that would be a very good thing indeed.

Anyway, Panelli alleged that Target sells some of its “100%
cotton” bedsheets with claimed thread counts of 600 or greater, but that it is
impossible to achieve thread counts that high with 100% cotton
textile. The court of appeals held that the district court erroneously
concluded that Panelli could not be deceived as a matter of law by an
impossible claim under the usual California consumer protection laws.

Panelli alleged that independent testing showed the sheets
he purchased had a thread count of only 288—not 800, as claimed on the sheet’s
label. Indeed, he alleged, “it is physically impossible for cotton threads to
be fine enough to allow for 600 or more threads in a single square inch of 100%
cotton fabric.” The district court relied on Moore v. Trader Joe’s Co., 4 F.4th
874 (9th Cir. 2021), a badly reasoned case holding, in this opinion’s words,
that “a reasonable consumer would be dissuaded by contextual information from
reaching an implausible interpretation of the claims on the front label of the
challenged product.” If it was physically impossible to achieve 800 thread
count, the district court reasoned, then no reasonable consumer would interpret
the ad as promising an impossibility.

The court of appeals distinguished Moore because
there, “100% New Zealand Manuka Honey” was ambiguous: it didn’t necessarily
mean that the bees making the honey fed only on the manuka flower. (This is not
the poorly reasoned part, which is the stuff the court says a reasonable
consumer should know about honey grading and pricing.) As a result, “reasonable
consumers would necessarily require more information before they could
reasonably conclude Trader Joe’s label promised a honey that was 100% derived
from a single, floral source.” And “(1) the impossibility of making a honey
that is 100% derived from one floral source, (2) the low price of Trader Joe’s
Manuka Honey, and (3) the presence of the ‘10+’ on the label [which apparently
signifies a relatively low manuka content] … would quickly dissuade a
reasonable consumer from the belief that Trader Joe’s Manuka Honey was derived
from 100% Manuka flower nectar.”

Here, the district court “skipped a step by not analyzing
whether the label was ambiguous and therefore required the reasonable consumer
to account for outside information to interpret the label’s claim.” The
challenged claim here was not ambiguous. It “purports to communicate an
objective measurement of a physical aspect of the product.”

Target argued that there are multiple possible measures of
thread count—but that doesn’t produce consumer protection law ambiguity, which
asks only whether a substantial number of reasonable consumers could think
their questions about the feature had been answered without further
information, not whether all reasonable consumers would necessarily
think that. Note that the multiple possible measures of thread count would
produce Lanham Act ambiguity, if the non-false possibilities are reasonable.
Here, “it is unlikely that a reasonable consumer would know there are multiple
thread-counting methodologies.” Indeed, consumers are not “expected to look
beyond misleading representations on the front of the box” to discover the
truth of the representations being asserted, and are “likely to exhibit a low
degree of care when purchasing low-priced, everyday items,” “like bed sheets
sold by a mass-market retailer.”

A reasonable consumer is “unlikely to be familiar with the
intricacies of textile manufacturing.” [Moore said that reasonable
consumers know how honey is made; its error was to assume that the knowledge
“bees collect pollen” would somehow translate to “and therefore they’d likely
collect lots of different kinds of pollen,” when people generally don’t give
that much thought to that kind of background information.] “Realistically, a reasonable
consumer’s knowledge of textile manufacturing is likely limited to the fact
that a higher thread count listed on packaging indicates a higher quality
sheet.”

The court added: “Allegations of literal falsity are the
most actionable variety of consumer protection claims on California’s spectrum
of actionability.” True, some claims can be so clearly false as to avoid
deception. But Panelli’s claims weren’t unreasonable or fanciful:

While a vast majority of consumers
are, for instance, familiar with the biological nature of bees so that it would
be unreasonable for a consumer to think honey was sourced from a single type of
flower, they likely would not have that same kind of baseline knowledge about
textile manufacturing. Neither common knowledge nor common sense would cause a
Target shopper to question the veracity of the claim on the bed sheet’s label
that the product was of 800 thread count.

The court declined to create a situation where “manufacturers
would face no liability for false advertising so long as the claims were wholly
false—regardless of whether this falsity is generally knowable to consumers.”

from Blogger https://tushnet.blogspot.com/2026/04/an-impossible-claim-is-literally-false.html


Panel 6: Unanticipated Consequences of New Technologies and Practices

29th Annual BTLJ-BCLT Spring Symposium: Origins, Evolution,
and Possible Futures of the 1976 Copyright Act

Jennifer Urban, UC Berkeley Law (Speaker and Moderator)

Daniel Gervais, Vanderbilt Law: Copyright act as
undergirding licensing architectures for AI. © rights are inert without
exchange. A reproduction right is sterile if the transaction costs of licensing
exceed the value of any license. Ghost architecture of the statute: the
licensing machinery built around it by antitrust enforcement/courts, and
extended by subsequent legislative initiative. Why a mix of compulsory
licenses, court-supervised blanket licenses, CMOs, and congressionally
sponsored organizations? Reflects judgments about when markets will work to
create licensing regimes on their own and when they won’t.

Congress understood that certain uses would produce market
failures if left entirely to the private system—difficulty of advance licensing
millions of daily transactions, supervising individual uses. Compulsory licenses
are not concessions to users at the expense of rightsholders; they are a
mechanism to have market activities occur when otherwise they’d be unlikely to
occur at all—tech would be frozen out of the market or rightsowners would be
uncompensated. ASCAP, BMI, SESAC allowed for licensing without compulsory licensing.

The initial compulsory license was created to prevent monopolization,
not to subsidize record companies. The streaming era revealed some weaknesses,
including “address unknown” filings to the Copyright Office, demonstrating a
systemic breakdown. The MMA in 2018 tried to address that failure with a
mandatory administrator of a blanket license, reducing the loophole and
creating a matching database to find authors & deal with unclaimed
royalties.

SoundExchange is neither voluntary nor a traditional
intermediary—does not require opt-in. The compulsory license is one half of the
architecture. The other is voluntary licensing in text & images, showing
judicial calibration of licensing market. This played out with the CCC and fair
use litigation—the early fortunes of CCC were modest without a judicial determination
that licensing was important. Texaco (2d Cir. 1994) changed that landscape by
holding that systematic copying of journal articles was not fair use.

AI is a stress test b/c of the scale of reproduction beyond any
existing licensing system. International system: no national licensing scheme
can avoid the possibility of arbitrage. The licensing system is starting to
respond for high-value sources like NYT. CCC has expanded to cover AI uses.
Other countries are introducing AI specific licenses. Voluntary arrangements
can try to fill that space even before legislation.

History in US: incremental expansion of compulsory license
as scale increases. American experience counsels against using a levy to
respond: AHRA’s statutory royalty on digital audio recording devices and blank
media seemed designed well but the tech passed through the market like a comet.

How can a system built on territoriality deal with cross-border
content? Reciprocal agreements, through voluntary licensing. Each adaptation is
slower and imperfect but it does happen. AI: most demanding test b/c of scale,
speed, and international complexity.

Matthew Sag, Emory Law: Nonconsumptive uses. © is built on
the metaphor of the printing press. Copyright provides incentives to authors
whose works would otherwise be freely copied on first publication. Thus,
reproduction is the locus of exchange b/t reader and author, where the toll can
be imposed. But what if there are no readers?

We have seen a series of copy-reliant technologies—search engines,
plagiarism detection, machine learning, generative AI. They necessarily copy
works but usually don’t deliver prior original expression to any human reader. This
issue wasn’t anticipated in 1976, even if AI authorship clearly was.

Should hidden intermediate copies be permissible if no one
ever reads them? Tension b/t 2 intuitions—copying (the technical act) is
infringement versus copyright’s purpose is to protect expression communicated
to audiences—consider how we judge substantial similarity, or give rights over
public performance.

His solution: nonexpressive use is fair use. When he
started, he mostly had software reverse engineering in mind, then plagiarism
detection and Google Books. Gen AI produces outputs that might compete with
human-made expressive works, which changes the politics entirely, if not the
law.

Courts have generally held that technical copying is fair
use when the copying isn’t communicating to the public. Bartz & Kadrey both
found model training to be highly transformative fair use; Ross Intelligence
disagreed and is currently under review by the 3d Circuit. If that case goes the other
way, it may be on narrow grounds related to the 4th factor.

Where is this heading? Courts have done a pretty reasonable
job with the nonexpressive use cases. But we don’t have to rely on courts.
Netcom: an analogous issue; court did a great job recognizing the insanity of
holding infrastructure providers liable for passive passthrough, and
articulated volitional conduct requirement. Congress also stepped in and gave
us 512, modeled on Netcom but more predictable than the volitional/nonvolitional
conduct line. A functional Congress could provide additional clarity.

To that end: proposes revising 107 to recognize that copying
works to extract unprotected information or enable nonexpressive computational
functions is highly transformative—not per se fair use, b/c there should be room for
courts to evaluate the whole picture.

Lots of people perceive licensing as a solution for LLM
training. ASCAP is amazing, efficient, but they don’t pay anyone a check for
less than $100 or direct deposit for less than $1. It works b/c the authors w/
works of negligible value don’t get paid. But we have no way of tracing which
individual works are important to the system. We’d have to divide revenues
among a lot of people, not just songwriters, book authors, but everyone who
ever posted on social media or commented on Stack Overflow. That’s billions of
claimants—even a very large sum of money divided by billions turns into tiny
payments dwarfed by the transaction costs of distributing them.
You could still send checks to large content owners, but those are precisely
the folks who can do deals w/large companies. This would just be a tax system.
If you want to tax LLMs and redistribute $ for worthy causes, that’s a great
idea, but tax!
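[To make the payout arithmetic concrete—hypothetical numbers, mine, not Sag’s: even a headline-sized license pool evaporates per capita:

\[
\frac{\$1{,}000{,}000{,}000 \text{ (hypothetical annual pool)}}{2{,}000{,}000{,}000 \text{ claimants}} = \$0.50 \text{ per claimant per year},
\]

below even ASCAP’s $1 direct-deposit floor, before spending anything on matching works to owners.]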

Rebecca Tushnet, Harvard Law School: And now for something
completely different!

When I started my career writing about fan fiction, which
involves fans writing, for example, the further adventures of Kirk and Spock from
Star Trek or Mulder and Scully from the X-Files, people in the legal community were
often surprised that I cared—wasn’t this a bunch of infringing derivative works? Now, when
I talk about fan fiction, people in the legal community are often surprised
that I care because noncommercial fanworks seem obviously transformative and
fair, or at least obviously not going to come under legal threat. Chloe Zhao
directs movies for Marvel and talks about her fan fiction; the actress who
plays Dr. Javadi on The Pitt says that her character is a regular girl and gives as a key
example that she’s on AO3, which she expects you to know means the Archive of
Our Own. My students have never known a world in which fan fiction was hard to
find. I’m more pleased to be in the latter situation, but it does make me feel
a bit old! And given that noncommercial fanworks were not on the radar of the
drafters of the Copyright Act—even if some of them almost certainly knew about
science fiction fan culture—my placement on this panel makes sense.

A bit about my relationship with fanworks: a founder and
presently co-chair of the legal committee of the Organization for Transformative Works,
or OTW. Mission: to support and defend noncommercial fanworks, explicitly
framed as transformative both in the legal copyright sense and in the broader
sense of being different in exciting ways. One of the ideas was that we’d try
to show up in the rooms where it happens to give fans a voice in policy and
legal discussions as creators, the way the EFF does for general internet
freedom.

Today, the OTW’s Archive of Our Own hosts over seventeen
million fanworks—works based on existing media. We’re a Library of Congress
American heritage site. The OTW also supports a wiki, Fanlore, dedicated to
fan-related topics; a peer-reviewed open-access journal named Transformative
Works and Cultures; and a legal advocacy project to help protect and defend fan
works from legal challenge and commercial exploitation. The OTW routinely
submits amicus briefs and policy comments to courts, legislatures, and
regulators regarding copyright, trademark, and right-of-publicity issues.

One of our most longstanding projects has been seeking and
obtaining exemptions from 1201 for noncommercial remix videomakers—vidders or
fan editors. Our exemption currently allows noncommercial remixers to rip clips
of video from DVDs, Blu-Ray and streaming video in order to make their own
transformative works.  In the 1201
exemption process the Copyright Office perceives its job to be narrowing your
requested exemption as much as possible. Still, we showed that noncommercial
fan videos were regularly fair use and that 1201 hampered fans’ ability to make
those fair uses. We’ve obtained renewal of those exemptions several times.

Some lessons:

First, there is no substitute in the modern state for
organizations that can speak the language of regulation. Citizens must organize
or they will be ignored. But a small group of people can effectively do that!
Very few of the more radical anti-copyright, anti-capitalist people who think
the OTW is a liberal (derogatory) organization are in this room, but I think we’ve
had a productive effect on the overall conversation that includes them.

Second: It is not good for everyday practices to get
fundamentally out of sync with formal law. If the everyday practices are acceptable
and even good, the formal law ought to recognize that, and we can use fair use to
do so.

There are those who say that fanworks are tolerated
infringement. Some of those people are probably in this room. This is at best
an argument that the formal law sweeps way too broadly under any justification
you want to give for copyright rights—yes, the main “tolerators” are big
conglomerates, simply because as we heard yesterday they’re the source of most
of the widely disseminated for-profit copyrighted works we have today, but
there’s a reason that even the individual authors who say they oppose fanworks
haven’t actually sued over noncommercial fanworks.

In addition, the “tolerated infringement” argument is a
profound indictment of statutory damages specifically. If the exclusivity in a
copyrighted work is infringed by a noncommercial, nonreproductive work, and
that infringement is subject to up to $150,000 in statutory damages, the damage
ought to be bad, not just an annoyance. Pam Samuelson has always had the
right of it and we heard yesterday various forms of agreement with her position
that statutory damages have been harmful to the rest of the copyright scheme.

Third, and more broadly: noncommercial fanworks are good
because they offer a distinct field for creative endeavors, separate from the
copyright-enabled commercial system. They are both artisanal and widely
distributed, making them an important alternative form of expression. Noncommercial
works are fundamentally different in the aggregate from commercial works. They
can be poetry; 100-word drabbles; short stories; 20,000-word stories;
million-word stories; other things there’s not much commercial market for. This
is part of what makes fanworks worth preserving and protecting: they are part
of the background of a thriving modern creative ecosystem.

Noncommerciality complicates questions around blanket
licensing: don’t want money, don’t want to participate in the commercial
system.

In addition and relatedly, fan cultures have a long
connection to queer writing: fan fiction is inherently about difference/the
fact that the story could be different/possibility—encourages both repetition
with difference and experimentation, which allows some people to open
themselves to various possibilities in the rest of their lives. If you want to
cry about the power of creativity, read the stories we collected for our
submission to the NTIA’s inquiry into the legal framework for remixes: the
power of making stories and other creative works
within a community that is excited to hear everyone speak has literally saved
lives.

Beyond its transformative effects on people,
noncommercial fandom is a huge boon to creativity generally. Professors Andrew
Torrance and Eric von Hippel have identified “innovation wetlands”: largely
noncommercial spaces in which individuals innovate that can easily be destroyed
by laws aimed at large, commercial entities, unless those individuals are
specifically considered in the process of legal reform.   Their description fits remix cultures well:

The practice of innovation by
individuals prominently involves factors important to “human flourishing,” such
as exercise of competence, meaningful engagement, and self-expression. In
addition, the innovations individuals create often diffuse to peers who gain
value from them …. 

Innovation requires that individuals have rights to make,
use, and share their new creations, collaborating with others to improve them,
as remix authors do. Given the small scale and limited resources of most individuals, “[a]nything that raises their innovation costs can therefore have a major deterrent effect.”

Things I have personally been around for: the adoption of curated folksonomy/AO3-style tags in publishing. New story types and tropes: “five things that never happened” stories that explore different scenarios for characters and together illustrate something about the fan author’s view of those characters; the fan-invented “omegaverse” tropes about humans with certain animalistic characteristics.

If you forget about noncommercial works in your creativity
policy, you enable the destruction of vital diversity and seed corn for the
next generation.

Finally, a coda with another view of internationalism: The US was, at the time of the OTW’s founding nearly twenty years ago, the only place where we could count on a strong and flexible fair use defense. This has somewhat
changed, including by adoption of fair use in several other jurisdictions,
Canada’s noncommercial user-generated content exception, and most recently by
greater European flexibility on pastiche, but fair use’s impact is still really
notable. American hegemony meant that we didn’t even need a term like “the Brussels
effect” for the effect of American fair use and safe harbor laws, but it really
did seem like the internet was another American territory. That’s changing,
more every day, but we are probably going to miss it when it’s gone.

Jennifer Urban: In-formalization, term extension, and orphan
works. Although there was a near-consensus and energy to address it, c. 2004–2015, efforts were ultimately not a rousing success.

Orphan works: policy questions are related to your sense of
who is an author & what authors generally want. Orphan=owner can’t be
identified and someone wants to make use of a work in a manner that requires
the owner’s permission. 76 Act increased the number of orphan works by removing
the formalities.  Widespread agreement
thus on the definition and scope of the problem

Solution space: limitations on injunctive relief, especially when a significant amount of original expression was added; limitations
on damage remedies (US proposals); statutory exceptions (EU directive w/r/t
making available and reproduction rights); compensation to later-appearing ©
owners (reasonable compensation, extended collective licensing).

Conditions on relief (proposed): reasonably diligent search; identify
use as orphan work on the use itself (notice requirement); register use,
potentially with waiting period before use; takedown/stop use upon appearance
of © owners; pay compensation to later-appearing owner; provide attribution to
later-appearing owner; categorical limitation on type of users (e.g., EU ©
Directive: educational, library, & public heritage institutions &
public broadcasters).

Why so complicated? Different uses are different:
archive/library digitization is sensitive to search costs; takedown on notice
is more feasible; licensing fees may be prohibitive at scale. Derivative
works/smaller scale: more extensive search may be more feasible but
takedown/removal not feasible and injunctive relief is prohibitive. Where you’re willing to compromise depends on where you sit.

Similarly for copyright owners, © owners like
photographers/illustrators were worried they’d be hard to find & usually
don’t need to use orphan works themselves. Filmmakers are easier to find and
more likely to want to use orphan works.

Limited effectiveness: administrative/centralized licensing
adopted in Canada, Japan, Korea, Hungary, UK—fewer than 1,000 licenses total from 1999 to 2015. Expensive, not productive. [CASE Act looks better than that!]

2021 EU directive followup found very limited use of EUIPO
database and very limited use overall by most eligible organizations. 70% of
entries in database registered by British Library, and number dropped hugely
after Brexit. Lots of complaints about strict search requirements.

Fair use case law also developed to allow a lot of the big
data uses; a risk management question. People worried about orphan works
protection cabining fair use, even with a savings clause, and that slowed
momentum.

Where are we now? Substantial strides in digitization of
Office records, which is helpful. But records remaining are in the “sour” spot
of 1945-1978. Later-appearing © owner can still register and then sue. Risk
aversion is still an issue. Gatekeepers for small creators, libraries—people making
decisions about risk aren’t necessarily fully economically rational but have
practical effects. Same thing with fair use. Occasionally, courts have
considered market unavailability in the fair use analysis, but that brings in
gatekeepers/risk aversion, leading to “clearance required” policies. And the
definition of an orphan work is that it can’t be cleared.

AI raises similar but maybe harder problems.

Urban to RT: how does AI training compensation come into
this?

A: it’s incommensurable. It’s like offering me payment after
I had you over for dinner at my house. There’s nothing immoral about
restaurants but that’s not the kind of relationship I was seeking.

Q about 103(b) and fanworks: if they’re fair use, then 103(b) doesn’t come into it. Fan authors sometimes worry about commercial misappropriation: they have a copyright in their fair use fanworks, so they can try to shut down unauthorized commercial uses, and they also aren’t responsible for such unauthorized uses. Goldsmith even makes this a bit clearer by establishing that the analysis goes use by use; a fanwork created for noncommercial purposes is fair regardless of whether deliberate monetization by the creator would be unfair.

Urban to Sag: how does international nature of training
affect this?

Sag: the international scene is quite complicated. Peter Yu
& Sag survey the global scene—different jurisdictions take very different
approaches, but each trying to (1) make a pathway for legal text data mining,
(2) have some protections for © owners. What you see is difference in
regulatory style. EU is far more prescriptive in DSM directive. There’s clarity
there; some others go further than fair use, but may require, e.g., not just
noncommerciality but affiliation w/a library or university. People who think we
can put the genie back in the bottle are likely wrong, but even if that’s what
you wanted to do, a lot of this activity is portable—you can go to other
jurisdictions to train. And that fact of int’l competition should be
recognized. Hard to see how a licensing system or tax & redistribution system
could work on an int’l basis. We don’t have the political competence to do it here
on a national basis, but they might be able to do it in the EU. Only a handful
of jurisdictions have TDM protections, but they account for 52% of the world’s GDP. The
fact that we allow it in the US isn’t an outlier among our peers.

Gervais: voluntary licensing can deal with crossborder
issues. Collective or individual licenses can say something like “parties don’t
agree on current scope of fair use” but contracts can manage that risk up to a
point, waiting until there’s more coherence in the courts.

RT: maybe we should bring Kalshi in and just use prediction
markets. [joke!]

Urban: if there’s nobody to pay, then the orphan works
schemes involving collection don’t support the © system.

Q: about licenses b/t major copyright owners and AI
companies: will they narrow the scope of fair use?

Sag: I don’t think those licenses should narrow the scope of
fair use, though the editor of the Atlantic did say that he entered into one
such license to prove the existence/validity of the licensing market. A few
notes: most of the licenses, as far as he can tell, are not just for AI
training but for retrieval-augmented generation—the economics and copyright implications of sending an AI agent onto the web to gather materials and assemble them into a report are quite different from the AI training cases, and it makes sense to
license that activity. Mostly they’re licensing access, which you can see most
easily with Reddit, which doesn’t own © in content but charges $60 million/year
for firehose access. That’s fine, though it shows the need to update the robots.txt protocol, but these deals don’t prove that licensing is a general training solution. We’ll see more of those licensing deals and they’re good, but he hopes courts don’t jump to a “market for training.”
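
[For readers who haven’t poked at robots.txt: it expresses only per-crawler allow/disallow rules, so AI opt-outs today work by giving training crawlers their own tokens—e.g., OpenAI’s GPTBot, or Google-Extended as distinct from search’s Googlebot—and compliance is voluntary. A sketch of a site policy; the crawler tokens are real, but the policy is invented:]

    # robots.txt -- illustrative policy only
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: *
    Allow: /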

Litman to Sag: instead of amending fair use to presume training
highly transformative, consider moving away from fair use and avoid “transformative,”
which attracts additional political, emotional, religious opposition that you
don’t need.

from Blogger https://tushnet.blogspot.com/2026/04/panel-6-unanticipated-consequences-of.html


Panel 5: Copyrightable Subject Matter and the Special Problem of Software

29th
Annual BTLJ-BCLT Spring Symposium: Origins, Evolution, and Possible Futures of
the 1976 Copyright Act

Pamela Samuelson, UC Berkeley Law (Moderator and Speaker):
discusses history (in which she was intimately involved as an intellectual powerhouse).
From uncertainty over whether software was protectable to Whelan, which gave very broad protection; it took 6 years for the Second Circuit to respond, starting with Baker v. Selden, to keep functional elements out of © protection. Merger,
scenes a faire, 102(b), fair use—doctrinal cocktails, in the words of Molly van
Houweling.

Samuelson initially thought sui generis protection for
software would be better, but admits error: © did a really good job and gave an
international standard that’s enabled some stability.

Jule Sigall, former Microsoft: CONTU was doing its work as
Microsoft was just getting started. Trade secrets, patents, and copyrights do
different work at different eras of software. 1980: PC era—rapid rise of
copyright’s relevance. Business model: product licenses. Practical control:
EULA, shrinkwrap, key disc/dongle. Copyright’s salience for executives was high
for how they were going to recover fixed cost investment. This was the model
CONTU had in mind when it decided to embrace software ©: you make a product
& send it out through distribution channels not unlike books.

1990s: WWW. Easier to send software as bits. Business model
if people won’t necessarily pay for copies: hardware bundling (Apple; PC with independent
OEMs); ad supported. Practical control: B2B contracts. Copyright salience:
medium.

2000s: cloud and OSS: business model: subscription/SaaS/consulting.
Practical control: server access control/OSS license; not much a pirate copy
will do for you. Copyright salience: medium. Antipiracy efforts shifted to
antifraud—scammers would purport to sell subscriptions. Open source was a
different path—add consulting services to OS or build services using OS. That
does depend on © but the most prevalent ©-based model was making software as
accessible as possible and using © to ensure it was only used/redistributed in
certain ways.

2010s: mobile era/app ecosystem. Business model: app store
sales/subscriptions—you can, as in the 80s, get paid for a copy. Practical
control: platform control/cloud services. © salience: low.

2020s: AI. Business model: ?? Practical control: ??
Copyright salience: None? [Real underpants gnomes vibes.] More software will be
developed by more people than ever before. The tools allow people of all kinds
to make software, and they allow software to make software. Maybe we are back
where we started before CONTU with unclear © coverage.

Clark Asay, BYU Law: reasons for concern, but countervailing
forces/reasons for optimism. FOSS licenses presuppose copyrightable code:
copyleft, attribution, etc. W/o © the governance architecture becomes much less
reliable. In the context of other developments that threaten open source—MongoDB
and Elasticsearch have abandoned OS; monetization has always been a question for
companies that can’t directly monetize software. AI agents: those agents are creating
tons of software and making pull requests/contributions to OS products w/o
human review, which are being overwhelmed in some cases. Some projects are
closing off in response. Open collaboration norms may be eroding from multiple
directions simultaneously.

Might push us more in direction of trade secrecy and
possibly patents. A more closed, fragmented software ecosystem and possibly AI
system. But developers’ desire to influence the AI stack is likely to keep the ecosystem at least partially open.

A. Feder Cooper, Yale University (co-author Mark Lemley,
Stanford Law School): Model weights that give a possibility but not a certainty
of generating infringing output: is that a “copy”? Relates to memorization
debate. It’s common to describe models as learning statistical correlations or
patterns: that’s not wrong but it oversimplifies how info is represented.
Another important part: how the LLM is used. Some methods of selecting outputs
are deterministic—same input, same output; many are stochastic. Variability in outputs derives not from the model itself but from how the model is used in decoding.

Memorization is when, based on training, the model places a really high concentration of probability on particular sequences. The model is
still probabilistic, but the distribution is so sharply peaked that one
sequence (or small number of sequences) dominates. This is related to
compression: memorization means that Ted Chiang’s “blurry JPEG of the web” is sometimes
not blurry at all for certain chunks. Memorization is pretty mysterious still—keeps
giving new insights about LLM behavior. Not a bug; it’s far too interesting and
complicated.
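
[A toy illustration of the deterministic/stochastic decoding point and of what a sharply peaked distribution means in practice—the numbers here are invented, and a real model emits a distribution over a roughly 50,000-token vocabulary at every step:]

    import random

    # Invented next-token distribution at one decoding step; a sharply
    # peaked distribution concentrates almost all probability on one token.
    peaked = {"Potter": 0.97, "Plotter": 0.02, "Painter": 0.01}

    def greedy(dist):
        # Deterministic decoding: the same input always yields the same token.
        return max(dist, key=dist.get)

    def sample(dist):
        # Stochastic decoding: output varies run to run, weighted by
        # probability, so memorized text shows up often but not every time.
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    print(greedy(peaked))                         # always "Potter"
    draws = [sample(peaked) for _ in range(1000)]
    print(draws.count("Potter"))                  # roughly 970 of 1,000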

What is a copy? The statute’s answer is pretty incoherent: copies
are material objects in which a work is fixed. (The “by or under the authority
of the © owner” can’t be taken seriously for infringement by copying. We used
the same definitions for protectability and infringement, so courts just ignore
that part for infringement.) In litigation, parties take extreme positions—no memorization,
or models are just a collage. Neither of these is right, and sometimes not even partially right.

We can extract a near reproduction of Harry Potter from Meta’s Llama with a short prompt and deterministic decoding. That’s an extreme
result—extraction is possible from some models for some works and not others.
Most of our experiments measure whether verbatim memorization is occurring; we
can get more if we accept small changes like extra spaces or commas in place of
semicolons. Sometimes we needed adversarial strategies but sometimes not. None of that work changes model weights, but changing the weights can also extract more works.
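
[Not the authors’ actual metric, but a sketch of what measuring “verbatim up to small changes” can look like—normalize the trivial divergences, then score character-level overlap:]

    import difflib
    import re

    def normalize(text):
        # Collapse runs of whitespace and treat semicolons as commas--the
        # kinds of trivial divergence mentioned above.
        return re.sub(r"\s+", " ", text.strip()).replace(";", ",")

    def near_verbatim(reference, output, threshold=0.95):
        # SequenceMatcher ratio: 1.0 is an exact match after normalization.
        ratio = difflib.SequenceMatcher(
            None, normalize(reference), normalize(output)
        ).ratio()
        return ratio, ratio >= threshold

    ref = "It was the best of times; it was the worst of times."
    out = "It was the best of times,  it was the worst of  times."
    print(near_verbatim(ref, out))  # (1.0, True): only trivial changes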

Jane Ginsburg et al. have shown that fine-tuning on public
domain works can reveal memorization from previously-trained-on © works.

So is a model a copy fixed in a tangible medium of
expression? That’s still complicated! You can make a copy by storing parts in
ones & zeros. But you can’t say that Microsoft Word encodes War &
Peace. Models aren’t like either of those things. Some of the memorization isn’t
deterministic—you might only get a memorized copy one in 1000 times. Are the
other 999 “stored” in the model? That would involve more copies stored than
there are atoms in the universe.
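
[Back-of-envelope arithmetic for the atoms point, with assumed numbers—a 50,000-token vocabulary V and 100-token outputs n:

\[
V^{n} = 50{,}000^{100} = (5 \times 10^{4})^{100} \approx 10^{470} \gg 10^{80}\ \text{(atoms in the observable universe)}
\]

so “every sequence the model might emit is stored in it” can’t be the test.]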

Closest examples in existing law: Kelly v. Chicago Park
District—garden isn’t fixed b/c it isn’t deterministic; video games where
content is generated from a number of fixed options. Micro Star: the new levels
aren’t really “in the game.” Nor would we say that all the possibilities
currently exist. So maybe the answer is predictability: if the model weights
can easily generate the work, functionally there’s a copy in the model. If it’s
merely possible to extract the work through effort, it’s not a copy. Why it
matters: if there’s a copy in the model, then copying the model is making a
copy of the work. Maybe that’s fair use (via intermediate use) but we’d have to
figure it out.

Doesn’t love the conclusion, but this is where the empirical
evidence leads.

Samuelson for Sigall: you didn’t say much about patents—Whelan
might be affected by the idea that patents weren’t available; then patents
started becoming available, making thick © less attractive.

Sigall: late 90s was a marriage of two historical trends: if
you want to go the IP route for software, patents might be more efficient/useful
b/c there’s also a risk with seeking ©. Patents and © come with embedded
strategic choices about your business. Book: Capitalism w/o Capital: many of the most successful companies today have intangible
assets, not tangible assets—a lot of the benefit is taking advantage of
synergies and spillovers in intangible assets. IP can interrupt and interfere
w/those synergies & spillovers so it might not be optimal—businesses can
capitalize on other aspects instead of IP.

Samuelson for Asay: what do you do w/the Office’s policy
requiring you to ID the parts that are AI-generated and disclaim authorship?
Will people do that or just pretend that they authored the whole thing?

Asay: Unworkable! Possible that developers will just
continue as usual and ignore © complications, slapping license on even if code
is AI-generated; that’s somebody else’s problem.

Sag: how do you deal with misuse of your work as evidence
that LLMs don’t learn, they copy?

Cooper: Not great feeling! The research I do is careful and the
papers are long; that’s not an accurate gloss of what models are doing. But it’s
important to do the work to show information about model behavior that we didn’t
know before.

Q: is Harry Potter an outlier given how many copies there
are online?

A: It’s astonishing still to get a book from a fragmentary
prompt; not all models do this and certainly not all the time, but other books
can be derived; it’s hard to connect the dots from training data. Tried to do
it with Coates’ “The Case for Reparations”—also got that from the same model—it’s
very famous but not HP famous.

Cathy Gellis: isn’t © a background assumption for these
business models even if you aren’t “relying” on it? If © didn’t exist, would
these business models work?

Sigall: it’s a behavioral Q—what behavior is © shaping? It’s certainly possible that © affects what businesses do with particular software. It’s there, but the Q is how do you use that fact as a business in
your strategic choices? Microsoft housed its antipiracy department in the
marketing department, not legal, because the goal wasn’t really to stop piracy
but to get them to use Microsoft software. Other industries put antipiracy
efforts in legal. Trying to understand actual behavior of users of their works
and adapt to that. [This may also be relevant to the shift to streaming
video/music!]

Brauneis: suggests that Office’s disclosure form isn’t
onerous; doesn’t require you to ID which lines are AI-generated, so you should
disclose and figure it out later.

Asay: may be true, but issue in the industry is
norms/perceptions about copyrightability—that’s more important to behavior than
technicalities of registration. [So what he’s saying is that coders have …
always gone on vibes?]

Samuelson: A bit of an old problem with SaaS. Oracle started
with a PD work and then made a derivative work from it; trying to sort the parts that were protected from those that weren’t was already a task.

Bracha: you said that you were wrong about sui generis
protection for software because after that didn’t happen, courts rolled up
their sleeves and did their job of developing relevant principles. Do you think
that courts would do the same thing today?

Samuelson: good point—we sort of got sui generis protection
w/in copyright.

Nimmer: works that
incorporate works from the USG should in theory disclose that, even if it’s a
paragraph quote; they don’t and it’s been a nonissue. So it could also work for
AI.  

from Blogger https://tushnet.blogspot.com/2026/04/panel-5-copyrightable-subject-matter.html


Copyright Act Panel 4: The Shifting Line Between Federal and State Protection

29th Annual BTLJ-BCLT Spring Symposium: Origins, Evolution, and Possible Futures of the 1976 Copyright Act

R. Anthony Reese, UCI School of Law

Fundamental change: eliminating common-law copyright for
unpublished works and unifying the regime at creation. Contemporaries like Ralph
Sharp Brown saw it as a huge, pivotal change. Now we take it as easy background
principle.

Federal law did provide a cause of action before 1909 for copying of unpublished manuscripts. But it was a procedural door,
not substantive. More significantly, 1909 Act dropped that but did allow
certain types of unpublished works to obtain federal © by registration. Categories
where works were commonly performed/exhibited rather than being published. Most
people think of this as a footnote, but this new option turned out to mark a
significant shift in the state/federal protection divide: for lecture, dramatic/musical
composition, motion picture, photograph, work of art, drawing.

Registration data 1929-77: 25% of total nonrenewal
registrations were for unpublished works. That’s a big deal! Understates
importance because of the limit on classes of eligible works. 28% of all
registrations were musical works, and 83% of those were unpublished works by
1977 though it took many years to get there. Similar story with drama
registrations (88% of those were unpublished). For other classes it was less
significant (except for lectures, at 100%); 46% of scientific drawings and 29%
of photographs from 1954-1977 that were registered were unpublished.

Office turned down lots of requests to register unpublished
books. There was also uncertainty about what constituted publication. And
perpetual state law protection through broadcast, performance, and even
potentially distributing phonorecords was a worry: owners could economically
exploit their works in front of millions w/o a © bargain/any duration endpoint.

Considered alternatives: extend voluntary registration to
all classes; make public dissemination, not publication, the dividing line (“communicating a work to the public visually or acoustically by any method and in any form”); or eliminate state protection for unpublished works and provide federal ©
from creation. There were various views. Learned Hand favored the middle
alternative for undisseminated works, provided there was some time limit on
state law protection. Concern w/infinite duration was supplemented w/concern
that there was no fair use under state law (which no case ever held but there
was speculation), and concerns about evading the compulsory mechanical license.

When they chose the final option, they did not apply any
national origin rules; 105 was adjusted to extend to both published and
unpublished works of the US gov’t. And of course duration rules had to change;
added to the push for life+50 once dissemination could no longer serve as the
universal starting point for a fixed term. Had to figure out what to do with
pre-1972 sound recordings too; didn’t get taken up into federal © then b/c of
larger lack of certainty about the topic.

Unchanged substantive law: © attached automatically on creation, w/o formalities, but a published work would enter the public domain unless formalities were complied with. Transfers also changed: divisible, writing
required, recordation, subject to termination. Improved nonmonetary remedies.
Every unpublished work by every person who died long enough ago now enters the
public domain—the initial group in 2002 was the largest ever expansion of the
public domain. All the statutory limits now apply to unpublished works.
Statutory damages and attorneys’ fees for post-infringement registration too. The
rights are 106 rights despite suggestions in common law that “any use” would
infringe; idea/expression applies despite suggestion in English law that describing
the Queen’s unpublished drawings would infringe; transferring the only copy
would no longer transfer the ©.

There are still a lot of registrations of unpublished works:
38% of all registrations from 1978–2022. Mostly monographs; 27% of those were unpublished, as were 65% of performing arts, 38% of visual arts, and 64% of sound recordings registrations. These registrations are no
longer necessary but provide the advantages of registration.

Subsequent developments: clarified that fair use covered
unpublished works; resolved split about whether sale of pre-1972 phonorecords was publication, for musical works first and then literary/dramatic works.
Finally brought pre-1972 sound recordings into sui generis protection, closing
the circle/finally removing the last bit of rubble from the 76 Act’s
destruction of the wall between published/unpublished works—nothing left for
common law copyright to protect in fixed works.

Marketa Trimble, UNLV William S. Boyd School of Law: Would we
expect state law diversity? Legislative laboratory?

Some preempted, often after a long time: Cal resale
royalties enacted 76, held preempted 2018. NY standardized testing act; PA
Feature Motion Picture Fair Business Practices Law. State statutes protecting
rights to unfixed performances ok. Also, gov’t edicts doctrine matters to state
law—state can’t claim © of materials produced in course of duties of judges and
legislatures.

State laws are very outdated. They typically list “copyrighting,” “causing to be copyrighted,” “acquiring copyright,” or “securing copyright” as distinct acts, rather than “registering.” Plenty of state laws assert “copyright” in state laws and other types of works, such as works developed by a county board of education (CA), data processing software created by gov’t agencies, and geological/topographic surveys of PA. Also an Arkansas history textbook; official insignia for MD farm products; highway maps of Ohio; and the Oregon Blue Book—insight into what legislators feel a need to claim.

Since Ga v. Public.Resource.Org, some states have eliminated their provisions—NY, MD, ME—and Montana and OK already didn’t have them. VA authorizes release of all potentially ©able materials under a CC or Open Source Initiative license.

17 USC 1401 specifies that the person who has the exclusive
right to reproduce a sound recording under the laws of any State as of the date
before the date of enactment is the federal owner of pre-1972 sound recording
rights. Recognizes that state laws varied on ownership. But what if the state
laws conflict? There is no choice of law provision, and state laws do vary.
Mostly it’s the label, whether framed that way in state decisions (CA) or
statutes (AZ). WV has a different rule: label unless there’s no written
contract, in which case it’s performers.

State law can also enhance protection of authors: Cal law
purports to require generative AI developers to post a summary of the datasets
used to develop a system, including sources/owners. MD bill prohibits inclusion
in contract for state public art a requirement that artist waive moral rights
under VARA. Another bill limits admissibility in criminal/juvenile proceedings
of uses of “creative expression of a defendant or respondent.” Court is
supposed to figure out whether expression was literal or fictional. Not
admissible for mens rea, but could be used to decide on referral to mental health
services/diversion programs.

Could protect users too, for example by preventing use of © as a means to block access to public records: a series of new state bills require, e.g., public access to “learning materials.” Digital lending of e-books; MD’s bill
was invalidated as preempted, but CT adopted a new bill prohibiting libraries
from entering into contracts or license agreements for ebooks and digital
audiobooks that contain certain restrictions.

Some states have considered laws on what demand letters
should look like—abuse of rights provisions. And NV has an act creating a
requirement for law enforcement agencies to adopt written policies and
procedures governing performance of © works by peace officers while on duty. (B/c
of a bizarre technique used where cops blasted music in the false belief that
this would cause videos of their behavior to be unpostable online.)

Generative AI legislation: ownership of model-generated content by the person who provides the input; NY’s AI transparency for journalism act: disclosure obligations for content accessed by AI developer crawlers and for the identity of crawlers.

Last example: help enforcing copyright outside of the US.
Wash, LA, Utah—not clear how they fare after recent SCt decisions.

Are state legislators becoming more ambitious? Is the public becoming more attuned to ©?

Guy Rub, Temple University School of Law: Contracts—“breach
of contract” was removed from the text of 301, which now needs interpretation.
Leg. History says that “equivalent to ©” was intended to be “the clearest and
most unequivocal language possible, so as to foreclose any conceivable misinterpretation.”
Whoops.

What is so hard about “equivalent”? © is entangled w/state
commercial law. Contracts can ignore subject matter, rights, and defenses. Also
depends on view of purpose of ©: delicate balance of competing interests of
society and authors, v. exclusive right for benefit of authors.

Most common litigated pattern is idea submission; others are
B2B transactions. After that there’s a lot of variety—contracting around fair
use is extremely rare and mostly limited to reverse engineering. Is a promise
an extra element? It’s a formalistic test. Courts also say that not every technical
difference will suffice—must create meaningful distinction. Has to be about the
nature of the cause of action.

Two approaches: majority: contracts are bilateral and thus
not equivalent to property rights; versus minority approach: no, contracts can’t
regulate actions that are exclusive rights under ©. ProCD is the most famous
majority approach case, but not the first. Most appellate courts adopt it one
way or another; only the 6th Circuit explicitly rejected it until recently.

Why so popular? It’s easy to apply and there are no great
alternatives. But then the Second Circuit decided the Genius/Google case, where Genius had browsewrap saying you can’t scrape; Google won: the contract limited reproduction/public display and was thus equivalent to © and preempted. When
Genius sought cert, the SG argued that browsewrap was different (implicitly,
not real consent).

Why does it matter? B/c of plenty of other attempts to limit
scraping with browsewrap.

Conflict preemption might be the answer! Section 301(a)
might ask the wrong question; ask instead about interference w/the © system/obstacle to the goals of Congress. In re Jackson (2d Cir. 2020) (citing yours truly and Jennifer Rothman). Look at what the state is trying to promote: privacy or creativity/commercialization of information? Look at whether there’s harm to © law.

X Corp v. Bright Data (ND Cal 2024) found claims based on
mining and sale of data are conflict preempted, b/c the interest is monetization,
which is the same as ©; the harm is clear b/c it prevents users from
commercializing their posts and b/c it circumvents fair use.

Conflict preemption is better than formalism: you can ask
whether the contract was individually negotiated; you can ask about market
power; you can ask about the purposes of the contract and of the alleged
breacher.

Shyamkrishna Balganesh, Columbia Law School (Speaker and
Moderator): Hot news had an outsized influence on 301. A misleading account continues
to influence how courts talk about misappropriation to this day.

Misappropriation falls into disfavor after INS v. AP: effect
of Brandeis’s dissent/Learned Hand’s refusal to expand or adopt the decision
along with Erie v. Tompkins’s rejection of general federal common law.
But states either through statute or common law began to absorb it. Legislative
history suggests that misappropriation is structurally different from © and
would allow equity to go down new paths. But the enacted version of the Act
doesn’t contain the list containing misappropriation referred to in the
legislative history.

Register’s 1965 report saw misappropriation as “the virtual
equivalent of ©.” So it proposed preemption exclusions for contract, breach of trust, privacy, defamation, and deceptive trade practices, but not misappropriation.
Then the Dep’t of Commerce intervened, in the form of a PTO rep, and said
misappropriation was important b/c it allows courts to anticipate new areas for
development and retain equitable flexibility. But Commerce hadn’t consulted
w/other departments. “we have the Dep’t of State disagreeing w/everybody except
on the manufacturing clause and now we have the Dep’t of Commerce that takes a
different view. Does anyone purport to speak for the administration?” A staffer
asks the DOJ to weigh in, and the DOJ is firmly opposed to
misappropriation. It would neutralize the logic behind preemption. He says DOJ
misrepresented misappropriation as creating antitrust concerns, simulating property rights, and being too vague. [Honestly I don’t see that as a
misrepresentation!]

Striking the provision’s list in full then eliminates the record
of the logic.

There is a fundamental mismatch b/t the House Report and the
actual law. Different courts of appeal seeking to resurrect INS in the 80s and
90s unfortunately rely on the old House Report saying that “based on
legislative history, it is generally agreed that hot news survives preemption.”
NBA v. Motorola. Only corrected in 2011 in Flyonthewall, but it moved to other
jurisdictions like Ohio in the meantime, without being corrected there.

If I’d had time, I had a Q for Trimble: what are your
thoughts about criminal anti-camcording and anti-copying rules for people who
don’t own the masters, including laws making it illegal not to put the name of
the actual copy maker on the copies? OCGA § 16-8-60 prohibits transferring “any
sounds or visual images … onto any other … article without the consent of the
person who owns the master” and separately makes it unlawful to sell any article
on which sounds or visual images have been transferred “unless such … article
bears the actual name and address of the transferor of the sounds or visual
images in a prominent place on its outside face or package.”

from Blogger https://tushnet.blogspot.com/2026/04/copyright-act-panel-4-shifting-line.html


Panel 3: The Scope of Exclusive Rights and Modes of Enforcement

29th Annual BTLJ-BCLT Spring Symposium: Origins, Evolution,
and Possible Futures of the 1976 Copyright Act

Erik Stallman, UC Berkeley Law (Moderator)

Christopher Sprigman, NYU Law: Restatement of ©: assumes
perspective of common law court, attentive to and respectful of precedent but
not bound by precedents that conflict w/the law as a whole. Reporter is
supposed not just to go w/greater numbers of cases but with the better
principle, with explanations.

Statute in many central provisions is far from clear or fully
prescriptive—Congress left ample room for judicial interpretation: 102(b), 107,
rules of secondary liability, standard by which infringement is judged: these
are not peripheral, but Congress said next to nothing or even nothing about them.
Courts have built an intricate architecture of common law doctrines around the
skeleton of the statute over the past 50 years.

It’s ©’s hybrid nature, enabled by spaces left open by
Congress, that has kept © vital through huge changes.

Restatement makes use of legislative history as a window on
the meaning of statutory language, principally where courts have disagreed. The
window can be clouded, and we approach the enterprise w/caution—not the same as
rejecting legislative history. Fair use is a good example: commercial/nonprofit
distinction is just one facet of analysis of purpose and character, inviting
judicial development.

Fair use is “not an infringement of copyright.” What’s the
burden? Legislative history: Statutory presumptions/burdens of proof are not
justified—the intention was to allow the courts to make individualized
decisions. The courts have uniformly held, w/o analysis, that the BOP on fair
use is on the defendant. Seems inconsistent w/ statutory language and
expectations. Personal view: Reconsideration of this is overdue. Fair use is a
scope doctrine about the © owner’s rights in the first instance. It’s
ordinarily, for good practical reasons, the defendant’s burden to raise
the issue, just as with idea/expression. But the scope issue should then be decided
by the court as a matter of law.

Claims about fair use tension w/derivative works have always seemed to him to prove too much. © owner’s exclusive right to prepare
derivative works is subject to fair use. Goldsmith: They aren’t mutually
exclusive, but neither do they always overlap. To reconcile the statutory
provisions, the Court held that the degree of transformation must go beyond
that required to create a derivative work, noting that some transformations occur in purpose w/o physical alterations to the content of the work. Goldsmith:
W/transformations that do alter content, to qualify as “transformative” for
fair use, the D’s use must involve more than a distinguishable variation; it must rise to a level that distinguishes its purpose & character from that of the original. Broadly correct.

Oren Bracha, University of Texas at Austin, School of Law:
For 40 years, the infringement test has been eroded and diluted, vacated of most of its substantive content—most importantly the central conception that supplied its coherence beforehand. That was the idea of
substitution: to infringe, a defendant’s work had to expressively substitute
for the plaintiff’s work. Of course there were hard cases but this was an
organizing principle that provided meaning.

Courts hollowed out the test from that notion of
substitution, leaving us w/a nebulous, frictionless idea of substantial
similarity that means very little. Once that happened, the test became unpredictable,
arbitrary, etc. It is us, meaning the case law, that did this, and therein lies
the problem.

Once that happened, additional unfortunate developments: (1)
the very confused and unfortunately increasingly widespread tendency of some courts to describe the infringement test as having an exception for de minimis uses. A
confusion of relevant concepts. Once that happened, other courts very quickly
understood this as a criterion of unrecognizability: no substantial similarity
when it’s de minimis, and it’s de minimis when one can’t recognize the P’s work—a
race to the bottom. (2) Alternative infringement tests develop: the
quality/quantity 2d Circuit test that applies when the regular test is inapt,
that is, when it doesn’t produce the outcome “infringement.”

After we hollow out the meaning of the infringement test,
what steps into the vacuum is the fair use doctrine. The idea is simple: if
infringement means very little/subjective, you don’t need to worry about it,
b/c we can always fix it w/fair use. Don’t even have to do the infringement
test! The back end becomes the front end. Warhol v. Goldsmith: substantial
similarity became a footnote—ignored by the dct, a sentence at the 2d Circuit,
abandoned at the Supreme Court.

Not an enemy of fair use, but this is abnormal and ungood.
We’re putting too much burden on the too-narrow shoulders of fair use. We’ve
shifted a lot of the burden of scope analysis to fair use. Basic mismatch
between its concept and the burden we’re making it bear—at the end it won’t
work very well.

Relatedly, courts ignore the difference b/t reproduction
& derivative works and don’t bother to tell us which is which. Derivative
works is a freewheeling concept—any secondary valuable use of the work—so the
boundary is unclear.

Fixes: meaningful infringement test. His would be along the
lines of expressive substitution. Once we’ve done that but only once we’ve done
that, we should cut fair use down to size. And we should fix the derivative
work/reproduction situation: the derivative work right should be a right of
making adaptations, not a freewheeling boundaryless idea.

Justin Hughes, Loyola Law School: Contributory liability
after Cox: What the hell?

Assumptions inherited from 1909 Act & FRCP—many secondarily
liable parties should only be liable for damages. That has produced a major
divergence b/t US and other developed economies w/sophisticated © schemes.
W/exposure to damages in mind, there were 2 distinct branches of secondary
liability—vicarious & contributory. 2d Circuit’s 1963 Shapiro Bernstein case
crystallized the vicarious standards outside the employer/employee context:
right and ability to supervise plus obvious and direct financial interest in the
exploitation of copyrighted materials, even in the absence of knowledge. The House
Report added “indirect” to its description of vicarious liability. Meanwhile,
contributory liability came from a case about a preparer of a motion picture
held liable for the exhibitors’ public performances.

Cox v. Sony: ignored previous case law; contributory liability
requires intent that can be shown only by inducement or that the provided
service is tailored to the infringement. [FWIW I think that a court could
easily find that continuing to host a particular piece of content is
tailored to the infringement once there’s been notice of a claim. I think it’s
fundamentally different to deal with a series of possibly continuing
infringements versus one ongoing infringement.]

Legislative history: used “to authorize” in 106 to avoid any
questions about liability of contributory infringers. For example: A person who
lawfully acquires an authorized copy of a motion picture could infringe if they
engage in the business of renting it to others for unauthorized public performance.

Litman is right: Statutory damages for a single infringement are not palatable as applied to contributory infringers, and that’s a problem for
our system. Big mistake either in the Act or our understanding of it. Other
systems permit injunctions against third parties w/o holding them financially
responsible.

Fascinated by SCt’s obsession w/making patent & © into
kissing cousins. But what about TM and Inwood? [I think it’s bigger than that—the
Court wants a trans-substantive rule about equitable doctrines including
contributory liability, which is why everyone in Cox was citing Taamneh.]

Laura Heymann, William & Mary Law School: Dividing lines
b/t doctrines: Dastar and Star Athletica both don’t give great guides to distinguishing
©/TM and ©/design patent respectively. Maybe that’s the result of unusual
facts. Wal-Mart specifically invited rightsowners to use © and design
protection while building up secondary meaning required for trade dress rights.
Doesn’t favor election, but rather use of doctrines on the back end to deal with end-runs around limits on other rights, and use of remedies, such as the apparently revived interest in disclaimers.

Congress could also try a more intentional positive
description of the public domain to tell us how that interacts w/other IP
doctrines. Another way: Court’s own reasoning when it borrows from patent law.
Is Court looking at purposes behind the doctrines it borrows from? We call the
field “IP” and try to generate unifying themes, but not clear that Court or
Congress keeps those in mind. Is “staple article of commerce” a phrase used to
indicate a core doctrinal concept or just a convenient borrowing? Sony v.
Universal took pains to distinguish Inwood, rejecting kinship b/t © and TM.
Blackmun’s dissent disagreed with the comparison to patent. Path dependence:
that borrowing in Sony now goes unexamined in Cox. Not every member of the
Court is committed to the project of legal explanation. [That’s one way to say
it!]

Q: re burden of proof.

Sprigman: Law is clearly established until it isn’t. We have
to prepare the world for what might come next. Our Court is not respectful of precedent
and very bound by text. They’re unashamed to disrupt settled expectations [of
certain kinds].

Bracha: it would be an improvement; fair use started life as
part of the infringement test. It still wouldn’t be a good fit to have fair use
as the central mediator of scope.

Discussion of Cox/AI. Heymann points out that questions of
who is responsible for infringing outputs will be key, and the Court’s opinion
isn’t helpful in categorizing responsibility for direct infringement.

Sprigman: © should stay in its lane (even if Congress is
dysfunctional); we’re testing the limits of what courts can do. Maybe labor
law, products liability, tax law are more important for AI.

Stallman: maybe patent & © were more convergent before the 1976 Act, which gave protection on fixation; before that both regimes were
oriented around disclosure.

Pam Samuelson: we’re in a bit of a muddle b/c inducement is
a separate doctrine in patent law but the Court said, in order to rule as it
did in Grokster, that inducement was part of contributory infringement.

Cathy Gellis: Secondary liability is related to the
architecture of the internet, where we depend heavily on intermediaries for
speech—the First Amendment is therefore quite relevant. Fear of secondary
liability has important deterrent effects. If that’s right about underlying
concerns, a switch to TM will not change the dynamics.

from Blogger https://tushnet.blogspot.com/2026/04/panel-3-scope-of-exclusive-rights-and.html
