timeshare exit firm wins fee award where plaintiff failed to show key elements of claim

Club Exploria, LLC v. Aaronson, Austin, P.A., 2022 WL
19479011, No. 6:18-cv-576-JA-DCI (M.D. Fla. Nov. 4, 2022) (R&R)

“[F]ew parties are as adversarial—or as litigious—as
timeshare developers and timeshare exit companies.” Plaintiffs, timeshare
developers, sued defendants, a timeshare exit law firm and its named partner,
alleging various claims, including claims under the Lanham Act and the Florida
Deceptive and Unfair Trade Practices Act (FDUTPA). The court granted summary
judgment to defendants on all counts. Plaintiffs filed a fruitless “Motion for
New Trial,” essentially seeking reconsideration of the Court’s decision on the
tortious interference claim. The Eleventh Circuit affirmed the Court’s rulings.
Defendants sought to recover their attorneys’ fees; the magistrate recommended
granting the motion in part.

The Lanham Act provides that “[t]he court in exceptional
cases may award reasonable attorney fees to the prevailing party.” A
nonexclusive list of factors that district courts may consider includes
“frivolousness, motivation, objective unreasonableness (both in the factual and
legal components of the case) and the need in particular circumstances to
advance considerations of compensation and deterrence.”

This was an exceptional case. To survive summary judgment on
their Lanham Act claim Plaintiffs needed to produce at least some evidence
satisfying five elements: “(1) the … statements were false or
misleading; (2) the statements deceived, or had the capacity to deceive,
consumers; (3) the deception had a material effect on the consumers’ purchasing
decision; (4) the misrepresented service affects interstate commerce; and (5)
the plaintiff has been, or likely will be, injured as a result of the false or
misleading statement.” Although there were genuine issues of material fact as
to the first and second elements, there was no evidence presented as to the
third element.

In response to defendants’ motion, plaintiffs stated only: “[r]egarding
materiality, it is difficult to argue that [the alleged misconduct] would not
influence the Club Exploria Owners’ decision to hire Defendants.” There was “no
citation to legal authority and no citation to any record evidence that even
arguably supports this proposition. Even without hindsight, Plaintiffs should
have known that a single conclusory statement was wholly inadequate to rebut
the obvious lack of evidence supporting the materiality element.” [There are
arguments that literal falsity—which was all that plaintiffs argued—should be
presumed material; sometimes those arguments are persuasive, as when the falsity
goes to the core of the product or service.]

In other timeshare exit cases, including cases against these
defendants, the plaintiffs presented evidence of materiality, including an
expert report. Even if evidence that owners contacted Aaronson after exposure to the allegedly false websites could show materiality, there was no such evidence here: almost all of the putatively affected owners were referred to
defendants by outside lawyers. Only one found defendants through the website on
which they hosted the allegedly false and misleading advertisements. Plaintiffs
neither deposed that owner nor included them in their expert’s damages report.
The result in the other case against these defendants “evinces, at least to
some extent, that if Plaintiffs had diligently pursued their claims, there was
a substantial likelihood that their claims would have proceeded to trial.” That
was not favorable to them in the fees inquiry.

Plaintiffs argued that the presence of genuine issues of
material fact regarding the first two Lanham Act elements militated against
awarding attorney fees here. “However, missing one element is just as fatal to
a claim as missing multiple elements. Moreover, the Court’s finding as to the
first two elements had little to do with the strength of Plaintiffs’ litigation
position.”

Also, the court found that plaintiffs lacked Lanham Act
standing for similar reasons: they presented no evidence creating “a genuine
issue of fact as to whether the nonpayment was caused by [Defendants].” Interestingly, after the summary judgment motions were briefed, defendants made a settlement offer that would have waived all claims to attorneys’ fees; the offer expressed confidence that they would prevail and be entitled to such fees, a prediction the court labeled “prescient.”
Instead, despite the “obvious, fatal defect” in the Lanham Act claims, which defendants
pointed out and plaintiffs devoted only a “single, conclusory sentence” to in their
briefing, and despite the reasonable settlement offer, plaintiffs chose to
gamble on surviving summary judgment. Thus, this case was exceptional because
it “stands out from others with respect to the substantive strength of the
party’s litigating position” and in “the unreasonable manner in which the case
was litigated.” Zealous pursuit of a claim shouldn’t result in a fee award, “but
there is no such protection for a lackadaisical pursuit.”

FDUTPA: A prevailing party in a FDUTPA action may recover
reasonable costs and attorney’s fees from the nonprevailing party according to
the court’s discretion, which does not require exceptionality but does require
consideration of case- and party-related factors, as well as deterrence. Here,
those factors weighed in favor of such recovery.

“During this litigation, the Court voiced disapproval of
Plaintiffs’ litigation tactics,” including multiple motions to extend the summary
judgment deadline and others. Exploria had the ability to satisfy an award of
fees given its size. Deterrence was relevant because the district “has become
inundated with scores of timeshare-related disputes. Many of these disputes are
similar to this case …. [A]warding attorney fees here would serve to deter
timeshare-related claims that are not legitimate or that will not be diligently
pursued.” The court also noted that generally, the timeshare plaintiffs were
substantially larger than the timeshare exit companies. “This disparity creates
some risk that a timeshare developer (or multiple timeshare developers) may
weaponize ultimately meritless litigation to pressure a specific timeshare exit
company—which may be operating legally—out of business.” (This was not said to
accuse these particular plaintiffs of malfeasance but to make a general
observation about deterrence.)

As here, a claim that isn’t pursued diligently “ends up
wasting the Court’s limited resources and draining the resources of the smaller
defendants.” This case took three years, ending because of plaintiffs’ failure
to present enough evidence. “This failure is especially egregious as to the
Lanham Act claim because Plaintiffs’ failure went to materiality and
causation—basic, related elements—and developing the necessary evidence may
have been as simple as deposing the Affected Owner who found Defendants through
Defendants’ website.”

As to the merits, “[u]ltimately, the success of a
plaintiff’s FDUTPA claim is tied to the federal Lanham Act claim for false
advertising.” Even though the court found FDUTPA to be inapplicable because defendants
were not engaged in “trade or commerce,” had the court reached the merits it
was likely that the FDUTPA claim would have also failed for the same reasons as
the Lanham Act claim. “To the extent Plaintiffs believed that they could
piggyback off the result in another case without putting in the requisite
effort to develop the necessary evidence in their own case, that belief was
unreasonable.”

Thus, even though there was insufficient evidence of bad
faith, and insufficient efforts by the parties to explain whether the claim was
brought to resolve a significant legal question under FDUTPA, the factors
weighed in favor of a fee award.

However, defendants weren’t entitled to their appellate
fees, because the appeal concerned only tortious interference.

from Blogger http://tushnet.blogspot.com/2023/04/timeshare-exit-firm-wins-fee-award.html


Agency liability theory satisfies “commercial advertising or promotion” requirement of promoting one’s own products/services

Ariix, LLC v. Nutrisearch Corp., 2023 WL 2933306, No.
17CV320-LAB (DDL) (S.D. Cal. Apr. 13, 2023)

Previous court of appeals ruling discussed here. Ariix alleged that NutriSearch, the
publisher of the NutriSearch Comparative Guide to Nutritional Supplements, and the
Guide’s author MacWilliam were directly funded by Ariix’s competitor, Usana, so
that Usana could achieve the Guide’s number-one rating for nutritional
supplements. The Ninth Circuit reversed an initial dismissal, finding that
Ariix had “plausibly alleged that the defendant’s publication was commercial
speech, was sufficiently disseminated, and contained actionable statements of
fact.” The appellate panel remanded to decide whether the defendant’s
publication was for the purpose of influencing consumers to buy the defendant’s
goods or services, as additionally required for “commercial advertising or
promotion” under the Lanham Act.

The panel majority did note that the allegations in the
complaint suggest that the advertising was “intended to help Usana’s goods, not
NutriSearch’s product,” and that in analyzing this element, it may be helpful
for this Court “to determine whether the defendants and Usana had an agency
relationship; for example, it might be the case that the defendants were acting
as agents of Usana and therefore had a vested interest in the goods that Usana
sold, which might be enough to satisfy this element.” Judge Collins, in his
dissent, suggested that the third element may be satisfied with allegations
suggesting that “Defendants … acted on Usana’s behalf or at its direction by
secretly making, in exchange for compensation, specific changes requested by
Usana in its own or competitors’ product reviews in the Guide.” He concluded,
however, that the complaint falls short of alleging a true agency relationship
between Defendants and Usana:

That Defendants wrote obsequious
reviews in the hope that Usana would be pleased and buy more Guides or give
MacWilliam speaking engagements does not make them Usana’s agents in writing
those reviews. Nor does it establish that they acted on Usana’s behalf or
subject to its control in doing so.

Instead, the court agreed with Ariix’s position that,
“[w]hether assessed as a hidden financial arrangement, an agency relationship,
or a conspiracy, MacWilliam and NutriSearch had a vested interest in the sale
of Usana’s goods sufficient to attribute Usana’s goods to them.”

Ariix alleged that “Usana pays NutriSearch and MacWilliam
hundreds of thousands of dollars per year and provides substantial other
benefits—such as book sales to Usana representatives—which altogether account
for more than 90% of MacWilliam’s income.” [Okay, just for clarity: the next
few paragraphs are allegations from the complaint as recited by the court but I’m
not going to repeat “allegedly” 50 times.] As compensation, “Usana exercises
ultimate control over MacWilliam and NutriSearch’s product,” namely by having
Defendants “manipulate their ratings criteria to ensure Usana remains the
top-rated supplement company in the guide and actively sandbag Usana’s
competitors’ ratings and certifications” in order to “ratchet up sales for
Usana products.”

After formally ending his tenure on Usana’s advisory board
because his affiliation with Usana gave the appearance of bias, MacWilliam
approached Usana executives, explaining that he’d like to continue publishing
the Guide in exchange for Usana’s financial support: “I am going to create more
of a third-party appearance, but I’d like you to use me for speaking and
support me.” Usana agreed, but only if MacWilliam promised to “give [Usana] the
number-one rating.” MacWilliam accepted, “assuring Usana it would get the
number-one rating despite the guide’s claims of independence and objectivity.”

Defendants are “entirely dependent” on compensation from
Usana. For instance, “Usana directly pays [Defendants] hundreds of thousands of
dollars per year in fixed stipends, speaking fees, promotion fees, and travel
fees”; Usana “heavily promotes the guide to its sales representatives and
encourages them to purchase it”; Defendants “almost always tie[ ] the
publication of a new edition of the guide to the date of Usana’s annual
convention” so as to “continue to direct associates to the latest edition” and
ensure “robust sales” of the Guide; and at Usana’s speaking events, “MacWilliam
is the only purportedly ‘independent’ speaker who is allowed to promote and
sell his own products at such events.” Thus, “not only do[ ] [Defendants] rate
Usana highest because of the incentive to increase Usana’s sales and to keep
Usana happy, but also because, driven by the dictates of Usana, Usana’s
distributors are their largest market segment [for sales of the Guide].”

During times when Defendants “have failed to meet their
commitments to Usana”—i.e., awarding the top Gold Medal certification to
another company—“Usana punished [Defendants] for failing to deliver per their
agreement by cutting them off financially.” In 2008, a Usana executive
explained to MacWilliam that “we don’t want to stand up and say ‘we’re one of
the five best’ ”—Usana wants to be “number one.” After Usana responded
positively to MacWilliam’s question about whether it would help to be number
one in some way, NutriSearch allegedly “cured this breach of its secret
agreement with Usana by coming out with a new award called ‘Editor’s Choice’
and giving it to Usana.” NutriSearch understood that this would “entitle
MacWilliam to return to the Usana event circuit to speak and sell more books,
and thus earn more speaking fees and book royalties.” The next year, when
“another company was actually going to beat Usana,” MacWilliam explained the
situation to Usana, to which a Usana executive responded that “we pay
[Defendants] to make us number one.” MacWilliam “worked with Usana to adjust
his allegedly objective matrix so that Usana stayed on top.” Each year
thereafter, “Usana required MacWilliam to adjust his matrix” to ensure Usana
remained the top-rated company in the Guide.

The court found that the complaint plausibly alleged the
existence of a hidden financial agreement between defendants and Usana, under
which MacWilliam was paid “hundreds of thousands of dollars” under the guise of
speaking fees in exchange for awarding Usana’s products the Guide’s number-one
rating. All the while, the Guide was explicitly touted as an independent
publication.

This hidden financial agreement plausibly helped increase sales of both Usana’s products and the Guide itself—to Usana’s distributors. “The parties here had a clear financial arrangement designed to
influence consumers to buy products from a third-party in which Defendants had
a direct financial stake.” (Citing, among others, Enigma Software Grp. USA, LLC
v. Bleeping Comput. LLC, 194 F. Supp. 3d 263, 294 (S.D.N.Y. 2016) (holding that
“by alleging that [Defendant] earns a commission on directed sales of [products
sold by Plaintiff’s competitor], the SAC adequately pleads that [Defendant] had
an economic incentive to engage in such promotion,” and that such commercial
speech was “made for the purpose of influencing consumers to buy products in
which [Defendant] has a financial stake”).)

In light of this financial agreement, an agency relationship plausibly existed, and the alleged misrepresentations were made within its scope.

“For an agency relationship to exist, an agent must have
authority to act on behalf of the principal and ‘[t]he person represented [must
have] a right to control the actions of the agent.’ ” Actual control is not
necessary; as long as there is an agreement that the principal has the right to
control the agent, an agency relationship exists.

The facts above plausibly alleged that Usana manifested
assent to defendants acting on its behalf. The precise details of the agreement
weren’t required at this stage in the litigation. Likewise, the complaint plausibly
alleged that defendants consented to act on Usana’s behalf and agreed to be
subject to its control. The allegations indicated that defendants voluntarily
yielded to Usana’s desires and complied with its directives in order to secure
Usana’s financial backing.

The Court need not find that the principal “actually
control[s] [the] agent as a prerequisite for establishing a[n] [agency] relationship,
rather the principal need only have ‘a right to control the actions of the
agent.’ ” The complaint plausibly alleged Usana’s editorial control and that “Usana
was clear in its directive that it be awarded the number-one award or else it
would withdraw its financial support…. That Usana went so far as to require
that Defendants change their entire ratings matrix to ensure Usana received the
number-one award is highly suggestive of Usana’s right to control not only
Defendants’ end product, but also the manner and methodology of Defendants’
performance.”

The Ninth Circuit recognizes four avenues through which an
agency relationship may be established: “actual authority, apparent authority,
ratification, and employment (respondeat superior).” Ariix claimed that the
first three all applied.

An agent has actual authority to take a certain action when
“the agent reasonably believes, in accordance with the principal’s
manifestations to the agent, that the principal wishes the agent so to act.” Actual
authority is limited to actions “specifically mentioned to be done in a written
or oral communication” or “consistent with” a principal’s “general statement of
what the agent is supposed to do.” Ratification, on the other hand, occurs when
the principal accepts the benefit of the agent’s act either with actual
knowledge of the material facts or with “knowledge of facts that would have led
a reasonable person to investigate further”—also known as “willful ignorance.”

The complaint plausibly alleged an agency relationship
created through actual authority. “[W]hile the precise details of their
agreement remain unknown, including whether Usana’s instructions were written
or communicated verbally to Defendants, such facts are appropriate for discovery
at a later stage in this litigation.”

The court declined to address whether Ariix’s allegations
separately established a plausible conspiracy.

from Blogger http://tushnet.blogspot.com/2023/04/agency-liability-theory-satisfies.html


even if retailer is responsible for price premium, misleading label is actionable

DiGiacinto v. RB Health (US) LLC, --- F.Supp.3d ----, 2023
WL 2918745, No. 22-cv-04690-DMR (N.D. Cal. Apr. 11, 2023)

Plaintiff alleged that Children’s Delsym Cough Relief was
misleadingly marketed as different from, and more expensive than, the adult
product, when the concentration is the same. The front of the packaging for the
children’s product contains a cartoon image of a child. It states “Ages 4+” at
the top of the package and “For Children & Adults” at the bottom. The front
of the packaging for the adults’ product contains no statement about the
suitability of the product for any ages.

The side labels for both products contain an identical
dosing chart that includes dosing amounts for children and adults along with
the statement “Dosing Cup Included” below an image of a cup containing liquid. The
“Drug Facts” labels on the back of the packaging for both products are
identical. Both products contain the same amount of the active ingredient and
the same inactive ingredients.

DiGiacinto further alleged that “[n]o reasonable consumer
who understood that the Children’s Delsym Cough Relief product was formulated
identically to the adult’s Delsym Cough Relief product would choose to pay more
for it.” He brought the usual California statutory and common-law claims.

RB Health submitted a declaration by its Trade Marketing Director
stating that RB Health does not sell the Delsym Cough Relief and Children’s
Delsym Cough Relief products “directly to consumers, but, instead, sells the
Products to retailers and to distributors who in turn sell to retailers.” Since
2018, RB Health has sold both products “to the same distributors and retailers
at the same price. The price the consumer pays for the Products is not set by
RB Health.” Thus, RB Health argued, it was not responsible for plaintiff’s harm
and he lacked Article III standing.

But Article III doesn’t require proximate cause.

Here, RB Health was responsible for the different labeling
of identical products, which plausibly linked to his injury in a way that was “more
than attenuated.” Even if RB Health wasn’t responsible for the price premium,
it was allegedly responsible for the representations that allegedly led him to
purchase the more expensive product. “A causation chain does not fail simply
because it has several links, provided those links are not hypothetical or
tenuous and remain plausible.” DiGiacinto also sufficiently alleged that he
will be unable to rely on the advertising or labeling of the children’s product
in the future, providing standing to seek injunctive relief.

This is because DiGiacinto cannot
discover whether RB Health’s misrepresentations have been cured simply by
looking at the children’s product front label since it does not disclose that
it is pharmacologically identical to the adults’ product. Instead, he would
have to inspect and compare the ingredient labels on two separate products,
including the active and inactive ingredients listed, to determine whether the
products are identical in form and quantity. The Ninth Circuit has held that
“reasonable consumers” should not “be expected to look beyond misleading
representations on the front of [packaging] to discover the truth from the
ingredient list in small print on the side of the [packaging],” and RB Health
does not cite any cases holding that a reasonable consumer is expected to
compare labels on more than one product in order to determine whether a label
is accurate.

Cases rejecting similar claims were distinguishable because
here, the front of the packaging of both the children’s and the adults’
products identifies the active ingredient but not its concentration. (That’s
not really a great distinction because it still requires consumers to compare
two different products to avoid deception.) Also, while the front of the
children’s product discloses that it can be used for anyone over four years of
age (“Ages 4+”), the front label of the adults’ product said nothing about age.
“This could lead a reasonable consumer to believe that the adults’ product is
not suitable for children to consume and to purchase the children’s product
instead.”

from Blogger http://tushnet.blogspot.com/2023/04/even-if-retailer-is-responsible-for.html


Even in default, it’s not TM infringement to resell legitimate goods (but maybe false advertising to call them new)

Quincy Bioscience, LLC v. BRYK Enters., LLC, 2023 WL 2933464,
No. 22-cv-658-jdp (W.D. Wis. Apr. 13, 2023)

I don’t usually blog default cases because there’s usually
little legal analysis; this case is an exception around the fraught area of
first sale, showing unusual diligence by the court. Quincy sued BRYK “under
multiple legal theories for making unauthorized sales of products branded with
Quincy’s PREVAGEN trademark.” In evaluating the question of personal jurisdiction,
the court concluded that it existed, but Quincy’s submissions exposed

a problem with the merits of its
claims for trademark infringement and unfair competition: the products alleged
to have been sold by BRYK were genuine PREVAGEN products. The sales were
unauthorized, in the sense that Quincy didn’t want or authorize BRYK to make
them, but whether those sales were unlawful is another matter.

The court dismissed most of Quincy’s claims (counterfeiting,
trademark infringement, and false designation of origin) except for false
advertising—a rare (and conceptually sound) approach that other, non-default
cases could benefit from.

Quincy had two theories of harm: (1) some of the products
weren’t true Prevagen supplements and didn’t contain the putatively active
ingredients; (2) other products were sold “in defective condition, including
with outer box packaging completely missing, damaged or compromised.” “These
included six products which arrived with no outer packaging whatsoever, an
additional four products whose packaging was significantly damaged, and an
additional six that did not include an accessory pill-minder shown and
described in the Amazon listings from which the test orders were placed.”

In response to the court’s order to develop more information
about personal jurisdiction, Quincy submitted evidence of test orders, including one placed with a Wisconsin billing address (but shipped elsewhere). But the
test orders didn’t include counterfeit supplements; instead, Quincy said it was
aware of a single counterfeit order sold by BRYK, and that order was sent to a
different customer, who then sent it to plaintiff. There were no allegations that
the customer had any contacts with Wisconsin, which was a problem because the
plaintiff must show that there is a connection between its injury and the
defendant’s contacts with the state. Quincy agreed to dismiss its claims based
on a counterfeiting theory without prejudice and to withdraw its request for
damages. But its other claims were based on the theory that BRYK shipped
PREVAGEN products in defective condition to a customer with a Wisconsin billing
address. Fulfilling orders of a forum-state resident with a forum-state billing
address was minimally sufficient under the facts of this case to show
purposeful availment. Quincy alleged that some of the products it ordered using
its Wisconsin billing address arrived in defective condition, constituting a
sufficient connection between BRYK’s Wisconsin contacts and Quincy’s injury.

“Even after default … it remains for the court to consider
whether the unchallenged facts constitute a legitimate cause of action.” Quincy
alleged all the elements of false advertising: BRYK allegedly states on its
Amazon storefront that the PREVAGEN products it sells are in “new” condition,
but Amazon defines “new” to mean that the product is sold in “the original
manufacturer packaging.” Quincy also alleges that some of the orders it
received from BRYK “arrived with no outer packaging whatsoever.” (Damaged packaging and missing pill boxes supposed to accompany the supplements didn’t count, because Quincy didn’t identify any language in Amazon’s definition of “new” requiring the packaging to be undamaged or to include pill boxes.) A
representation of new condition would have “a tendency to lead consumers to
believe that they were receiving all of the packaging,” and the failure to
provide all packaging could influence a consumer’s decision whether to buy the
product. Quincy was likely to be injured as a result of diverted sales or loss of
goodwill. (Why would there be diverted sales? The first sale took care of
that. It’s also not obvious, though standard to assume, that consumers would
blame Quincy rather than Amazon or the third-party seller in terms of harm to
goodwill.)

But the unauthorized sale of a genuine product does not
violate trademark law. True, a product [sold as new] that doesn’t meet the
trademark holder’s quality control standards is not really a “genuine” product,
so it may confuse consumers and erode customer goodwill. But this theory requires
the TM owner to show three things: (1) it has established legitimate,
substantial, and non-pretextual quality control procedures; (2) it abides by
these procedures; and (3) the non-conforming sales will diminish the value of
the mark. Quincy didn’t show or allege anything about its quality-control
practices and procedures. Quincy could submit a new fee petition and a new
proposed injunction to proceed.

from Blogger http://tushnet.blogspot.com/2023/04/even-in-default-its-not-tm-infringement.html


Bad Spaniels: trademark parody and fair use doctrines at Northeastern, Apr. 13, 4 pm

Join Professor Rebecca Tushnet and Professor Alexandra J. Roberts for a conversation about Jack Daniels v. VIP Products.

Register here.

Date and time

Thursday, April 13 · 4 – 5:30pm EDT

Location

Northeastern University School of Law 416 Huntington Avenue Boston, MA 02115

from Blogger http://tushnet.blogspot.com/2023/04/bad-spaniels-trademark-parody-and-fair.html


27th Annual BTLJ-BCLT Symposium: From the DMCA to the DSA: Panel 4: Industry Perspectives

Moderator: Daphne Keller, Stanford Cyber Policy Center   

DSA represents a shift to operational mandates compared to
DMCA, Art. 17—thoughts?

Sabrina Perelman, Pinterest: Important to remember what DSA
is and isn’t. We’ve preserved the safe harbor, prohibition against general
monitoring. Good Samaritan provision still exists. What drastically changed is
what we have to do around content and what we have to be transparent about—operational
and technical concerns. You can understand the reasons for transparency and
user efficiency, but putting it into practice is another matter. How do you
navigate sending 100s of millions of statements about actions per week? How do
you avoid giving away the store to bad actors/give away competitively sensitive
information about user safety when giving your explanations? Could be good for
consistency: there may be some things we’ll have to do for DSA that we’ll
choose to do more broadly. But there are EU inconsistencies: Prevailing
interpretations of local laws may stick around—not clear that Germany will
repeal NetzDG/change interpretation of law.

Canek Acosta, Microsoft: DMCA goes further in some ways/losing conditional immunity, DSA goes further in others/transparency, Art. 17 goes
further in staydown. Scale is important: the content moderation team’s job is
mostly to keep stuff up, which represents the balance of fundamental
rights/free expression—but if you get 4 million requests in 6 months that’s
very challenging.

Chris Riley, Data Transfer Initiative: It could have been very
different—we weren’t far away from a world where the EU passed a law saying “don’t
let illegal content up or you’ll be liable”—magic filter demand was with us for
a while. We’re better off than that! We’re not out of the woods on magic filter
demands, so have some perspective.

Remy Chavannes, Brinkhof: theory of harmonization is great;
platforms will have to stick their necks out and challenge bad laws because nations
will resist getting rid of old laws or their courts will insist that EU
directives are just the preexisting law of the member state.

Keller: does DMCA have lessons?

Holly Hogan, Automattic: it’s very different in being about liability
instead of accountability. But checks and balances between complainants and
users is a core value of DSA as well. Experience of DMCA is that it can be
efficient for legit complaints, it’s also something that can be abused by
people w/an interest in silencing speech. We reject 75% of notices we get for
being incomplete or not actionable. Good chunk, 10%, are flatly abusive—addressing
uncopyrightable subject matter, attacking clear fair use/permissible
exceptions, or just fraudulent—a scam run by an international charitable org trying to remove anything critiquing it. Users are scared by notices and so we
have to create a system that protects them as best we can.

Rob Arcamona, Meta: We can’t expect any one stakeholder to
do any of this alone. If there are going to be answers to these questions we’ve
talked about, they can only come by understanding the different pressures,
tradeoffs, and intentions that each party has. Discussing how there are
ambiguities and how things operate in practice is the only way forward.

Keller: DSA creates complex ecosystem with multiple roles,
some funded and some less so—that may be a positive development where there are
recognized roles for stakeholders.

Arcamona: you have to understand others’ roles, not just
your own role.

Clemens Molle, Bird & Bird: DSA’s stringency is hard to
evaluate: clients ask for the most stringent approach so we can determine
whether we will follow it, but that’s not so simple/possible.

Keller: what about operating under all 3 regimes, DSA, DMCA,
Art. 17—what are the complexities?

Arcamona: Concerns about stacking/different components of
service may be subject to different regimes. Individual member states that have
legislated particular actions beyond Art. 17 under its guides: in Germany, user
associations can file against overblocking; Sweden has an individual right to
sue; Italy: can issue rules about how appeals can go, including staydown
appeals, in contrast to Germany where content is supposed to stay up as long as
possible. Preflagging in Germany/Austria differs in some respects. Spain=maximalist
for ©. All these differ from Art. 17. Are any of these valid? Did the DSA
overtake the extra bits over the top of Art. 17?

Chavannes: platforms will have to litigate; no opinion is valid until the CJEU gives an answer in 5-10 years, even if you know that a particular service is under Art. 17—and if it isn't, you have an entirely different regime under the YT case. The definitions are a mess, overlapping in uncertain ways, and there's no way to find out whether you're in scope before a lawsuit. You have to make up your mind whether you're going to challenge national regulators; platforms have complex incentives and internal stakeholders not keen to litigate.

Annemarie Bridy, Google: You have to build for compliance
while you’re deciding—there are workflows that need to be created or rejiggered.
What if a service adds features or grows into scope?

Keller: what are people most/least prepared for?

Perelman: we’re ok on statements of reasons, terms &
conditions disclosure, etc., stuff on our side. Not clear: how many appeals
will we be getting? What is transparency reporting really meant to cover?

Riley: easier ability to prepare for what’s in control of
company—you can build a capacity to issue a transparency report, but things
that are interactive/dependent on further assessments and guidance like audit reports
are very hard to figure out—no shared knowledge base.

Hogan: core concepts like transparency and reasons for
decisions are already part of our company, so that’s the clearest path. Challenge:
figuring out who’s a good faith actor or a bad faith actor—mistakenly posted © content
or running up against mature content—French Vogue is ok but maybe something
else isn’t. The spam exception exists but who else deserves more attention is
not clear.

Chavannes: How to cope with the uncertainty? US lawyers may
find the uncertainty offensive with all the different definitions and lack of
clarity over simple things like which bits of which services fall under which
rules—even something as basic as territorial application: does an American
complainant complaining about Italian user get coverage, and vice versa? Need
to avoid panic and paralysis.

Bridy: questions about ADR and how it doesn't recognize that © is different from a TOS violation. The real parties in interest in an ordinary demotion are the platform and the user. For © it's different—the real parties in interest are rightholders and users, and the platform is an intermediary, which the DMCA lets get out of the way so the real parties in interest can litigate if they want to. Should © disputes go to the same process with the same parties?

Arcamona: Amount of focus and attention is high, but we’re
least prepared for deharmonization of © regimes across Europe. The Digital Single Market was supposed to harmonize rules. However, if small nuances in member state implementation continue to require big operational differences, you'll end up with different types of services and access to content in one part of the EU than in others.

Acosta: Transparency reporting is already happening; we have
to collect additional data and format it right, but that will be fairly
straightforward. Risk assessment is thorny—there’s not a lot of specificity,
even w/examples. How do you measure the risk of freedom of expression v. copyright
takedowns? Now that generative AI is built into our products, is it UGC? Where
does it fall and how does the DSA affect it?

Keller: people are more prepared for the things w/the soonest
deadlines. Researcher access to data can’t come into effect until all the
national coordinators exist, so that has a longer timeline. Non-VLOPs who may
not be paying attention to Europe and aren’t obligated until 2024: don’t know how
many of them are thinking about this.

Arcamona: Rule that became Art. 17 was supposed to be 2018;
we started implementation then and hoped to interact w/member states about
implementation. Compliance starts far earlier than you anticipate b/c of
operational and engineering runup time.

Keller: what to be grateful for?

Perelman: we dodged staydown.

Chavannes: democratic mandate: plaintiffs said this was 20
years old, and judges were sympathetic: should they really be interpreting the
rules this way and preserving safe harbors? That’s really important.

Keller: lawful but awful, which UK wants as a whole new
category—de facto illegal online but liability falls on platform rather than
speaker. Many other parts of the world are in paroxysms about age verification.
But maybe it would have been better to get a Brussels standard.

Chavannes: no bullet is ever fully dodged in Europe—all these
things are coming again: must pay/must carry; bans on advertising; these things
never really die.

Keller: how will audits work?

Riley: not clear. Financial accounting has established
procedures and bodies. Chicken and egg: there’s no agreed upon thing to measure
qualifications against. This is an opportunity and a challenge—reaching out to
different stakeholders and there’s a way to create a knowledge playbook from
social scientists and others to find questions to be asked.

Keller: ADR?

Hogan: Intent is for when a mistake was made—to reach the
light of day/deter bad decisions. But the other clear bad outcome use is trolls
that harass you b/c you won’t host their content any more. Folks who feel most
entitled to do this can be very abusive and leverage that system to their advantage
for a while. But Bridy also points to different types of disputes, not really
about TOS. We see real attempts to remove critical content. In GDPR, we have a
number of complaints to data protection regulators captioned as privacy/right
to be forgotten/sprinkling in © where there’s a headshot, but it’s really a
defamation case. We're stuck in this go-between role; users are often scared and
there’s a powerful player on the other side. What do we do in that case? We
have a user-absent case in front of a data regulator where we say “we’re just a
processor, but this is an accusation of corruption against a public official,
so this really isn’t a right to be forgotten case.” There are also disputes over
what should be allowed—© cases around permissible use. Does that end up in ADR—is
that efficient/providing access to justice? But do we build up a body of law
behind closed doors that has a large impact on the internet? Users and
platforms might not know what’s allowed w/o published caselaw.

Chavannes: how much work will platforms put into these
cases? Search engines have litigated some RTBF cases, but sometimes they just
withdraw into the bushes. Do you invest in defending interpretations of
policies? Potentially unlimited number of institutes in member states
incentivized by cost structure to let users win—if that’s the case, what’s the
benefit of defending free expression, when users can just pick a provider that
will let them win?

Keller: can you opt out of targeted ads since you’re supposed
to be able to opt out of recommendations?

Chavannes: it would be very odd for recommender systems rules
to apply to ads; there was a legislative debate over banning targeted
advertising that was resolved against doing so, so it would be antidemocratic
to say otherwise. That being said, it’s not clear enough to absolutely prevent
a nation from saying it’s so. Ask in 10 years. Dealing w/uncertainty: take the
message certainly—a US lawyer is trained to read black letter law, but that’s
the worst way to think about DSA compliance. Work towards improving user
communication/transparency; that buys you the right to have discussions around the
edges around compliance. Clever lawyering is a bad way to deal with it; avoid
fines by trying to comply.

Riley: European process is designed to give power to
regulator to penalize behavior according to sense of direction they’re trying
to establish; this is very hard in the US common law system. Three phrases help
the point: coping with uncertainty; building for compliance; operational
balance.

Keller: legal and public policy teams should be talking to
one another. People in Brussels continue to matter as enforcers in a way that
they might not with other directives.

Fred von Lohmann: Implicit story about the competitive arena:
go with the spirit of the thing and don’t be a rules lawyer and try to find the
edge. That’s a fine thing if there are only a few big players. But if you’re in
a highly competitive environment, you can’t wait for an edgy competitor to be
fined out of existence—you’ll be out of business first. What I’ve heard in
these days is worry of trollification of DSA process—white supremacists crowdsourcing
50,000 complaints and 50,000 appeals, each of which requires human review. The
DSA is supposed to deter that with trusted flaggers, but the trolls won’t come
in that way; the other way is that you can temporarily suspend their accounts
after a period of notification. His experience is that they don’t care b/c they
generate 50,000 new accounts the next day. How do people feel about this trollification,
not just about ©? Is there enough in the DSA to protect against it?

Hogan: worries about that. Even under DMCA, we’ve sued abusers
and never got a penny. There’s no real downside to trolling. They just come
back. We’ve experienced at high volumes a pernicious organization on behalf of
rightsholders, flooding our system w/1000s of junk notices taking up ½ of our
queue, blocking legitimate claims. But if they have 10 claims that are good in
there, hard to ignore them. Worry about amplification of trolls, as well as the
80 year old blogger who is targeted for abuse—hard to create downsides for
that. Requires a big T&S team and investment of resources.

Perelman: we’ve been lucky so far and able to review
everything.

Chavannes: under the good guy theory, you should want to be in
that situation—don’t do the responses and then litigate to get a narrow
interpretation of the provisions from the CJEU.

Keller: Chavannes is a firm lawyer whose solution is to litigate
early and often.

Chavannes: cheaper than 50,000 responses!

Molle: you can use automated means to do this as long as
there’s human supervision—you could structure procedures in a way to take some
of the blows that way.

Keller: are spammers going to bother to sue?

Q: on cost benefit analysis—it’s one thing to have some
practices in place, but how should a US startup think about this? Should they
block the EU and wait to grow? Or should they try to comply early on?

Arcamona: Posting on blogs as way to influence policymakers:
we read them too! Cloud hosting in Chechnya, shut down for fear they couldn't
comply. That’s important.

Keller: comply a little, hope it looks ok, hope you don’t
get any attention?

Hogan: it depends on your userbase and economic value of EU;
if you know about the DSA, which you might not if you’re a 5 person startup. We’ve
seen publishers decide not to continue in EU under GDPR—it could happen.

Chavannes: is the US such a safe space for a fledgling
online platform these days? [I mean … still yes?]

Riley: Yeah, Texas and Florida are worrisome. But it depends
on how worried your lawyers are; are you really going to get noticed by the
regulators if you’re small?

Keller: you don’t just have to worry if you’re in Texas—any of
your users can bring a case.

Acosta: it’s common to pilot a feature in one jurisdiction
for a while to test it. It’s possible that they’ll stay out for a while but it’s
a big market so it will stay attractive.

Q: all of you are VLOPs. To what extent are you considering
core functionality and product changes until the dust settles? I would advise
going in the general direction but abstaining from significant changes you're not evidently required to make.

Hogan: not a VLOP!

Perelman: we are a very small VLOP; have to grapple with
that but with far fewer resources, including $ and tech, than others. That has
required creative lawyering—talk about what we have to do, what we should do.
Some product changes, but maybe there are some requirements that you do find
the bare minimum and see where the dust settles.

Samuelson: 6% of global turnover is big and other countries
will be attracted to it.

Arcamona: for Art. 17, we had to ramp up before it came into
effect. DSA adds some pressure to that. So you have to keep having dialogues
about tradeoffs. When the lawsuits come they will be fast and furious so
advance planning is the key.

Chavannes: the fine changes the tone of the conversation
w/regulators—amazed if it happens in first 5 years. The regulator can afford to
sit back and say “you have to do this” b/c you know that they have that option.

from Blogger http://tushnet.blogspot.com/2023/04/27th-annual-btlj-bclt-symposium-from_64.html


27th Annual BTLJ-BCLT Symposium: From the DMCA to the DSA: Panel 3: Intended and Unintended Consequences of the DSA

 Moderator: Pamela Samuelson, Berkeley Law School

From Notice-and-Takedown to Content Licensing and Filtering: How the Absence of UGC Monetization Rules Impacts Fundamental Rights

João Quintais, University of Amsterdam, with Martin Senftleben, University of Amsterdam

Human rights impact of the new rules. When we say notice and
takedown is no longer extant in EU, that’s not entirely true—DSA has it. We
look at safe harbors not from an industry perspective—allowing them to develop
services regardless of what users upload—but from a user human rights
perspective—we allow users to upload whatever they want and correct it later.
DSA has a © interface that says insofar as there’s specific © legislation it is
lex specialis and prevails over DSA. Thus, the safe harbor comes to an end
w/r/t © content. Art. 17 instead controls for OCSSPs. Licensing and filtering
replaces safe harbor.

That has a human rights impact. Risk of outsourcing human
rights obligations to private entities, and concealing these mechanisms by
putting responsibility for correcting excesses in hands of users. Regulator is
hiding behind other parties.

W/r/t platforms like YouTube, requirement of not allowing upload
w/o licensing is a major intrusion on user freedom, leading to public demonstrations
against upload filters that can’t distinguish b/t piracy and parody. DSA says
you have to take care of human rights in proportionate manner in
moderation/filtering. But regulator doesn’t do the job itself. Guardians of UGC
galaxy are the platforms themselves. Regulator hides behind industry and says
we’ve solved the issue.

Reliance on industry is core of Art. 17—cooperation b/t
creative industry and platform industry. One industry sends info on works that
should be filtered, and the platforms have to include that info in their
filtering systems. But regulator says that this cooperation shouldn’t result in
preventing noninfringing content—but how realistic is this? They aren’t at the
table maximizing freedom of communication; they are at the table maximizing
profit. That’s ok as a goal but these actors are not intrinsically motivated to
do the job assigned to them.

Concealment strategy: Empirical studies show that users are
not very likely to bring complaints; system excesses will stay under the radar.
Legislator suggests a best practice roundtable among big players. The ECJ has spoken in the Poland decision; the Court confirmed that this outsourcing/concealment strategy was ok, rubberstamping the approach. The Court
says the filtering systems have to distinguish b/t lawful and unlawful content,
but that’s an assumption and not a very realistic one. The incentive to filter
more than necessary is higher than the incentive to maximize freedom of
expression. The court says the filtering is only set in motion if rightsholders
send a notification, but again the incentives are to send lots of notices.

Audit system could be a solution, involvement of member
states could be a solution, but we have to discuss the issues openly.

Monetization is an area where the issues are particularly
bad. YouTube, Instagram, TikTok are the subjects: an action of obtaining
monetary benefit/revenue from UGC provided by the user. There are restrictions
in DSA on demonetization. Most discussion has been about upload filters, but YT’s
© transparency reports show that actually most of the action is not blocking
via filtering—it’s mostly monetization through Content ID. 98.9% of claims are
through Content ID, and 90% are monetized.

Art. 17 doesn’t say anything specific about monetization; it’s
all about filtering and staydown. Provisions for fair remuneration for authors
are very difficult to apply. Lots of actions are not covered by the regulation,
especially visibility measures and monetization. DSA has two clear provisions
covering statement of reasons for in platform complaint resolution/ADR, but most
of what users can do is ex post.

Big platform practices change regularly. Content ID and
Rights Manager (Meta) are the big hitters; you can get other third party
solutions that intermediate b/t rightsholders and platforms like Audible Magic
and Pex. Legibility of access to monetization in Content ID and Rights Manager:
available only to large firms. If you take Poland decision seriously, it should
only be filtered for manifestly infringing content, but these systems are
designed to allow parameters below the legal threshold.

On the monetization side, it’s billed as ex post licensing;
as a rule this is not available to small UGC creators, though there are
exceptions. There is a lack of transparency; left to private ordering.

Human rights deficits: Misappropriation of freedom of
expression spaces & encroachment on smaller creators’ fundamental right to ©.
If a use is permissible and does not require permission, larger rightsholders can nonetheless de facto appropriate and be given rights over content for which they have no legal claim. This creates the illusion that there's no expressive
harm b/c the content stays up, but there’s unjustified commercial exploitation.
UGC creator is often a © owner, with protection for that as a work. Only they
should logically be allowed to monetize. Interferes w/ UGC creator’s
fundamental right, but this is not a default option on the platform for
historical reasons, leaving them only to ex post remedies, which are weak
sauce.

Recommendations: bring these problems to light. Audit
reports should focus on these questions. Human rights safeguard clause: must
take care to protect parody, pastiche—if they pass the filter they should not
lead to monetization. Confining filtering to manifestly infringing UGC also
helps. German solution: collective licensing schemes w/a nonwaivable remuneration
right for UGC creators, not just to industry players. Inclusion of creative
users in a redesign of a more balanced monetization system.

An Economic Model of Intermediary Liability         

James Grimmelmann, Cornell Law School; Cornell Tech

Economic claims about effects of liability regimes are
common. Chilling effects; whether platforms do or don’t have sufficient
incentives to police content; claims that 230/DMCA don't adequately balance free
expression with safety concerns; etc. These are statements about incentives,
but primarily policy arguments, not empirically tested. There are some
empirical studies, but our project is to create an economic model to provide a
common framework to compare arguments. This can help create predictions and visualize
the effects of different rules. Mathematical model forces us to make explicit
the assumptions on which our views rest.

Start w/question of strict liability v. blanket immunity;
look at possible regimes; map out core elements of 512, DSA, and 230. Not going
to talk about policy responses to overmoderation/must-carry obligations or
content payment obligations.

Basic model: users submit discrete items of content. Each
item is either harmful or harmless. Platform chooses to host or take down. If
hosted, platform makes some money; society gets some benefits; if the content
is harmful, third party victims suffer harm. Key: platform does not know
w/certainty whether content is harmful, only the probability that it is. These are
immensely complicated functions but we will see what simplification can do.
[What happens if harmful content is profitable/more profitable than nonharmful
content? Thinking about FB’s claim that content that is close to the line gets
the most engagement.]

A rational moderator will set a threshold. Incorporates judgments
about acceptable risk of harm in light of likelihood of being bad v benefits of
content to platform and society.

The optimal level of harmful content is not zero: false
positives and false negatives trade off. We tolerate bad content b/c it is not
sufficiently distinguishable from the good, valuable content. Users who post
and victims harmed may have lots of info about specific pieces—they know which
allegation of bribery is true and which is not—but platforms and regulators are
in positions of much greater uncertainty. Platform can’t pay investigators to
figure out who really took bribes.

Under immunity, platform will host content until it’s individually
unprofitable to do so (spam, junk, other waste of resources). This might result
in undermoderation—platform’s individual benefit is costly for society. But it’s
also possible that platform might overmoderate if platforms take stuff that’s
not profitable down but was net beneficial for society. There is no way to know
the answer in the abstract; depends on specifics of content at issue.

Focusing on the undermoderation case: one common law-and-econ response is strict liability. But the platform's private benefit is always less than the benefit to society—strict liability will always overmoderate, removing content that would be unprofitable to host under strict liability but would be beneficial to society. This is Felix Wu's theory of collateral censorship: good content has external benefits and is not distinguishable from bad from the outside. If those two things are true, strict liability won't work b/c the platform internalizes all the harm, but not all the benefit.
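This threshold logic can be made concrete with a tiny numerical sketch (all figures are invented for illustration, not from the talk): a platform hosts an item when its expected payoff from hosting is non-negative, but the payoff it counts differs by regime.

```python
# Hypothetical sketch of the threshold model discussed above (not the
# speaker's code). An item has probability p of being harmful. Per-item
# values, all made up:
#   b_platform: platform's private benefit from hosting
#   b_society:  total social benefit from hosting (includes b_platform)
#   harm:       harm to victims if the item turns out to be harmful

def host_threshold(benefit: float, harm: float) -> float:
    """Highest probability-of-harm p at which hosting still has
    non-negative expected value: benefit - p * harm >= 0."""
    return min(1.0, benefit / harm)

b_platform, b_society, harm = 2.0, 10.0, 20.0

# Under blanket immunity the platform bears none of the harm, so it hosts
# everything individually profitable, whatever p is.
# Under strict liability it internalizes all harm but only its own benefit:
t_strict = host_threshold(b_platform, harm)   # -> 0.1
# The social planner internalizes all benefit and all harm:
t_social = host_threshold(b_society, harm)    # -> 0.5

# Because b_platform < b_society, t_strict < t_social: strict liability
# removes every item with p between the two thresholds that society would
# prefer to keep up -- Wu's collateral censorship.
print(t_strict, t_social)
```

The gap between the two thresholds is the collateral-censorship zone; it shrinks only as the platform's private benefit approaches the full social benefit of the content.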

Other possible liability regimes: actual knowledge, when no
investigation is required—this allows costless distinctions between good and
bad. But does actual knowledge really mean actual knowledge or is it a
shorthand for probabilistic knowledge of some kind? Intuition is that notices
lower cost of investigation. Fly in the ointment: notices are signals conveying
information. But the signal need not be true. When investigations cost money,
many victims will send notice w/o full investigation. Turns out notices
collapse into strict liability—victims send notices for everything. Must be
costly to send false notices; 512(f) could have done this but courts gutted it.
The DSA does a better job, with some teeth behind "don't send too many false notices." Trusted
flagger system is another way to deal with it.

Negligence is another regime—liability when content is more likely than not to be harmful. Red flag knowledge under the DMCA. Gives us a threshold for platform action.
Conditional immunity is different: based on total harm caused by the platform—if
too much, platform is liable for everything. This is how the repeat infringer
provisions of 512 work: if you fail to comply, you lose your safe harbor
entirely. These can be subtly different. Regulator has to set threshold
correctly: a total harm requirement requires more knowledge of the shape of the
curve b/c that affects the total harm; the discontinuity at the edge is also
different.
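The subtle difference between a per-item negligence rule and a total-harm conditional-immunity rule can be sketched as follows (all numbers invented for illustration):

```python
# Illustrative contrast (hypothetical numbers) between two regimes:
# negligence imposes liability item by item above a probability threshold;
# conditional immunity looks at aggregate harm and, past a cap, the
# platform loses the safe harbor entirely and is liable for everything.

items = [0.1, 0.3, 0.6, 0.8]   # probability each hosted item is harmful
HARM_PER_ITEM = 10.0

def negligence_liability(probs, threshold=0.5):
    # liable only for items more likely than not (p > threshold) to be harmful
    return sum(p * HARM_PER_ITEM for p in probs if p > threshold)

def conditional_immunity_liability(probs, total_harm_cap=15.0):
    expected_total = sum(p * HARM_PER_ITEM for p in probs)
    # below the cap: fully immune; above it: liable for all expected harm
    return 0.0 if expected_total <= total_harm_cap else expected_total

print(negligence_liability(items))            # only the 0.6 and 0.8 items count
print(conditional_immunity_liability(items))  # over the cap, so everything counts
```

Note the discontinuity: raise the cap above the platform's total expected harm and conditional-immunity liability drops to zero, while negligence liability is unchanged—which is why setting a total-harm threshold requires the regulator to know the shape of the harm curve, not just a per-item probability.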

512 is a mix: it has a negligence provision; a financial
benefit provision—if it makes high profits from highly likely to be bad
content; repeat infringers. DSA has both actual knowledge and negligence
regimes. Art. 23 requires suspension of users who provide manifestly illegal
content, but only as a freestanding obligation—they don’t lose the safe harbor
for doing it insufficiently; they simply pay a fine. 230 is immunity across the
board, but every possible regime has been proposed. It is not always clear that
authors know they propose different things than each other—negligence and
conditional immunity look very similar if you aren’t paying attention to
details.

Although this is simplified, it makes effects of liability
rules very obvious. Content moderation is all about threshold setting.

Interventions: Rebecca Tushnet, Harvard Law School

Three sizes fit some: Why Content Regulation Needs Test
Suites

Despite the tiers of regulation in the DSA, and very much in
Art. 17, it’s evident that the broad contours of the new rules were written
with insufficient attention to variation, using YouTube and Facebook as
shorthand for “the internet” in full. I will discuss three examples of how that
is likely to be bad for a thriving online ecosystem and offer a
suggestion. 

The first issue is the smallest but reveals the underlying
complexity of the problems of regulation. As Martin Husovec has written in The DSA's Scope Briefly Explained, https://ift.tt/0iB3HSG:

Placement in some of the tiers is defined by reference to
monthly active users of the service, which explicitly extends beyond registered
users to recipients who have “engaged” with an online platform “by either
requesting the online platform to host information or being exposed to
information hosted by the online platform and disseminated through its online
interface.” Art. 3(p). While Recital 77 clarifies that multi-device use by the
same person should not count as multiple users, that leaves many other
measurement questions unsettled, and Husovec concludes that “The use of proxies
(e.g., the average number of devices per person) to calculate the final number
of unique users is thus unavoidable. Whatever the final number, it always
remains to be only a better or worse approximation of the real user base.” And
yet, as he writes, “Article 24(2) demands a number.” This obligation applies to
every service because it determines which bucket, including the small and micro
enterprise bucket, a service falls into.

This demand is itself based on assumptions about how online
services monitor their users that are simply not uniformly true, especially in
the nonprofit or public interest sector. It seems evident—though not specified
by the law—that a polity that passed the GDPR would not want services to engage
in tracking just to comply with the requirement to generate a number. As
DuckDuckGo pointed out, by design, it doesn’t “track users, create unique
cookies, or have the ability to create a search or browsing history for any
individual.” So, to approximate compliance, it used survey data to generate the
average number of searches conducted by users—despite basic underlying
uncertainties about whether surveys could ever be representative of a service
of this type—and applied it to an estimate of the total number of searches
conducted from the EU. This doesn’t seem like a bad guess, but it’s a pretty
significant amount of guessing.
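The survey-based arithmetic described here is simple but fragile. A hypothetical version of the calculation (all numbers invented, not DuckDuckGo's) looks like this:

```python
# Back-of-the-envelope MAU estimation of the kind the text describes:
# a survey-derived searches-per-user average applied to a total search
# count. Every number below is made up for illustration.

eu_searches_per_month = 3_000_000_000      # total searches from the EU (invented)
searches_per_user_per_month = 75           # survey-based average (invented)

estimated_mau = eu_searches_per_month / searches_per_user_per_month
print(f"{estimated_mau:,.0f}")             # -> 40,000,000
```

The estimate is only as good as the denominator: if the survey average is off by a factor of two—entirely plausible for a service that by design keeps no per-user records—the MAU figure is off by a factor of two as well, which is the "pretty significant amount of guessing" at issue.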

Likewise, Wikipedia assumed that the average EU visitor used
more than one device, but estimated devices per person based on global values
for 2018, rather than for 2023 or for Europe specifically. Perhaps one reason
Wikipedia overestimated was because it was obviously going to be regulated no
matter what, so the benefits of reporting big numbers outweighed the costs of
doing so, as well as the stated reason that there was “uncertainty regarding
the impact of Internet-connected devices that cannot be used with our projects
(e.g. some IoT devices), or device sharing (e.g. within households or
libraries).” But it reserved the right to use different, less conservative
assumptions in the future. In addition, Wikipedia also noted uncertainty about
what qualified as a “service” or “platform” with respect to what it did—is
English Wikipedia a different service or platform for DSA purposes than Spanish
Wikipedia? That question obviously has profound implications for some services.
And Wikipedia likewise reserved the right to argue that the services should be
treated separately, though it’s still not clear whether that would make a
difference if none of Wikipedia’s projects qualify as micro or small
enterprises.

The nonprofit I work with, the Organization for
Transformative Works (“OTW”) was established in 2007 to protect and defend fans
and fanworks from commercial exploitation and legal challenge. Our members make
and share works commenting on and transforming existing works, adding new
meaning and insights—from reworking a film from the perspective of the
“villain,” to using storytelling to explore racial dynamics in media, to
retelling the story as if a woman, instead of a man, were the hero. The OTW’s
nonprofit, volunteer-operated website hosting transformative, noncommercial
works, the Archive of Our Own, as of late 2022 had over 4.7 million registered
users, hosted over 9.3 million unique works, and received approximately two
billion page views per month—on a budget of well under a million dollars. Like
DuckDuckGo, we don’t collect anything like the kind of information that the DSA
assumes we have at hand, even for registered users (which, again, are not the
appropriate group for counting users for DSA purposes). The DSA is written with
the assumption that platforms will be extensively tracking users; if that isn’t
true, because a service isn’t trying to monetize them or incentivize them to
stay on the site, it’s not clear what regulatory purpose is served by imposing
many DSA obligations on that site. The dynamics that led to the bad behavior
targeted by the DSA can generally be traced to the profit motive and to
particular choices about how to monetize engagement. Although DuckDuckGo does
try to make money, it doesn’t do so in the kinds of ways that make platforms
seem different from ordinary publishers. Likewise, as a nonprofit, the Archive
of Our Own doesn’t try to make itself sticky for users or advertisers even
though it has registered accounts.

Our tracking can tell us how many page views or requests
we’re getting a minute and how many of our page views come from which browsers,
since those things can affect site performance. We can also get information on
which sorts of pages or areas of the code see the most use, which we can use to
figure out where to put our energy when optimizing the code/fixing bugs. But we
can’t match that up to internal information about user behavior. We don’t even
track when a logged in account is using the site—we just record the date of
every initial login, and even if we could track average logins per month, a
login can cover many, many visits across months. The users who talk to us
regularly say they use the site multiple times a day; we could divide the
number of visits from the EU by some number in order to gesture at a number of
monthly average users, but that number is only a rough estimate of the proper
order of magnitude. Our struggles are perhaps extreme but they are clearly not
unique in platform metrics, even though counting average users must have
sounded simple to policymakers. Perhaps the drafters didn’t worry too much
because they wanted to impose heavy obligations on almost everyone, but it
seems odd to have important regulatory classes without a reliable way to tell
who’s in which one.
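The back-of-envelope division described above can be sketched in a few lines; a minimal sketch, assuming purely hypothetical visit counts and visits-per-user rates (none of these numbers are real Archive of Our Own figures):

```python
# Rough monthly-average-user estimate from aggregate visit counts, for a
# service that keeps no per-user tracking. All numbers are hypothetical
# illustrations, not real Archive of Our Own data.

def estimate_mau(monthly_visits: int, visits_per_user: float) -> int:
    """Divide aggregate visits by an assumed visits-per-user rate.

    The result is only an order-of-magnitude gesture: the divisor is a
    guess, since a single login can cover many, many visits across months.
    """
    if visits_per_user <= 0:
        raise ValueError("visits_per_user must be positive")
    return round(monthly_visits / visits_per_user)

# The estimate swings wildly with the assumed divisor, which is the point:
low = estimate_mau(600_000_000, visits_per_user=30)   # assumes heavy users
high = estimate_mau(600_000_000, visits_per_user=5)   # assumes light users
```

With the same hypothetical 600 million monthly visits, the "MAU" figure moves by a factor of six depending on an unverifiable assumption, which is why the resulting number can only gesture at the proper order of magnitude.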

These challenges in even initially sorting platforms into
DSA categories illustrate why regulation often generates more
regulation—Husovec suggests that, “[g]oing forward, the companies should
publish actual numbers, not just statements of being above or below the 45
million user threshold, and also their actual methodology.” But even that, as
Wikipedia and DuckDuckGo’s experiences show, would not necessarily be very
illuminating. And the key question would remain: why is this important? What
are we afraid of DuckDuckGo doing and is it even capable of doing those things
if it doesn’t collect this information? Imaginary metrics lead to imaginary
results—Husovec objects to porn sites saying they have low MAUs, but if you
choose a metric that doesn’t have an actual definition it’s unsurprising that
the results are manipulable.

My second example of one-size-fits-some design draws on the
work of LL.M. student Philip Schreurs in his paper, Differentiating Due Process
In Content Moderation: Along with requiring hosting services to accompany each
content moderation action affecting individual recipients of the service with
statements of reasons (Art. 17), platforms that aren’t micro or small
enterprises have due process obligations, not just for account suspension or
removal, but for acts that demonetize or downgrade any specific piece of
content.

Article 20 DSA requires online platform service providers to
provide recipients of their services with access to an effective internal
complaint-handling system; although there’s no notification requirement before
acting against high-volume commercial spam, platforms still have to provide
redress systems even for such spam. Platforms’ decisions on
complaints can’t be based solely on automated means.

Article 21 DSA allows users affected by a platform decision
to select any certified out-of-court dispute settlement body to resolve
disputes relating to those decisions. Platforms must bear all the fees charged
by the out-of-court dispute settlement body if the latter decides the dispute
in favor of the user, while the user does not have to reimburse any of the
platforms’ fees or expenses if they lose, unless the user manifestly acted in
bad faith. Nor are there other constraints on bad-faith notification, since
Article 23 prescribes a specific method to address the problem of repeat
offenders who submit manifestly unfounded notices: a temporary suspension after
a prior warning explaining the reasons for the suspension. The platform must provide the notifier with
the possibilities for redress identified in the DSA. Although platforms may
“establish stricter measures in case of manifestly illegal content related to
serious crimes,” they still have to provide these procedural rights.

This means that due process requirements are the same for
removing a one-word comment as for removing a one-hour video; for removing a
politician’s entire account and for downranking a single post by a private
figure that uses a slur. Schreurs suggests that the process due should instead
be more flexible, depending on the user, violation, remedy, and type of
platform.

The existing inflexibility is a problem because every anti-abuse
measure is also a mechanism of abuse. There seem already to be
significant demographic differences in who appeals a moderation decision, and
this opens up the possibility of use of the system to harass other users and
burden platforms, discouraging them from moderating lawful but awful content,
by filing notices and appealing the denial of notices despite the supposed
limits on bad faith. Even with legitimate complaints about removals, there will
be variances in who feels entitled to contest the decision and who can afford
to pay the initial fee and wait to be reimbursed. That will not be universally
or equitably available. The system can easily be weaponized by online
misogynists who already coordinate attempts to get content from sex-positive
feminists removed or demonetized. We’ve already seen someone willing to spend
$44 billion to get the moderation he wants, and although that’s an outlier there
is a spectrum of willingness to use procedural mechanisms including to harass.

One result is that providers’ incentives may well be to cut
back on moderation of lawful but awful content, the expenses of which can be
avoided by not prohibiting it in the terms of service or not identifying
violations, in favor of putatively illegal content. But forcing providers to
focus on decisions about, for example, what claims about politicians are false
and which are merely rhetorical political speech may prove unsatisfactory; the
difficulty of those decisions suggests that increased focus may not help
without a full-on judicial apparatus.

Relatedly, the expansiveness of DSA remedies may water down
their realistic availability in practice—reviewers or dispute resolution
providers may sit in front of computers all day, technically giving human
review to automated violation detection but in practice just agreeing that the
computer found what it found, thus allowing the human to complete thousands of
reviews per day, as ProPublica has found with respect to human doctor review of
insurance denials at certain US insurance companies.

And, of course, the usual anticompetitive problems of
mandating one size fits all due process are present: full due process for every
moderation decision benefits larger companies and hinders new market entrants.
Such a system may also encourage designs that steer users away from
complaining, like BeReal’s intense focus on selfies or TikTok’s continuous-flow
system that emphasizes showing users more like what they’ve already seen and
liked—if someone is reporting large amounts of content, perhaps they should
just not be shown that kind of content any more. The existing provisions for
excluding services that are only ancillary to some other kind of product—like
comments sections on newspaper sites, for example—are partial at best, since it
will often be unclear what regulators will consider to be merely ancillary. And
the exclusion for ancillary services enhances, rather than limits, the problem
of design incentives: it will be much easier to launch a new Netflix competitor
than a new Facebook competitor as a result.

©-specific rules are not unique: they are subject to the same
problem of legislating for YouTube as if YouTube were the internet. The
framework assumes that all OCSSPs are subject to the same risks. But Ravelry—a site focused on the fiber
arts—is not YouTube. Cost benefit analysis is very different for a site that is
for uploading patterns and pictures of knitting projects than for a site that
is not subject-specific. Negotiating with photographers for licensing is very
different than negotiating with the music labels, but the framework assumes
that the licensing bodies will be functioning pretty much the same no matter
what type of work is involved. Sites like the Archive of Our Own receive very
few valid © claims per work uploaded, per time period, per any metric you want
to consider, and so the relative burden of requiring YouTube-like licensing is
both higher and less justified. My understanding is that the framework may be
flexible enough to allow a service to decide that it doesn’t have enough of a
problem with a particular kind of content to require licensing negotiations,
but only if the authorities agree that the service is a “good guy.” And it’s
worth noting, since both Ravelry and the Archive of Our Own are heavily used by
women and nonbinary people, that the concept of a “good guy” is likely both
gendered and racially coded, which makes me worry about its application.

Suggestion: Proportionality is much harder to achieve than
just saying “we are regulating more than Google, and we will make special
provisions for startups.” To an American like me, the claim that the DSA has
lots of checks and balances seems in tension with the claim yesterday that the
DSA looks for good guys and bad guys—a system that works only if you have very
high trust that the definitions of same will be shared.

Regulators who are concerned with targeting specific
behaviors, rather than just decreasing the number of online services, should
make extensive use of test suites. Daphne Keller of Stanford and Mike Masnick
of Techdirt proposed this two years ago. Because regulators write
with the giant names they know in mind, they tend to assume that all services
have those same features and problems—they just add TikTok to their
consideration set along with Google and Facebook. But Ravelry has very
different problems than Facebook or even Reddit. Wikipedia was big enough to make
it into the DSA discussions, but the platforms burdened most are those that
haven’t built the automated systems the DSA essentially requires; they are now
required to do things that Facebook and Google weren’t able to do until they
were much, much bigger.

A few examples of services that many people use but not in
the same way they use Facebook or Google, whose design wasn’t obviously
considered: Zoom, Shopify, Patreon, Reddit, Yelp, Substack, Stack Overflow,
Bumble, Ravelry, Bandcamp, LibraryThing, Archive of Our Own, Etsy.

The more complex the regulation, the more regulatory
interactions need to be managed. Thinking about fifty or so different models,
and considering how and indeed whether they should be part of this regulatory
system, could have substantially improved the DSA. Not all process should be
the same just like not all websites should be the same, unless we want our only
options to be Meta and YouTube.

Q: Another factor: how do we define harm and who defines it—that’s
a key that’s being left out. Someone stealing formula from Walgreens is harmful
but wage theft isn’t perceived as the same harm.

Grimmelmann: Agree entirely. Model takes as a given that regulator
has a definition of harm, and that’s actually hugely significant and contested.
Distribution of harms is also very important—who realizes harm and under what
conditions.

Q: Monetization on YT: for 6-7 years, there’s been
monetization during dispute. If rightsholder claim seems wrong to user, content
stays monetized until dispute is resolved. We might still be concerned over claims
that should never have been made in the first place. YT has a policy about
manual claims made in Content ID. Automatic matching doesn’t make de minimis
claims; YT changed policy for manual claims so they had to designate start and
stop of content experience and that designation had to be at least 10 seconds
long. A rightsholder who believes that 9 seconds is too much can submit a
takedown, but not monetize. Uploaders that sing cover songs have long been able
to share revenue w/© owners.

Quintais: the paper goes into more detail, but it’s not
clear that these policies are ok under new EU law. [Pastiche/parody is the
obvious problem since it tends to last more than 10 seconds.] Skeptical about
the monetization claims from YT; YT says there are almost no counterclaims. If
the system can’t recognize contextual uses, which are the ones that are
required by law to be carried/monetized by the uploader? A lot of monetization
claims are allegedly incorrect and not contested. Main incentive of YT is
pressure from rightsholders w/access to the system further facilitated by Art.
17.

Q: platforms do have incentives to care about fundamental
rights of users. We wouldn’t need a team at YT to evaluate claims if we just
took stuff down every time there was a claim. [You also wouldn’t have a service—your
incentives are to keep some stuff up, to be sure, but user demand creates
a gap as Grimmelmann’s paper suggests.]

Quintais: don’t fundamentally disagree, but Art. 17 leaves
you in a difficult position.

Hughes to Grimmelmann: Why assume that when harm goes up,
societal benefit goes down? Maybe as harm to individual goes up so does
societal benefit (e.g. nude pictures of celebrities).

A: disagrees w/example, but could profitably put a
consideration of that in model.

from Blogger http://tushnet.blogspot.com/2023/04/27th-annual-btlj-bclt-symposium-from_5.html


27th Annual BTLJ-BCLT Symposium: From the DMCA to the DSA: Shira Perlmutter, Register of Copyrights, Keynote on emerging tech

Shira Perlmutter

AI detection and AI generation of content: CO has role to
play in applications for registration and as advisors to Congress/exec branch
on © law and policy.

Use of tech measures to detect works online: we published a
long report on 512 concluding that it has become unbalanced and should be
updated, including definitions for standard technical measures (STMs). Service
providers have an obligation to accommodate and not interfere w/STMs; they hadn’t
yet been figured out, and providers shouldn’t have to use them but should
also not interfere w/their use. 512(i) provided that they had to be developed
in a fair, multistakeholder process and had to be available fairly and w/o
undue burden on providers. But not a single tech has been designated as an STM;
we could make the vision a reality. [Ah yes, get the nerds on it.] We conducted
a public study on STMs and also panels on voluntary tech measures. We
recommended three changes to the statute: clarifying that the terms “broad consensus”
and “multi-industry” require substantial consensus, not unanimity, and only
w/r/t the industries in question; replacing “developed by a broad consensus” w/ “designated
by a broad consensus,” covering measures even if they were initially developed
by a narrow group; and setting out factors for determining whether there were
substantial costs or burdens on service providers. We didn’t recommend a gov’t designation process or repealing
512(i) entirely. Given complexities of evolving tech, weren’t convinced gov’t
process would work; an improved consensus-based framework could play a role
in curbing infringement, though it’s still an open question whether everyone
can be gotten to the table.

Voluntary measures received 6000 public comments. No real
surprises. Diversity of online marketplace has generated increasingly wide
variety, precluding one size fits all. Effective tech measures share:
inclusivity; collaboration; communication; and transparency. Many participants
expressed frustration when these elements were missing. Future initiatives
would benefit from ensuring these attributes. As in EU discussions, other areas
of divergence related to resources and access—resource intensive measures
remain problematic for small rightsowners and small services. Small service
providers, including startups, don’t have capital to invest in expensive
technical measures. But others respond that limits on size and resources
shouldn’t excuse putting in place protections if the platform is distributing
content to the public. [Sadly, they don’t mean “if the platform is distributing
infringing content.”] Access was also controversial w/individual rights holders
who were frustrated at not being included in discussions; but there is also a
risk of intentional or unintentional misuse.

NFTs: we’re conducting a joint study w/PTO on IP issues.
Participants in roundtable didn’t ask for statutory change specific to ©;
automated royalties for token resale were exciting. But unclear if it’s
enforceable downstream particularly if it moves b/t markets. And a purported
transfer of © can be difficult if there are off-chain terms and conditions.
Sending a takedown notice to a marketplace can block sale, but doesn’t get rid
of the actual putatively infringing content. And there are jurisdictional
challenges and challenges in identifying source of content.

Current hottest area is AI. Does Act restrict authorship to
humans? Fed Cir said inventors had to be human; Thaler is arguing that © should
recognize AI authors. We don’t think we acted arbitrarily and capriciously in
so holding. What about works produced through a combination of human and machine?
What type of human contribution is enough? Comic book case: we registered the
work based on an application identifying a single human author that didn’t
disclose use of Midjourney. When registrant claimed publicly that it was
AI-generated, we asked for more information, and determined that, b/c of the
way the tech works, the individual images lacked sufficient human authorship,
but issued a more limited registration that covered the human-authored
elements: text and selection/coordination/arrangement of the images. Given the
increasing number of applications for AI works, additional guidance was needed,
so we issued a clarifying statement. Our goal was to help people avoid problems
w/validity of registration. Affirms the human authorship requirement and
instructs applicants of duty to disclose inclusion of significant AI generated
content and provide brief explanation of human author’s contributions. Further questions
will be addressed on a case by case basis since new hypotheticals keep coming
up. This is not the end of our guidance.

How do we distinguish b/t use of AI as tool, like Photoshop,
and AI-generated content? Continued work!

AI datasets trained on © images scraped from websites: is
that ok? Getty has sued Stability AI in Delaware and UK alleging that it
scraped 12 million images to train Stable Diffusion; also a proposed class
action. Key question is whether exceptions apply—text and data mining exceptions
in many countries, and fair use in US. Result will likely depend on nature of
output and effect on market; courts may view ingestion for research differently
than ingestion for producing content that competes in the market. Spectrum: output
is substantially similar; output is similar style; output appeals to same
audience—courts may treat differently. Who would be held responsible for any
infringement? Owner of computer, programmer, prompter?

We’re not done even if we clarify existing law: need to
consider whether existing law should be changed. Would it promote progress, as
Thaler says, to grant rights in AI generated content? What about the fact that
the Clause specifies granting rights to Authors as the means to promote
progress? It’s hard to incentivize a machine; do we need to incentivize the
machine’s owners more than they already are? Can “authors” include machines?
Can some other constitutional clause form the basis for sui generis rights? To
the extent that fair use or another exception applies, should there be
accommodation for human creators of the works, if not authorization then
attribution or remuneration? Certain services have established voluntary
remuneration of some kind. Questions are always easier than answers, but we’ll
be looking at all of these. Launched broader AI initiative to address scope of ©
and also legal status/implications of ingesting © works in training AI, holding
listening sessions with creators, lawyers, technologists. Will hold informational
webinars over summer and publish notice of inquiry afterwards intended to
inform a report/series of reports analyzing the issues.

from Blogger http://tushnet.blogspot.com/2023/04/27th-annual-btlj-bclt-symposium-from_71.html


27th Annual BTLJ-BCLT Symposium: From the DMCA to the DSA: Panel 2: Will the DSA Achieve a “Brussels Effect”?

 Moderator: Martin Senftleben, University of Amsterdam

Copyright Law and/or/vs. a ‘Brussels Effect’ for the Digital
Services Act

Jennifer Urban, Berkeley Law School

The Brussels Effect claim is descriptive, not predictive—can
it apply to the DSA? Criteria favoring a Brussels Effect are market size
controlled by regulator—a lot of firms want to compete there; regulatory
capacity/institutional expertise; stringent standards. Most likely to happen
when the EU standard is the most stringent. That ties to nondivisibility—firm’s
choice based on incentives/cost benefit analysis to treat their activities as
nondivisible/nongeofenced. Inelastic targets are another factor: © holders,
members of the public, potential infringers.

Consider here stringent standards and nondivisibility.

Bradford’s four examples of the Brussels Effect v.
copyright: (1) Food/chemicals (physicality, unlike ©; easy to establish
stringent rules b/c it’s allowed v. disallowed or level of chemical is allowed
but not more; generally good observability/enforceability; first mover tends to
be clear, though who it is can vary—banning hydrogenated oils in certain kinds
of foods); (2) privacy (like ©, lacks physicality; not binary but complex and
nuanced; first mover clear: EU competing against regimes that were
comparatively much less developed, creating a vacuum that could be
filled; strong first mover effect creates obvious stringency differential
initially even with nuance and balance-seeking; this could change later and
depends on stickiness of baseline regime). (3) competition and (4) digital
economy—copyright is potentially an aspect of both of those; DSA also relevant.

Copyright lacks physicality; isn’t binary, complex, nuanced,
balancing—what even is stringency in this context? Highly developed,
longstanding sectoral systems put in place over hundreds of years.
Traditionally supported a highly segmented, explicitly territorial market approach
by multinationals. Very sticky. EU has been first mover on some things and not
others (Art. 17 v notice and takedown).

A service provider decides what it must do under 512 and the
DSA/Art. 17—figures out where it fits in the ecosystem of regulated actors. Then
figures out what it must do: in US, don’t engage in primary or secondary
infringement. In EU: do not directly infringe; DSA obligations—T&Cs,
transparency, notice & action, statement of reasons, allow
complaint/appeal, ADR, trusted flaggers, etc. Then figures out what it can
do: choose to comply w/512’s detailed rules, deciding details of implementation
and making removal decisions. In EU, they get the safe harbor no matter what,
and they decide terms of service, details of implementation, make removal
decisions.

Which regime to choose? DMCA is a post-office model: you just
respond to notices and send to users, just passing things on. DSA: they’re
designer, adjudicator, rights protector, systemic risk avoider—much more detailed,
rule-oriented role if they choose DSA path. If we think the most stringent rule is
more likely adopted worldwide, that looks good for the DSA. The DSA takes into account many
lessons from the notice and takedown system and intermediary liability in US and
other countries—balance, notification to complainer whether something came
down, dispute resolution processes. Both more stringent to benefit of
users/public and to benefit of © owners in some important ways; many big
services do these things already.

Complications: what does stringency mean in this context?
Infringement/infringers? Inaccurate/abusive © claims/complainants? Who are we
protecting/benefiting? Copyright holders/fair users? Incentives for new
copyright protected expression? Innovation and follow on creativity?

Nondivisibility: voluntary adoption of second jurisdiction’s
rules where standardization is attractive/incentivized. Must consider entire
regime: downside risk, directionality of risk—structural and practical bias
towards takedown under 512 remains; downside risk is disproportionately from
one direction—downside of false/inaccurate takedowns or filters is much lower
than downsides of mistaken stay-ups. This is a form of stringency and affects
incentives for non-divisibility, especially given US statutory damages and
injunctions. No balancing required—recognizing fair use is not required for
platforms, nor is giving users much procedure. Stringency can thus be
non-obvious, contested/contestable, and downside risk matters.

Guess: many service providers will therefore stick with 512;
US wins, but maybe for the wrong reasons.

The DSA Trusted Flagger Regime and Its Interplay with
Article 17 DSMD in the Aftermath of CJEU, Poland – A Promising Model?

Eleonora Rosati, Stockholm University

Trusted flaggers moved from practice to statutory
regulation, and are relevant to Art. 17. YT has had trusted flaggers since 2012; now only NGOs
and gov’t agencies qualify. YT for 2022: most removals were automated (5.2 million),
250,000 from users, 51,000 from organizations, a few from gov’t.

DSA institutionalized trusted flaggers in Art. 22: rationale
to make action against illegal content quicker/more reliable. Eligibility:
entities w/expertise and competence in detecting, identifying and notifying
illegal content, in a way that’s diligent, accurate and objective; independence
from platform; private entities and individuals can conclude bilateral agreements
w/platforms. Recognition by DS Coordinator of member state where established;
must be recognized by all platforms targeted by DSA. Obliged to do annual
reporting at least on types of notices to whom, for what, and resulting action;
suspension of trusted flagger status is possible during an investigation prompted
by a significant number of imprecise/inaccurate/unsubstantiated notices.

At least in certain sectors, trusted flaggers will play a
significant role in ©/Art. 17 for acting expeditiously in response to notice.
CJEU Poland: one limit the court focused on is that rightsholders have to
provide relevant and necessary information. Maybe helpful in creating a global
standard. But there is a targeting approach—you don’t need to be in the EU to
be covered by the DSA if you target the territory—this can come from language
used, currency, possibility of ordering products or services, relevant
top-level domain, can you purchase/download app from local app store, local
advertising or advertising in an EU language, customer relations including
language used. [I wonder what other Spanish- and Portuguese-speaking and
Francophone countries think of this.]

The Global Internet and Its Workable, Bespoke, Patchwork
Regulation     

Justin Hughes, Loyola University Los Angeles

John Perry Barlow’s Declaration of Independence of
Cyberspace: a full repudiation of gov’t authority or moral right to
rule cyberspace. DSA: Admirable technocrats’ work, even with much that is
unknown or could have been done differently. So many working parts have been
brought together, requiring consultation and compromise. Although we’re here to
bury the Ecommerce directive, not to praise it, it was easy and fun for
students, and he’s not sure how or if the DSA can be taught, except on its own.

He’s always advocated a local, evolutionary approach
w/convergence in legal norms, as w/notice and takedown. Still happening in some
areas. But Art. 17 made it obvious that defection from 2000s consensus on
intermediary liability was happening, and DSA is next step in ending that global
consensus. Brussels effect is a decision to conform private behavior to a single
standard v. uncoordinated legal regimes following other regimes to convergence.

Assuming there’s a difference b/t notice and takedown and notice
and action, will it go to the US? Prediction: no. Content owners won’t agree to
a revision of 512, at least w/o a long period of seeing what platforms will do.
1A will also complicate any efforts to adopt many DSA elements.

Imagine an EU domiciliary uploads a © work belonging to an
American to a platform that’s not an OCSSP under Art. 17 or a VLOP under the DSA. The sender sends a DMCA
notice. What does the platform do? We don’t have any court decisions saying what
the outer limits of expeditious takedown are. Maybe platform adjusts by complying
with both regimes—could have a quiet, virtually hidden effect on DMCA compliance.
But it’s not hard to immediately disable access to all US IP addresses while
taking a bit longer to, per DSA, take a decision in respect of the information
to which the notice relates in a timely, diligent and non-arbitrary manner. Indeed,
this might be a good way for a platform to show that it was taking DSA seriously.
Don’t assume that Brussels and California effect that occurs w/privacy also
occurs when corps can adjust tech granularly.

Globally speaking, car manufacturing is a big business
dominated by 20 companies in 8 countries. Despite that, 35% of world’s population
still drives on the wrong side of the road; private actors can deal w/different
regimes where tech allows.

Search engines: just as w/Ecommerce directive, there’s no
safe harbor/notice and action provisions for search engines. There were
proposals in DSA but no ultimate obligation. But there’s still 512(d). Should
we assume that DMCA 512(d), used by © holders around the world, will keep
working the way it has been working? Seems to be used mostly by highly automated
trusted flaggers. 4 of top senders are the entities that will undoubtedly
become trusted flaggers.

Transparency will either exacerbate the arms race w/bad guys if
there’s too much disclosure or produce vague disclosures without
much impact.

Unintended effects: major content owners will employ
multiple trusted flaggers b/c of the mechanism for suspending trusted flaggers,
and no major content owner can afford to have its trusted flagger suspended even
for a bit.

Will the DSA Achieve a “Brussels Effect”?

Anupam Chander, Georgetown Law School

Imagine the DSA of Brazil, India, Nigeria, or Putin. Roche-Laguna
wants to think that the DSA passes the Putin test. Does it?

Companies could adopt DSA-compliant practices worldwide, or
EU itself might promote DSA as a legal model including in its free trade
agreements. Governments might also find much to envy in DSA. Disinformation,
hate speech, communal violence, and election interference are acute in many
countries in the Global South. But institutional capacity and resources can be
more limited and civil society institutions/independent press more fragile. States
are often at grave risks of democratic backsliding or already have
authoritarian tendencies w/serious risks for free expression.

Consider Digital Services Coordinator. Not just a
“coordinator”: DSC chooses trusted flaggers w/significant privileges, certifies
dispute settlement bodies—decides who the judges are. Can also demand info from
platforms related to suspected infringement of DSA, including power to conduct
onsite investigations and seize information regardless of storage medium; DSC
can also seek a judicial order to temporarily suspend a platform.

Demands to control social media—take down posts from
political rivals, block full services—are common around the world.

Commission powers: mitigation measures for risks the
platforms may create; crisis protocols. These powers translated into new jurisdictions
might be less trustworthy. Nigeria and Twitter got into a standoff—Twitter deleted
some tweets from Nigeria’s president for stoking violence/civil war. Gov’t
banned Twitter for 6 months; Twitter negotiated a return on undisclosed terms,
which could be characterized as mitigation measures for the risks the gov’t
thought Twitter posed to the Nigerian people. [Consider how many countries—including
some in the EU—think that “promotion” of LGBTQ+ identities constitutes a risk
to the public.]

What about the checks and balances? DSC is supposed to be independent,
and powers exercised in conformity w/EU charter of rights; crisis protocol must
include safeguards to protect charter rights; no filtering obligations and
conditions for being a trusted flagger. But overall there is great fear of
corporations in this and not much fear of gov’t; consider how it might be
weaponized. When we think about fundamental rights, we must always keep in mind
protecting fundamental rights both against private corporations and the state.
The distinction b/t safe harbor and liability might exist for an individual
piece of content, but the DSA has a very substantial liability regime of 6% of
global turnover, 50% higher than GDPR max, that is available to the state. That
itself might be attractive to many govts across the world across the economy.

Q: interaction w/ localization of user data requirements to
facilitate law enforcement?

Urban: parallels to DSA/512. Where a company is making a
decision about which regime to apply voluntarily, that applies to
extraterritorial information. If Russia requires localized data, then the Q of
whether one regime is more efficient doesn’t come up in the same way.

Keller: Americans don’t much trust regulations or regulators,
and we have scar tissue around litigation—if the rule is a little bit ambiguous
someone will litigate and that will cost a lot of money. This causes Americans
to be freaked out by the DSA; Europeans think that it would be fine for other
countries to implement the DSA but maybe that’s not true b/c of the differences
in legal cultures.

Hughes: this is a well-known difference in the general population, but
not the legal community—distrust of gov’t and undue trust for corporations until
recently. Reversed in EU, but that’s more popular culture than legal community.
In the latter, a very large percentage of lawyers have worked in gov’t and have
more trust in the process. [Not sure that characterizes the former gov’t
lawyers I know, especially given the state/federal divide.]

Rosati: EU motto is “United in Diversity.” Sensitivities are
very different country to country. Not a uniform bloc; in the end it was a
compromise.

Senftleben: don’t ask people about their national gov’ts—a completely
different story to trust in the EU.

Q: what will happen in Hungary? What safeguards will keep
Hungarian authorities from abusing their power under the DSA? American right-wing
populists are watching Orban closely for inspiration, and DeSantis is using his
playbook. Ken Paxton has weaponized investigative powers against tech
companies, and Orban is smart enough to see that. Who cares who the population
trusts? The legal question is what can be done to weaponize these powers and
what safeguards exist against abuse?

Chander: We need to be cautious—the EU project doesn’t have
competence over everything; DSA repeatedly defers to national laws including in
definitions of illegality. Charter is supposed to be followed, but the dispute
settlement system in the Charter has an enormous backlog of cases.

Senftleben: A real risk b/c we know that Hungary is different
from other EU states, but it’s a regulation, so the text of the framework is
beyond the reach of national gov’ts. The competencies are embedded in a European
framework. System would be able to react to extreme tendencies in individual
gov’ts better than previous framework. [But the trusted flagger system now
blesses the idea of deputizing entities to do takedowns—I think the particular
risk is clearly to LGBTQ+ content.]

from Blogger http://tushnet.blogspot.com/2023/04/27th-annual-btlj-bclt-symposium-from_86.html


27th Annual BTLJ-BCLT Symposium: From the DMCA to the DSA: Keynote and copyright interactions

Opening Keynote by Irene Roche-Laguna of the European
Commission’s DGCONNECT group on origins and aspirations for the DSA

People thought it couldn’t be done; didn’t know whether it would
be a directive or a regulation. But took only 6 months to agree and 22 months
to be adopted. We also managed to keep the substance—3 red lines: country of
origin, safe harbors, prohibition of general monitoring obligations—widely acknowledged
for its balance. Council often turns a beautiful baby into a Frankenstein’s
monster; it was very close to being much worse. In Council, we had proposals
for staydown; Netherlands asked for a modest duty of care; Germans wanted a
24-hour deadline copying NetzDG; Parliament wanted liability exemption for marketplaces
except for illegality; staydown; and prohibition of automated filtering that
would have prevented spam filters.

Now the EU is ahead of the US—in Gonzalez, SCOTUS is being
asked about recommendation systems; but we’ve already answered that.
Recommending videos on the basis of user behavior is not enough to show
specific knowledge of illegality. Twitter: does a platform that scans for
terrorist activity become liable merely b/c it could have taken more aggressive
action? EU has answered in eBay case, no, it has to have actual knowledge,
usually triggered by a valid notice. “Good Samaritan”: implementation of
measures detecting content that may infringe the law does not constitute
knowledge and control over that which escapes detection. We win!

DSA improvements on Ecommerce directive: new, clarified and
linked to due diligence obligations.

New is important b/c it’s a democratic endorsement. Qs: “These
companies know everything about us; how can they not know what’s illegal? How
can they not know when illegal content is reposted? They make money from third
party content and should be responsible for it. The Ecommerce Directive you
point to is so old, adopted when the internet was new.” This legal ageism was
the last resort of critics. Although it was opening Pandora’s box, a hornet’s
nest, and a can of worms at the same time, it was worth redoing. Fortunately
the red lines were respected as the democratic mandate of the safe harbors was
respected. This was not easy—the first question asked in committee was what
about staydown. But Art. 8 prohibits general monitoring obligations, which is a
success.

Clarified and a regulation instead of a directive: That’s
important b/c a directive is transposed into national law, w/27 potential
different means. Some member states would define actual knowledge to be limited
to manifestly illegal content; some wouldn’t; some would have notice and takedown,
others didn’t; etc. They tried to get DSA to be directive, but it’s not—the end
of legal fragmentation. And the rules are clarified by incorporating longstanding
caselaw in relation to safe harbors over 22 years of application/interpretation,
especially about when a provider plays an active role leading to knowledge and
control. Active/passive is not an important distinction; notice has to show
content is manifestly illegal w/o need of detailed legal examination. Suspicion
of illegality is not sufficient.

Liability exemptions are independent of a full set of due
diligence obligations. This is the major DSA regulatory contribution—splitting due
diligence from liability for third party content. National courts have pushed against
safe harbors to get platforms to do something—immense pressure on the safe
harbors. Liability and social responsibility were mixed in debates. National
courts had to impose duty of care or accept safe harbors as hands-off approach.
But DSA allows protection for third-party content while expecting the platform
to act diligently. And it’s fully harmonized, meaning that states can’t try to “top
up” the DSA. If the platform is diligent, it is protected from liability even
if the content is illegal. There were attempts to make safe harbors conditional
on due diligence, but they were not accepted. Judges will not be auditors of
DSA compliance.

Due diligence obligations focus on procedures, not on
content—what is illegal. No admin oversight of content. The majority of
moderation decisions are not about removal, and not about illegality. Users
also need redress/transparency about those decisions.

Three characteristics of DSA that are building blocks: (1)
single market nature, (2) proportionality, (3) process effect. Single market
effect: harmonization of national rules, like US federal preemption. Helps
service providers pay engineers instead of lawyers; uses country of origin
provision where they’re subject to compliance only in that member state. Legal
fragmentation is bad for businesses and legal certainty. Easier said than done,
but DSA centralizes and neutralizes enforcement against systemic risks posed by
VLOPs and VLOSEs.

Balanced approach: if we regulate only with Google in mind,
we will only have Google, so rules needed to be proportionate to size and
capacity of providers. Higher responsibilities for services that are higher in
the food chain—infrastructure providers are different than consumer-facing
providers; this creates the distinction between transmission and hosting and
VLOPs. And startups/small providers have more protections.

Brussels effect: is this worth exporting? GDPR was taken
with skepticism, caution, and then emulated around the world. DSA could be the
same. Could be worth exporting even to less democratic countries b/c of the
checks and balances and judicial control. [This seems in tension with the claim
yesterday that the DSA looks for good guys and bad guys—a system that works
only if you have very high trust that the definitions of same will be shared.]

Panel 1: How the DSA Shifts Responsibilities of Online
Service Platforms

Moderator: Erik Stallman, Berkeley Law School

Designing Rules for Content Moderation: The Shift from
Liability to Accountability in Europe       

Martin Husovec, London School of Economics

Principles that could be useful in trans-Atlantic dialogue:
Many provisions are too European for US, like risk mitigation. [Ouch.]
Framework was validated over time as right one: liability safe harbors is a success
story for the internet b/c it created breathing space for expression and new
services. Ecommerce directive was regulating the member states, not the
services—trying to coordinate how they could regulate in their own
jurisdictions; national regulation and self-regulation was the intent. Second
generation in DSA: try to turn previously unregulated industry into regulated, especially
the largest subset.

What are the building blocks that could be useful abroad?
Four principles:

(1) DSA has horizontal rules, not sectoral
fragmentation; covers all areas of law. [But see yesterday’s discussion of ©.]
That made it easier to adopt. Art. 17 does interact, but DSA creates safeguards
that member states might not have wanted to enact. Avoids problems of
regulatory arbitrage. 230 v. DMCA—one set of horizontal rules avoids that. And
proportionate rules are easier b/c they look at all sides, not just complaints
of one industry. Risk mitigation allows you to think both about overblocking
and grievances of © owners.

(2) Builds on liability safe harbors: we regulate by
allocating responsibility and sharing burden, not pinning blame on one actor. Victims,
providers, and users all share responsibility for mitigation of harms. DSA
renews democratic support for this, which is not a small thing among publics
and courts.

(3) Look at ecosystem, not platform; everyone should
be part of the solution. Users, providers, notifiers, and more need tools. DSA
promises priority for high quality notifications, and notifiers that misbehave
can be suspended. Instead of focusing on damages, we’re focusing only on
suspensions and giving both carrots and sticks.

(4) Separating new regulatory expectations from underlying
social contract around liability. In DMCA, repeat infringer policy is connected
to liability protection; in DSA it is not. DSA prioritizes taking action over
compensation. Lack of statutory damages/attorneys fees is an improvement.

US caselaw was instrumental in early days, as were DMCA
notification standards. At this point the EU approach has matured and many DSA
tools can’t be transplanted into the US First Amendment environment, but these
four principles could help guide thought about reform.

“Human review” as the New Panacea of European Platform Law
and Beyond? The Emerging European Standards for the Interplay of Algorithmic
Systems and Human Review in the DSM-Directive, the DSA and the proposed AI Act   

Matthias Leistner, LMU Munich Faculty of Law

Algorithms are strong at pattern recognition and identifying
protected content and, to a certain extent, the degree of similarity to
protected content. Encourage best possible human/AI models; we know too little
to decisively regulate. Need to keep it flexible and encourage competition.

Art. 17 was problematic due to heavy political lobbying. DSA
stands a chance building on transparency obligations. Red flag: maximum
transparency isn’t optimal—information overload and maximum transparency in
content moderation can lead to users gaming the system and create a battle of
algorithms. Some transparency is needed for users, others for researchers and
auditors. Notice and action mechanism/internal complaint handling system
requirements of DSA relate to algorithms.

Proposed AI Act is a sector-specific regulation of AI
techniques that might overlay onto the DSA; also GDPR might have an impact.

Art. 17: on the one hand, accepted algorithmic blocking, on
the other, tried to make sure wouldn’t affect legit users but only by way of
the redress mechanism which often comes too late. German implementation: manifestly
illegal, blocking; if unclear, notice and delayed take/staydown—only ex post. This
is easier where we have a remuneration provision for content owners when the
content remains online. Easier for music than movies which depend on
exclusivity (well, whole movies).

DSA starts from premise that algorithms will be used; notice
and action can be purely algorithmic, w/o human review, just statement of
reasons. But internal complaint-handling must be taken under supervision of
appropriately qualified staff and not solely on the basis of automated means.
Human content moderation isn’t necessarily better: of course the audits can
also relate to the status and situation and role of human content moderators.
So notice → delayed blocking/staydown, while algorithmic decisions can lead to blocking first.
Problem of belated complaint handling in regard to dynamic, potentially viral
content is ignored.

DSA covers all illegal content without prioritization, but
there might be greater/lesser ones. There is a flexible standard for reaction
times—expeditious/timely. The only prioritization is for trusted flaggers, but
how to specify those standards and roles? Need to prioritize certain policy
issues, but DSA doesn’t seem to allow this. Is there leeway to limit trusted
flaggers to offenses of certain substantiality? Art. 22 says that trusted
status “shall be awarded” on certain conditions; raises possibility of trolling
business models.

Proposed AI Act: risk-based approach; based on sector of use—critical
infrastructure, access to essential services, law enforcement, health services.
But also tech-based: stricter w/r/t biometric identification and categorization
of natural persons. Requires human supervision, which might interfere w/automated
systems for, e.g., identifying a person. That conflicts w/the DSA system.

Interventions  

Xiyin Tang, UCLA Law School

230 reform has also focused on accountability, human review,
and transparency. Most content that is taken down is for copyright reasons: why
not talk about copyright along with other content moderation? In part b/c of
agreements b/t large platforms and large © owners. These agreements are highly
confidential, which makes it unclear what counts as “infringement” under this
privatized system. When we think about platforms engaging in content
moderation, they don’t have carte blanche: when there are © claims, legit or
otherwise, there are other claimants setting policy, then passed down to users
through platform as intermediary. When Art 17 was adopted, including good faith
efforts to get authorization from © owners, the largest platforms had already
done so. [As I say, the copyright industries hated Content ID so much they made
it a universal law.] They’re rewriting © policy altogether.

Big problem for transparency. During covid, when live tours
were cancelled, artists broadcast themselves from their bedrooms. FB Live let
users do this for a minute or two at a time; then user accounts were blocked
or suspended; Instagram, in a rare act of transparency, disclosed that Meta had
agreements with large content owners requiring this blocking. But it didn’t
disclose any guidelines—we can’t tell you what they say; use less music, but we
can’t say what the threshold is. Leaked agreements online show the deal parameters
at which a user is deemed to be a bad faith actor leading to suspension,
muting, blocking. But no party wants to disclose those terms. Transparency
requires us to decide how much platforms are required to disclose. E.g., what
constitutes a clear infringement? Copyright owners don’t want to transpose public
law; what would be the point of private ordering otherwise? So they rewrite the
law. Crops up in the US w/fair use—rightsholders don’t like the idea of fair
uses. The Sony presumption of commercial use being unfair was rolled
back in Campbell, but privatized © agreements override Campbell.
Delineate b/t users that can pay and users that can’t. Rightsholders allow
latter to be covered by large lump sum from platforms; no one was going to pay
anyway. But commercial users, in the leaked agreements, had their uses blocked/put
into commercial review queue to allow rightsholders to go into system and
identify high-value users who could afford a license. Substitutes for fair use.

Eric Goldman, Santa Clara Law School

Implications of DSA on legacy © industries—unintentional benefit.
DSA is written w/expectation that companies will keep doing what they’re doing today,
but level up certain practices. But laws have unintended consequences; what will
change? Seems obvious that platforms will change their behavior, b/c DSA
increases costs of doing business. Minor changes: cost of ADR, cost of audits.
Content moderation is no-win since you can’t make anyone happy; appellate
rights are structural costs, as are transparency mandates. How will services
decrease these costs?

Community of “authors” and “readers”: people flip between
those statuses, but only a small percentage of people who have accounts act as authors
consistently. General rule: 10-20% of content creates 80-90% of revenue. The
DSA will affect the treatment of the long tail.

As practical matter, most authors are in long tail w/ relatively
small audiences that aren’t commercially valuable. Increased costs of catering
to them makes their content less profitable or even unprofitable. Obvious
reaction: cut off long tail. Alternative: charge authors to contribute b/c we
can’t make money in existing business model—Musk’s moves w/Twitter Blue.

Over the long term, hits come from pro producers, despite
occasional viral hits. So services will look for hits; will structurally shift
from prioritizing amateur content to professional content.

His predicted countermoves: web was predicated on amateur
content; producers who lacked mechanism to reach an audience would provide that
content for free—massive databases of free content that could be ad-supported
b/c it didn’t cost much to obtain. DSA shoves that model towards professionally
produced content, making services need something more than ad-supported
business, resulting in more paywalls.

Why does Hollywood oppose 230? Systemic battle to reduce the
overall amateur content ecosystem. That’s why they supported FOSTA—changing the
overall ecosystem.

Losers: niche communities. Fewer places to talk to one
another; hits will focus on majority interests.

Stallman: Statements of reasons for certain types of
takedowns—will that help? [Who doesn’t do that already? Even if you find current
statements vague, the DSA mandate doesn’t seem to create anything new.]

Leistner: these statements will be algorithm-written and thus
at a rather high level. Sometimes this makes sense so the system can’t be
played. The algorithm will just come up with the part of the policy that was
violated, and if the list is long that won’t help much. Still an improvement
b/c the platforms don’t do anything more than they have to. Compare FB/Google
to Amazon: Amazon is efficient on TM but not ©, whereas FB/Google are efficient
w/© and not TM—might be more accountability. [Isn’t this justified by the kinds
of harm that the different services are more likely to cause? That seems like
good resource allocation, not bad.] No standardized complaint procedure/no
human to speak to—the jury is still out on whether the DSA will help.

Goldman: statements of reasons are great example of
accuracy/precision tradeoff. Services will emphasize simplicity over accuracy.
Have seen lawsuits over explanations, so services will want to be as generic as
possible. Appellate options: for every good faith actor who might be appealing,
we should expect X bad faith actors to use the appellate process to hope that
they can reverse a decision by draining service resources. For more precise
explanations, assume that bad faith actors will exploit them; explanations for
them just drain resources.  

Justin Hughes: don’t understand why long tail content would
disappear—assuming a person puts unauthorized long tail content online, that
won’t be as common by hypothesis, but why would it decrease authorized long
tail content?

Goldman: turn off authorship capacity for many existing
users. Twitter has taken away my blue check b/c I’m not of sufficient status to
retain the blue check & I’m not willing to pay. More of those kinds of
moves will be made by more services. Don’t think that existing userbase will keep
authorship/reach powers.

Tang: Art. 17: more money in authors’ pockets by requiring
licenses is the aim. But that concentrates money and which authors get paid.
Makes legacy © holders stronger; creates antitrust problems.

Husovec: Would resist Goldman’s view. Companies that produce
externalities are being asked to pay for them where others are paying now. When
FB doesn’t do proper content moderation, creates externalities for users, so
newspaper has to moderate the comment section on its FB page. Forcing FB to
internalize the costs just means a different entity is paying. [I think that’s
Goldman’s point: FB will try to reassert its position and if it can it will make
the newspaper pay directly.] We might go towards more subscription products,
but not necessarily only b/c of regulation but also b/c we’ve reached the
limits of an ad-supported model.

Q: What about misuse/trolling? Will Art. 23 of DSA address
this? Allows temporary suspension for abuse of process. If you as rightsholder
already have access to Content ID, will you have an incentive to become a
trusted flagger/subject yourself to this regime?

Husovec: DSA is just a bunch of tools; outcomes are up to
the actors. Does have tools to disincentivize—suspensions are superior to
damages. Also applies to appeals, and collective action for consumer
organizations if companies don’t terminate repeat offenders. The problem is
whether the supervision of this will be sufficient. Regulator’s role is obvious—can
strip trusted flaggers of status if not good, but will they be monitored? If
services don’t tell regulators b/c of private ordering, or if they don’t become
trusted flaggers b/c they already have Content ID, then it won’t work.

Leistner: questions are interlinked: if the trusted flagger system
is rigorously policed, it’s less attractive to rightsholders. In theory we want
a Lenz type system with human review for exceptions, but maybe they’ll
be more comfortable with private system. NetzDG was of limited effect b/c just
adapts existing policies; maybe privatized systems remain preferable to this
regulated system, but may offer opportunities to those beyond © like human rights
organizations—should be more opportunities for non-© owners to achieve same
results. Small © owners are relatively disadvantaged where large © owners have
access to monetization and they don’t—we already have this problem.

Tang: Songwriters have complained that authors get worse
outcomes through direct licensing. Under consent decrees, they have to report
to authors first. Under direct licensing w/platforms, large publishers skim off
the top first.

Leistner: extended collective licensing would be the European
answer. [My understanding is that those also overallocate to the top of the
distribution.] Would increase costs, but introduce more fairness. Doesn’t fly
right now b/c lack of supranational ECL. But he’s certain Europe will look into
this b/c the link is so obvious. But that would also mean that every post could
cost money.

from Blogger http://tushnet.blogspot.com/2023/04/27th-annual-btlj-bclt-symposium-from_7.html
