But in that sleep what dreams of liability may come?

When you sue a competitor for false advertising, be prepared to get sued back.  In this pair of opinions, most of the parties’ claims against each other survived, paving the way for a messy trial.
GhostBed, Inc. v. Casper Sleep, Inc., 2018 WL 2213002, No. 15-cv-62571-WPD
(S.D. Fla. May 3, 2018)
GhostBed and Nature’s Sleep (hereinafter GhostBed), owned by
the same family, sued Casper, a competitor in the online mattress business, for
various causes of action.  Nature’s Sleep
alleged that it was among the first in the mattress business to deliver a “bed
in a box” concept direct to consumers: a mattress vacuum-sealed in a box, which
inflates when the packaging is opened, though Casper did well after its launch in
2014.  In 2015, Nature’s Sleep launched a
competing DTC company, GhostBed.
Casper argued that GhostBed copied many of its product
features, website design, and marketing techniques, down to the name, GhostBed,
“designed for customers to associate the ‘ghost’ name with Casper based on the
popular cartoon character ‘Casper the Friendly Ghost.’” Casper thus sued for trademark
infringement and false advertising under the Lanham Act, along with related
state law claims.
GhostBed accused Casper of intentionally infringing Nature’s
Sleep’s “BETTER SLEEP FOR BRIGHTER DAYS” mark and of false advertising; in this
opinion, the court granted Casper partial summary judgment on the false
advertising claims.
GhostBed registered naturessleep.com (with two ‘s’s). ICS, apparently
a known cybersquatter, registered naturesleep.com (one s). In 2015, Casper
allegedly arranged for users who visited the one-s site to be redirected to
Casper’s website. GhostBed argued that this constituted direct or contributory
infringement and violated the ACPA.  Casper
argued that it didn’t register or use the domain name.  AdMarketplace, “a company hired as part of an
advertising campaign by Casper, had some role in the redirection” to Casper’s
site.  The ACPA only imposes liability for using a domain name if a person “is the domain name registrant or
that registrant’s authorized licensee.” Multiple factual issues, also including
damages, precluded summary judgment on these claims.
Likewise, alleged infringement of Nature’s Sleep’s
unregistered mark, BETTER SLEEP FOR BRIGHTER DAYS, couldn’t be decided on
summary judgment.  Whether Casper’s use
of BETTER SLEEP in commerce preceded Nature’s Sleep’s use was disputed.
GhostBed also alleged that Casper engaged in false
advertising by: (1) posting false and misleading comments about GhostBed on the
internet; (2) coercing mattress reviewers into posting fake, favorable reviews
of Casper mattresses on the internet; (3) utilizing search engine optimization techniques
to increase visibility of favorable Casper content on the internet; and (4)
entering into settlement agreements with three mattress reviewers that resulted
in elimination of negative reviews of Casper content.
These claims failed because, first, GhostBed didn’t provide
evidence that Casper posted false/misleading comments about GhostBed. GhostBed
argued that Casper’s use of affiliate relationships with online reviewers was “part
of a concerted effort to reward reviewers to post favorable reviews and ‘strong-arm’
reviewers into posting fake positive reviews of Casper’s mattresses.” However, GhostBed
didn’t prove that this conduct involved false or misleading statements that
deceived consumers.  Casper also
purchased the Google AdWords keyword “Ghostbed” and directed that an ad saying “Why Buy
a Copycat?” and “Surely you Meant Casper” would appear as a sponsored link in
search results when users googled “GhostBed.” “Here, the Lanham Act claim fails
because these are not false or misleading statements of fact. Instead, these
are advertisements suggesting Casper’s opinion that GhostBed is a copycat and
that the consumer should also investigate Casper’s mattress.”
GhostBed argued that Casper manipulated search results with
negative SEO techniques that caused favorable Casper mattress reviews to appear
higher in search results and unfavorable Casper reviews to appear lower.  But this “common marketing strategy” wasn’t an
actionable false or misleading “statement.” 
So too with entering into settlement agreements with online mattress
reviewers to remove negative reviews of Casper mattresses.
Ghostbed, Inc. v. Casper Sleep, Inc., 2018 WL 2213008, No.
15-cv-62571-WPD (S.D. Fla. May 3, 2018)
Here, the court denies GhostBed’s motion for summary
judgment on Casper’s claims for trademark infringement/false advertising.
Casper alleged that GhostBed used Casper’s name in social
media posts, creating a likelihood of customer confusion and that a Google
AdWords campaign stating “GhostBed vs. The Competition—Pick your Ghost
Carefully” contributed to consumer confusion by associating Casper with “Casper
the Friendly Ghost.” Use of the trademark “GhostBed” also allegedly caused
consumer confusion with the trademark “Casper.” Given Casper’s numerous allegations
of consumer confusion, GhostBed’s argument that the confusion was de minimis was
a question for trial.
Whether GhostBed’s use of the phrase “SuperNATURAL Comfort” misled
consumers into believing that Ghostbed mattresses are made from all-natural
fibers, or just suggested a connection with the “ghost” in “GhostBed,” was a
question of fact for the factfinder at trial. 
So too with whether GhostBed’s claim of being in business for 15 years was
true because it could legitimately attach its length in business to that of its
related company, Nature’s Sleep. There were also factual issues about whether
GhostBed falsely represented reviews as “Verified Purchaser[s]” on Amazon.com
when GhostBed practically gave the product to the reviewer for free (at a 99%
discount) in violation of the terms of use defining a “Verified Purchaser.”
In a slightly different scenario, GhostBed’s “GhostBed vs.
Casper Mattress Review” stated that Casper didn’t offer a matching mattress
foundation. This statement was initially true when made, in April 2016, and was
updated at some point after GhostBed became aware that the statement was no
longer true, but it was unclear whether GhostBed timely corrected the statement
once it became false. “While Plaintiffs do not have an obligation to monitor a
competitor’s offerings minute-to-minute to correct a comparison that may later
become untrue, Plaintiffs do have an obligation not to make misleading
statements in advertising. A fact finder could find that a substantial delay,
if there was one, in correcting a statement that became untrue, was misleading.”  This is actually more defendant-favorable
than other rulings on the subject, which do find falsity the moment the claim
becomes false (although of course the amount of damages from a short-term
falsity may be limited).
Finally, an image on GhostBed’s website depicted the Google
logo and falsely reported that GhostBed had a 4.99 rating (a non-existent
rating). The creator stated that it was designed to poke fun at Casper’s
purported 4.9 rating—“they have a 4.9 rating. I put ours at 4.99.” Misleadingness
and damages were factual issues.
Other claims were only raised as state law (FDUTPA) claims.
Casper targeted an article written by non-party Ryan Monahan of Honest Reviews,
LLC, a purported affiliate of GhostBed: “Casper’s Newest Product Might Be at
the Expense of Animal Cruelty.” The article could suggest that Casper sources
its down feathers from suppliers who “live pluck” birds, but again this was a
factual issue, as was whether GhostBed “used social media to harass Casper’s
customers who posted comments about Casper’s mattresses online” in a way that
was unfair or deceptive under FDUTPA.


TM exam question: the right of publicity v. comparative advertising

What if Coco Chanel had been the plaintiff in Smith v. Chanel?  This question made me very happy, and I got a bunch of interesting answers on my final:

Kim Kardashian is famous for being famous. She is a highly successful influencer whose Instagram endorsements cost hundreds of thousands of dollars. She has lent her name to a perfume, KARDASHIAN BY KIM.

Beautified also sells perfume. Beautified begins an ad campaign that states, “If you like Kardashian by Kim, you’ll love Beautified, with the same yummy smell but a lower price!”

Assume there are no choice of law or other procedural issues. Explain why Beautified is or is not liable on Kardashian’s right of publicity claim under California law.


exercise company affiliation and ad revenue don’t make diet review into commercial speech

GOLO, Inc. v. HighYa, LLC, 2018 WL 2086733, No. 17-2714
(E.D. Pa. May 4, 2018)
The court here declines to apply the Lanham Act to “companies
that generate income through websites that review the products of others,
without selling any products of their own.” GOLO sells a weight loss dieting
program that can be purchased through its website. Defendants are review
websites that purportedly assist consumers; HighYa has a marketing affiliation
with a limited number of suppliers (e.g., BowFlex Max Trainer), but both defendants’
principal source of revenue comes from ads. 
GOLO contested the fairness and accuracy of defendants’ online reviews,
leading to revision on one site and removal on the other, but GOLO wanted to
recover for the initial period.
Defendants’ editorial reviews principally rely on “publicly
available information,” rather than defendants’ own use or testing. GOLO’s website
contained a description of its program, backed by references to research
purportedly supporting the merits of the program. Defendants’ editorial reviews
primarily, if not exclusively, critiqued the statements in that description.
HighYa’s editorial review spurred dozens of comments from purported users, with
an average customer rating of 2.8 out of 5 stars. The link “was posted” across
different social media platforms, one of which contained the statement:
“Weight-loss #scams are everywhere. Is GOLO one of them?”
GOLO alleged that the title, “GOLO Weight Loss Diet Reviews
– Is it a Scam or Legit?” was misleading; much of the review was based on
an outdated version of the GOLO program site; and  the focus of the GOLO program was not simply
combatting “insulin resistance,” as the review states. The challenged portions were
eventually removed.
The BrightReview article appeared in a similar form. The
average customer rating was 2 out of 5 stars, with three purported users giving
“highly negative ‘reviews.’ ” GOLO challenged statements about its study
evidence and claims.
GOLO alleged that the websites were “designed to appear
trustworthy, [and to] resemble internet versions of more traditional consumer
review publications”  but were owned by
or secretly related to the competitors of the products defendants review.
False advertising and false association claims only apply to
commercial speech. Though there was a specific product reference, the articles
still weren’t ads.  On their face, the
reviews didn’t promote any competing product, and didn’t explicitly propose a
commercial transaction. The court analogized to Tobinick v. Novella, 848 F.3d
935 (11th Cir. 2017). As there, the defendants “gained no direct economic
benefit from readers of the reviews’ decision,” and “[t]he content of the
reviews had no direct bearing on the revenue generated by traffic to the site.”  To the extent that the reviews were based
only on the content of GOLO’s website, “[t]he value of such a review to
consumers may be limited,” but that didn’t make it an ad.  Ad-based financial benefit was merely
incidental to the content.
The Lanham Act does allow liability “if websites purporting
to offer reviews are in reality stealth operations intended to disparage a
competitor’s product while posing as a neutral third party.”  However, GOLO hadn’t plausibly pleaded that
these review sites were shams.
Although “in the absence of discovery, a plaintiff’s ability
to confirm what might be well-founded suspicion is limited,” that wasn’t enough
here.  The court considered the general
content of the sites, including the fact that defendants responded to GOLO’s
objections by amending the reviews and specifically advising readers that
changes to the reviews were based on further information provided by GOLO. “Such
conduct does not plausibly support an inference that the purpose of the reviews
is to create an advantage for competing products.” Defendants also disclosed the
commercial relationship with BowFlex and other commercial affiliations, which
made the allegedly covert competition less plausible.  And to the extent that GOLO pled that defendants’
revenues were a product of web traffic, the favorable/unfavorable nature of a
review seemed irrelevant; sellers might even promote favorable reviews.
Nor did the affiliation with BowFlex render this a Lexmark situation in which “one
competitor directly injures another by making false statements about his own
goods or the competitor’s goods and thus inducing customers to switch.” “The
review discussing GOLO’s dieting program does not at all reference, or provide
a direct link to any exercise equipment, let alone to Bowflex.” Even if there
were a prompt to try exercise, it doesn’t follow that diet and exercise
compete; GOLO designed its program to work with exercise.  While direct commercial competition isn’t an “absolute”
requirement, these observations bore on the plausibility of the conclusory
allegation that defendants’ websites were covert competitors.
With Lanham Act false advertising and state coordinate
claims out of the way, only a Pennsylvania trade libel claim remained.  But Pennsylvania has a one-year statute of
limitations for trade libel claims, running from the date of the first
publication. GOLO alleged that HighYa’s initial review was posted in “March
2016,” and filed on June 16, 2017. GOLO argued that the revised version of the
article was published within the limitations period, and that it was re-published
when HighYa posted links to it through its social media accounts. But the only
HighYa social media post referenced dates back more than a year before filing,
and GOLO didn’t object to the revised article.
As for user comments, GOLO’s allegation that HighYa was the
true source of the comments “on information and belief” was insufficient in the
context of the other allegations.
As to BrightReviews, GOLO didn’t adequately plead falsity. Each
challenged statement was prefaced with language indicating that they are
observations based primarily on GOLO’s website: “ ‘The 2010 study [was]
performed with diabetics, not otherwise healthy individuals looking to optimize
insulin…[T]his seems to be their target market;…None of [GOLO’s] studies
appear to be peer reviewed for accuracy…;…and [W]e didn’t encounter any
clinical evidence on leading medical websites…that directly linked insulin management…and
weight loss.’ ” Though GOLO argued that these statements were inaccurate, it
didn’t address whether those observations could reasonably and fairly have been made
based upon the information posted on its website at the time.
GOLO also argued that the reviews created a false impression
that its product was a scam, citing the low average user rating; HighYa’s
Twitter post, which stated, “Weight-loss #scams are everywhere. Is GOLO one of
them?”; the initial title of the article, “GOLO Weight Loss Diet Reviews – Is
it a Scam or Legit?”; and the fact that the reviews would appear prominently in
web searches for GOLO. But in the context of the review, the court didn’t see
an accusation of “a scam in the illegal, fraudulent sense, as compared to
communicating that the product might not produce its intended result.”


Content Moderation at Scale, 2/2

You Make the Call: Audience Interactive (with a trigger
warning for content requiring moderation)
Emma Llanso, Center for Democracy & Technology & Mike
Masnick, Techdirt
Hypo: “Grand Wizard Smith,” w/user photo of a person in a
KKK hood, posts a notice for the annual adopt-a-highway cleanup project.  TOS bans organized hate groups that advocate
violence.  This post is flagged for
review.  What to do?  Majority wanted takedown, but 12 said leave
it up, 12 flag (leave up w/a content warning), 18 said escalate, and over 40
said take down.  Take down: he’s a member
of the KKK.  Keep up: he’s not a verified
identity; it doesn’t say KKK and requires cultural reference point to know what
the hood means/what a grand wizard is. 
Escalate: if the moderator can only ban the post, the real problem is
the user/the account, so you may need to escalate to get rid of the account.
Hypo: “glassesguru123” says same sex marriage is great, love
is love, but what do I know, I’m just a f—-t. 
Flagged for hate speech. What to do? 
83 said leave it up.  5 for flag,
2 escalate, 1 take it down.  Comment: In
Germany, you take down some words regardless of content, so it may depend on
what law you’re applying.  Most people
who leave it up are adding context: not being used in a hateful manner. But
strictly by the policy, it raises issues, which is why some flag it.
Hypo: “Janie, gonna get you, bitch, gun emoji, gun emoji, is
that PFA thick enough to stop a bullet if you fold it up & put it in your
pocket?”  What to do? 57 take it down, 27
escalate, and 1 said leave it up/flag the content.  For escalate: need subject matter expert to
figure out what a PFA is.  [Protection From Abuse order.] Language taken from Supreme Court case about what
constituted a threat.  I wondered whether
there were any rap lyrics, but decided that it was worrisome enough even if
those were lyrics.  Another argument for
escalation: check if these are lyrics/if there’s an identifiable person
“Janie.” [How you’d figure that out remains unclear to me—maybe you’ll be able
to confirm that there is a Janie by
looking at other posts, but if you don’t see mention of her you still don’t
know she doesn’t exist.]  Q: threat of
violence—should it matter whether the person is famous or just an ex?
Hypo: photo of infant nursing at human breast with
invitation to join breast milk network. 
Flagged for depictions of nudity. What to do? 65 said leave it up, 13
said flag the content, 5 said escalate, and 1 said take it down.  Nipple wasn’t showing (which suggests
uncertainty about what should happen if the baby’s latch were different/the
woman’s nipple were larger).  Free speech
concerns: one speaker pointed that out and said that this was about free speech
being embodied—political or artistic expression against body shame.  You have this keep-it-up sentiment now but
that wasn’t true on FB in the past. 
Policy v. person applying the policy.
Hypo: jenniferjames posts a site that links to Harvey
Weinstein’s information: home phone, emails, everything— “you know what to do:
get justice” Policy: you may not post personal information about others without
their consent.  This one was the first
that I found genuinely hard.  It seemed
to be inciting, but not posting directly and thus not within the literal terms
of the policy. I voted to escalate. 
Noteworthy: fewer people voted. Plurality voted to escalate; substantial
number said to take it down, and some said to leave it/flag it.  One possibility: the other site might have
that info by consent!  Another response
would block everything from that website (which is supposed to host personal
info for lots of people).
Hypo: Verified world leader tweets: “only one way to get
through to Rocket Man—with our powerful nukes. Boom boom boom. Get ready for
it!”  Policy: no specific credible
threats.  I think it’s a cop out to say
it’s not a credible threat, though that doesn’t mean there’s a high probability
he’ll follow up on it. I don’t think high probability is ordinarily part of the
definition of a credible threat. But this is not an ordinary situation, so.
Whatever it is, I’m sure it’s above my pay grade if I’m the initial screener:
escalate. Plurality: leave it up. Significant number: escalate.  Smaller number of flag/deletes.  Another person said that this threat couldn’t
be credible b/c of its source; still, he said, there shouldn’t be a
presidential exception—there must be something he could say that could cross
the line. Same guy: Theresa May’s threat should be treated differently.  Paul Alan Levy: read the policy narrowly: a
threat directed to a country, not an individual or group.
Hypo: Global Center for Nonviolence: posts a video, with a
thumbnail showing a mass grave. Caption: source “slaughter in Duma.”   “A victorious scene today,” is another
caption apparently from another source. I wasn’t sure whether victorious could
be read as biting sarcasm. Escalate for help from an area expert. Most divided—most
popular responses were flag or escalate, but substantial #s of leave it up and
take it down too. The original video maybe could be interpreted as glorifying
violence, but sharing it to inform people doesn’t violate the policy and
awareness is important. The original post also needs separate review. If you
take down the original video, though, then the Center’s post gets stripped of
content. Another argument: don’t censor characterizations of victory v. defeat;
compare to Bush’s “Mission Accomplished” when there were hundreds of thousands
of Iraqis dead.
Hypo: Johnnyblingbling: ready to party—rocket ship, rocket
ship, hit me up mobile phone; email from City police department: says it’s a
fake profile in the name of a local drug kingpin. Only way we can get him, his
drugs, and his guns off the street. Policy: no impersonation; parody is ok.
Escalate because this is a policy decision: if I am supposed to apply the
policy as written then it’s easy and I delete the profile (assuming this too
doesn’t require escalation; if it does I escalate for that purpose). But is the
policy supposed to cover official impersonation?  [My inclination would be yes, but I would
think that you’d want to make that decision at the policy level.] 41 said
escalate, 22 take down, 7 leave it up, 1 flag. Violate user trust by creating
special exceptions.  Goldman points out
that you should verify that the sender of the email was authentic: people do
fake these.  Levy said there might be an
implicit law enforcement exception. But that’s true of many of these
rules—context might lead to implicit exceptions v. reading the rules strictly.
1:50 – 2:35 pm: Content Moderation and Law Enforcement
Clara Tsao, Chief Technology Officer, Interagency Countering
Violent Extremism Task Force, Department of Homeland Security
Jacob Rogers, Wikimedia Foundation: works w/LE requests
received by Foundation. We may not be representative of different companies b/c
we are small & receive a small number of requests that vary in what they
ask for—readership over a period of time v. individual info. Sometimes we only
have IP address; sometimes we negotiate to narrow requests to avoid revealing
unnecessary info.
Pablo Peláez, Europol Representative to the United States: Cybercrime
unit is interested in hate speech & propaganda. 
 
Dan Sutherland, Associate General Counsel, National
Protection & Programs Directorate, U.S. Department of Homeland Security:
Leader of a “countering foreign influence” task force. Work closely w/FBI but
not in a LE space.  Constitution/1A:
protects things including simply visiting foreign websites supporting terror.  Gov’t influencing/coercing speech is
something we’re not comfortable with. Privacy Act & w/in our dep’t Congress
has built into the structure a Chief Privacy Officer/Privacy Office. Sutherland
was formerly Chief Officer for Civil Rights/Civil Liberties.  These are resourced offices w/in dep’t and
influence issues.  DHS is all about info
sharing, including sensitive security information shared by companies.
Peláez: Europol isn’t working on foreign influence. Relies
on member states; referrals go through national authorities.  EU Internet Forum brings together decisionmakers
from states and private industry. About 150-160 platforms that they’ve looked
at; in contact w/about 80. Set up internet referral management tool to access
the different companies.  Able to analyze
more than 54,000 leads.  82% success
rate.
Rogers: subset of easy LE requests for Wikipedia & other
moderated platforms—fraudulent/deceptive, clearly threats/calls to violence.
Both of those, there is general agreement that we don’t want them around. Some
of this can feed back into machine learning. 
Those tools are imperfect, but can help find/respond to issues. More
difficult: where info is accurate, newsworthy, not a clear call to violence:
e.g., writings of various clerics that are used by some to justify violence.
Our model is community based and allows the community to choose to maintain
lawful content.
LE identification requests fall into 2 categories: (1)
people clearly engaged in wrongdoing; we help as we can given technical
limits.  (2) Fishing expeditions, made
b/c gov’t isn’t sure what info is there. Company’s responsibility is to
educate/work w/LE to determine what’s desired and protect rights of users
where that’s at issue.
YT started linking to Wikipedia for controversial videos; FB
has also started doing that.  That is
useful; we’ll see what happens.
Sutherland: We aren’t approaching foreign influence as a LE
agency like FBI does, seeking info about accounts under investigation or
seeking to have sites/info taken down. Instead, we support stakeholders in
understanding scope & scale & identifying actions they can take against
it. Targeted Action Days: one big platform or several smaller—we focus on them
and they get info on content they must remove. 
Peláez: we are producing guidelines so we understand what
companies need to make requests effective. 
Toolkit w/18 different open source tools that will allow OSPs and LE to
identify and detect content.
What Machines Are, and Aren’t, Good At
Jesse Blumenthal, Charles Koch Institute: begins with a
discussion that reminds me of this xkcd cartoon.
Frank Carey, Machine Learning Engineer, Vimeo: important to
set threshold for success up front. 80% might be ok if you know that going
in.  Spam fighting: video spam, looks
like a movie but then black screen + link + go to this site for full download
for the rest of the 2 hours.  Very visual
example; could do text recognition. 
These are adversarial examples. Content moderation isn’t usually about
making money (on our site)—but that was, and we are vastly outnumbered by them.
Machine learning is being used to generate the content.  It’s an arms race. Success threshold is thus
important.  We had a great model with a
low false positive rate, and we needed that b/c if it was even .1% that would
be thousands of accounts/day. But as we’d implement these models, they’d go
through QA, and within days people would change tactics and try something else.
We needed to automate our automation so it could learn on the fly.
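[To make Carey’s false-positive budget concrete: a minimal Python sketch, mine rather than Vimeo’s, of choosing an operating threshold so that expected false positives on known-good items stay under a daily review budget. The function name, arguments, and example numbers are illustrative assumptions.]
def pick_threshold(benign_scores, daily_volume, max_false_positives_per_day):
    # Loosest score cutoff whose false-positive rate on known-good validation
    # items stays within the daily budget; None if even the strictest fails.
    target_fpr = max_false_positives_per_day / daily_volume
    best = None
    for threshold in sorted(set(benign_scores), reverse=True):  # strictest first
        flagged = sum(1 for s in benign_scores if s >= threshold)
        if flagged / len(benign_scores) <= target_fpr:
            best = threshold  # still within budget; a looser cutoff may also fit
        else:
            break  # any looser cutoff would only flag more good items
    return best
# e.g. pick_threshold(scores_on_known_good_items, daily_volume=10_000_000,
#                     max_false_positives_per_day=1_000)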
Casey Burton, Match: machines can pick up some signs like
100 posts/minute really easily but not others. Machines are good at ordering
things for review—high and low priority. 
Tool to assist human reviewers rather than the end of the process. [I
just finished a book, Our Robots, Ourselves, drawing this same conclusion about
computer-assisted piloting and driving.]
Peter Stern, Facebook: Agrees. We’re now good at spam, fake
accounts, and nudity, and remove them quickly. 
Important areas that are more complicated: terrorism.  Blog posts about how we’ve used automation in
service of our efforts—a combo of automation and human review.  A lot of video/propaganda coming from
official terrorist channels—removed almost 2 million instances of ISIS/Al Qaeda
propaganda; 99% removed before it was seen. We want to allow counterspeech—we
know terror images get shared to condemn. Where we find terror accounts we fan
out for other accounts—look for shared addresses, shared devices, shared
friends. Recidivism: we’ve gotten better at identifying the same bad guy with a
new account. Suicide prevention has been a big focus. Now using pattern
recognition to identify suicidal ideation and have humans take a look to see
whether we can send resources or even contact LE.  Graphic violence: can now put up warning
screens, allow people to control their experience on the platform.  More difficult: for the foreseeable future,
hate speech will require human judgment. We have started to bubble up slurs for
reviewers to look at w/o removing it—that has been helpful.  Getting more eyes on the right stuff. Text is
typically more difficult to interpret than images.
Burton: text overlays over images challenged us. You can OCR
that relatively easily, but it is an arms race. So now you get a lot of
different types of text designed to fool the machine.  Machines aren’t good at nuance.  We don’t get too much political, but we see a
lot of very specific requests about who they want to date—“only whites” or
“only blacks.”  Where do you draw the
line on deviant sexual behavior? Always a place for human review, no matter how
good your algorithms.
Carey: Rule of thumb: if it’s something you can do in under
a second, like nudity detection, machine learning will be good at it.  If you have to think through the context, and
know a bunch about the world like what the KKK is and how to recognize the
hood, that will be hard—but maybe you can get 80% of the way.  Challenge is adversarial actors.  Laser beam: if they move a little to the
left, the laser doesn’t hit them any more. So we create two nets, narrow and
wide. Narrow: v. low false positive rate. Content caught by the wider net goes to a review queue.  You can look at confidence
scores, how the model is trained, etc.
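[A rough Python illustration of the “two nets” routing Carey describes: a narrow, high-confidence net that can act automatically because its false-positive rate is very low, and a wider net that only sends items to a human review queue. The threshold values and function names are my own assumptions, not Vimeo’s system.]
NARROW_THRESHOLD = 0.98  # near-zero false positives; safe to act on automatically
WIDE_THRESHOLD = 0.60    # catches more, so humans make the call
def route(item, spam_score):
    # Two nets: the narrow one acts, the wide one only asks for review.
    if spam_score >= NARROW_THRESHOLD:
        return ("auto_remove", item)
    if spam_score >= WIDE_THRESHOLD:
        return ("review_queue", item)
    return ("publish", item)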
Ryan Kennedy, Twitch?: You always need the human
element.  Where are your adversaries
headed?  Your reviewers are R&D.
Burton: Humans make mistakes too. There will be disagreement
or just errors, clicking the wrong button, and even a very low error rate will
mean a bunch of bad stuff up and good stuff down. 
Blumenthal: we tend not to forgive machines when they err,
but we do forgive humans. What is an acceptable error rate?
Carey: if 1-2% of the time, you miss emails that end up in
your spam folder, that can be very bad for the user, even if it’s a low error
rate.  For cancer screening, you’re
willing to accept a high false positive rate. 
[But see mammogram recommendations.] 
Stern, in response to a Q about diversity: We are seeking to
build diverse reviewers, whose work is used for the machine learning that
builds classifications.  Also seeking
diversity on the policy team, b/c that’s also an issue in linedrawing. When we
are doing work to create labels, we try to be very careful about whether we’re
seeing outlying results from any individual—that may be a signal that somebody
needs more education. We also try to be very detailed and objective in the tasks
that we set for the labelers, to avoid subjective judgments of any kind.  Not “sexually suggestive” but do you see a
swimsuit + whatever else might go into the thing we’re trying to build. We are
also building a classifier from user flagging. 
User reports matter and one reason is that they help us get signals we
can use to build out the process.
Kennedy, in response to Q about role of tech in dealing w/
live stream & live chat: snap decisions are required; need machines to help
manage it.
Carey: bias in workforce is an issue but so is implicit bias
in the data; everyone in this space should be aware of that. Training sets:
there’s a lot of white American bias toward the people in photos.  Nude photos are mostly of women, not men. You
have to make sure you’re thinking about those things as you put these systems
in place.  Similar thing w/wordnet, a
list of synonyms infected w/gender bias. English bias is also a thing.
Q: outsourced/out of the box solutions to close the resource
gap b/t smaller services and FB: costs and benefits?
Burton: vendors are helpful. 
Google Vision has good tools to find & take down nudity.  That said, you need to take a look and say
what’s really affecting our platform.  No
one else is going to care about your issues as much as you do.
Carey: team issues; need for lots of data to train on, like
fraud data; for Vimeo, nudity detection was a special issue b/c we don’t have a
zero nudity policy.  We needed to ID
levels of nudity—pornographic v. HBO. We trained our own model that did pretty
well. Then you can add human review. But off the shelf models didn’t allow
that.  Twitch may have unique memes—site
tastes are different.  Vendors can be
great for getting off the ground, but they might not catch new things or might
catch too many given the context of your site.
Kennedy: vendors can get you off the ground, but we have
Twitch-specific language.  Industry
standards can be helpful, raising all ships around content moderation.  [I’d love to hear from someone from reddit or
the like here.]
Q re automation in communication/appeals: Stern says we’re
trying to improve. It’s important for people to understand why something
did/didn’t get taken down. In most instances, you get a communication from us
about why there was a takedown. Appeals are really important—allow more
confidence in the process b/c you know mistakes can be corrected.  Always a conundrum about enabling evasion,
but we believe in transparency and want to show people how we’re interacting
w/their content. If we show them where the line is, we hope they know not to
cross it.
Burton: There are ways to treat bots differently than
humans: don’t need to give them notice & can put them in purgatory. We keep
info at a high level to avoid people tracking back the person who reported them
and going after them.
Transparency
David Post, Cato Institute
Kaitlin Sullivan, Facebook: we care about safety, voice, and
fairness: trust in our decisionmaking process even if you don’t always agree
w/it. Transparency is a way to gain your trust. 
New iteration of our Community Standards is now public w/full definition
of “nudity” that our reviewers use. We also want to explain why we’re using
these standards. You may not agree that female nipples shouldn’t be allowed
(subject to exceptions such as health contexts) but at least you should be able
to understand the rule.  Called us
“constituents,” which I found super interesting.  Users should be able to tell whether there is
an enforcement error or a policy decision. 
We also are investing more in appeals; used to have appeals just for
accounts, groups, pages. We’ve been experimenting w/individual content reviews,
and now we have an increased commitment to that.  We hope to have more numbers soon, beyond IP, gov’t requests, and terror content.
Kevin Koehler, Automattic: 30% of internet sites use
WordPress, though we don’t host them all. Transparency report lists what sites
we geoblock due to local law & how we respond to gov’t requests. We try to
write/blog as much as we can about these issues to give context to the raw
numbers. Copyright reports have doubled since 2015; gov’t info requests 3x;
gov’t takedowns gone up 145x from what they once were. Largely driven by
Russia, former Soviet republics, and Turkey; but countries that we never heard
from before are also sending notices, sometimes in polite and sometimes in
threatening terms.
Alex Walden, Google: values freedom of expression,
opportunity, and ability to belong.  400
hours of content uploaded every minute. Doubling down on machine learning,
particularly for terrorist content. Including experts as part of how we ID
content is key.  Users across the board
are flagging lots of content; the accuracy rates of ordinary users are relatively
low, while trusted flaggers are relatively high in accuracy. 8 million videos
removed for violating community guidelines, 80% flagged by machine learning.
Flag →
human review. Committed to 10,000 reviewers in 2018.  Spam detection has informed how we deal
w/other content.  Also dealing w/scale by
focusing on content we’ve already taken down, preventing its reupload.  Also important that there’s an appeals
process. New user dashboard also shows users where flagged content is in the
review process—was available to trusted flaggers, but is now available to
others as well.
Rebecca MacKinnon, New America’s Open Technology Institute:
Deletions can be confusing and disorienting. Gov’ts claim to have special
channels to Twitter, FB to get things taken down; people on the ground don’t
know if that’s true. Transparency reports are for official gov’t demands but it’s
not clear whether gov’ts get to be trusted flaggers or why some content is
going down. Civil society and human rights are under attack in many countries—lack
of transparency on platforms destroys trust and adds to sense of lack of
control.
Human rights aren’t measured by lack of rules; that’s the
state of nature, nasty brutish and short. We look to see whether companies
respect freedom of expression. We expect that the rules are clear and that the
governed know what the rules are and have an ability to provide input into the
rules, and that there is transparency and accountability about how the rules are
enforced.  Also looking for impact
assessment: looking for companies to produce data about volume and nature of
information that’s been deleted or restricted to enforce TOS and in response to
external requests.  Also looking in governance
for whether there’s human rights impact assessment.  More info on superusers/trusted flaggers is
necessary to understand who’s doing what to whom. We’re seeing increasing
disclosure about process over time.
If the quality of content moderation remains the same, then
more journalists and activists will be caught in the crossfire.  More transparency for gov’ts and people could
allow conversations w/stakeholders who can help w/better solutions.
Koehler: reminder that civil society groups may not be
active in some countries; fan groups may value their community very strongly
and so appeals are an important way of getting feedback that might not
otherwise be available.  Scale is the challenge. 
Post asked about transparency v. gaming the system/machine
learning [The stated concern for disclosing detection mechanisms as part of
transparency doesn’t seem very plausible for most of the stuff we’re talking
about.  Not only is last session’s point
about informing bots v. informing people a very good point, “flagged as ©
infringement” is often pretty clear without disclosing how it was flagged.]
Sullivan: gaming the system is often known as “following the
rules” and we want people to follow the rules. They are allowed to get as close
to the line as they can as long as they don’t go over the line.  Can we give people detailed reasons with
automated removal?  We have improved the information
we have reviewers identify—ask reviewers why something should be removed for
internal tracking as well as so that the user can be informed.  A machine can say it has 99% confidence that a
post matches bad content, but that’s different—being transparent about that
would be different.
Koehler: the content/context that a user needs to tell you the
machine is doing it wrong is not the same content that the machine needs to
identify content for removal: nudity as a protest, for example.


Content Moderation at Scale, DC Version

Foundations: The Legal and Public Policy Framework for
Content
Eric Goldman gave a spirited overview of 230 and related
rules, including his outrage at the canard that federal criminal law hadn’t
applied to websites until recently—he pointed out that online gambling and drug
ads had been enforced, and that Backpage was shut down based on conduct that
had always been illegal despite section 230. 
Also a FOSTA/SESTA rant, including about supplementing federal
prosecutors with state prosecutors with various motivations: new enforcers, new
focus on knowledge which used to be irrelevant, and new ambiguities about what’s
covered.
Tiffany Li, Yale ISP Fellow: Wikimedia/YLS initiative on
intermediaries: Global perspective: a few basic issues. US is relatively unique
in having a strong liability framework. In many countries there aren’t even
internet-specific laws, much less intermediary-specific.  Defamation, IP, speech & expression,
& privacy all regulated.  Legal
issues outside content are also important: jurisdiction, competition, and
trade. Extremist content, privacy, child protection, hate speech, fake news—all
important around the world.
EU is a leader in creating law (descriptive, not normative
claim).  There is a right to receive
information, but when rights clash, free speech often loses out. (RTBF, etc.)  E-Commerce Directive: no general monitoring
obligation.  Draft copyright directive
requires (contradictorily) measures to prevent infringement.  GDPR (argh). 
Terrorism Directive—similar to anti-material support to terror
provisions in US.  Hate speech
regulations.  Hate speech is understood
differently in the EU. Germany criminalizes a form of speech US companies don’t
understand: obviously illegal speech; high fines & short notice &
takedown period.  AV Services directive—proposed
changes for disability rights.  UK
defamation is particularly strong compared to US.  New case: Lewis v. FB, in which someone is
suing FB for false ads w/his name or image.
Latin America: human rights framework is different.  Generally, many free expression laws but also
regulation requiring takedown.  Innovative
as to intermediary liability but also many legislative threats to
intermediaries, especially social media.
Asia: less intermediary law generally.  India has solid precedent on intermediary
liability: restrictions on intermediaries and internet websites are subject to
freedom of speech protections.  China: developing
legal system. Draft e-commerce law tries to put in © specifically, as well as
something similar to the RTBF. Singapore: proposed law to criminalize fake
news.  Privacy & fake news are often
wedges for govts to propose/enact greater regulation generally.
Should any one country be able to regulate the entire
world?  US tech industry is exporting US
values like free speech.
Under the Hood: UGC Moderation (Part 1)
Casey Burton, Match: Multiple brands/platforms: Tinder,
Match, Black People Meet.  Over 300
people involved in community & content moderation issues, both in house and
outsourced. 15 people do anti-fraud at match.com; 30 are engaged fulltime in
content moderation in different countries. 
Done by brand, each of which has written guidelines.  Special considerations: their platforms are
generally where people who don’t already know each other meet. Give reporters
of bad behavior the benefit of the doubt. 
Zero tolerance for bad behavior. 
Also not a place for political speech; not a general use site: users
have only one thing on their minds. If your content is not obviously working
towards that goal you & your content will be removed. Also use some
automated/human review for behavior—if you try to send 100 messages in the
first minute, you’re probably a bot.  And
some users take the mission of the site to heart and report bad actions.
Section 230 enables us to do the moderation we want.
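[The behavioral check Burton mentions, roughly 100 messages in the first minute means probably a bot, amounts to a simple rate heuristic. A toy Python sketch; the function and parameter names are mine, and the limit is just the number from the talk, not Match’s real configuration.]
from datetime import timedelta
def looks_like_bot(account_created_at, message_timestamps,
                   window=timedelta(minutes=1), limit=100):
    # Count messages sent within the account's first minute of existence.
    sent_in_window = sum(1 for t in message_timestamps
                         if account_created_at <= t <= account_created_at + window)
    return sent_in_window >= limit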
Becky Foley, TripAdvisor: Fraud is separate from content moderation—reviews
intended to boost or vandalize a ranking. 
Millions of reviews and photos. 
Have little to no upfront moderation; rely on users to report. Reviews
go through initial set of complex machine learning algorithms, filters, etc. to
determine whether they’re safe to be posted. A small percentage are deemed
unsafe and go to the team for manual review prior to publication. Less than 1%
of reviews get reported after they’re posted.  Local language experts are important.  Relevance is also important to us, uniquely
b/c we’re a travel site.  We need to
determine how much of a review can go off the main focus.  E.g., someone reviews a local fish &
chips shop & then talks about a better place down the street: we will try
to decide how much additional content is relevant to the review.
Health, safety & discrimination committee which includes
PR and legal as well as content: goal is to make sure that content related to
these topics is available to travelers so they’re aware of issues. There’s
nobody from sales on that committee. Strict separation from commerce side.
Dale Harvey, Twitter: Behaviors moderation, which is
different from content moderation. Given size, we know there’s stuff we don’t
know. In a billion tweets, 99.99% ok is 10,000 not ok, and that’s our week.
Many different teams, including information quality, IP/identity, threats,
spam, fraud.  Contributors: have a voice
but not a vote—may be subject matter experts, members of Trust & Safety
Council—organizations/NGOs from around the world, or other external or internal
experts.
Best practices: employee resilience efforts as a feature.
The people we deal with are doing bad things; it’s not always pleasant.
Counseling may be mandatory; you may not realize the impact or you may feel
bravado.  Fully disclose to potential
employees if they’re potentially going to encounter this.  Cultural context trainings: Silicon Valley is
not the world.  Regular cadence of refreshers
and updates so you don’t get lost.  Cross
functional collaborations & partnerships, mentioned above.  Growth mindset.
Shireen Keen, Twitch: real time interactions. Live chat
responds to broadcast and vice versa, increasing the moderation challenge. Core
values: creators first.  Trust and safety
to help creators succeed. When you have toxicity/bad behavior, you lose users
and creators need users on their channels. Moderation/trust & safety as
good business. Community guidelines overlay the TOS, indicating expectations.  Tools for user reporting, processing, Audible
Magic filtering for music, machine learning for chat filtering. Goal:
consistent enforcement.  5 minute SLA for
content.  
Gaming focus allowed them to short circuit many policy
issues because if it wasn’t gaming content it wasn’t welcome, but that has
changed. 2015 launched category “creative,” still defining what was allowed. Over
time have opened it further—“IRL” which can be almost anything.  Early guidelines used a lot of gaming language;
had to change that.  All reported
incidents are reviewed by human monitors—need to know gaming history and lingo,
how video and chat are interacting, etc. 
Moderators come from the community. Creators often monitor/appoint
moderators for their own channels, which reduces what Twitch staff has to deal
with. Automated detection, spam autodetection, auto-mod—creator can choose
level of auto-moderation for their channel. 
Sean McGillivray, Vimeo: largest ad-free open video
platform, 70 million users in 150 countries. 
A place for intentional videos, not accidental (though they’ll take
those too).  No porn.  [Now I really want to hear from a Tube site
operator about how it does content moderation.] 
Wants to avoid being blocked in any jurisdiction while respecting free
speech.  5 person team (about half legal
background, half community moderation background) + developer, working w/others
including community support, machine learning. 
We get some notices about extremist content, some demands from
censorship bodies around the world. We have algorithmic detection of everything
from keywords to user behavior (velocity from signup → action).  Some auto-mod for easy things like spam and rips
of TV shows. Some proactive investigation, though the balance tips in favor of
user flagging. We may use that as a springboard depending on the type of
content. Find every account that interacted w/ a piece of content to take down
networks of related accounts—for child porn, extremist content.  We can scrub through footage pretty quickly
for many things. 
There are definitely edge cases/outliers/oddballs, which is
usually what drives a decision to update/add new policy/tweak existing policy.  When new policy has to be made it can go to
the top, including “O.G. Vimeans”—people who’ve been w/the community from the beginning.  If there’s disagreement it can escalate, but usually
if you kill it, you clean it: if user appeals/complains, you explain.  If you can’t explain why you took it down,
you probably shouldn’t have taken it down. 
There’s remediation—if we think an account can be saved, if they show
willingness to change behavior or explain how they misunderstood the
guidelines, there’s no reason not to reverse a decision. We’re not parents and
we don’t say “because I said so.”
Challenges: we do allow nudity and some sexual content, as
long as it serves an artistic, narrative or documentary purpose. We have always
been that way, and so we have to know it when we see it. He might go for
something more binary, but that’s where we are. We make a lot of decisions based
on internal and external guidelines that can appear subjective (our nipple
appearance/timing index).  Scale is an
issue; we aren’t as large as some, but we’re large and growing with a small
team.
We may need help w/language & context—how do you tell if
a rant to the camera is a Nazi rant if you can’t speak the language?
Bots never sleep, but we do.
Being ad free: we don’t have a path to monetization.  We comply w/DMCA. No ad-sharing agreement we
can enter into w/them.  Related: we have
pro userbase.  Almost 50% of users are
some form of pro filmmaker, editor, videographer. They can be very
temperamental. Their understanding of © and privacy may require a lot of handholding.  It’s more of a platform to just share work.
We do have a very positive community that has always been focused on sharing
and critique in a positive environment. 
That has limited our commitment to free speech—we remove abusive
comments/user-to-user interaction/harassing videos.  We also have an advantage of just dealing
w/videos, not all the different types of speech, w/a bit of comments/discussion.  Users spend a lot of time monitoring/flagging
and we listen to them.  We weight some of
the more successful flaggers so their flags bubble up to review more quickly.
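[A minimal sketch of what weighting successful flaggers might look like: reports from users whose past flags were upheld score higher, so the items they flag surface for review sooner. The scoring formula and field names are my assumptions, not Vimeo’s.]
def review_priority(flagger_ids, flagger_accuracy, default=0.5):
    # flagger_ids: users who reported this item.
    # flagger_accuracy: user id -> fraction of that user's past flags upheld.
    # Unknown flaggers count as average; better track records add more weight.
    return sum(flagger_accuracy.get(user, default) for user in flagger_ids)
# Items with the highest priority get reviewed first, e.g.:
# queue.sort(key=lambda item: review_priority(item.flaggers, accuracy), reverse=True)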
Goldman: what’s not working as well as you’d like?
Foley: how much can we automate w/o risking quality? We don’t
have unlimited resources so we need to figure out where we can make compromises,
reduce risk in automation.
McGillivray: you’re looking to do more w/less.
Keen: Similar. Need to build things as quickly as possible.
Harvey: Transparency around actions we take, why we take
those actions. Twitter has a significant amount of work planned in that
space.  Relatedly, continuing to share
best practices across industry & make sure that people know who to reach
out to if they’re new in this space.
Burton: Keep in mind that we’re engaged in automation arms race
w/spambots, fake followers, highly automated adversaries. Have to keep
human/automated review balanced to be competitive.
Under the Hood: UGC Moderation (Part 2)
Tal Niv, Github: Policy depends a lot on content hosted,
users, etc. Github = world’s largest software development platform. The heart
of Github is source control/version system, allowing many users to coordinate
on files with tracked changes. Useful for collaboration on many different types
of content, though mostly software development. 
27 million users worldwide, including individuals & companies, NGOs,
gov’t.  85 million repositories. Natural
community.
Takedowns must be narrow. 
Software involves contribution of many people over time; often a full
project will be identified for takedown, but when we look, we see it’s sometimes
just a file, a few lines of code, or a comment. 
15 people out of 800 work on relevant issues, e.g., support subteam for
TOS support, made of software programmers, who receive initial intake of
takedowns/complaints.  User-facing
policies are all open on the site, CC-licensed, and open to comment.  Legal team is the maintainer & engages
w/user contributions.  Users can open
forks. Users can also open issues.  Legal
team will respond/engage.  List of
repositories as to which a takedown has been upheld: Constantly updated in near
real time, so no waiting for a yearly transparency report.
Nora Puckett, Google
Legal removals (takedowns) v. content policies (what we don’t
want): hate speech, harassment; scaled issues like spam and malware.  User flags are important signals. Where
request is sufficiently specific, we do local removals for violation of local
law (general removals for © and child exploitation).  Questions we prompt takedown senders to
answer in our form help you understand what our removal policies are.  YT hosts content and has trusted flaggers who
can be 90% effective in flagging certain content.  In Q4 2017, removed 8.2 million videos
violating community guidelines, found via automation as well as flags and
trusted flags.  6.5 million were flagged
by automated means; 1.1 million by trusted users; 400,000 by regular users.  We got 20 million flags during the same
period  [?? Does she mean DMCA notices,
or flags of content that was actually ok?]. 
We use these for machine learning: we have human reviewers verifying
automated flags are accurate and use that to train machine learning algorithms
so content can be removed as quickly as possible. 75% of automatically flagged videos
are taken down before a single view; can get extremist videos down in 8 hours
and half in less than 2 hours. Since 2014, 2.5 million URL requests under RTBF
and removed over 940,000 URLs since then. In 2018, 10,000 people working on
content policies and legal removal.
Best practices: Transparency. We publish a lot of info about
help center, TOS, policies w/ exemplars.
Jacob Rogers, Wikimedia Foundation: Free access to
knowledge, but while preserving user privacy; self-governing community allowing
users to make their own decisions as much as possible. Where there are clear
rules requiring removal, we do so. Sometimes take action in particularly problematic
situations, e.g. where someone is especially technically adept at disrupting
the site/evading user actions. Biannual transparency report. No automated tools
but tools to rate content & draw volunteers’ attention to it.  E.g., will rate quality of edits to
articles.  70-90% accurate depending on
the type of content. User interaction timeline: can identify users’ interactions
across Wikipedia and determine if there’s harassment going on.  Relatively informal b/c of relatively small #
of requests. Users handle the lion’s share of the work. Foundation gets 300-500
content requests per year.  More
restrictive than many other communities—many languages don’t accept fair use
images at all, though they could have them. 
Some removals trigger the Streisand effect—more attention than if you’d
left it alone.
Peter Stern, Facebook: Community standards are at core of content
moderation.  Cover full range of
policies, from bullying to terrorism to authentic ID and many other areas.
Stakeholder engagement: reaching out to people w/an interest in policies.  Language is a big issue—looking to fill many
slots w/languages.  Full-time and outsourced
reviewers.  Automation deals w/spam and
flags for human review and prioritizes certain types of reports/gets them to
people w/relevant language/expertise. Humans play a special role b/c of their ability
to understand context.  Training tries to
get them to be as rigid as possible and not interpret as they go; try to break
things down to a very detailed level tracking the substance of the guidelines,
now available on the web.  It only takes
one report for a policy violation to be removed; multiple reports don’t
increase the likelihood of removal, and after a certain point automation shuts
off the review so we don’t have 1000 people reviewing the same piece of content
that’s been deemed ok. Millions of reports/week, usually reviewed w/in 24
hours. Route issues of safety & terrorism more quickly into the queue.
Most messaging explains the nature of the violation to users.  Appeals process is new—will discuss on Transparency
panel. 
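[A hypothetical sketch of the report-handling behavior Stern describes above: a single report queues content for review, duplicate reports don’t raise the odds of removal, and once an item has been reviewed and deemed ok, further reports stop triggering review. Names and structure are mine, not Facebook’s.]
def handle_report(content_id, report, pending, reviewed_ok):
    # pending: content id -> first report; reviewed_ok: ids already reviewed and cleared.
    if content_id in reviewed_ok:
        return "suppressed"        # review is shut off for content already deemed ok
    if content_id in pending:
        return "duplicate"         # extra reports don't raise the odds of removal
    pending[content_id] = report
    return "queued_for_review"     # one report is enough to trigger review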
Resiliency training is also part of the intake—counseling available
to all reviewers; require that for all our vendors who provide reviewers. Do audits
for consistency; if reviewers are having difficulty, then we may need to rewrite
the policy.
Community integrity creates tools for operations, e.g. spotting certain types of images.
Strategic response team. E.g., there’s an active
shooter.  Would have to decide whether he’s
a terrorist, which would change the way they’d have to treat speech praising
him. Would scan for impersonation accounts.
Q: how is content moderation incorporated into product
development pipeline?
Niv: input from content moderation team—what tools will they
need?
Puckett: either how current policies apply or whether we
need to revise/refine existing policies—a crucial part.
Rogers: similar, review w/legal team. Our product development
is entirely public; the community is very vocal about content policy and will
tell us if they worry about spam/low quality content or other impediments to
moderation.
Stern: Similar: we do our best to think through how a
product might be abused and that we can enforce existing policies. Create new
if needed.


CC-licensed or not? you be the judge

A knitting pattern I’m using comes with a CC license and license terms that seem distinctly un-CC.  For contracts folks out there, what license do I have?

It says CC-BY-NC-SA, but then “What does this copyright notice mean?” purports to explain “You do not have permission to make copies for anyone else (including your mother, mother-in-law, children, or friends)…. [Y]ou may not publish anything based on these patterns without prior permission. And finally, it means you may not use these patterns to make any items for sale, even if you’ve made minor modifications of the patterns.”  None of these limits are entailed by NC-SA.  (I think even the items for sale part isn’t, inasmuch as you wouldn’t be charging for the pattern but for the item.)

I take it that if the licensor were sophisticated, it would be uncomplicated to treat the “what does this mean?” notice as irrelevant, because it’s not true license language, which is above.  Does the fact that the licensor clearly doesn’t understand the CC license she’s using change matters?


Showing good-looking cuts of meat is puffery for pet food

Wysong Corp. v. APN, Inc., 2018 WL 2050449, — F.3d — (6th Cir. May 3, 2018)
Wysong, which sells pet food, sued six competitors for
violating the Lanham Act through pictures like this one:

“The bag features a photograph of a delicious-looking lamb
chop—but Wysong says the kibble inside is actually made from the less-than-appetizing ‘trimmings’ left over after the premium cuts of lamb are sliced away.” The district court dismissed the claims, and the court of appeals affirmed.
Wysong argued literal falsity because the photographs on the
packages told consumers the kibble was made from premium cuts of meat, when it was
actually made from the trimmings left over after the premium cuts are gone.  But this wasn’t unambiguously false—a
reasonable consumer could understand the images as indicating the type of
animal from which the food was made (e.g., chicken) but not the precise cut
used (e.g., chicken breast).
Without a survey, pleading misleadingness required facts
supporting “a plausible inference that the challenged advertisements in fact
misled a significant number of reasonable consumers.” The complaint alleged
that contemporary pet-food consumers prefer kibble made from fresh ingredients
like those they would feed their own families, and that the accused packaging
tricked those consumers into thinking their kibble was in fact made from such
ingredients. But context matters, and “reasonable consumers know that marketing
involves some level of exaggeration.”  A
reasonable consumer at a fast-food drive-through doesn’t expect that his
hamburger will look just like the one pictured on the menu.  Likewise, without
plausible that reasonable consumers believe most of the (cheap) dog food they
encounter in the pet-food aisle is in fact made of the same sumptuous (and more
costly) ingredients they find a few aisles over in the people-food sections.”
Wysong responded that  some pet foods, such as Wysong’s, do contain
premium-quality ingredients. But Wysong failed to explain “how that fact
impacts consumer expectations. Are these premium sellers even known to the
Defendants’ intended audience? Do their products compete with the Defendants’,
or do they cater to a niche market? Are there obvious ways consumers can
distinguish between the Defendants’ products and the fancier brands?” The
ingredient lists’ effect on consumers also needed to be explained: many of the
packages listed animal “meal” or “by-product” as an ingredient. “And that
information certainly suggests that the kibble is not made entirely from
chicken breasts and lamb chops.” 
Ultimately, the relevant market and the products’ labeling are crucial
in evaluating plausibility, but Wysong said next to nothing about them. “And that is fatal here, since the puffery defense is such an obvious impediment to Wysong’s success.”
