DMCA hearings, security research, opponents

Security research continued, opponents
 
Copyright Office: Jacqueline Charlesworth
Michelle Cho
Reagan Smith
Cy Donnelly
Steve Ruhe
John Riley
Stacy Cheney (NTIA)
 
Opponents:
Christian Troncoso, BSA | The Software Alliance: we support
good faith security testing. We are surrounded by the good guys, and we have an
interest in working with academic and independent security community. But any
possible exemption also has the potential to be exploited by the bad guys. [Has
that happened with previous exemptions?] User trust is instrumental, as is
collaboration with research community. We worry about specific authorization
for researchers to make disclosures based on the researcher’s sole judgment
before the provider has had the opportunity to address the problem. Authorizing
zero-day disclosures may enable identity theft, financial fraud, and other
serious threats. Objective must be to thwart malefactors. Congress is
considering laws on info-sharing proposals, which BSA supports. How best to
create incentives? Limit liability w/o unintended consequences. Administration
is also considering policies, such as export controls on hacking tools. Concern
is balance: responsibly disseminate tools while guarding against their falling
into the hands of those w/bad intent. Congress enacted exemptions w/careful
checks and balances to prevent ill use. Proponents argue that ambiguity =
chilling effects. Were proponents seeking narrow clarifications we wouldn’t
oppose their efforts. Proposed class does much more—broad and w/no important
safeguards.
 
Congressional intent: class 25 should be amended to permit
circumvention only when software lawfully obtained, researcher has made good
faith effort to obtain permission, solely for purpose of testing, and info is
used primarily to promote security and is maintained in a manner that doesn’t
facilitate copyright infringement or violation of the CFAA. Must avoid
unintended consequences. Info should first be shared w/developer, in best
position to fix it. Time to fix before shared more broadly. Otherwise bad
actors get window of opportunity. Not speculative: already a thriving market
for security research on zero-day vulnerabilities.
 
Class should be tailored in a manner consistent
w/Congressional intent, mindful of broader cybersecurity debate. Not
inadvertently help bad actors.
 
Charlesworth: How would we do this?
 
A: you’d have to find a big chilling effect, but there’s a
lot of research going on. BSA has a big interest in partnering w/community;
many actively try to incentivize by providing rewards to those who provide info
responsibly—enough time to issue a patch.
 
Charlesworth: how much time is this?
 
A: no set time. Every vulnerability is different.  Particularly w/enterprise software.  Complex systems.
 
Charlesworth: what percentage of members authorize research?
 
A: don’t know; trend for software companies to do that. Some
of them probably work behind the scenes. Many have visible programs advertised
on their websites.
 
Q: do your members have specific concerns about trade
secrets?
 
A: absolutely.
 
Q: you said you would be ok with a narrow exemption. How to
address that?
 
A: build in the standard that it couldn’t involve any other
violation of applicable law, including violation of trade secrets.  [Why is 1201 needed if another law is
violated?  Why use copyright law to
enforce a different regime?]
 
Harry M. Lightsey, III, General Motors, LLC, with Anna Shaw, counsel for GM at Hogan Lovells (not testifying)
 
Comments are solely directed at auto industry.  Controls range from engine to safety,
braking, speed, steering, airbags.  ECU
software is protected by TPMs.  If
circumvented, could present real and present concerns for the safety of the
occupants, as well as compliance w/regulatory and environmental requirements.
 
Proponents have no evidence of chilling effect in auto
industry, which has every incentive to encourage responsible security research.
We have, as we said in Class 22, relations with various independent
researchers/academic institutions/industry fora.  We attend Black Hat and Defcon.  We engage in efforts w/DARPA. We do our part
to encourage responsible security research. Our concerns are that a broad
exemption would harm ability to control research and have opportunity to fix
vulnerabilities before they’re widely disclosed, creating safety concerns.
 
Charlesworth: you asked about limiting exemptions to
vulnerabilities caused by access controls.
 
Troncoso: those are the only past exemptions—limited to
access controls creating security vulnerabilities. The proposal here is very
broad, applied to any type of software.
 
Charlesworth: but are you asking for that as a limit? Are you
ok with a narrow exemption limited to vulnerabilities caused by access
controls?
 
Troncoso: we’d be comfortable with that.
 
Charlesworth: that’s a fairly considerable limitation.
 
Troncoso: our motivation is the disclosure issue. If that
can be addressed and congressional intent can be integrated, we would be
comfortable w/an exemption broader than vulnerabilities specific to the access
controls.
 
Charlesworth: hacking into live systems—how should we think
about that issue in practical terms? There’s not a huge record of need to look
at live nuclear power plants.  How should
the Office be thinking about the concern about publishing research where a
breach could be catastrophic?
 
Green: two issues. Should you be testing live systems?  That can be dangerous. However, there are
other directly applicable laws, like the CFAA, specifically designed to deal
with that. I have never viewed the DMCA as specifically applicable to that case.  Is it something we should be using 1201
for?  Does that benefit us as a
society?  Clearly it does not. We know
that there are a number of systems that whether you’re accessing in real time
or as separate copies, the results can lead to finding major safety issues. The
value of fixing them is very high.
 
Charlesworth: saw a news report about someone who allegedly
hacked into a live operating airplane system. [Are they being charged with a criminal
violation of 1201?] They may be doing it for what they perceive to be good
purposes. Security researchers could make a mistake—exposed a flaw, but also
isn’t that scary?  Would you be willing
to limit this to not-live systems?  Maybe
that should be debated in Congress.
 
Green: I speak for all the researchers here when I say that
story is not something we endorse. No ethical researcher should be working on
live systems.
 
Reid: In addition to distinguishing that story, the vast
majority of the research we’re talking about is aimed at fixing problems in a
safe way.   
 
Charlesworth: but how do we limit the exemption to ethical
research? There needs to be line-drawing to notify the public of what they can
and can’t do. [Is that copyright law’s job, to reintroduce the entire legal
code into 1201 exemptions?]  We’re trying
to consider potential narrowing so that people feel that the exemption would be
consistent w/congressional intent and the goals of the proceeding.  So are you willing to exclude live systems?
Doesn’t think there’s much of a record on live systems. 
 
Reid: Urge you to consider that however this gets treated in this proceeding, as Green mentioned there are a number of other laws here. The tampering behind these collateral concerns is already illegal under a whole bunch of laws. The question you ought to ask: is the DMCA the last line of defense for airplanes? Are we relying on © to protect airplanes? (A) we're not, (B) if we were that would be troubling, (C) we are so far away from the purpose of the DMCA to protect © works from © infringement. Nothing in the airplane story involves circumvention,
FBI affidavit doesn’t cite 1201.  Legal
and policy venues exist to address these; the Office need not worry about
enabling behavior that’s illegal under other laws b/c it will still be illegal.
There are complicated contours to this discussion, and these discussions should
happen in other venues.  We’re in support of having those venues
participate and apply those laws and policies. But (c) is not the place to do
it, and you don’t need to, and 1201 doesn’t require you to. [Applause!]
 
Bellovin: We are here to avoid breaking laws. We don't want
to violate the CFAA or airplane hijacking laws. 
© infringement is almost never a concern unless you have a copy of the
system.  The guy who allegedly tried to
hack the airplane in flight wasn’t copying Boeing’s software. As a pragmatic
matter, if I’m testing a system for security flaws in a way that could possibly
involve copying, I have to have the thing in my possession. This is not a CFAA
exemption request.
 
Charlesworth: couldn’t you hack in through the internet?
 
Belovin: you’d have to violate the CFAA first.  The larger violation there is the hacking.
The more probable case is not involving the DMCA, but stealing source code—this
is not protected by TPMs under the DMCA, it’s protected by ordinary enterprise
security controls and firewalls. The DMCA was intended to protect copyright
violations, not a CFAA supplement. 
 
Matwyshyn: Airplane incident facts are in dispute, but the security community is not rallying behind him.
Homicide laws are the first line of defense. Whether a TPM was
circumvented is irrelevant.
 
Charlesworth: b/c of how the law is written, we have to consider these issues. [Why? That's not in the exemption standard—it's noninfringing use, as Betsy Rosenblatt eloquently said.]
 
Blaze: back to the issue of disclosure—remember that repair
is important, but so is warning consumers against defective products.  The Snort toy: if I were a parent, even
before it’s fixed, I’d want to know. Disclosure to parents is important even at
the price of embarrassment to the vendor. Give the benefit of the process not merely
to the developer: users are stakeholders as well.
 
Lightsey: no evidence of chill in auto industry. Given
dramatic consequence on safety, proponents have not met burden of showing need
for an exemption. Saying there are other laws and regulations is not sufficient
in this context. We feel the DMCA is a relevant protection and we encourage the
ability to engage w/security researchers responsibly.
 
Troncoso: Stanislav explained that he reached out first to
mfgr, notwithstanding the bluster he was ultimately able to work w/them to
ensure the vulnerability was fixed. He didn’t disclose until after it was
fixed. That gets to the norm that we’re seeing even in researchers in this
room. Consistent w/companies' interests in protecting consumers. Professor Green's initial filing: he indicates he always notifies the vendor before disclosing vulnerabilities to the public. It's a key issue to us, critical to public safety.

Green: I always attempt to provide disclosure. Sometimes it’s not possible, as when
there are 1000s of websites. Sometimes you notify, and they are not able to
remediate it. They tell you there’s no fix or they’ll take a year. Then you
have the obligation to look at the end user/consumers and that has to affect
your calculation. Android is rarely updated by carriers. Google will make a
patch, but 90% of consumers may be vulnerable a year later. You have to decide
based on what’s right for consumers and not based on what’s good for software
companies.
 
Stanislav: In the case of the Snort and the camera, both were reported through the helpdesk because there was no front door. Took days to convince them that there was an issue to kick upstairs. Had a ticket closed
on him and had to reopen w/Snort. Only reason this got solved was that my
company was going to disclose publicly. At that point reporter reached out;
vendor said they’d never heard from a researcher before [i.e., it did not tell
the truth]; then the CEO reached out to him on the thread they’d already been
having. The internet of things comes from innovators—not large legal teams that
understand complex legal situations; they will fight back in an attempt to shut
you up.
 
Matwyshyn: indeed, car companies like Tesla are state of the
art. But unfortunately there’s a large degree of variation across car
manufacturers. Some haven’t fully staffed security teams and have many openings—it
would be beneficial to engage with security community. Tesla, for example, is ISO
compliant and doesn’t oppose our approach. 
If every car company was on the level of Tesla, we wouldn’t be
concerned, but security researchers are concerned.
 
Belovin: I’m in favor of notification, but one issue is
whether or not the vendor would have the legal right to block or delay
publication interacts in a bad way w/university policies. I may not accept a
grant that gives the funding agency the right to block outside publication.
University sees this as a matter of academic freedom. Mirrored in an odd place
in the law on export controls.  What is “export”?  You can’t teach foreign nationals certain
things—one of the things it says in the law is that fundamental research is ok,
but what is that?  One criterion: can
someone else block publication?  If
someone else can block publication, then export controls apply, which causes
very serious chilling effects of its own.
 
Blaze: we found sweeping vulnerabilities in election software. Research authorized by customers (state gov'ts), not by voting machine vendors. We were indemnified under state law and there was some contractual back and forth w/the vendors that I wasn't privy to—grey area. One of the issues we addressed was whether to give the vendors advance notice to fix. We normally do try to give notice, but we felt that letting end users remediate immediately outweighed the benefit of giving vendors time to repair flaws that would take longer to fix than the time until the next election. Vendors didn't see our results until they were made public.
 
Moy: Emphasize again the importance of disclosing not only
so the vulnerability can be remedied but so that consumers can make an informed
choice.  If a vendor can stall
publication for 6 months/year but continue to market the product in the
meantime, that’s an enormous problem w/major implications for consumers.
 
Charlesworth: could some be addressed by high-level
communication: there is a security problem?
 
Moy: maybe for some, not all. There will be cases where the nature of the vulnerability is important. Consider the BMW vulnerability publicized in January—remote unlocking. Details might be important to certain consumers—e.g., whether it could be exploited to unlock other people's cars, not just your own; don't know if that's true, but consumers could make decisions for themselves.
 
Charlesworth: but it’s not step by step instructions. Why
would an ordinary consumer need to know that?
 
Moy: ordinary consumers include people who understand how
the tech works. I wouldn’t be able to
exploit a vulnerability even if you handed me a detailed paper about it.  [Likewise.]
 
Charlesworth: but what about enabling a certain group of
people who might not otherwise have known about it—not sophisticated ones. [So, sophisticated enough to understand the disclosure's details, but not sophisticated enough to do it themselves. Charlesworth is suggesting that researchers
publish “step by step instructions” for a hack. But I don’t think that
describes most of what they do, or not in that sense.  I read
descriptions of Heartbleed, but that doesn’t mean it was step by step.]  Why would I need to know the way in which someone
can exploit the Snort?
 
Moy: Who’s going to translate the nature of the
vulnerability?
 
Charlesworth: Stanislav will.  The company refuses to fix it, so he
publishes an article saying this toy has a problem.  I wouldn’t then need line by line
instructions in order to make a decision about possessing that toy.  Why is that so hard to concede?
 
Moy: that would be enough for some consumers, not for
others.
 
Charlesworth: Why?
 
Moy: sufficient for some, but not for other, more sophisticated consumers. I'm having a
difficult time imagining how to write a disclosure requirement that would be
written so that you could disclose, but not enough to replicate it technically.
 
Charlesworth: (j): solely to promote the owner/operator’s
security. Part of the policy was that you weren’t necessarily advising the
world how to do this.  Doing the research
in a way that didn’t enable malicious actors. 
Congress put the test in here to deal w/the complications—whether you
use the research responsibly. [Whether you use the research responsibly with respect to copyright, though, is a very different question than "are you providing a net benefit to the world?"]
 
Moy: Q depends also on how the company deals w/security. Is it
something that could be fixed, or does it represent a major flaw?  Security experts should be able to analyze
that and explain to us if necessary.
 
Charlesworth: is a high level disclosure better than none?
 
Moy: more information for consumers in the market is
generally a good thing, but that doesn’t get to the reasons we want disclosure.

Stanislav: (1) At the time of the webcam—CEO said my research was inaccurate
and misleading. I’ve presented it publicly now; when a story like this comes
out and the vendor says I'm lying I can prove it. (2) Prevention: if the intermediary-users (web companies etc.) don't know the specific details of the vulnerability before the vendor patches it, then they can't fix it on an interim basis.
 
Sayler: the individual disclosure is useful for consumers
who may recognize that the problem may be replicated in other devices.
Replication is hugely important, and it requires public disclosure for those of
us in the community who do this kind of work.
 
Many of the flaws we discover, we’re not the first—many are
already available on the black market. Allowing disclosure will not increase the
number of zero-day exploits.
 
Charlesworth: the concern is you may be educating people
about the unknowns.
 
Sayler: it’s a balance: it might happen, but you are also
protecting millions of people. Extraordinarily hard to codify what the proper
behavior is.  Thus we should rely on
researchers’ good faith (and other laws). 
Far outweighs the downsides.
 
Lightsey: to protect the record, on behalf of GM,
cybersecurity is something we take very seriously. We have a senior leader at
GM. The industry is committed to voluntary privacy principles, including
promise to maintain reasonable security, enforceable under §5 of FTCA.  [Though as Moy says, how will you know if
they’re following through?]
 
Troncoso: Potential for companies to decide not to fix a
problem. But we do have regulators in place to handle those issues. If they
encounter pushback from software companies unwilling to fix problems, urge them
to go to the FTC.  [Right, because they
have so many resources.]
 
Charlesworth: what would the FTC do?
 
Tronsoco: they’ve been willing to bring enforcement actions
against companies not employing sufficient security standards.  Building in a disclosure requirement is
critical to avoid perverse incentives to keep research hidden so it’s more
valuable on black/gray markets. 
Potential for exemption to be exploited by bad actors.
 
Stallman: part of the value of exploits trafficked in black
market is secrecy. Publication is a way to make an existing but unknown
vulnerability lose its value. 
 
Blaze: There is a bright line between legitimate research
and black market: we publish our work and we’re required to do so by the
scientific method.  You asked about a
compromise disclosure in which we describe existence of vulnerability w/o
describing how to exploit. With some examples it might be possible to describe
the vulnerability/remediation w/o enough detail to exploit. But for many, many others, describing the existence would make the exploit trivially easy: the difference between describing the exploit and describing who's vulnerable is nonexistent. No line to be drawn unless we want "there's a terrible, life-threatening problem with GM cars" to be the disclosure—"this model has a brake problem" is better.
 
Charlesworth: but saying there’s a brake problem is
different than line by line discussions.
 
Blaze: sometimes it is possible, but in other cases it’s
not. [Perhaps we should trust the programmer/security researcher and not the
person who doesn’t program here?] Vulnerability might be: if you turn the key
three times the brake stops working. The only way to know is to try it. There
is no other way to describe it. This varies across the spectrum. There is not a
generally applicable line meaningfully separating them.
 
Charlesworth: when you publish, sometimes you refrain from
giving detailed information. [Charlesworth has a specific idea of “line by line
instructions” that is not consistent w/the programmers’.]
 
Blaze: sometimes.  We
ask whether it’s necessary to include details. Sometimes it’s in the middle,
and you can disclose 90% and a determined person could ferret it out. An
essential property of the scientific process is to publish reproducible,
testable results that others can build upon. Readers of scientific papers need
to be able to verify and reproduce.
 
Matwyshyn: There’s a whole array of mitigation measures
researchers regularly use—timing, detail, a bundle of best practices.
 
Charlesworth: are those written down?
 
Matwyshyn: they’re contingent on the nature of the reproducibility.
The ISO standards are the closest.
 
On the point of 0-day vulnerability markets – the researcher's perspective is: I know a vulnerability. (1) Do I sell it and make a quick buck, or (2) undertake the laborious and personally risky process of contacting vendors, working for months, and maybe having them threaten me w/the DMCA?
 
Charlesworth: so there’s overlap w/bad guys?
 
Matwyshyn: the US gov’t purchases zero-days regularly. But
most vulnerabilities are known—a researcher will find that this product hasn’t
been patched with a ten-year-known vulnerability. Don’t want the DMCA to deter
contacting the company.
 
Matwyshyn, on the FTC: I served as a privacy advisor there. But it is an agency with limited resources. There isn't a formal intake mechanism for security researchers to report problems. The FTC can't mediate DMCA threats from vendors.
 
Charlesworth: you’re suggesting that people might sell
research on the black market if they don’t get the exemption.
 
Matwyshyn: The zero-day market is a very small sliver.
 
Charlesworth: how does it play into the exemption process?
 
Matwyshyn: in the absence of a regulatory regime, which we
don’t have.
 
Charlesworth: well, we have 1201. You’re assuming someone has
discovered—have they broken the law or not?
 
Matwyshyn: if they may
have circumvented, we want them to report it. 
 
Charlesworth: why would they care?
 
Matwyshyn: because the act of disclosure currently exposes
them to liability. We want to nudge them towards disclosure.
 
Charlesworth: does that actually happen?
 
Bellovin: an ex-NSA hacker has stated that he sold an exploit
to the US gov’t. Here’s someone who’s finding and publishing vulnerabilities
and also sold it to the intelligence community.
 
I served as chief technologist to the FTC for a year. FTC
doesn’t have the resources to act as intermediary in these cases. It does not
resolve individual cases about kinds of research people can do.  Security researchers: take auto hacking. One
case involved vulnerabilities in the wireless tire pressure monitor. I never
would’ve looked there, but once I was pointed in that direction, any competent
researcher could replicate the issue within a few weeks. Asking the right
question is often the very hardest part of this kind of research.  Different remediation measures are indicated
depending on the type of issue.
 
Reid: Underscore Belovin’s point about remedies. It’s not just
about understanding and explaining vulnerability. Sometimes consumers can take
an actual remedial action, which sometimes takes some detail.  If your car has a software problem, you may
want to know how to fix it. Look at how auto industry handles other types of
problems: airbag recall; we now know every detail, including every factory the
airbags came from.  That is useful
information.  We lack that useful
information about how to deal with the risks of hackers hacking our cars, which
allows consumers to apply pressure.
 
Q: Talk about norms—is there anything in standards that
could identify a security researcher v. a black hat?
 
Matwyshyn: someone who discloses flaws for security and
works to better systems. ISO standards are evolving. The leads have stated that
they are happy to directly consider any issues the Copyright Office panel feels
should be discussed.
 
ISO is an organization that has traditionally been closed;
lots of corporate standards; will push for openness of these standards because
of the tremendous social value of an exemption.
 
Charlesworth: it’s a little hard to draft a law based on
something no one can see.  [From your
lips to Congress’s ears! [TPP reference]]
 
Reid: we’d be comfortable w/ a limitation that makes clear
it has to be for noninfringing purposes, the statute is geared for that and it’s
easy to write in.
 
Q: what about not in violation of any other laws?
 
Reid: defers to papers.
 
Matwyshyn: suboptimal framing b/c many of the chilling
effects involve people leveraging DMCA to threaten with CFAA etc.
 
Charlesworth: we will not grant an exemption that says you
can violate other laws.  [I don’t think
that’s what’s been asked for; see Betsy Rosenblatt again.  Shall we say “you can’t use the exemption if
you’re going to commit murder”?]
 
Bellovin: one reason there's no consensus on reporting—it's
often very hard to understand how best to disclose; judgment calls. More
germane: there’s a fear of vendors not acting in good faith. There is a
chilling effect. Rightly or wrongly, we’ve seen enough instances where the DMCA
has been used as a club, even with no copyright interests, that researchers don’t
want to give someone else the power to suppress them.


DMCA hearings: security research proponents

Library of Congress DMCA exemption hearings
 
Proposed Class 25: Software – security research
This proposed class would allow researchers to circumvent
access controls in relation to computer programs, databases, and devices for
purposes of good-faith testing, identifying, disclosing, and fixing of
malfunctions, security flaws, or vulnerabilities.
 
Copyright Office: Jacqueline Charlesworth
Michelle Cho
Reagan Smith (main questioner)
Cy Donnelly
Steve Ruhe
John Riley
Stacy Cheney (NTIA)
 
Charlesworth: goal is to clarify record, home in on areas of controversy rather than restating written comments. Interested in refining/defining broad classes in relation to the support in the record. There's a court reporter (so my report is far from definitive!). (Also note that I am not good at ID'ing people.)
 
Proponents:
Matthew Green, Information Security Institute, Department of
Computer Science, Johns Hopkins University
Research in area of computer security and applied
cryptography.  Risks posed by DMCA to
legitimate security research: discovered serious vulnerabilities in a computer
chip used to operate one of the largest wireless payments systems and widely
used automotive security system. Naïve: didn't know what to expect when we notified the manufacturer, but believed it would involve discussion and perhaps the repairs and mitigations we developed. That's
not what happened. Instead, a great deal of resistance from chip manufacturer,
and active effort to get us to suppress our research and not publish
vulnerabilities.  Instead of repairing
the system, mfgr spent considerable resources to stop us from publishing,
including raising specter of expensive lawsuit based on 1201. Small component
was reverse engineering of software and bypassing extraordinarily simple TPM.
1201 was never intended to prevent security researchers from publishing, but it’s
hard to argue merits/intent of law when you’re a penniless grad student.
 
Charlesworth: why isn’t 1201(j) enough?
 
A: My understanding is that there’s the bypass issue and the
trafficking issues. Both potentially an issue depending on what it means to
traffic.  Bypassing the TPM was raised to
us at the time.
 
Blake Reid, Samuelson-Glushko Technology Law & Policy
Clinic at Colorado Law: Existing exemptions for (j), (g), and (f) for
research/reverse engineering, but as we detailed in comments, there are
shortcomings in each.  (j) fails to
provide the up-front certainty needed for an exemption, because, e.g., it’s got
a multifactor test that depends on things like how the info was used and
whether the info was used/maintained in a manner that doesn’t facilitate
infringement.  We might well try an
argument for applying the exemptions if god forbid he was sued, but as we’ve
asked the Office for before we want further up-front clarity for good faith
security testing/research. That was the basis of 2006 and 2010 exemptions and
we hope for them again.
 
Green: my incident was 2004.
 
Charlesworth: would the activities described fall into one
of the exemptions?
 
Reid: Don’t want to opine—again if we were in court I’d
absolutely they were covered, there was no ©’d work, etc. but if advising Prof.
Green beforehand, hypothetically, there would be reason to be nervous b/c of
the ambiguous provisions of the law. The issue of certainty.

Green: we were advised at the time by the EFF, pro bono. We were told they
could provide no guarantee that any of the exemptions would protect us if we
were sued. They didn’t say we were violating the law, but the complexities of
the exemptions were such that they provided no guarantee.
 
Charlesworth: did you know that before or after?
 
Green: before, during, after.
 
Charlesworth: you sought legal advice before?
 
Green: yes, my prof had a similar experience.
 
Charlesworth: but you proceeded anyway.
 
Green: yes, b/c we believed it was necessary. We were
fortunate to have the EFF, which gave us the confidence to go forward; we felt
that the probability was relatively low. 
The system’s been repaired. But w/out that the system might still be
broken today. I now begin every project w/ a call to a lawyer for a 1201
mitigation possibility. I still get pro bono representation, but many
researchers aren’t so fortunate. Also, good faith research shouldn’t require
lawyers; increases the cost of every project.
 
Reid: We predicted in 2006 that Sony rootkit wouldn’t be the
last dangerous/malfunctioning TPM. We vastly underestimated the widespread
vulnerabilities that can be caused by and concealed by TPMs—intermingled with
everyday consumer goods including cars, medical devices, internet
software.  Chilling effects have become
ever more pernicious—a roomful of the nation's top security researchers stands before you today highlighting the threats they, their colleagues, and their students face in trying to make America a safer place to live. Existing exemptions show security to be a priority but are not enough to avoid attempts to silence their work, which is protected by the First Amendment and protects the public. In 2006, Register Peters rejected the projection of worsening TPMs and recommended against a broad exemption, but that projection proved prescient. Lengthy
record of security vulnerabilities that could have been avoided w/a workable
exemption.  Researchers before you today
are the good guys. They care about abiding by the law and they need breathing
space. W/out your help they will lose an arms race to bad guys who don’t care
about violating 1201.
 
Q: 1201 exemption for video games—was that too small?
 
Reid: the issue was not with the piece of the exemption that was granted, but that the vulnerabilities around DRM shipped w/video games were just one piece of an evolving threat. The evolving piece was in things like cars, medical devices. It was the narrow piece that said security researchers could look at TPMs only for video games.
 
Q: but the exemption had other limits—info must be used
primarily to promote security of owner/operator, and must not be used to
promote copyright infringement.  Does
that restrict research?
 
Reid: it’s hard to tell you—the subsequent vulnerabilities were
not necessarily in video games. Folks took a look at exemptions and said video
game exemptions were too narrow to do research. 
If added to broad exemption, we’d have some of the same concerns—don’t
have certainty w/words like “primarily.”
 
Q: your proposal is “for the purpose”—how much more
certainty does that provide you? Existing statute says “solely”—congressional intent?
 
Reid: the more certainty we can get, the more mileage
researchers will get. Post hoc judgments are problematic b/c it’s hard to say
up front what the primary purpose is.
 
Q: but there will always be post hoc judgments.  We also have to ask what is good faith.  We have to draft language for an exemption—we
want to understand what kinds of limitations might be appropriate in language
that balances need for less post hoc analysis with some definition of what it
is we are allowing.  Congress did act in
this area, which is guidance about what Congress was thinking at the time.  [But the exemption procedure is also guidance about what Congress was
thinking at the time.]
 
Reid: clarity about what these limits mean: being used to
facilitate © infringement—opponents have said that simply publishing
information about a TPM/software might facilitate copyright infringement.
Guidance that the acts we're concerned about here (outlined in the comments: investigating, research mostly in a classroom environment, being able to publicly disclose the results in a responsible way) are covered. If you enable that, that's the most important piece we're looking for in limitations.
 
Charlesworth: tell me more about a classroom environment. Should an exemption be tied to the academic community?
 
Reid: Student ability to work on this is really important,
but we wouldn’t support a classroom use limit. Private sector and amateur
security researchers are really important, building skillsets.
 
Charlesworth: should a university researcher oversee all of
this research?
 
Green: very concerned about that. The most dynamic/important
research is being done by people in the private sector, commercial security
researchers. The vehicle security research is funded by DARPA but worked on by
Charlie Miller, unaffiliated w/university. Very similar with other kinds of
research. Some is authorized, but the vast majority is done by private
individuals w/access to devices. Recent cases: researchers told to back off b/c
of DMCA.  One happened just a couple of
weeks ago.
 
Andy Sayler, Samuelson-Glushko Technology Law & Policy
Clinic at Colorado Law: Heartbleed, Shellshock, numerous vulnerabilities in the
last year.  Logjam—a week ago.
 
Q: was that done without circumvention?

Green: we don’t know.  Some public spec,
some looking at devices.
 
Sayler: note that much security research is funded by the gov't. 1201 is used to discourage independent security research. Congress didn't intend good faith research to be suppressed, but the existing exemptions are ambiguous and impose undue burdens. Ioactive researcher was threatened w/DMCA for
exposing vulnerabilities in Cyberlock locks. 
Significant personal risk/unreasonable legal expenses to mitigate risk.
 
Mark Stanislav, Rapid7 security consultant: Last year
assessed Snort, a toy that lets
parents communicate with children over the internet.  Oinks to signal new message. Child can reply.
The security features were flawed; unauthorized person could communicate
w/child’s device and could access name, DOB, and picture of child as well.
Contacted the vendor; despite my offer to go into details w/engineers, vendor
wouldn’t engage and made legal threat, saying I must’ve hacked them.  Productive dialogue eventually occurred and
resolved issues. Situation made me fear for my livelihood.
 
Q: did you discuss DMCA exemptions? 
 
Stanislav: I wasn’t privy to the lawyers’ conversations. I
understood that I was at risk.  My goal
was protecting children, but it wasn’t worth a lawsuit.  I found vulnerabilities in my own webcam that
would allow a criminal to access it. 
Direct risks to privacy and safety. I contacted the vendor and offered
assistance. Final email, after the tone turned from friendly to threatening, wanted me to meet w/them b/c they said I might have accessed confidential information. Entrepreneurs who made Snort have gone on to
win numerous awards.  What if a criminal
had abused these and put children in harm’s way? Webcam: new leadership came in
and apologized. Research prevented harm, privacy violations, allowed businesses
to fix critical flaws before adverse impact. We help people/businesses who don’t
know they’re in harm’s way/putting people in harm’s way.  We live in a time when a mobile phone can
control an oven. Smart TVs have microphones listening to us all the time.
Please help widen the collective efforts of security research; researchers staying away from research b/c of the DMCA is the problem.
 
Steve Bellovin, Columbia University: Researched in private
sector for decades. Academic research is generally concerned with new classes
of vulnerabilities. Finding a buffer overflow in a new device is unpublishable,
uninteresting. Most of the flaws we see in devices we rely on are serious but
known vulnerabilities. Not the subject of academic research; the independent
researchers are the ones actively protecting us. Students unlikely to do it; I
discourage my PhD students from looking for known vulnerabilities because it
won’t get them credit.
 
As a HS student, I wrote a disassembler so I could study
source code. That’s what got me to where I am today.  Arguably would be illegal today, if I wanted
to look at a smartphone. Four years later, I caught my first hackers. I teach
my students how to analyze and attack programs. I coauthored the first book on
firewalls and internet security.  You
have to know how to attack in order to secure a system. To actually try an
attack is a hallmark assignment; it’s not the only way, but it is one of the
ways. He's a copyright owner who has profited a great deal from copyright, but wants balance. An 1853 treatise considered whether it's ok to discuss lockpicking: truthful discussion is a public advantage. Harm is counterbalanced by good.
 
Andrea Matwyshyn, Princeton University: These questions are
about frivolous litigation that attempts to suppress discussion around existing
flaws that may harm consumers, critical infrastructure, economy.  Help curb frivolous 1201 litigation.
 
Charlesworth: on the issue of disclosure: you’re suggesting
that mfgrs tend to shut down the conversation, but isn’t there a countervailing
interest in giving mfgr some time to correct it before public
dissemination?  I understand bad hats are
out there already, but hacking into something more mundane like a video console
there are probably people who don’t know how to do that who might be educated
by disclosure.
 
Matwyshyn: there are two types of companies. Some are very receptive to this—FB, Google, Tesla have bounty programs that compensate researchers. Processes in place w/clear reporting mechanisms on websites and internally identified personnel. The second type has not yet grown into that sophisticated model. So it's this second category, which doesn't possess the external hallmarks of sophistication, that reacts viscerally, through overzealous legal means and threats. The
Powerpoint I shared has a copy of one of the DMCA threats received Apr. 29, 2015. Hearing Exh. 10, a letter from Jones Day to Mike Davis, security researcher at Ioactive, a security consultancy. It regards repeated attempts to contact Cyberlock about their product. They used the DMCA as a threat.
 
Q: Cyberlock seemed to have taken the position that Davis
insufficiently disclosed. [Actually it indicates that he didn’t want to talk to
Jones Day, not that he didn’t want to talk to Cyberlock, which makes sense.]
 
Matwyshyn: he was ready to share that with technical
team.  Subsequent followup email in
record explains and identifies prior instances of threat.
 
Q: if you granted the proposed exemption in full, would that
change the outcome? If a company is going to engage in frivolous litigation, we
can’t stop that.
 
Matwyshyn: I believe it would help a lot.  The note from Ioactive’s general counsel: GC’s
perspective is that it seeks a strong basis for defense.  Expresses concern that litigation to the
point of discovery can cost $250,000. When we’re talking about a small security
consultancy or independent researcher, engaging w/the legal system is cost
prohibitive.  A roadmap exemption would
give a one-line statement of reassurance that a GC or security researcher could
send to a potential plaintiff. W/exemption, Jones Day would be less likely to
threaten DMCA as basis for potential litigation.  Provided that Cyberlock has in place a
reporting channel that the researcher used, and researcher disclosed the list
of disclosables we proposed, that would provide a clear roadmap for both sides’
relationship in the context of a vulnerability disclosure.  Significant improvement in murkiness, more
easily discernable question of fact.
 
Q: One of the elements is that the manufacturer had an
internal management process. How would a researcher verify that?
 
Matwyshyn: the researcher needs a prominently placed
reporting channel. The additional requirements are not researcher-centric, but
assist figuring out what happened if something went awry. The researcher need
only assess whether there is a prominently placed reporting channel. 
 
Q: you want a front door, but you’ve put other elements in
your proposal—the creation of an internal corporate vulnerability handling
process. Opponents have said a researcher wouldn’t even know if the company had
such processes in place. How would they know?
 
Matwyshyn: the later parts are only for a subsequent finder of fact. Suppose the researcher used the front door and then the sales department lost the report—the exemption protects the researcher.
 
Q: but does it give ex ante comfort?  The researcher won’t know if that will
happen. 
 
A: if it goes off the rails b/c the internal processes weren’t
in place, the researcher has a second
tier ability to defend if the disclosure results in a threat.
 
Bellovin: In almost 30 years, it's been remarkably hard to find ways to report security vulnerabilities. I know security people and can generally find an informal channel.
But put yourself in the position of someone who has found a flaw and
doesn’t know me.  If this vulnerability
is a threat to safety, public disclosure is a boon.
 
Matwyshyn: Heninger attempted to contact 61 companies about a vulnerability. 13 had contact info; for the rest she had to guess at a point of contact. 28 humans responded out of 61. A different 13 said they'd already fixed it. 6 subsequently released security advisories b/c of her report, 3 of those after ICS-CERT intervened with the provider in question.
 
Q: the suggestion is that she made a good faith, documented attempt to notify. Isn't that a more objective standard than having her know the internal processes?
 
Matwyshyn: the judgment point for the researcher is “is
there a front door”?
 
Q: in many cases they may have a front door [note:
contradicted by the record], but you could try to figure that out and keep a
record if your attempt was unsuccessful.   You shouldn’t have to know the internal
workings of the company. [This is the proposal, though!  Right now you have to know the internal
workings to reach someone if there’s no front door. Under the proposal, you don’t
have to know the internal workings, but you know that if you deliver through
the front door you are protected!]
 
Matwyshyn: right now you get a chill even with documented
attempts.
 
Q: but some companies will just threaten you no matter
what.  You won’t avoid that entirely. If
we go down this road, how will people know? 
If the standard relies on how people handle things, how will they know?
 
Matwyshyn: if the front door exists, the researcher should
use it. If the disclosure goes off the rails, the researcher gets an extra
boost.  W/out legal team, you can assess
whether there is a front door and thus whether you should use it.
 
Q: why shouldn’t they try other methods if there isn’t a
front door? You try to figure out who owns something in copyright all the time.
You’re saying we should have a standard that everyone has to have a front door.
[Why is this a copyright issue? What is the nexus with copyright infringement?
Why not follow the ISO security recommendations? Why would the Copyright Office
have a basis for knowing better than the ISO standard how vulnerabilities
should be reported?]
 
Matwyshyn: The ISO standard was negotiated across years and
stakeholders.
 
Q: those are big companies, and this law would apply across
the board.  Manufacturers who don’t have the
resources might not know.  We have to
think of them as well. [B/c of their copyright interests?]
 
Matwyshyn: could identify copyright contact as point of
contact. 
 
Q: for DMCA that’s a statutory requirement. [Um, “requirement”
if the companies want the DMCA immunity. If they want people to report
vulnerabilities to them, why not have them follow a similar process?]
 
Matwyshyn: Congress did discuss security as well—you can
expand the concept/clarify it.
 
Matthew Blaze, University of Pennsylvania: History of
security research, including on Clipper chip and electronic voting
systems.  Two specific examples of DMCA
issues, though it loomed over every nontrivial work I’ve done since 1998.  Analogous to Ioactive/Cyberlock issue, in
2003 I decided to look at applications of crypto techniques to other types of
security: mechanical locks. Discovered a remarkably similar flaw to that
discovered by Ioactive: could take an ordinary house key and convert it into a master key that would open all locks in a building. Real world impact and interesting use of
crypto; master key systems need to have their security evaluated. Purely
mechanical, no TPMs. And so publishing was simple and without fear.  But other work is chilled by the DMCA.
Example: in 2011, w/grad students studied P25, a communication system used as a
digital 2-way radio by first responders, including federal gov’t.  Examined standards for the system as well as
the broad behavior of a variety of radio products that used them. Discovered a
number of weaknesses and usability failures, and discovered ways in which the
protocols could lead to implementation failures. To study those failures, we
would’ve needed to extract the firmware from actual devices. But we were
sufficiently concerned that in order to extract the firmware and reverse
engineer it, and in particular develop tools that would allow us to extract the
firmware, we would run afoul of the DMCA. So we left a line of research
untouched. If we had the resources and the time to engage a large legal effort
to denote parameters, we could possibly navigate that, but under the DMCA as
written we decided it was too risky.
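For readers unfamiliar with the master-key result Blaze mentions above, here is a rough, hypothetical C sketch of the position-at-a-time probing idea behind it. The `opens` oracle and all cut values are invented for illustration (the real attack cuts and tries physical keys), so treat this as a sketch of the combinatorics, not the published method's details.

```c
/* Sketch of the master-key attack idea: because each pin stack opens at
   EITHER the change-key depth or the master depth, an attacker holding a
   legitimate change key can probe one position at a time. With N positions
   and D depths, that's at most N*(D-1) tries instead of searching D^N keys.
   All values below are invented for this demo. */
#include <stdio.h>
#include <string.h>

#define N 5  /* pin positions */
#define D 6  /* possible cut depths per position */

static const int MASTER[N] = {3, 1, 4, 0, 5};  /* secret master cuts (demo)  */
static const int CHANGE[N] = {2, 2, 2, 2, 2};  /* attacker's own key's cuts  */

/* Demo oracle standing in for "does this candidate key turn the lock?" */
static int opens(const int key[N]) {
    for (int i = 0; i < N; i++)
        if (key[i] != CHANGE[i] && key[i] != MASTER[i])
            return 0;
    return 1;
}

int main(void) {
    int probe[N], recovered[N];
    for (int i = 0; i < N; i++) {
        recovered[i] = CHANGE[i];            /* default: master cut == change cut */
        for (int d = 0; d < D; d++) {
            if (d == CHANGE[i]) continue;    /* skip our own cut */
            memcpy(probe, CHANGE, sizeof probe);
            probe[i] = d;                    /* vary exactly one position */
            if (opens(probe)) { recovered[i] = d; break; }
        }
    }
    printf("recovered master cuts:");
    for (int i = 0; i < N; i++) printf(" %d", recovered[i]);
    printf(" (at most %d probes)\n", N * (D - 1));
    return 0;
}
```

The point is the collapse from D^N candidate keys to at most N*(D-1) probes, exactly the kind of result the mechanical-lock world could not suppress because no TPM was involved.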
 
Q: why not 1201(j)?
 
Blaze: w/o getting into atty-client privilege, the essential
conclusion was that we were in treacherous territory, primarily b/c we would
have needed to reverse engineer, see if implementation failures we anticipated
were present, and effectively build our own test bed along the way. We
approached a few manufacturers and attempted to engage with them and were
ignored or rebuffed every time. We realized the relationship would be hostile
if we proceeded.  The anti-trafficking
provision would have been particularly problematic b/c we needed tools for
extracting—a colleague in Australia examining the same system had developed
some tools and expressed interest in working w/us, but we couldn’t.
 
Q: is there a norm of trying to disclose before publication?
 
Blaze: certainly there are simple cases and hard cases. In
simple case, we find particular flaw in particular product w/well defined
manufacturer w/a point of contact. Sometimes we can find an informal
channel.  As someone who is an academic
in the security community and wants to work in the public interest, I don’t
want to do harm.  Disclosing to the
vendor is certainly an important part. But in other cases, even identifying the
stakeholders is often not so clear. Flaws found in libraries used to build a variety of other products: we won't always know who all, most, or even some of the dominant stakeholders are.
 
Q: when you do know, is it a norm to disclose in advance as
opposed to concurrently?
 
Blaze: it has to be case by case. There is a large class of
cases when we have a specific product that is vulnerable, and we can say “if
this mfgr repairs, we can mitigate.” But other cases it’s less clear where the
vulnerability is present and it may be more prudent to warn the public
immediately that the product is fundamentally unsafe. Reluctant to say there’s
a norm b/c of the range of circumstances.
 
Green: In some cases like last week there’s mass disclosure—you
can’t notify the stakeholders all at once. If you notify, they may leak it
before you want it public which can cause harm. 
Sometimes you want to be very selective. 
If, let’s say, 200 companies are affected, you can go to Google/Apple
and trust the info won’t leak, but beyond that the probability that the problem
becomes public before you want it to is almost one—I've had that happen. Heartbleed was an unintended leak—too many people were notified of a mass vulnerability, and many systems, including Google and Yahoo!, were not yet patched because the disclosure came two weeks early.
 
Charlesworth: so are you saying that disclosure should be
limited?
 
Green: there is no single answer you can write down to cover it all. Heartbleed: massive vulnerability affected 1000s of sites. Notifying Google = Google would fix and protect maybe 50% of end users on the internet. Notifying Yahoo! = protect 25%. As you go to a smaller website, now you're protecting 200 people but the probability of a leak is fairly high. Then criminals can exploit the vulnerability before it's patched. Has to be customized to potential vulnerabilities.
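Green's calculus can be read as an expected-value tradeoff. A back-of-the-envelope C sketch, with coverage and leak-probability numbers invented purely for illustration:

```c
/* Hypothetical model of the disclosure calculus Green describes: the
   expected fraction of users protected by privately notifying a recipient
   is the coverage you'd gain, discounted by the chance the notice leaks
   before a patch ships. All numbers are invented for illustration. */
#include <stdio.h>

int main(void) {
    struct { const char *recipient; double coverage, p_leak; } cases[] = {
        {"very large provider", 0.50,     0.05},  /* ~50% of users, low leak risk */
        {"large provider",      0.25,     0.10},
        {"small website",       0.000001, 0.80},  /* ~200 people, high leak risk  */
    };
    for (int i = 0; i < 3; i++) {
        double expected = cases[i].coverage * (1.0 - cases[i].p_leak);
        printf("%-20s expected protected fraction: %f\n",
               cases[i].recipient, expected);
    }
    return 0;
}
```

On these made-up numbers, quiet notice to the big providers dominates; for the small site, the likely leak mostly cancels the tiny coverage gain, which is Green's point that the answer has to be customized.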
 
Reid: you’re hearing a theme that this is an issue for the
judgment of security researchers, and it’s only b/c of the DMCA that suddenly
this is the realm of copyright law. Getting pretty far afield of Congressional
intent to mediate these judgments and their complexities, which take a lot of
negotiation, as Matwyshyn underscored w/ISO. We would strongly caution the
Office against being too prescriptive. (1) If there wasn’t a lock involved, we’d
just be talking about fair use. In that case it would be up to the researcher
how to disclose. Whatever copying was necessary for research would be the only
issue; the fruits of the research would be free and clear. (2) Remember the
First Amendment interests and the prohibition on prior restraint. A rigid structure on when someone is allowed to speak would be troubling even if the policy judgments weren't complicated.
 
Charlesworth: did you brief the First Amendment issues?
 
Reid: not in detail.
 
Charlesworth: Congress considered this in making disclosure
a factor.  What you’re saying is that
sometimes you should disclose, sometimes not. 
 
Reid: Congress can’t contravene the 1A, even in enacting the
DMCA.
 
Charlesworth: but looking at disclosure to manufacturer is a
factor—maybe that’s not such a bad way to think about it.
 
Reid: factors mentioned in (j), to extent compatible w/1A,
can be read as probative of intent to do security testing or something else.
Reading them as limitations of speech after circumvention performed is
constitutionally troubling.
 
Charlesworth: that’s a brand new argument, and I’m not
troubled by (j), but there’s a lot of commentary about disclosure.  Google has a 90-day disclosure standard; you’re
saying there should be no standard, though Congress clearly had something in
mind. [Would having a front door be consistent with being the kind of recipient unlikely to leak?]
 
Blaze: As academics and members of the public research
community, the aim of our work is to disclose it. The scientific method demands
disclosure.  Someone building tools to
infringe is not engaging in research. 
The issue is whether or not the work is kept secret or disclosed to the
vendor, not whether it’s disclosed to the vendor in advance. No one here is
advocating keeping research secret—trying to protect research we will publish
and will benefit everyone.
 
Bellovin: twice in my career I've withheld from publication
significant security flaws—once in 1991 to delete a description of an attack we
didn’t know how to counter. Because security community wasn’t aware of this
publicly, the bad guys exploited the flaws before fixes were put in place. It
was never seen as urgent enough. 
Published the paper in 1995, once we saw it being used in the wild and
b/c original memo shared only with a few responsible parties ended up on a
hacker’s site. Security community didn’t care until it became public.
 
Other case: vendors were aware of the problem and didn’t see
a fix; once it was in the wild, others in the community applied more eyes and
found a fix. In both cases, trying private disclosure actually hurt security.
 
Matwyshyn: (1) 1201(i) concerns are slightly different. (2) In our filings we did discuss the First Amendment, should the panel wish to review the cited law review article. (3) Google's a member of the Internet Association, which supports our approach. (4) Frivolous litigation: the benefit of a clear exemption is that it allows researchers to feel more comfortable contacting vendors earlier, rather than needing to weigh the risk of litigation to themselves; contacting late is now something you do to mitigate the risk that they'll sue you before you disclose. Providing comfort would encourage earlier contacts.
 
Laura Moy, New America’s Open Technology Institute
I’ve encouraged the Office to focus on © issues and not
weigh the policy issues as opponents have suggested.  But consumer privacy is relevant b/c the
statutory exemption for privacy indicates Congress’s concern therefor. Some
opposition commenters have cited privacy concerns to grant an exemption—but that’s
wrong.  Remove roadblocks to discover
security vulnerabilities.  As many others
have pointed out, vulnerabilities often expose consumer info and they need to
be found. Malicious attackers are not waiting for the good guys; they race to
do their own research. They are succeeding. Last year, CNN reported 110 million
Americans’ info had been exposed—these are just the ones we know about.  Need to find them as soon as possible,
dismantling roadblocks.
 
Vulnerabilities should be disclosed so consumers can
incorporate security concerns into decisionmaking. Consumers have a right to
information they can use to make informed choices. Also bolsters vendors’
economic incentives to invest in security—publicity can be harmful if a product
is insecure, and that’s as it should be. As a consumer, you should know of
known vulnerabilities to Cyberlock’s product before you purchase.
 
Vulnerabilities should be disclosed so that regulators
enforcing fair trade practices know whether vendors are adhering to the
promises they’ve made and using good security practices. FTC says failure to
secure personal information can violate FTCA; state laws too. Enforcement requires
understanding of security. Often rely on independent researchers.  FTC recognizes that security researchers play
a critical role in improving security.
 
Erik Stallman, Center for Democracy & Technology:
security testing done only with authorization of network operator in statutory
exemption—in a world of internet enabled devices, it can be very difficult to
determine who the right person is.
 
Q: says owner/operator of computer, not owner of
software.  Even if I don’t own the
software, can’t I authorize the testing?
 
Stallman: it’s unclear if that’s sufficient—if your banking
network is connected to a VPN, it may be the source of a vulnerability.  Are the computers at your ISP covered by
this? 
 
Q: presumably if I hire a VPN provider I can ask them for
permission to test the security. 
[Really? I can’t imagine the VPN provider saying yes to that under most
circumstances.]  I can buy a pacemaker
and run tests.  If you own that
pacemaker, you can run tests.
 
Stallman: You may need to go up the chain. The moment you
fall outside, you fall outside the exemption [e.g. if you communicate with
another network].
 
Q: so I want to test HSBC’s systems to know if they’re
secure. Will the exemption allow this test without permission of third party
server?
 
A: Something like Heartbleed—a ubiquitous vulnerability
present on many systems. You shouldn’t need to go around and get
permission from each one. Accidental researcher: may come across
a vulnerability when engaged in different research. Often
researchers won’t know what they’re looking for when they start
looking. 1201(j) limits what they can ask.
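 
[Aside for non-specialist readers: a Heartbleed-class flaw is
“ubiquitous” because it lives in shared library code rather than
in any one operator’s system, and the bug class is simple: an
attacker-supplied length field drives a memory copy without being
checked against the data actually sent. A minimal illustrative
sketch in C, with hypothetical names and a simplified message
layout, not OpenSSL’s actual code:]

    /* Sketch of the Heartbleed bug class: echo a "heartbeat"
       payload using the sender's claimed length without checking
       it against the bytes actually received. */
    #include <stdlib.h>
    #include <string.h>

    /* Simplified request layout: [2-byte claimed length][payload] */
    unsigned char *handle_heartbeat(const unsigned char *req,
                                    size_t req_len, size_t *resp_len) {
        if (req_len < 2)
            return NULL;
        size_t claimed = ((size_t)req[0] << 8) | req[1]; /* attacker-controlled */
        unsigned char *resp = malloc(claimed);
        if (resp == NULL)
            return NULL;
        /* BUG: nothing checks claimed <= req_len - 2, so a short
           request claiming a large length copies adjacent heap
           memory (possibly keys or passwords) into the response.
           The fix is one check: reject unless claimed + 2 <= req_len. */
        memcpy(resp, req + 2, claimed);
        *resp_len = claimed;
        return resp;
    }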
 
Q: I want to test a website’s security. Can I test it under
your proposal? Say I bank at HSBC and want to test it.
 
Stallman: So long as you’re doing good faith security
research, yes.
 
Q: but Congressional history says shouldn’t test locks once
installed in someone else’s door.  Does
your proposal require any authorization, or is there a proposal requiring at
least an attempt to seek authorization?
 
Stallman: the problem is it’s hard for the researcher to
know/stay within stated scope. Or authorization can be revoked/cabined.  You could ask HSBC, but then what do you do
if they say no?  Then you’re out of luck.
 
Q: legislative history suggests authorization is important.
 
Stallman: the internet environment is very different from when
1201(j) was enacted; the House Report said the statute’s goals
would be poorly served if it had the undesirable consequence of
chilling legitimate research activity, and that’s the problem we
have now.  1201(j) is not providing the protection that
researchers now need.
 
Q: nuclear power plants/mass transit systems—should we allow
testing of live systems?  How would this
research be conducted?
 
Stallman: many critical infrastructure systems depend on the
same software/applications that run other systems. Should not be able to stop
research on systems widely in use by other people. 
 
Q: but if this can be tested by off the shelf software,
shouldn’t it have to be?
 
Reid: to the extent the Office reads (j) very broadly, putting
that in the record/conclusions of the proceeding would be very
helpful.  The primary concern: one interpretation is that the
computer system is the bank’s. The concern goes to the analogy in
the legislative history—the TPM on that system is not protecting
the bank’s property. It’s protecting the software. The company
will claim we aren’t the owner and that we aren’t engaged in
accessing a computer system, but rather engaged in accessing
software running on that computer. That is the ambiguity that has
crippled (j) in the past. We would agree with your interpretation
of (j) in court, but when we’re trying to advise folks we have to
acknowledge the multiple interpretations.
 
Charlesworth: we haven’t come to any conclusions about the
meaning of (j). Your point is well taken. 
We may get there, but we aren’t there yet.
 
Reid: Think about the standard for granting an exemption: the
likelihood of adverse effects. You’ve heard that uncertainty
produces the adverse effects. You need not have an ironclad
conclusion about (j) in order to grant this exemption. If you
conclude that there are multiple interpretations but a chill,
then you need to grant the exemption.
 
Bellovin: (j) is for testing my own bank’s systems as an
employee. I might be able to take precautions, or not. Even the
most sophisticated users would have trouble remediating a flaw in
an iPhone. We aren’t talking about device ownership, but about
vulnerabilities not in the device but rather in the software—not
the flaw in our particular copy but in the class of copies, which
manufacturers often don’t want to hear about/don’t want anyone
else to hear about. If a flaw is serious enough I may not use my
copy, but it’s the other instances owned by others that need
protection.
 
Stallman: Just note that security experts have signed on to
comments about the chill. The general point is that because (j)
references the CFAA, Wiretap Act, Stored Communications Act,
etc., it has the unfortunate effect of compounding/amplifying
uncertainty. It’s not satisfying to say that other legal
murkiness means we shouldn’t address this issue—this is one thing
the Office can do, and it would send a clear signal that this is
an area Congress should look at.

The other case: vendors were aware of the problem but didn’t see a fix; once it was in the wild, others in the community applied more eyes and found one. In both cases, attempting private disclosure actually hurt security.
 
Matwyshyn: (1) 1201(i) concerns are slightly different. (2) In our filings we did discuss the First Amendment, should the panel wish to review the cited law review article. (3) Google’s a member of the Internet Association, which supports our approach. (4) Frivolous litigation: a clear exemption would let researchers feel comfortable contacting vendors earlier, rather than weighing the litigation risk to themselves; contacting the vendor late is now something you do to mitigate the risk that the vendor will sue you before you can disclose. Providing comfort would encourage earlier contact.
 
Laura Moy, New America’s Open Technology Institute
I’ve encouraged the Office to focus on © issues and not weigh the policy issues as opponents have suggested.  But consumer privacy is relevant b/c the statutory exemption for privacy indicates Congress’s concern for it. Some opposition commenters have cited privacy concerns as a reason not to grant an exemption—but that’s backwards.  Privacy counsels removing roadblocks to discovering security vulnerabilities.  As many others have pointed out, vulnerabilities often expose consumer info, and they need to be found. Malicious attackers are not waiting for the good guys; they race to do their own research, and they are succeeding. Last year, CNN reported that 110 million Americans’ info had been exposed—and these are just the breaches we know about.  Vulnerabilities need to be found as soon as possible, which means dismantling roadblocks.
 
Vulnerabilities should be disclosed so consumers can incorporate security concerns into decisionmaking. Consumers have a right to information they can use to make informed choices. Disclosure also bolsters vendors’ economic incentives to invest in security—publicity can be harmful if a product is insecure, and that’s as it should be. As a consumer, you should know of known vulnerabilities in CyberLock’s product before you purchase it.
 
Vulnerabilities should be disclosed so that regulators enforcing fair trade practices know whether vendors are adhering to the promises they’ve made and using good security practices. FTC says failure to secure personal information can violate FTCA; state laws too. Enforcement requires understanding of security. Often rely on independent researchers.  FTC recognizes that security researchers play a critical role in improving security.
 
Erik Stallman, Center for Democracy & Technology: the statutory exemption covers security testing done only with the authorization of the network operator—in a world of internet-enabled devices, it can be very difficult to determine who the right person to ask is.
 
Q: the statute says owner/operator of the computer, not owner of the software.  Even if I don’t own the software, can’t I authorize the testing?
 
Stallman: it’s unclear if that’s sufficient—if your banking network is connected to a VPN, it may be the source of a vulnerability.  Are the computers at your ISP covered by this? 
 
Q: presumably if I hire a VPN provider I can ask them for permission to test the security.  [Really? I can’t imagine the VPN provider saying yes to that under most circumstances.]  I can buy a pacemaker and run tests.  If you own that pacemaker, you can run tests.
 
Stallman: You may need to go up the chain. The moment you fall outside, you fall outside the exemption [e.g. if you communicate with another network].
 
Q: so I want to test HSBC’s systems to know if they’re secure. Will the exemption allow this test without permission of third party server?
 
A: Something like Heartbleed is a ubiquitous vulnerability present on many systems; you shouldn’t need to go around and get permission from each one. There’s also the accidental researcher, who may come across a vulnerability while engaged in different research—often researchers won’t know what they’re looking for when they start looking. 1201(j) limits what they can ask.
 
Q: I want to test a website’s security. Can I test it under your proposal? Say I bank at HSBC and want to test it.
 
Stallman: So long as you’re doing good faith security research, yes.
 
Q: but Congressional history says shouldn’t test locks once installed in someone else’s door.  Does your proposal require any authorization, or is there a proposal requiring at least an attempt to seek authorization?
 
Stallman: the problem is it’s hard for the researcher to know/stay within stated scope. Or authorization can be revoked/cabined.  You could ask HSBC, but then what do you do if they say no?  Then you’re out of luck.
 
Q: legislative history suggests authorization is important.
 
Stallman: the internet environment is very different from when 1201(j) was enacted; the House Report said the goal would be poorly served if the statute had the undesirable consequence of chilling legitimate research activity, and that’s exactly the problem we have now.  1201(j) is not providing the protection that researchers now need.
 
Q: nuclear power plants/mass transit systems—should we allow testing of live systems?  How would this research be conducted?
 
Stallman: many critical infrastructure systems depend on the same software/applications that run other systems. That dependence should not be a basis for stopping research on software widely in use by other people.
 
Q: but if this can be tested by off the shelf software, shouldn’t it have to be?
 
Reid: to the extent the Office reads (j) very broadly, putting that in the record/conclusions in this proceeding would be very helpful.  The primary concern: one interpretation is that the computer system belongs to the bank. The concern goes to the analogy in the legislative history—the TPM on that system is not protecting the bank’s property; it’s protecting the software. The company will claim we aren’t the owner, and that we aren’t engaged in accessing a computer system but rather in accessing software running on that computer. That is the ambiguity that has crippled (j) in the past. We would agree with your interpretation of (j) in court, but when we’re trying to advise folks we have to acknowledge the multiple interpretations.
 
Charlesworth: we haven’t come to any conclusions about the meaning of (j). Your point is well taken.  We may get there, but we aren’t there yet.
 
Reid: Think about the standard for granting an exemption: the likelihood of adverse effects. You’ve heard that uncertainty produces the adverse effects. You need not have an ironclad conclusion about (j) in order to grant this exemption. If you conclude that there are multiple interpretations and a resulting chill, then you should grant the exemption.
 
Bellovin: (j) is for testing my own bank’s systems as an employee, where I might be able to take precautions—or not. Even the most sophisticated users would have trouble remediating a flaw in an iPhone. We aren’t talking about device ownership, but about vulnerabilities that live not in the device but in the software—not the flaw in our particular copy but in the whole class of copies, which manufacturers often don’t want to hear about/don’t want anyone to hear about. If a flaw is serious enough I may stop using my copy, but it’s the other instances, owned by others, that need protection.
 
Stallman: Just note that security experts have signed on to comments about the chill. The general point is that because (j) cross-references the CFAA, the Wiretap Act, the Stored Communications Act, etc., it has the unfortunate effect of compounding/amplifying uncertainty. It’s not satisfying to say that other legal murkiness means we shouldn’t address this issue—this is one thing the Office can do, and it would send a clear signal that this is an area Congress should look at.

"Commercial advertising and promotion" can’t be conclusorily alleged

Jus Punjabi, LLC v. Get Punjabi Inc., 2015 WL 2400182, No. 1:14–cv–3318 (S.D.N.Y. May 20, 2015)
 
Jus Punjabi, a cable and satellite television network serving the U.S. Punjabi community, sued Get Punjabi, a rival television network.  Jus Punjabi allegedly introduced daily live programming—otherwise unavailable to Punjabi viewers in the U.S.—covering news and current events, swiftly building a reputation as a premium channel.  Defendants allegedly undertook a scheme to defraud and destroy Jus Punjabi, stealing confidential information and trying to appropriate Jus Punjabi’s business, including broadcasting “copycat” live call-in news shows, each starting 30 minutes before each of Jus Punjabi’s identically-formatted shows aired.
 
Jus Punjabi sued for violations of RICO, the Lanham Act, and state law tortious interference and breach of contract.  The RICO claims failed because they were RICO claims.
 
The court turned to whether Jus Punjabi had alleged “advertising or promotion.”  The touchstone is an organized campaign to penetrate the relevant market.  “Proof of widespread dissemination within the relevant industry is a normal concomitant of meeting this requirement.” Interestingly, the court reformulated the usual test to completely eliminate the now-dubious “by a competitor” requirement, making it into: “(1) commercial speech; (2) for the purpose of influencing consumers to buy defendant’s goods or services; and (3) although representations less formal than those made as part of a classic advertising campaign may suffice, they must be disseminated sufficiently to the relevant purchasing public.”
 
Jus Punjabi failed to plead “advertising or promotion.”  It alleged that three people gossiped about Jus Punjabi’s founder and that one told others in the Get Punjabi offices that she was “scamming everyone, she is slick and a cheat.” Those statements were plainly not commercial advertising or promotion.  Jus Punjabi also alleged that defendants “defamed and maligned Ms. Sandhu and Jus Punjabi with its advertisers and with cable and satellite providers in order to divert revenue to defendants.” Even assuming that the relevant “consumers” were advertisers and cable and satellite providers [which I think is a reasonable assumption], Jus Punjabi didn’t allege any particular false statements made, nor did they allege sufficiently widespread dissemination to qualify as “commercial advertising or promotion.”
 
Jus Punjabi alleged that “[d]efendants’ corrupt activities also included extensive advertising and marketing campaigns to disparage plaintiffs’ good[s], services and business operations and integrity,” but these were mere “[t]hreadbare recitals of the elements of a cause of action, supported by mere conclusory statements” that the Supreme Court has said are insufficient to state a claim.
 
Without federal claims, the court dismissed the state claims.


DMCA hearings, LA: Guest post by Betsy Rosenblatt

Guest post from Betsy Rosenblatt
 
The triennial Copyright Office DMCA exemption hearings began on May 19 for three days in Los Angeles before moving next week to Washington, D.C. for four more days.  I attended the first two days, which addressed a wide range of topics.  Some were exactly the sort of thing one would expect the Copyright Office to be thinking about:  space-shifting and format-shifting of audiovisual works; incorporating clips of audiovisual works into narrative and documentary films.  But more of them concerned what might be described as “tinkering”:  jailbreaking smart TVs and video game consoles; re-enabling authentication for no-longer-supported multi-player video games; conducting automotive safety research; and diagnosing, repairing and modifying motor vehicles such as agricultural machinery and cars.   And that’s only two days’ worth of topics; the sheer diversity of the rulemaking agenda reflects just how many corners of business and industry are now touched by the growing tendrils of copyright law.
 
Several times over the course of the proceedings, the Copyright Office representatives expressed wonder that we’d gotten to this point; even a decade ago, I doubt many people would have expected that over half a day of copyright hearings would be dedicated to motor vehicle safety and repair.   This, in turn, led to a lot of talk—especially during the motor vehicle sessions—about what is and isn’t a “copyright interest.” The automotive industry representatives provided a lot of scary hypotheticals about pollution, malicious misuse, and other horrible things that could result from hacking car software, closing with, essentially, “lives are at stake.”  Proponents, reasonably, countered that while all of those things were indeed scary and bad, they weren’t the sorts of ills copyright law was designed to prevent.  Noninfringing activity that harms people or society is still noninfringing.  Most of the time that sort of harmful or malicious activity violates other laws, and those laws—not copyright—should be used to prevent it.  This reasoning is echoed in the recent Garcia v. Google en banc opinion.
 
To my surprise, the Copyright Office didn’t seem entirely convinced.  Office representatives asked a number of questions about whether it had a responsibility to protect the public from non-copyright ills.  Several times, they probed the idea that, by allowing people to bypass TPMs and thus making it easier to do something illegal or dangerous, the Copyright Office might in some way be endorsing or putting its seal of approval on those illegal or dangerous activities.  Proponents objected to this characterization.  The question Congress instructed the Copyright Office to ask was whether section 1201 interfered with a significant amount of noninfringing activity—not to do a cost-benefit analysis.  (Compare NAM v. SEC, where the SEC was instructed to make a rule about conflict minerals, not to determine if a rule about conflict minerals was justified.)  If 1201 interferes with noninfringing activities, then the Copyright Office is neither statutorily empowered to inquire further nor equipped with the expertise to opine on the larger cost-benefit analysis.  Other agencies—NHTSA, the EPA, or whoever else has the specific expertise—can and should opine, and if there’s a problem with a particular function it’s going to be a problem whether or not the car makers (etc.) authorized it.  Noninfringing activities are noninfringing, and the fact that people could choose to not-infringe in a way that is otherwise harmful or illegal makes no difference from a copyright perspective.  (To wit: the law allows people to own and operate cars even though it’s possible to do very harmful or illegal things with them.  Those things are governed by laws about operating cars, not by preventing people from owning them.)
 
I’ll add to this that the existence or non-existence of an exemption is, as a practical matter, extremely unlikely to have any impact on malicious or illegal circumvention.  People who are going to do malicious or illegal things will do them regardless of whether there’s a copyright exemption along the way.  They’ve already established that they’re willing to break laws—what’s one more?  So the only people who will be chilled by a lack of copyright exemption are the people who want to be law abiding and do good (or at least not malicious!) things with their circumvention.
 
But back to the point:  This led to a debate over whether proponents could discuss the benefits of the noninfringing activities—in the vehicle case, safety improvements, technological innovations, personal benefits such as enabling someone to make their vehicle more suited to its purpose, and such.  Auto companies said that what was good for the goose was good for the gander, and if they couldn’t rely on the risks that might arise from breaking TPMs, then the proponents couldn’t rely on the benefits.  On the surface, this argument sounds logical.  But on probing, I think it doesn’t hold up.  The default state of the world without 1201 is that copyright law permits people to do anything that doesn’t infringe.  The Copyright Act doesn’t necessarily endorse those things, and they may be harmful or illegal, but the Copyright Act still allows them.  In allowing them, the Copyright Act presumably takes into account many benefits of carving certain types of activity out of infringement:  free expression, innovation, communication, education, community-building, self-actualization…while entirely and appropriately ignoring the myriad and diffuse ills that can arise from noninfringing activity (a list too long to enumerate, since all of the ills in the world except for copyright infringement arise from noninfringing activity, and many of them are governed by non-copyright law).  Therefore, in considering whether section 1201 unduly chills noninfringing activity, it seems wholly appropriate to consider the benefits of that noninfringing activity without considering its potential ills, which can and should be governed by non-copyright law when appropriate.
 
But the auto companies cast themselves in the role of protector:  by holding the keys to encryption, they are preventing the ills that may result from malicious hacking, and the fact that they’re also preventing tinkerers from creating benefits was (to their minds, slight) collateral damage.  And they no doubt believe that they actually do have everyone’s best interests at heart.  We should, the car companies imply, trust them to do all of the beneficial things themselves, so the tinkerers are unnecessary.  This of course ignores all of the community-building and self-actualization benefits of tinkering, the benefits of independent research and development, and the benefits of disruptive or non-market-driven creation and innovation, but in a larger sense it also reveals the fundamental culture clash at the heart of this exemption process.
 
There is a cultural gulf between manufacturer and tinkerer, big and small, outsider and incumbent, and it ran like a canyon through the proceedings.  (It was notable, and to my mind quite telling, that over the course of the two days, the opponents to the exemptions were, with one exception out of eleven, all white and male.  I’m not saying the proponents were exactly a rainbow of races and genders, but they were a lot more diverse.)  The proponents and opponents were, perhaps necessarily, talking past each other.  Over and over, the proponents explained they just wanted to be able to do what they wanted with a physical object they had lawfully purchased.  In response, every opponent told the Copyright Office that they provide official channels for allowing some people to do some of what the proponents are asking to do.  Automotive companies “partner with” hired researchers and sell tools to authorized repairers and modifiers to do some things.  Film studios license some clips for inclusion in films (even when using those clips without licensing would be fair use).  Some smart-TV makers provide tools that allow people to build some of their own applications.  Film and TV studios allow people to space- and format-shift some things using paid services like Ultraviolet.  Video game publishers have given permission to some museums and libraries to preserve some multiplayer video games.
 
But allowing “some” isn’t the same as allowing all, and each of these limited permissions comes with a catch.  Two catches, actually.  First, for each one, the users had to ask permission.  But people who want to do these things might not feel empowered enough to take the risk of asking for permission.  And even if they do, that permission can be denied if the encryption holder doesn’t like what the user is doing—if they don’t like what the user/creator’s film is saying, if they don’t like that the researcher might be revealing a safety vulnerability in their vehicle, if they want to make money by charging for format-shifting or modification/repair tools.  And second, none of the underlying activities is copyright infringement, yet section 1201 allows the encryption holder to decide who does and doesn’t participate in creation, innovation, research, or play, and exactly what they get to do.  The opponents, of course, relish this control, because they can use it to protect their brands and make higher profits.  They defend their opposition by explaining that they’re generously allowing people to do good things, while protecting the world from the dangers of unregulated tinkering (such as pollution, copyright piracy, and malicious exploitation of technology).  But why do they get to be the gatekeepers, when the activity isn’t infringing copyright, and the malicious stuff is mostly governed by other laws (such as environmental regulations and criminal copyright law)?  Why do they get to choose who does safety research; who preserves video games; who develops new after-market automotive innovations; who repairs tractors; who controls the closed captioning print size for televisions; who makes films?
 
And that gatekeeper function is at the core of the culture clash: the opponents think they’re fantastic at guarding henhouses, and don’t seem to understand why the (yummy, yummy) chickens inside don’t trust them to protect the chickens’ best interests.  The proponents just want not to be ogled by foxes.


False claims of FDA approval actionable under Lanham Act and state law

Innovative Health Solutions, Inc. v. DyAnsys, Inc., 2015 WL 2398931, No. 14-cv-05207 (N.D. Cal. May 19, 2015)
 
IHS sells a medical device called P–STIM.  DyAnsys used to be the distributor of P-STIM in the US, but after it lost the distributorship it allegedly started selling a knockoff device, first using the P-STIM name (which allegedly has secondary meaning) and misrepresenting the device as having the same §510(k) clearance as P-STIM.  IHS alleged that defendants lacked FDA clearance for the “knockoff,” but used the §510(k) number assigned to P-STIM and misrepresented that their device was FDA-approved.  The FDA published an import alert that allegedly effectively prohibited the importation of defendants’ device because it doesn’t have FDA clearance, but defendants continued to misrepresent its clearance status.  In addition, defendants’ sales representatives allegedly used an official FDA document – FDA’s Summary of Safety and Effectiveness for the FDA 510(k) number assigned to the P–STIM medical device – “as a marketing tool to confuse, deceive and steal away IHS’ customers.”  The inferiority of defendants’ device allegedly damaged IHS’s reputation and goodwill.
 
In addition, DyAnsys allegedly appeared before the Centers for Medicare and Medicaid Services (CMS) and “falsely claim[ed] standing as the ‘manufacturer’ and seller of the P–STIM medical device.” They told CMS that “the billing codes for P–STIM are unclear or ambiguous, and that they need to be revised or clarified.” In fact, defendants allegedly knew that this argument would cause reimbursement concerns within CMS, which led CMS to determine that there was no justification for reimbursing P-STIM use through insurance payments.  This decision allegedly effectively drove IHS out of the P–STIM business.  Then, defendants allegedly created a new name, AnsiStim, for their device and applied for their own FDA 510(k) clearance number.
 
The complaint alleged false designation of origin, false advertising, and related state law claims including trade libel.
 
Defendants argued that the false advertising claims should be dismissed to the extent they were based upon defendants’ alleged misuse of an FDA clearance number or violations of the FDCA.  They argued that PhotoMedex, Inc. v. Irwin, 601 F.3d 919 (9th Cir. 2010), meant that, “especially in the medical device field[,] claims that require the court to interpret FDA regulations stray too close to the exclusive enforcement domain of the FDA and should not be permitted to proceed.” IHS responded that it wasn’t trying to prove FDCA violations, but rather that defendants falsely advertised that their device was interchangeable with the approved P-STIM.  JHP Pharms., LLC v. Hospira, Inc., 52 F. Supp. 3d 992 (C.D. Cal. 2014), held that the FDCA did not bar a Lanham Act claim alleging that the defendant misrepresented its products as being FDA-approved because “where the issue of FDA approval is straightforward, a Lanham action is viable.” The court here agreed.  Further, although PhotoMedex was not specifically overruled by POM Wonderful, its precedential value may be limited.
 
However, IHS also alleged that the DyAnsys P–STIM device “undercuts the FDA regulatory framework,” was “unsafe and hazardous,” was “mislabeled,” was “ineffective,” posed “numerous health risks,” endangered patients, and violated the FDCA. The claims not about misrepresenting FDA approval were dismissed with leave to amend to clarify.
 
Defendants also received Noerr–Pennington immunity for claims based on their petitioning CMS and related communications with the government agency, because these were constitutionally protected activities seeking relief from a government agency. IHS didn’t allege facts showing that the activity was objectively baseless and a sham.