Copyright Office 512 Roundtable: Technological Strategies and Solutions

 

Official description: Infringement monitoring tools and
services; automated sending of notices, including notice parameters; automated
processing of notices; role of human review; identification of works through
fingerprinting, hash identifiers, and other technologies; filtering, including
“staydown” capabilities; fair use considerations; identification and tracking
of repeat infringers; and other pertinent issues.

 

 

JC: what tech is potentially available to help notice
senders and responders?  How does it
relate to incentives provided under the law?
Interested in details, costs.

 

 

Sofia Castillo, Association of American Publishers: Many of
our members use tech to address piracy on a regular basis.

 

 

Jonathan Band, Library Copyright Alliance: our concern is
overnotice, overtakedown, and harm to fair use.

 

 

Michael Housley, Viacom: oversee our tech vendors, Content
ID; we’re constantly dealing w/vendors in marketplace

 

 

Sarah Howes, Copyright Alliance: don’t know anything about
tech, and that’s like the artists I represent.

 

 

David Kaplan, Warner Brothers Entertainment Inc.: Use tech
fingerprinting and scanning in enforcement.

 

 

Michael Petricone, Consumer Technology Association: 2200
innovative companies, many small businesses, many 512-reliant.

 

 

Eugene Mopsik, American Photographic Artists: photo artists,
routinely use various tech means to discover unauthorized use of images;
founding board member of Plus Coalition, created to help ID rights information
and connect rights holders w/market.

 

 

Casey Rae, Future of Music Coalition: Primarily interested
on artist side; accessibility and affordability of detection tech; intersection
w/data integrity w/identification tech.

 

 

Steven Rosenthal, McGraw-Hill Education: Oversee
antipiracy/anticounterfeiting program, work w/vendors who identify piracy and
further our content protection needs.

 

 

Maria Schneider, Musician: Here speaking as someone who sees
tech around me used to monetize content and make it easy for uploaders but I
don’t have access to for takedowns.

 

 

Brianna Schofield, University of California-Berkeley School
of Law: Research study looked into use of tech by notice senders and OSPs.

 

 

Matthew Schruers, Computer & Communications Industry
Association: Licensed distributors; intermediaries that provide tools for
users.

 

 

Lisa Shaftel, Graphic Artists Guild: Illustrator/educator of
graphic designers about business and © licensing and monetization and use of
tech to find infringing uses.

 

 

Victoria Sheckler, Recording Industry Association of
America: Work w/antipiracy dep’t.

 

 

Howie Singer, Warner Music Group: Chief technologist at
strategy group: evaluation of tech that can support or threaten music business.

 

 

Lisa Willmer, Getty Images: availability of image
recognition software and what mechanisms we don’t have to bring leverage on
ISPs to actually use that tech.

 

 

Nancy Wolff, Digital Media Licensing Association: Tech used
for purposes of licensing and image recognition that’s available and what can
be done to make it more useful.

 

 

Andy Deutsch, Internet Comms Coalition: transmit and host
content; interest in tech changes and cooperative efforts to develop best
practices for 512.

 

 

JC: Heard lots about challenges of system on both sides, in
terms of sending notices and volume of notices, some of which are not properly
prepared.  Is tech a big part of the
answer here?

 

 

Castillo: Yes, tech is a big part of the answer.  Partly b/c there is strong opposition to
legislative solutions.  Voluntary
agreements and best practices have their limitations; don’t include everybody.
Filtering, fingerprinting, watermarking are possible, even if not perfect; a
good start. They actually would provide more effectiveness rather than just
efficiency to notice and takedown. Scribd’s BookID fingerprinting system: an
algorithm that incorporates word count, word frequency, etc.  Matching content can’t be uploaded/is removed
from site.  Possibility of challenging
BookID-based removals.  Good example of
places where we can start building on tech and tweaking filters so they
eventually become more accurate and produce fewer false positives. Tech-based solutions
are good b/c 512(m) prohibition wouldn’t apply if information comes from DMCA
notices/is already provided by © owners.
This would be information ISPs already have.
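[Scribd hasn’t published BookID’s internals. A minimal sketch of the word-count/word-frequency idea Castillo describes, matched by cosine similarity—the threshold and all names are my assumptions, not Scribd’s:]

```python
# Hedged sketch: BookID's actual algorithm is not public. This illustrates
# the general approach described -- a fingerprint built from word
# frequencies, compared by cosine similarity, with a block threshold.
from collections import Counter
import math

def fingerprint(text: str) -> Counter:
    """Word-frequency vector: the crude core of a text fingerprint."""
    return Counter(text.lower().split())

def similarity(fp1: Counter, fp2: Counter) -> float:
    """Cosine similarity between two word-frequency fingerprints."""
    dot = sum(fp1[w] * fp2[w] for w in set(fp1) & set(fp2))
    n1 = math.sqrt(sum(c * c for c in fp1.values()))
    n2 = math.sqrt(sum(c * c for c in fp2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def block_upload(upload: str, reference: str, threshold: float = 0.9) -> bool:
    """True if the upload matches the reference closely enough to block."""
    return similarity(fingerprint(upload), fingerprint(reference)) >= threshold
```

[The interesting policy questions live in the threshold: set it high and re-typed or excerpted copies slip through; set it low and false positives rise, which is why a challenge process matters.]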

 

 

JC: why did Scribd adopt that tech?

 

 

Castillo: Don’t know the history.  Where they get the info: references from ©
owners or authors; information from DMCA notices.  This would be a way to reduce their intake of
notices; once you have a filter on the reuploading process, you get fewer notices,
which is better for the ISP. [Yeah, right.
Of course, if you’re an ISP that didn’t get flooded with notices in the
past, developing fingerprinting is just a cost.]

 

 

Band: The internet is vast; copyright owners can use tech to
find infringing material; tech includes Google. There’s a danger of using these
tech measures to get false positives. Filtering needs to be voluntary.

 

 

JC: you say tech has to play a role and you’re concerned
about inaccurate notices.  On a practical
level, how do you address that?  Given
that tech tools are necessary to this process, how do you address overnoticing
and overtakedown in a way that might actually work at scale?

 

 

Band: not possible—it’s an imperfect world.  Good faith belief that content is infringing;
software can’t have a good faith belief. We need to suspend our belief to some
extent.  Not sure there’s anything from a
policy perspective.

 

 

JC: could decrease errors, but errors inevitable?

 

 

Band: yes, we need to recognize that. We need to acknowledge
that instead of denying it.

 

 

KTC: In terms of tech, assessing fair use—is that actually
possible to use tech to comply w/court saying content owner must consider fair
use?  If it’s a tech that only captures
full length films or sound recordings plus some other factor?  Could it be completely automated and subjectively
in good faith?

 

 

Band: Besek brought this up w/r/t Lenz amended opinion; I won’t speculate about why the 9th
Circuit removed that line. At the very least, tech can be developed to consider
some of these factors. Whether that would necessarily in a given case be
sufficient, I don’t know. You’re not going to have a lot of cases like Lenz.
Rightsholders should build that screening into their system, and it might
result in errors once in a while.  If a
takedown is challenged, once in a while they might have to litigate that. Once
in a while, they may have to pay damages. Cost of doing business.

 

 

Housley: there is tech available today that, correctly
deployed, can be used to find, especially, unedited content.  Viacom gives a wide berth to fair use.  Focus will always be the most damaging content,
which is full length. Existing tech helps us manage that. We’re selling tech
short if we don’t think we can come up w/something better than fingerprinting.
AI and machine learning: the sky’s the limit to ID content.  It may be that the original intent of the
DMCA to have ISPs and owners work together has been distorted; the incentive to
fine-tune tech is no longer there.

 

 

JC: are you familiar w/tech in market?  [Yes.] We heard about Content ID and
Scribd.  Are there third party vendors
who offer filtering as an outside vendor to sites who might be interested in
using tech?

 

 

Housley: yes, there are.

 

 

JC: are there any websites other than YT and Scribd that
have adopted staydown tech through custom or third party software?

 

 

Housley: there are Audible Magic sites—Facebook has started
to develop its own system. There is also Vobile.

 

 

JC: wants to know more about third party services and
fingerprinting.

 

 

KTC: how does that work in getting the needed info to create
the hashes or fingerprinting?

 

 

Housley: on the creator side, either they provide the tools
and we put it into the database, or we give the content to them. Creators can
get fingerprints in and deploy the tech on any site.

 

 

Howes: individual creators are very excited by the online
opportunities to control their work. We are seeing tech being developed by OSPs
that are helping individual creators, which gets to legislative intent. As
artists, we are very collaborative people.
Hamilton wasn’t made by one
man but by a team of people who came up w/solutions. Artists can build really
successful platforms; when it comes to piracy on other platforms, there needs
to be more access. Individual creators: still using reverse image searches and
Google alerts, which is ineffective. On top of that, have to ID every
individual contribution of their work.
Control is part of your ability to make a living.

 

 

JC: Is there anything in the market that individual creators
can use to search for content that’s affordable?

 

 

Howes: I don’t know.
There might be.  There are some
services Mopsik can talk about.  Many
individual artists are still new to this.
There are platforms created by artists trying to figure out more
collaborative ways to involve the creator, similar to Content ID: most
successful part of Content ID is that it asks the creator what to do w/the
infringement.

 

 

Kaplan: tech is part of the solution.  There are no silver bullets.  That shouldn’t be a reason to discount the
use of tech.  Tech will evolve over time
so that it’s increasingly accurate and less expensive. Things that may not have
seemed reasonable 5 years ago will.  Not
so much about software—use of tech is almost always mixed w/human review/setup.
Notice sending/scanning at scale; often human review results in errors. Tech
itself has a lower error rate.
Facilitating fair use: definitely; matches can be ID’d by duration
relative to overall length of work. YT developed w/content ID.  When we talked to YT first 7 years ago—it
worked to a limited extent, but needed a ruleset associated w/content about
leaving up v. taking down—we thought they were overblocking and taking down too
much that we’d leave down. We became comfortable we were giving fair use enough
of a berth.

 

 

JC: human component in setting parameters for software.  Talk more about that?  Human review at the other end when
flagged—how does that integrate?

 

 

Kaplan: Depends on what piece of online policy we’re
addressing.  Scanning: in framing
content—there’s a universe of pirate sites, not the entire internet, so we use
human review to decide where to scan in the first place. Word matches, word
exclusion.  Google notices: run searches
and human reviews to see if it’s a link to a pirate site. Filtering: humans set
up what content to look for; duration of match before action is taken;
sometimes the action is “human review” if the match didn’t fall into certain
parameters.  Can decide based on whether
it’s audio, video, or both.  Can also do
rulesets around territorial restrictions.
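[The human-configured ruleset Kaplan describes—match duration relative to the work, media type, territory, with ambiguous cases routed to a person—can be sketched as follows. All thresholds and names are invented for illustration, not WB’s actual parameters:]

```python
# Hedged sketch of a human-authored filtering ruleset: duration of match
# relative to the work, media type, and territory decide among
# allow / block / human review. Every threshold here is illustrative.
from dataclasses import dataclass

@dataclass
class Match:
    matched_seconds: float
    work_seconds: float
    media: str       # "audio", "video", or "both"
    territory: str   # uploader's country code

def decide(m: Match,
           asserted_territories=frozenset({"US"}),
           block_ratio: float = 0.9) -> str:
    ratio = m.matched_seconds / m.work_seconds
    if m.territory not in asserted_territories:
        return "allow"          # territorial ruleset: no rights asserted here
    if ratio >= block_ratio and m.media == "both":
        return "block"          # near-full-length audio+video match
    if ratio < 0.1:
        return "allow"          # short clip: give fair use a wide berth
    return "human_review"       # ambiguous matches go to a person
```

[The middle band—too long to wave through, too short to block automatically—is exactly where the “sometimes the action is human review” rule lands.]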

 

 

JC: are they trained in fair use?

 

 

Kaplan: in our case, yes. For less than full
feature/episode, that’s [heavily ?].

 

 

KTC: Schofield’s study identified issues
w/misidentifications—do you share concerns about improper notices?  Are there ways  to reduce concerns?

 

 

Kaplan: there’s always potential for increased errors. It’s
usually the fault of the human.  Can
reduce errors w/tech.

 

 

Petricone: Tech is very exciting and promising. Content ID:
99.5% of music revenues are now made w/Content ID, 99.7% accuracy. New model of
revenue—Ben Affleck interview set to “Sound of Silence” went viral, drove the
song to the top 10 50 years after its release. Fan-uploaded content accounts
for 50% of music revenue on YT.

 

 

JC: not everyone is able to take advantage of Content
ID.  Can you speak about that?

 

 

Petricone: Not right now.

 

 

Mopsik: Tech for motion pictures: Excipio is a company whose
service also extends to ID’ing unlicensed uses.
Service providers in the image space use their own fingerprinting
algorithms, and then the list has to be evaluated by the rightsholder to
determine what’s licensed and what’s not. The missing link in the image space
is the ability to identify what is an actual licensed use and what’s not.
That’s something Plus Coalition has been working on for years; predicated on the
ability to establish a persistent, machine-actionable identifier. W/o greater
penalty for removal of metadata from images, that link will never happen.  Plus has an identifier w/ the image, w/all
licensing info held in an updatable database.
If you’re able to make that link, then machine action can determine
authorization.  W/r/t fair use: photogs
are not particularly knowledgeable about fair use; images are rarely used in
snippets, and that can have significant impacts on the market over time.

 

 

JC: do individual photographers have access to an affordable
service?

 

 

Mopsik: the fees are not significant. [Note: I originally misunderstood his comment.  He clarified: “The fees I was referencing are for the business services that track and identify copyright infringements for visual artists.  I am not on the board of any of those services nor do I have a business relationship with any of those services.  I am on the board of the PLUS Coalition – a non-profit established to simplify and facilitate the management and communications of image rights.  I receive no compensation from PLUS.  PLUS does not have an e-commerce component and its technology and resources are open source.”] They take 50% of any recovery. They have a legal services
component and pursue the infringement.

 

 

JC: they send a takedown notice?

 

 

Mopsik: they will. [No, they sue.]  Frequently, takedown procedure involves
chasing phantoms.  Or people take down but
may have been using it for years. There’s a lot of attitude involved when you tell
them that there should be compensation.

 

 

Rae: Primarily we’re talking about ID tech, that’s
512(i).  Earlier, it wasn’t practical on
the service side to implement tech to do this. On the content side, they always
want new favorable legal precedent and damages.
We’re in a new place now. 512(i) encourages the creation of new standards.
But the method of deploying that is collaborative effort.  We have to get our processes dialed into
that. I’d like to see vendors, smaller rightsholders, ISPs in a body that can
provide recommendations not just once but on ongoing basis, given new tech
environments—virtual reality, etc.  Fair
use is interesting; my preference would be less focus on the entirety of a
work.  We can probably solve many
problems through process focused on practical implication.

 

 

JC: did you participate in Dep’t of Commerce process? [yes]
Where do things stand? Written comments expressed pessimism about ability to
get together and get standard tech measures.

 

 

Rae: optimistic, though Dep’t of Commerce process was more
of a cattle call. Better to focus on those who are representative of the
stakeholders, like the Copyright Alert process.

 

 

Rosenthal: burdens of developing tech: the same tech used to
ID infringement, like hash values and checksums, can be used to filter the
materials by sites and prevent whack-a-mole. It’s not new tech that needs to be
developed.  Intentional avoidance of tech
by ISPs to avoid claims of willful blindness in terms of not logging IP
addresses, so that DMCA notices are effectively rendered impotent.  Lots of frustration when we try to enforce
our rights. Why can’t you use the same tech we’re using: IP address, hash
value.  Reinventing the wheel: tech is
out there.  We developed live streaming
filters that fingerprint and filter livestreamed TV and pay-per-view in real
time.  Some sites created their own tech
to do this.  Willingness is needed.
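[Rosenthal’s point—the same hash value that identifies a file in a notice can filter re-uploads of it—reduces to a very small mechanism. A minimal exact-checksum filter (names are illustrative; note that exact hashing is defeated by any re-encoding, which is why fingerprinting goes further):]

```python
# Hedged sketch: the checksum from a DMCA notice doubles as a staydown
# filter for byte-identical re-uploads. This is the simplest possible
# case, not a full fingerprinting system.
import hashlib

noticed_hashes: set[str] = set()

def record_notice(infringing_file: bytes) -> None:
    """On receiving a valid notice, remember the file's SHA-256 checksum."""
    noticed_hashes.add(hashlib.sha256(infringing_file).hexdigest())

def allow_upload(upload: bytes) -> bool:
    """Reject any byte-identical re-upload of previously noticed content."""
    return hashlib.sha256(upload).hexdigest() not in noticed_hashes
```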

 

 

KTC: Unwillingness: Do you think there’s a disincentive in
512?

 

 

Rosenthal: in terms of logging IP addresses, Cox v. BMG
creates a disincentive to do so to avoid willful blindness. In terms of
non-filtering ISPs: many of these sites are run primarily by hosting and
distribution of content known to be infringing. If we cleaned up their site,
they’d lose the majority of their content/appeal.

 

 

Schneider: Obviously, there will be error. Machine learning:
translation on the internet learned so fast. If you compare it to the billions
of errors in people uploading things, it doesn’t compare. Tech should be used
in conjunction w/education. Automation w/o education: Content ID.  I should be accepted into Content ID as a
condition of safe harbor. Also being used for uploading and people think
they’re doing something good b/c it’s being monetized. But they’re also
catching my music, which isn’t being monetized and it’s hurting me, and fans
don’t realize that. It should say: this isn’t in our database of Content ID, so
if you don’t own it, don’t upload it.
Everyone’s complaining about erroneous takedowns and counternotices;
education is required.

 

 

JC: Why can’t you join Content ID?

 

 

Schneider: automated response gave me the impression I
wasn’t big enough.  They don’t say
why.  Secret terms.  They’ll send someone to talk to you, but Zoe
Keating was bullied into giving her whole catalog—all or nothing. Safe harbor
shouldn’t allow you to use these tools for their own gain.

 

 

KTC: On the notice side, popups appear to caution about
whether you took the picture. On the upload side, what cautions are used?

 

 

Schneider: this is the biggest educational thing.  Standardized requirements and questions for
all upload sites.  You have to sign
penalty of perjury on the notice side. Upload: ask under penalty of perjury if
you have permission, and warn about possibility of atty’s fees. Tell them what
isn’t fair use.  I’d love to see the
Copyright Office set the standard.  [I
wonder if she wants to go through this every time she sends an email with an
attachment, or an email long enough to contain song lyrics.]  Standardized: you have to accept Google’s TOC
and go through 46 steps. If you’re in a safe harbor, that should be a
privilege, not a right, have to adhere to standardized rules.

 

 

Schofield: In our research, we spoke to and heard rights
holders’ frustration w/dealing w/proliferation of infringing content online.
Automated tools are one way of dealing w/this to detect infringement. We ID a
number of best practices for refining those systems, minimizing mistakes. We
heard from rights holders who are already employing best practices, including
human cross checks and checking the sites that are targeted. These are good.
Tech strategies on OSP side: some are voluntarily implementing them; we see
good reasons for them to remain voluntary not least of which b/c huge amount of
the ecosystem doesn’t have the kind of volume of infringing content that would
justify imposing these systems.

 

 

JC: Smaller providers/w/o lots of infringement, ok, but if a
site is using filtering to place ads/for own economic purposes, should that be
available for rightsholders?  Websites,
sophisticated larger websites, use fingerprinting for their own purposes—to ID
content to place ads on it. If it’s already in use by a website, should it be
made available to people like Schneider? Should she be able to use Content ID
if they’re already using it and it’s available to other rightsholders?

 

 

Schofield: can’t comment on that specifically.  [Ad tech doesn’t “fingerprint” files in the
way that she thinks they do, I’m pretty sure. What would be the payoff?  Keyword use, sure.]  If a tool has been developed to combat
infringement, yes, it should be available to everyone.  We recommend trying to make systems broadly
available, with caveat re: using the same best practices.

 

 

KTC: There’s been a lot of focus on the #s of improper
notices.  You seem to support use of
automated systems despite finding a lot of improper notices?

 

 

Schofield: use of automation on the sender side is an
important part of the solution, but they can be refined.

 

 

Schruers: As I was listening, I was reminded that the
internet sector is occasionally criticized for technological solutionism: but
here we hear that our tech can be solution to all problems. Appreciate the
enthusiasm but we should understand the challenges.  DMCA Plus is expensive.  It doesn’t make everyone happy.  And it’s a tool of limited
applicability.  Only meaningfully applied
in 512(b) and (c), so half our DMCA actors aren’t within the scope of
that.  512(a) aren’t taking custody of
the content, and can’t filter unless they create a firewall. Nor are 512(d)
services hosting content, and don’t have a library to filter against. And of
course all that assumes a populated database and a contextual ruleset about
what you do when you find content in the DB. Clear in PTO process that there
are large entities on both sides and small entities on both sides. Small ISPs
face a real challenge in scaling up automation. Small ISPs have to be able to
take notices by fax, email, etc. Automating that is a serious challenge. If we
said “it has to be a webform,” that might be easier to automate, but I don’t
see that happening any time soon.

 

 

JC: different solutions for larger and smaller websites?
[Where does Wikipedia fall?]  Few notices
= manual; millions = different.

 

 

Schruers: that’s what we see today. Small ISPs will always
do manual takedowns, bundled w/other unrelated claims like defamation. Large
ISPs also handle that, but as smaller percentage; architecture assumes
sophisticated users.  [Remember, large
site isn’t the same thing as large number of notices: Wikipedia!]

 

 

JC: could set different standards for different classes.

 

 

Schruers: could do for 512(a), (b) etc. PTO process tried to
do that, and people didn’t seem happy w/it—heterogeneity on all sides.

 

 

KTC: Is there anything that can be done absent or with
legislation to encourage voluntary use by ISPs?

 

 

Schruers: if it’s legislation, it’s not voluntary; but there
are processes over time tailored to the constituents around the table.  Large notice senders can take advantage of
automated systems. In terms of access to DMCA Plus systems: privileged access
to the back end of a platform, allowing people to take down or claim
revenues—you will want the users of that system to do reasonable things, like
indemnify the platform for misrepresentations about what you own.  Stakeholders should have a demonstrated
course of legit use of the tools. If that isn’t there, use the DMCA.

 

 

KTC: I didn’t mean mandating use of a tech measure, but
maybe decreasing exposure to statutory damages if you filter.

 

 

Schruers: basic complaint from ISP is difficulty of
responding to messy, hand-coded notices; there’s already a lot of incentive to
reduce that burden, which is why they’re always looking for new tools like the
PTO process.

 

 

Greenberg: There are no STMs. But ISPs are concerned about
locking stuff into place. Neither will work, so what’s the solution to
encourage the use of tech measures by the ISPs?

 

 

Schruers: cost of responding to notices is encouragement,
especially since some will always have to be dealt w/by hand. That’s a
compelling motivation right there.  Allow
tech to evolve over time.  Acknowledge
broader marketplace: there isn’t going to be as much unlicensed if it’s
available licensed, with less aggressive windowing.

 

 

JC: so maintaining the fax # requirement incentivizes
Content ID?  I kid.

 

 

Shaftel: Should make it a violation for host to strip
metadata through upload; makes Plus system for images useless. Should be
voluntary licensing w/Pinterest, FB, YT—users aren’t compensating, and there
should be collective licensing. Adobe could create identifiers for software
users, which could also be used as part of Copyright Office registration.
Creator ID could facilitate electronic payment, voluntary transactions.  Tech is possible.  Visual creators are more likely to use this
if they know they’ll derive an income. We’d need to define commercial use in
the context of licensing as opposed to fair use. Getty has guidelines in its
web feature; definition would have to be approved by museums and libraries, b/c
we are mostly concerned about allowing them fair use. If users paid for
commercial use, they’d have safe harbor from DMCA takedown.

 

 

Sheckler: Tech does exist that is commercial, reasonable,
and reasonably priced.  Audible Magic is
available at $1000/month for certain limitations. Key is thoughtful
implementation of filtering, which isn’t just parameters of tech, but also rules
on top of that.  Content ID has a variety
of problems that could be addressed.
False positive issue: thoughtful implementation would address that;
Takedown Project study is inappropriate for thinking about fair use. Price of
admission—it only applied to search; applied to a snapshot from 2013; it is a
targeted sample.

 

 

JC: you mentioned thoughtful implementation.  Can you elaborate?

 

 

Sheckler: Review to see the site is fit for scale notices.  We’re not going to search a .pdf for music. And
w/Audible Magic you want to catch all/substantially all of the work.

 

 

Singer: It’s not always about tech but the business
processes that go along with it.  Stacked
URLs defeating takedowns: this isn’t a bug but a feature of sites designed to
be robust to individualized takedown notices. Get a prerelease song and never
publish the URL of the actual location, but create 1000 references and publish
100/day.  Each day they issue takedowns
and the content is never removed.  A notice
and takedown system keyed to individual URLs can never be effective when the site works to
defeat the system.  A “Pez dispenser” for
valuable content. Grooveshark.  [Why
isn’t this already illegal under the DMCA?]
Standards could be based on size or on how responsibly they deal w/that.
Warner and Viacom should be treated better than people who send bad notices. We
should look at bad actors: the majority of our notices to 4shared are
repeats.  We can verify an account on
Twitter, so why not for takedowns?
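[The stacked-URL pattern Singer describes can be simulated to show why per-URL takedown never drains such a site. A toy model (all names invented) assuming the site stores one copy of the work and re-mints a fresh URL after every takedown:]

```python
# Hedged sketch of the "Pez dispenser" pattern: one stored file, a rotating
# batch of minted URLs, and a fresh URL after every takedown, so
# URL-by-URL notices never exhaust availability. Purely illustrative.
import itertools

class PezDispenserSite:
    def __init__(self):
        self._counter = itertools.count()  # one stored file, many URLs
        self.live_urls = {self._mint() for _ in range(100)}  # today's batch

    def _mint(self) -> str:
        return f"/files/track-{next(self._counter)}.mp3"

    def takedown(self, url: str) -> None:
        self.live_urls.discard(url)
        self.live_urls.add(self._mint())   # tomorrow: same file, new URL

site = PezDispenserSite()
for _ in range(1000):                      # ten days of diligent notices
    site.takedown(next(iter(site.live_urls)))
print(len(site.live_urls))                 # still 100: the work never left
```

[A staydown keyed on the content itself rather than the URL, as Singer proposes below, is what breaks this loop.]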

 

 

JC: how common are the Pez dispenser sites?

 

 

Singer: We’ve found it in other cases than Grooveshark;
unlikely that a user upload was the source of the same song on the next day w/
a nearly identical URL.  [Why is that ok
under the current DMCA?]

 

 

JC: Is there a tech solution?

 

 

Singer: if there were notice and staydown that said this
song shouldn’t be available.

 

 

JC: anything w/o staydown?

 

 

Singer: not for those who are trying to undermine the
effectiveness of the process.

 

 

Willmer: there’s no content ID for images; the tech exists
but Google has chosen not to implement it; voluntary action isn’t enough.
Congress mandated use of STMs; that was key to striking a balance. The
definition of STMs was too narrow. There’s no tech that meets it so it’s
meaningless. Focus should not be on how the tech was developed but on what it
does and whether it’s available on reasonable terms. There is a way to check
images on upload to see if it’s registered.
Platforms educate users about perils of filing takedown notices: Are you
really sure about that? Even if it requires personal info? Imagine if they had
the same interest in educating users. What if it said when you uploaded a photo
in the database “this photo is protected by ©–please ensure that you have a
license or that it’s fair use,” with a guide to fair use.  [Um, if I took it, it’s also protected by
©–you mean something else, right? Or is © only for you guys?]  Sites that block crawlers should also not be
allowed immunity.  [So, no DMCA for
Facebook, eh?]
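[The upload-time check Willmer says is feasible might look like a perceptual-hash lookup against a registry of protected images. A toy 8×8 average-hash sketch—real image-recognition systems are far more robust, and every name here is illustrative:]

```python
# Hedged sketch of an upload-time image check: a tiny "average hash" over
# an 8x8 grayscale grid, tolerant of small brightness shifts, matched
# against a registry by Hamming distance. Illustrative only.
def average_hash(grid) -> int:
    """grid: 8x8 list of grayscale values (0-255) -> 64-bit hash."""
    pixels = [p for row in grid for p in row]
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(h1: int, h2: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def is_registered(upload_grid, registry_hashes, max_distance: int = 5) -> bool:
    """True if the upload is within max_distance of any registered image."""
    h = average_hash(upload_grid)
    return any(hamming(h, r) <= max_distance for r in registry_hashes)
```

[Because the hash compares each pixel to the image’s own average, a uniformly brightened copy hashes identically, which is what makes this kind of check useful against trivially altered re-uploads.]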

 

 

JC: Google?

 

 

Willmer: frustrating. We don’t have the clout to get Google
to provide what they’ve provided to other industries.

 

 

KTC: popup education: what is the cost of takedown steps?

 

 

Willmer: having content on the site benefits the site so
it’s clear that the incentives are for the content to be put on the site, not
to stay off if it’s not licensed.

 

 

KTC: is the lack of STMs just w/r/t images?

 

 

Willmer: I’m aware of none.

 

 

JC: I did see some references that metadata would be a STM.
Do you have an opinion on that?

 

 

Willmer: don’t think it meets the 512 definition. It’s a key
identifier of © ownership, and part of the problem is that the metadata is
often stripped, particularly when uploaded to large platforms. They take the
position that it increases the size of the file.

 

 

JC: any litigation over that?

 

 

Willmer: no litigation to my knowledge.

 

 

KTC: Is anyone aware of a STM that meets the 512 definition?

 

 

Sheckler [?]: CafePress case, but that was settled.  Didn’t say it was or wasn’t.

 

 

Wolff: Google image search—we talked about wouldn’t it be
helpful if it said “images may be subject to ©” and they listened and left the
user experience the same way. Everything’s about the user experience, not a
healthy licensing market. Image recognition tech is only the beginning—the
amount of images online, and the requirements for sending a notice, are
inefficient and burdensome.  Really
hasn’t aged well.

 

 

Deutsch: ISPs aren’t averse to tech. We want best practices.
Problem w/mandated tech measures that don’t start from a negotiated process is the
enormous variety of ISPs. Google is one, but there are 1000s of designated
agents. Some are not in a position to implement the fancier and perhaps more
promising tech.  They believe the 1998
bargain was: © owners ID content they think is infringing and ISPs have to take
it down; that remains appropriate and filtering is not really workable.  Data is frequently atomized; can’t tell who
it belongs to. Large content owners often encourage fans to post © materials;
impossible w/o invading privacy for ISPs to figure out what’s tolerated.  No magic bullet, but everything has to be
done in cooperation, as the DMCA itself was.

 

 

JC: You say filtering can’t work, but YT uses it and we have
other sites that are clearly all unlicensed content. If © owner is sending
notice to a full length use, by definition they know it’s not licensed.  Why is filtering an impossibility in that
environment?

 

 

Deutsch: that’s the job of suing the website: Hotfile,
Grokster, Aimster, Napster, Scour have all gone down; whenever © owners have
really faced a rogue site, the effective way of dealing with that is a direct ©
lawsuit; if they’re doing what you say, they don’t have any claim to safe
harbor, and courts have repeatedly said they don’t.

 

 

JC: but DMCA did envision collaboration, and that hasn’t
happened as much as some would like. So we should have litigation?  That’s expensive for both sides.

 

 

Deutsch: it’s difficult to filter consistent w/other values:
user privacy, undue burden on ISPs.
Nobody has yet spoken to a scalable tech for all ISPs—continue to let
tech develop.

 

 

KTC: Anything to be done short of mandating the adoption of
certain tech?

 

 

Mopsik: IPTC has a great study if you search for IPTC
metadata study: Chart that tells you which metadata is maintained/stripped on
upload to most popular social media sites.
Image Rights is one company that provides this service for photogs.

 

 

Schneider: in 2008 HEOA passed for universities, perceiving
that students were responsible for so much infringement.  NYU is using Audible Magic. They have to do
educational steps and report them.
People at universities say it’s working relatively well, not an
inordinate burden. I’m a big fan of a rating system for people who do
takedowns.  Rating creates accountability
and encourages education. Everyone is complaining about a purposeful lack of
education. Use the tech for education.

 

 

Schruers: Paradigmatic example given of an easy infringement
case was “full copy,” but remember this very court in which we sit found that
full copies were fair use.

 

 

JC: what else can you use to draw a line for automation?

 

 

Schruers: which raises the question of whether that is a
good idea. Solutionist view of technology is not a panacea.

 

 

JC: so is every full-length use in need of review by a human
person? How is that plausible as a solution?

 

 

Schruers: It’s not a solution, but it’s the law.

 

 

JC: but there’s a sea of infringement, and
we’re trying to solve that.

 

 

Schruers: can’t assume it’s inherently infringing.

 

 

JC: but they have to assume it to run an automated system,
even if there’s a remote possibility of an error.

 

 

Schruers: which is my broader point: there are built in
limitations to what we can reasonably automate, which is why we see differences
b/t DMCA-plus systems. Just b/c the entire internet hasn’t adopted DMCA-plus,
doesn’t mean there’s not extensive cooperation w/rightsholders, tailored to
particular platforms.

 

 

KTC: it has been difficult to develop STMs. Do you see any
path forward?

 

 

Schruers: mistaken premise that STMs are the only path
forward.

 

 

KTC: DMCA said it should be a possibility; to avoid that
becoming a nullity, could we do something to make it a reality.

 

 

Schruers: we’re on the path forward in different parts of the
ecosystem. DMCA misassessed the probability of homogeneity, but we shouldn’t
discount the robust variety we’re seeing in different spaces, optimized for
particular platforms.

 

 

Sheckler: There are reasonably priced techs available today
that would significantly reduce the volume of notices and counternotices.  W/r/t the PTO process, I was heavily involved,
and while it had some helpful outcomes, it didn’t discuss STMs.  DMCA doesn’t say there can’t be flexibility.
They’re not coming to the table.

 

 

KTC: what would encourage them to come to the table or to
voluntarily employ some of this tech?

 

 

Sheckler: we stand ready to work w/you and Congress.

 

 

Willmer: The best leverage Congress would have is to
condition immunity on coming to the table and being willing to implement
available tech. Congress wanted to keep the works from going up in the first
place rather than having them taken down. [Hunh?]
