soy and almond milk couldn’t deceive reasonable consumers

Ang v. Whitewave Foods Co., No. 13-cv-1953, 2013 WL 6492353 (N.D. Cal. Dec. 10, 2013)

Plaintiffs sued over defendants’ Silk and Horizon products. The Silk products are plant-based beverages, including Silk Vanilla Soymilk, Silk Almond Milk, and Silk Coconut Milk.  The Horizon products are yogurt and milk products.  The court dismissed the usual California claims with prejudice.

Plaintiffs alleged that the terms “sugar” or “dried cane syrup” should have been used on the products instead of “evaporated cane juice.”  Also, the Silk products were allegedly misbranded because “milk” is a substance that comes from lactating cows.  The first set of claims was dismissed because of res judicata: a valid prior class settlement in a Florida action alleging the same misbranding argument.

Defendants argued that the milk claims were also precluded because they only targeted products that contained evaporated cane juice, but that wasn’t true: the settlement barred claims related to use of the term evaporated cane juice, not all claims relating to products that contained evaporated cane juice.

However, the milk claims were preempted by the FDA’s standard of identity rules.  “A standard of identity is a requirement that determines what a food product must contain to be marketed under a certain name.”  The FDA requires a food to be identified by its common or usual name.  Though the FDA defines “milk,” the court found that this “pertains to what milk is, rather than what it is not, and makes no mention of non-dairy alternatives.”  (I don’t get this reasoning.  So if there’s no milk in the product at all, it can be called milk without regard to the FDA definition?)

Plaintiffs also noted FDA warning letters to soymilk makers that warned manufacturers that “soymilk” was misbranded because of use of the term milk.  But these brief statements were “far from controlling,” especially since the FDA “regularly uses the term soymilk in its public statements, suggesting that the agency has yet to arrive at a consistent interpretation … with respect to milk substitutes.”  Because the FDA had yet to mandate a name, the court went to the “common or usual” name requirement; that requires each class or subclass of food to have its own common or usual name that clearly states what it is in a way that distinguishes it from different foods.  That name can be established by common usage.

Here, the court found that “soymilk,” “almond milk,” and “coconut milk” accurately described the products:

As set forth in the regulations, these names clearly convey the basic nature and content of the beverages, while clearly distinguishing them from milk that is derived from dairy cows. Moreover, it is simply implausible that a reasonable consumer would mistake a product like soymilk or almond milk with dairy milk from a cow. The first words in the products’ names should be obvious enough to even the least discerning of consumers. And adopting Plaintiffs’ position might lead to more confusion, not less, especially with respect to other non-dairy alternatives such as goat milk or sheep milk. 

(Ed. note: non-dairy?)  As a result, the claims were preempted as attempting to impose standards not identical to FDA standards.

Also, the court independently found that plaintiffs’ claims flunked Iqbal/Twombly’s plausibility standard, for much the same reason already articulated: the use of “soy” and “almond” made it implausible that a reasonable consumer would be confused, “disregard the first words in the names, and assume that the beverages came from cows.… Under Plaintiffs’ logic, a reasonable consumer might also believe that veggie bacon contains pork, that flourless chocolate cake contains flour, or that e-books are made out of paper.”
Posted in california, fda, preemption

PTO/NTIA online transactions

Online Transactions

Moderator: Ann Chaitovitz, Attorney-Advisor for Copyright, Office of Policy and International Affairs, USPTO

Comments generally agreed that online markets should be left to the private sector. What role, if any, is there for the government?

Panelists:

Roy Kaufman, Copyright Clearance Center

We focus primarily on text, dragged by users into other media. Our markets tend to be corporate, publisher to publisher and academic.

Prof. Brandon Butler, American University, Washington College of Law

Library Copyright Alliance: 3 major library associations representing 100,000 libraries around the world.

Q: what do each of you see as key obstacles to developing robust, comprehensive online licensing environment, and can the gov’t help?

(I guess we’re not going to talk about how gov’t can help by maintaining a large space in which licensing is not required?)

Kaufman: key obstacles are that it’s very hard to develop robust databases, taxonomies for licensing.  What is a library?  Different rights, different media, different markets, different norms. Can be brought together, because within each market there are many solutions. Gov’t can encourage and foster collaboration across media and sectors.

Meredith Jacob, Creative Commons

Sheer volume of creative works is an obstacle. Many people who create don’t think about registration/licensing at all.  Hard to explain why you should do that (especially if your intent is not to strike it rich).  Creators’ intent varies. Almost everyone who is a “user” is also a creator, as CC sees it. Having a user/creator divide is a problem.

What gov’t can do: avoid reinforcing systems that assume that all creators want the same thing.  Creators don’t necessarily want remuneration. Don’t assume that all transactions should be licensed.

John Lapham, Getty Images: it’s easier than ever to license. Don’t demonize the right to be properly compensated. The obstacle is that we publicly shame people for wanting to be compensated and say that being seen ought to be enough. What gov’t can do: keep up by trying to develop small claims process that recognizes digital economy/new needs. Likewise, not being overly in love with status quo is critical. DMCA was implemented when we were worried about survival of internet, but now we know that it’s possible to make a good living as a search engine.

Butler: not much to do for libraries in terms of facilitating licensing. Libraries are licensing wherever they can and see it as appropriate. ARL members spend $1.4 billion on content, of which $850 million collectively, or about 60%, is spent on licensing. We’re licensing like crazy.  Not really seeing a lot of barriers to finding people willing to take our money. What should gov’t do?  Licensing terms are trumping copyright’s default rules—libraries sign away first sale and fair use rights. Libraries end up with huge portfolios of licenses, thickets of rights they often aren’t qualified or able to parse when it comes down to deciding what uses are ok. Gov’t could ensure that default users’ rights in the Copyright Act can’t be licensed away, at least for libraries that need those rights for their basic jobs.

Q: talk about UK Copyright Hub: what can the US learn?

Kaufman: a hub for users/creators that came out of the UK copyright review. Improvements in licensing info were possible. Similar efforts in the EU.  US should be doing the same thing.

Q: how do libraries see the relationship between online licensing and fair use?

Butler: Fair use is extremely important to libraries, and licensing does not and should not undermine fair use, especially in transformative contexts. In theory, more and better licensing therefore shouldn’t threaten us, but the other side doesn’t see it that way.  CCC is suing some of our members based on a misunderstanding of fair use, where they assert that the existence of a license should force fair use to shrink. Another example: text and data mining—this is a transformative fair use, even when done for money.  But CCC says they’re working on a market for text/data mining, in Europe where there’s not as clear an exception. That’s a terrifying prospect. We don’t think anything that comes out of this process should be seen or portrayed as taking away from fair use, but it’s a fear we have.  (Bravo!)

Kaufman: misleading to say we’re suing your members, but off topic.  Someone said this morning licensing doesn’t substitute for fair use (that would be me).  He’s ok with that, but we should have efficient online mechanisms and should not shy away out of fear of effects on fair use.

Q: Getty recently reached a deal with Pinterest. Tell us about that arrangement.

Lapham: Don’t think gov’t can help with that.  Tech companies like Getty can work with partners in the private sector to make more and better content available. Pinterest: a healthy percentage of their content belonged to us. Rather than having a slapfight, we focused on the problem of lost metadata/attribution. Instead, reattach attribution and provide royalties.  Opportunity for more arrangements like that so that creators can create and still be compensated for use on social media sites.

Q: reattach metadata; was there also a payment for past uses?

Lapham: the arrangement works by using our image database of tens of millions of our and other companies’ pictures; we can match that against the website. Using image recognition tech, we can find and reattach metadata, and charge fees per image per month.

Jacob: Creative Commons licenses are valuable because they don’t require renewal/maintenance of long-term engagement with the process. That is useful for creators, who can set it and forget it.

Kaufman: Linked Content Coalition should be watched.  CC is also useful because it’s human- and machine-readable.  Digital Object Identifier used in science publishing. ORCID: a researcher ID that can also identify researchers as authors.  Each one has a purpose; a lot have open APIs.  First step: gather them all up and decide how they play with each other in this acronym soup.

Jacob: public domain content and content created through federally funded research need to be in these databases, so you can find content that is public domain or openly/CC licensed and there’s no division there.

Lapham: Hargreaves report in UK—could be useful to work with gov’t in creating image registries/databases, whether orphan works or otherwise. Don’t wait for gov’t to do it. Instead, partnership where we can provide services to meet objectives for orphan works or otherwise.

Butler: on how not to do it—Jonathan Band and I wrote an article about issues with collecting societies around the world. See problems of corruption, transparency, inefficiency—learn from those mistakes and work for accountability.

RT: What can the US learn from Canadian legislative reform and Canadian universities’ responses to Access Copyright?

Butler: we can learn a lot.  Canada, Australia, and some other countries who have had comprehensive licensing systems are looking to us and whether they should be turning more for facilitating educational uses to fair use, or even to CCC on a piecemeal rate instead of paying statutory/blanket licenses.

Allan Adler, AAP: All this talk about prohibiting waivers of fair use or other rights—we don’t want gov’t to engage in paternalistic policies; people wouldn’t know where that would end. You can waive almost any right, including First Amendment right to speech if you work for the gov’t (uh, does not follow, but ok).  Better and easier to pay for license than relying on fair use. (Well, yeah, if the copyright owners get the gov’t to hand them that right; apparently only some kinds of rights definitions are paternalist.)  Biggest stories of year are implementation of ACA and NSA’s activities—these are related to gov’t databases, so aren’t we worried about that esp. when we are talking about tracking online transactions?

Lapham: allow for private sector solutions to some of the big issues.

Butler: shares some of those concerns, especially about NSA. Private collection of info is a great tool for the gov’t, so any time anyone has a big database that is a concern for us. Any effort to create central database of what people are reading should raise non-copyright concerns.

Kaufman: We have fair use, exceptions, limitations—but the whole point is to get users what they want the best way they can; sometimes that’s rights information. Don’t demonize us for doing that.

John Morris, Associate Administrator and Director of Internet Policy, NTIA

Importance of collaboration v. talking past one another.

Shira Perlmutter, Chief Policy Officer and Director for International Affairs, USPTO

Committed to finding sweet spot for copyright and internet policy. Need continued collaboration from all stakeholders.  There will be hard work ahead. We haven’t chosen easy issues. If positive tone of discussion today and willingness to engage constructively continue, I’m optimistic about progress.  Soon will announce further public outreach on these topics. Plan is for roundtables around the country in the coming months.
Posted in copyright

PTO/NTIA access to rights information

The Government’s Role in a More Efficient Online Marketplace

Panel #1: Access to Rights Information

Moderator: Garrett Levin, Attorney-Advisor for Copyright, Office of Policy and International Affairs, USPTO

Building the online marketplace is fundamentally for the private sector, and that process is well underway. Many comments stressed importance of private sector, and we agree, but gov’t may be able to facilitate the process.

Panelists:

Colin Rushing, SoundExchange

Building a database: we’re working on trying to make ISRC system work better, and working with counterparts around the world on better systems and processes for flow of data and money.

Prof. Pamela Samuelson, University of California at Berkeley School of Law

Berkeley hosted a conference on reformalizing copyright; many speakers discussed the importance of accurate and up-to-date rights information. Consensus: need more information through recording transfers. People who record transfers now do so voluntarily but incentives aren’t good enough.  Things one might do to create more incentives mentioned by her panelists (not a Samuelson endorsement): Jane Ginsburg among others mentioned making transfer unenforceable if unrecorded (shades of mortgages!); conditioning statutory damages and attorneys’ fees on recordation of transfer; condition availability of other remedies, including injunction, on recordation of transfer; condition right to sue on recordation of exclusive license or other transfer. Most intriguing, from Jane Ginsburg: unrecorded transfer would accomplish only nonexclusive license rather than exclusive license or assignment. That would be a pretty strong incentive to get the job done!

Matt Schruers, Computer & Communications Industry Association

CCIA is a trade ass’n of internet/tech companies. These panels are the answer to the previous panel. All the sticks in the world driving people away from access to unlawful content won’t be effective without access to lawful content. You can’t litigate your way to prosperity unless you’re a lawyer.  Must focus more on compensation than on control; too often we sacrifice compensation on the altar of control. How do you get to more yes?  Less sexy than polarized fights about notice & takedown or first sale; this is very technical and answers tend to be technical.

SoundExchange has interesting discussion about standards in its comments, as does Green Paper. Need to standardize how we store info, as well as how we access it through APIs.

Jim Griffin, OneHouse

Our job is to make it faster, easier, and simpler to pay, in hopes that more people will. The way to do that is create a market in registry services—to make it profitable. Role of gov’t is to create wholesale registries at the core that incentivize retail activity at the edge.  Should take notice of the best database in the world: DNS system brings millisecond answers worldwide.  Green Paper is positive about registries and databases, except for content.  Registering content is the key, and that will only happen when it’s profitable. Content industry sees registration as a cost, and as a risk. But tech loves databases and makes them profitable. All manner of investment into DNS because it’s profitable. In our industry we see it as a cost though.  Need to make every point of the registration chain profitable, and then advertise to get creative claims registered.  (There is so much space here for an analysis of the rhetoric of privatization and monetization!  Someone should get on that. But only for pay, because otherwise it’s meaningless.)

Jeff Sedlik, PLUS Coalition

Not enough of the people involved in image licensing are here. Visual creators are the smallest of the small businesses. They aren’t able to represent themselves effectively; trade ass’ns struggle because it’s a disenfranchised industry. Information is easily stripped out, e.g. by screen capture. The connection between rights holder and image is lost, and you get an instant orphan. Inability to monetize is plaguing visual arts creators. A whole generation is deciding not to create because they can’t support themselves—they can’t enforce/control rights because the info isn’t there. Publishers are drowning in images, but can’t identify what rights they have and what rights they don’t because of the fragility of metadata. Search engines can’t pass on rights info because they don’t have it—leads to overhesitancy of use, or infringement. Cultural heritage sector: inefficient in the amount of time it takes to search for ownership—leads to trouble with preservation, or hesitance to make uses even though works should be used.

Lee Knife, Digital Media Association

Ideally people want to license everything at one fell swoop, but that’s not possible under today’s standards. 

Q: how could the gov’t promote the adoption of standards? What kind of standards?

Rushing: What doesn’t exist is a full list of sound recordings and associated ISRCs, which makes the system tricky to use.  Industry is addressing the first problem—ISRC is not a registry—with an effort to create a useful/usable registry. Gov’t’s role: support adoption of standards.  ISRC should be part of registration—a field allowing you to connect copyright records seamlessly with record company records. SoundExchange administers the statutory license, by regulation. ISRC isn’t required, but we’ve asked the Copyright Royalty Board to revisit that as an industry-accepted standard. Gov’t can help standards become true industry standards.

Schruers: It’s not a burden overall when the gov’t requires standardization—increases the value of the entitlement if built into registration/recordation process.  Note also that mass licensing needs an API-like interface.

Samuelson: a feasibility study of distributed registry systems would be justified.  They could standardize the data collected and be interoperable; the information needs to be publicly available—how a DNS registry-type system might work is worth asking. The Copyright Office could participate in developing standards for interoperability—this would allow different types of creators to have communities serving them, to which they feel more connected than to the Copyright Office itself.  The Copyright Principles Project recommended such a study. That can actually be done.

Griffin: the goal is hierarchical inputs with nonhierarchical outputs: anyone anywhere on the globe can get a quick response, but the inputs have to be a certain way (like DNS). We’re all in this together: to exploit a musical work requires the album cover, the lyrics. So we need a photo registry to help music, along with text. The one thing we know works with superspeed, which we need so we can do filtering (!!!! Nice aside!!!!), is the domain name registry, and investment is pouring into registries because there’s a difference between the cost of entry at the wholesale level and what can be gleaned at retail. Profitability is the only way we’ll see the outreach necessary to convince creators to register.  If it’s profitable, it will get the job done around the world. But mandatory registration for protection won’t work around the world.

Sedlik: it is a global issue—images are available all over the world; you won’t know which registry to search, so you need to connect them all. My organization is an example of public-private cooperation. Marybeth Peters told us: if industry doesn’t pull together with users and distributors for visual content, you won’t do well in the future.  An API-based way of finding who owns the information, so a machine could do it.  Europe is ahead of the US on some of this—Linked Content Coalition on how rights info is communicated for various types of media.

Griffin: the only problem with LCC is the question of embedding the content within the file. Others can then change that info. We need a roughly centralized database so that it can’t be tampered with.

Sedlik: agree: the position w/r/t visual works is that there’s presently no way to communicate rights info other than embedding it. We will transition to identifiers linked to remotely stored information, with public and private metadata, when we can.

Schruers: additional benefits to metadata could deal with other problems, e.g. in music industry, where it’s not clear that people collecting for uses of works are authorized to do that. A database could do that. Disputes about digital media services: there are two realities. On the one hand, digital media services are paying out billions. On the other, artists claim they aren’t getting paid. Where is the money going? SoundExchange is fairly transparent, but many other institutional licensors aren’t. Metadata could be a painless solution to that.

Griffin: the day of using artist name, track name, and album name is over. Moving into many countries with different character sets, and there are multiple ways to write the single name “Bee Gees.” Black box of money is divided by market share and arrives as unattributed income. We’d all agree that we want to end black boxes and orphans the right way—not through exceptions (quelle horreur!) but through finding the people and giving them money.  When I ask music services why they don’t report with the ISRC code (more than 95% of the money), the reason is there’s no database of them; we can’t verify them; they’re unreliable. We’ve waited 2 decades for the market to solve this. The gov’t can build a wholesale market around which profit-making activity can occur. Profit-motivated operators would not allow this.

Q: what role can the gov’t play in interoperability? 

Schruers: leading by example: altering recordation/registration processes to match best standards. Having APIs and then going out to industry to ask them to do the same.

Samuelson: Office doesn’t right now have the tech/infrastructure. We can talk, but without resources, it’s not going to happen.

Knife: Without turning our back on Berne, we could do more than lead by example. We could have a requirement that to get the extra benefits of registration you have to comport with certain standards. Private entities have market motivation to solve these problems—if we built in the idea that a responsible entity like the Copyright Office would review the standard, we wouldn’t be locking ourselves into something that works in 2013 but not 2017. But we’d still motivate people to register with that dataset.

Samuelson: flexibilities within Berne do exist. Jane Ginsburg & Daniel Gervais wrote papers on the subject. Especially for recordation of transfers, formalities are not a problem under Berne. Don’t start with saying that Berne means we can’t do anything with formalities. We need to do what’s right, not just ignore formalities.

Griffin: Brazil has registration and it’s not in violation of Berne. The depth of the problem is enormous—we’re not keeping up with the databases now. Societies around the world report lots of people joining and expecting to be paid. The best registry is 8 million songs; 2 unions that take 5% of the money have only 800,000 songs in their databases.  Others report databases of 1-1.5 million tracks, which they think is impressive. To get what we need, we need a 250 million song database. And photos are far more numerous, but we don’t have a solid global unique ID. If we take the numbers we have now and shoot for them, we’ll miss the mark radically. Have to get trillions of works in every part of the industry. That requires public-private partnerships—private capital, outreach, and advertising to get the word out to people who aren’t represented: if you aren’t in this database, you aren’t getting paid.

Sedlik: looking at music example for visual arts. All stakeholders agree that databases should be separate from licensing; stakeholders should make user-controlled system, not gov’t, and allow any sort of system to connect to it, and connect to copyright offices around the world. We have participants in 130 countries. Solution must be global.

Rushing: ISRC IDs the sound recording. But who has the right to license it? That turns out to be unbelievably complicated; it can be multiple entities depending on the use. Then you start crossing borders and everything gets even more complicated.

Knife: a truly central database is impossible. There are entities who control certain pockets, and control sometimes seems more attractive to them than giving access to the data. Even if we can’t have it all housed in one spot, all of those rights are owned by somebody. Eventually you can get someone on the phone who will say they control the rights in that country.  (Say what?)  That information is out there—rights exist and are striated. We need to collect that info and create access even if we don’t centralize the information itself.

Schruers: some constituencies view registration as a cost—but those are the ones who can navigate the system. There are distributional costs to a complex system: it favors incumbents and keeps out smaller entities. Be wary of concern trolling about decomplexifying the system, because simplification lets in smaller competitors, who may be able to get a larger share of what they’re entitled to as things become smaller, more transparent, democratizing.

Griffin: an entrepreneur sees more people who could pay if you start complexifying the info by allowing each person to register a claim, such as the person who played the guitar. What one person/gov’t sees as complexity/a problem, another sees as an opportunity to lower cost and increase information.  Someone in the domain name business wouldn’t complain that there were more domain names to register; to the contrary, domain name numbers are increasing.  And people do speculate in domain names.  (I do not think that analogy works at all.  It is the difference between a right that encompasses an entire work with nothing left over (as ownership of a domain name is presently exclusive) and a right to control a tiny chunk of a big thing (that is, the problem we now face with patents).) We will be successful in registry when we watch the Super Bowl and see “register your involvement in a creative work—you might get paid and you’ll get credit.” Without outreach, it won’t happen.
Posted in copyright, music

PTO/NTIA: notice and takedown

Improving the Operation of the Notice and Takedown System

Moderator: John Morris, Associate Administrator and Director of Internet Policy, NTIA

Voluntary agreement among wireless companies will be announced today re: cellphone unlocking.

We will not be talking about fundamental changes to notice and takedown, but instead take 512 as it is and see if there are areas where we can improve its implementation.

What areas might be fodder for multistakeholder consultations?

Panelists:

Victoria Sheckler, Recording Industry Association of America

In 1998, our industry was physical.  Today, nearly 2/3 of our revenues are from digital sources. Over 500 authorized services worldwide. We are working hard to create new services to give consumers engagement with music, drive new tech, and create partnerships/licensing everywhere.  Our work is being impacted by online infringement.  In 1998, less than 30% of Americans had internet access, and less than 3% had broadband. Today, 70% have access to broadband. Any file can be infinitely propagated all over. Any file taken down can immediately come up over and over again. For Katy Perry’s “Roar,” they’ve sent over 300 takedown notices for the same site to Google.  38 million requests to Google in the past couple of years, as well as millions to website operators themselves. The current system is outdated and simply isn’t working. Opportunity to address it through voluntary initiatives.

Options: (1) role of search.  Google has said that it doesn’t want search results to direct users to results that violate copyright law.  Promotion of authorized services.  Other possibilities, like icons to identify authorized services.

(2) Address whack a mole problem.  We send millions of notices on the same tracks and they continue to pop up. Unnecessary and undue burden on website operators and content community.

(3) Repeat infringers: inconsistent implementation. What’s a reasonable approach for repeat offenders?

Fred von Lohmann, Google

Most important thing for cooperation on notice and takedown is to focus on what’s been working: transparency and cooperation.  Transparency around notices—who’s sending them, for what—and also the trusted copyright removal program (TCRP), which many in the room know about. RIAA is one of the members.  It stemmed from Google’s recognition that many notices were being submitted by a small number of submitters, such as RIAA, Microsoft, etc. Many were reliable, high-accuracy submitters, and we thought we could do better with such sophisticated, accurate entities.  Didn’t want to delay them while we processed notices from nonsophisticated submitters, many of which are incomplete or abusive.

Today, TCRP members submit 95% of all removal notices, and in the last 30 days we’ve processed millions of such notices.  That’s been done in consultation with large submitters. This has also improved the accuracy/accountability of the notice and takedown industry. There are now many independent takedown vendors that search the internet, prepare takedown notices on copyright owners’ behalf, and submit them to Google and other ISPs. Some are poorly behaved, sending inaccurate takedowns without copyright owners’ knowledge. The transparency report has allowed rightsholders to police their own vendors. The vendor community also likes it because it allows the accurate ones to get credit.

The transparency report also lets users inform us of errors by looking for their own websites.  At 24 million notices/month, we don’t catch all errors; the public helps as well. We ejected 2 members from the TCRP last year for persistent, repeated failure to submit accurate notices. That’s above and beyond what the DMCA requires; the only way we can punish misuse is by having this extra program—they can still submit DMCA notices.  Would love to hear about similar efforts that have worked, rather than a rehash of stale debates.

Corynne McSherry, Electronic Frontier Foundation

Considering the interests of users/small participants is really important. DMCA safe harbors have been tremendously beneficial overall.

Sees improper takedowns all the time, from home videos of dancing babies to lectures by significant academics, to entire YT channels of news reporting, and that’s just her docket right now. When these happen, they call the legitimacy of the whole process into question. Content owners and ISPs say they don’t want these either, precisely because of the legitimacy issue. So why don’t we create a set of meaningful best practices for fair use? Build in strategies to flag potential fair uses—obvious fair uses, which are a subset of fair uses but do indeed exist.  Avoid takedowns based solely on keywords, as happened to Cory Doctorow’s book Homeland, which was targeted for mass takedowns based on keyword overlap with the TV show of the same name.  Another alternative: ADR. Counternotice isn’t really good enough for people targeted by improper takedowns—they need a way to request quick review.

Service providers can also do a lot.  Simple things: forwarding DMCA notices to users.  Many times people contact her saying they’ve gotten a takedown; it’s hard to figure out not just who sent it but even whether it was a compliant notice.  Users can’t negotiate the Content ID process, which is hard to figure out.  Make it hard to shut down an entire account by sending a flurry of notices. Give trusted users an extra opportunity to appeal.

Susan Cleary, Independent Film & Television Alliance

Small companies. We get financing by ensuring that our partners have exclusive rights around the world. Need a strong regime in place. Notice and takedown/notice and notice in other countries are among the only tools that independent rightsholders get to exercise, and it’s whack a mole. If you can’t get production financing together to produce The Hurt Locker, you don’t just lose revenues—you don’t get the film. Need more efficiency in notice and takedown.  Independent rightsholders don’t have the money to use the expensive technology of vendors/major rightsholders.  (I thought the market could fix everything—I guess that only works for some people.)  Lack time, money, staff. Need a legal framework to give ISPs the cover to do what they need to do.  Voluntary agreements need to be transparent, and we need the government because without the government certain people are left out.  We don’t get powers of attorney to litigate for our members; they’re on their own. Search engines need to step up and point to legitimate product. Need the threat of gov’t action for people to act in a good faith, transparent and inclusive manner.  (Competing narratives of what constitutes inclusion—fascinating from a rhetorical perspective.)

Troy Dow, The Walt Disney Company

Congress intended 512 to be much more than a regime in which people sent notices and they were responded to.  (??)  Increasing dissatisfaction with its operation as an effective tool.  Congress did intend it to be a framework providing incentives for copyright owners and ISPs to work together to detect and deal with infringement. Tech has a role in providing solutions, and Congress intended for 512 to be a vehicle for working together. UGC is an example of where notice and takedown wasn’t up to the magnitude of infringement in UGC (hunh? I don’t think that acronym means what he thinks it means, which is one reason I don’t like the term).  Cooperative tech solutions—we’ve managed to take significant infringement issues and at least put them aside with the UGC principles endorsed by some service providers and copyright owners.

Christian Genetski, Entertainment Software Association

Increasingly our members are exclusively cloud-based. The balance the DMCA aims to strike is very important to our membership on both sides. Our trade association plays a vital role for our industry by sending many millions of DMCA notices per year; most of our members also have DMCA agents that receive and process notices as well, and we take both sides seriously. What’s important to us is getting past having all the voices talking past one another, exchanging rhetoric about what they don’t like. Look at data. There aren’t really black hats and white hats, but rather a spectrum. On one site, we sent 22,000 URLs for infringing copies of the same game title (we had API access); on another site we sent 10-20 notices a month, but they took a couple of weeks to process them. Lower burden, lower costs, but for the DMCA’s aim of reducing illegitimate content, the first instance was better because of the rapidity of the takedown process.  Expose outliers.

David Snead, Internet Infrastructure Coalition

Internet infrastructure is made up of 30,000 small to medium-sized businesses.  Most infrastructure providers aren’t content providers like Disney, nor are they like Google with large resources to devote to understanding fair use. They know what the DMCA is but don’t know the nuances.  Consider the lack of significant resources.

Voluntary arrangements need to keep in mind that the people implementing them won’t have a lot of resources. 512 is a relatively plain statute, relatively easy to understand. What’s happened is that providers have muddled it and made it more complicated than it needs to be. Most important result: best practices so that small and medium-sized businesses can understand what these notices say and respond appropriately.

Von Lohmann: Google has already taken many of those steps, such as demotion in the ranking algorithm based on number of DMCA notices; we’re the only member of the search engine industry that has done that.  There are over 66,000 registered DMCA agents in the Copyright Office’s database. That’s not just big companies. As I understand the mission, it is a multistakeholder discussion. Talking only about search doesn’t do justice to those small/medium businesses with a dog in this fight. Google is interested in having these discussions, and meets with copyright industry members on a regular basis w/r/t search and YouTube. For a multistakeholder discussion of best practices, though, we need to get some of those processes out in the open, get data, get transparency, so that others can learn from those examples. A focus on search in this process would be counterproductive.

McSherry: Another missing voice: technologists. If we start mucking with search, for example, we need to know how that will affect search and searchers’ behavior.  Rightsholders + ISPs + EFF is insufficient. Related to that, transparency for the public is key to meaningful participation/comment.

Cleary: while we think it’s important that searches point to legitimate product, people think the internet is unlimited space but it’s not.  Be careful to understand that rightsholder has right not to make available/control distribution and distribution windows. Not every product has a legitimate space on the web. 

Von Lohmann: as Dow knows, there’s no safe harbor for “search engines,” but rather for entities that rely on the ability to provide links, and there are far more than just a few of those. If you want a multistakeholder discussion about improving notice and takedown, singling out search isn’t true to the Green Paper’s goals.

Q: say more about transparency.

Sheckler: Google has done a great job of letting us know how many notices it received. But we don’t know how it’s working—need more transparency on that.  (?)

McSherry: Google’s transparency reports have been tremendously helpful for understanding what is happening.  Need more transparency on rightsholders side, big and small. Hard to understand how to suggest improvements without a window into how rightsholders or their agents decide what to target. Would further the conversation with more information; we know it’s not perfect so just telling us that “we identify infringements” is insufficient. 

Genetski: one of the best ways to do that is to incentivize transparency. Voluntary best practices that elevate the end result for both sides = greater willingness to share data. With a verified rights owner program that removes or prevents infringing content from appearing in the first place, rightsholders would be willing to share more information and insights, and even meet a higher standard than the DMCA notice requires, if the reward for that investment is commensurate.

Dow: UGC principles were based on that kind of approach. There is room for transparency on the side of notice recipients—often we don’t know what goes into things like driving specific results down the search results.

Von Lohmann: we need more transparency from a group absent from this panel: the enforcement vendors. We need to understand their cost structure, how they generate notices, and what checks they use for accuracy.

Cleary: we don’t want transparency to get lost in different technologies. Independent rightsholders were left behind when ISPs started blocking P2P and legit content was blocked. Tech neutrality is key.  Copyright can’t be a guise for preferring other copyright owners’ content or excluding us from access to the pipes.

Q: bad notices

Genetski: we don’t like to see bad notices; we set high standards in sending our notices; we have limited enforcement resources. Our experience has been that in millions of notices we get almost no counternotices.  Those problems fit into a broader framework; even if you solved them, you wouldn’t have solved the problem of providing meaningful and effective enforcement. (Also you wouldn’t have solved world hunger, so I guess you shouldn’t try.)

Snead: there needs to be a meaningful way for targets of takedown notices to communicate with sending entity. All too often there’s virtually no way to get in touch with outsourced notice providers. If you have someone who wants to get in touch with them and say “we have the right to do this” there’s no individual who is following the case; there’s no phone number; there’s no email address. There needs to be a way—rightsholders need to direct their vendors—to provide this information.

Von Lohmann: amen.

Q: do we need to have a different conversation for small providers and big providers?

Cleary: we should have breakout sessions, but any time you put the big guys in a room together you risk lack of transparency. We are small, but we produce 80% of the feature films and TV in the country. I want to be in that room.

McSherry: tends to agree. At the end of the day, meaningful outcome requires inclusive process. One thing we’ve learned is that internet users won’t stand for backroom deals. Need lots of participation by rightsholders, ISPs, technologists—this is too important to leave to lawyers. Also need to hear from international community. Many activists around the world rely on ISPs here for expression, and they should weigh in.

Cleary: we represent foreign rightsholders in the US, and they want more enforcement of their rights in the US, where there’s huge piracy of their works.

Snead: this shouldn’t be divided into big and small guys. What we see is that DMCA largely works for small guys, but needs tinkering on the edges. US is at the center of a lot of internet infrastructure. This benefits US business. Changes must take this into account or we’ll drive business away from the US.

Q: small standardizations?

Cleary: voluntary agreement by payment processors for handling complaints—wanted to standardize a form for every one.  It can be done.

Dow: streamlining/effectiveness is worth having, but that’s setting our sights too low. One vendor from one studio has 39 million notices, for 87 titles.  Those notices went primarily to 25 sites: 58,000 notices for each one, and all those titles still remain available on those sites. Huge problem of inefficiency/ineffectiveness that streamlining notices won’t solve.

Von Lohmann: big numbers alone don’t tell you anything you want to know.  As Genetski said, 22,000 notices can be effective if fast. What we have is a lack of knowledge because we have a lot of different ISPs doing different things. Sharing knowledge is useful.

Sheckler: the data are useful, but from where we sit, we’ve sent 2 million notices to Google and mpcrystal.com (sp?) and the music is available on that site the next hour. That’s the problem we face. We’d love to see someone who’s done it differently—is music different?

Snead: we already have data that will help rightsholders and people who are targets take action: a very plain and simple statement under the DMCA that’s not designed to instill fear or confuse. That’s all that needs to be done to let people know their rights. Extraneous info serves to make people afraid and confused.

Genetski: There may be abuse of what’s in the notice, and improvements to be made, but von Lohmann points out that there’s less transparency at the ISP stage, in that other 66,000. Why is RIAA sending so many notices?  Are they being processed?  (Is the RIAA suing that site?)

Snead: if ISPs are employing content protection, above and beyond the DMCA, they need to offer it to all rightsholders. 

Q: whack a mole problem?

McSherry: best alternative is to provide convenient alternatives. You can’t win whack a mole. Invest resources more productively.

Snead: Provide as much information as you can to the notice recipients. Let’s get information on the scope of the whack a mole problem.

Dow: we long for the day in which the mole actually goes into the hole for some period; right now the mole doesn’t even retreat. Look for tech solutions that are beyond notice and takedown.

Snead: use metadata, fingerprinting, and put all of that to use to stop whack a mole and make sure content stays down.  The goal is reducing piracy so that legit services flourish, including with fair use.

Sheckler: this is a problem. We agree with the DMCA’s framework and goals. Let’s work on deterring infringement.

Q: who is missing from this panel? We’ve heard vendors, technologists, security researchers, int’l community.

Snead: independent ISPs.

Dow: small and large copyright owners.

Von Lohmann: small and medium OSPs—resource constrained.

McSherry: internet user communities, such as remix communities—creators themselves.

Q: how much do we need more factual foundation? Do the stakeholders already understand the issues?

Sheckler: we’d all benefit from additional data.

Genetski: the hard part is to avoid being unwieldy. Everyone needs to share analysis of their own data.  Challenging, not impossible. Particularly for vendors with confidentiality agreements—real constraints, but important voice. 

McSherry: we need to understand what innovation/expression is facilitated by current system to avoid collateral damage.

Snead: begin with what’s working, not with what’s not.

Cleary: agreed, because we don’t understand the search algorithms.

Q: Is there any entity doing a good thing you want to give a shout-out to?

Von Lohmann: Microsoft has publicly spoken about its strategies and practices as rightsholder, been useful and enlightening.

Snead: those outsourced vendors who (1) have an individual following a case, (2) working, monitored phone number, (3) don’t use a proprietary method of communication and ID by URL.

Dow: UGC principles as an example of using reasonable measures to prevent infringement in the first place.

Snead: largest ISPs have worked with us on the voluntary notice system to make sure all the boxes were checked off; we worked with Public Knowledge too. Results are encouraging/improving.

McSherry: Google’s transparency report—more should be following suit.  One other service provider: Automattic, aka WordPress—they’re one of the few service providers to join a §512(f) lawsuit, and if more did that we’d see less takedown abuse.

Mark Cooper, Consumer Federation of America: How many search results were added during this period in which you got 24 million takedown requests?

Von Lohmann: there are more than a trillion pages on the web; despite the large number we receive, it is a trivially tiny, infinitesimal percentage of what we index.

Karen Russell, American Library Ass’n: Public schools, public libraries, universities are ISPs and have special interests to be considered.

Von Lohmann: there’s been very little study of the 66,000 registered copyright agents. What’s the breakdown? Many are content owners as well. I’ve always assumed that the 66,000 that register understate the number that rely on the DMCA; a large number don’t even know that they’re supposed to register but nonetheless maintain active notice and takedown practices.  For those of you listening: do register a copyright agent!

Joe Keeley, House Judiciary Committee: full copies v. less than full copies—latter more likely to be fair; possibly that takedown should work differently?

Von Lohmann: Content ID has the ability to do just that. Or if the audio track is different from video, another signal of remix. Those tools are becoming available. Imperfect proxy for full fair use test, but a lot can be done. Major content owners—movie studios—can be very responsible about using those tools to avoid targeting likely fair uses; those studios give credit and others should learn.

McSherry: while there’s no substitute for human review, tech can be used to flag for what’s likely fair use—saves time and energy.  Full copy of video and audio that’s been taken down before, that can easily match. But then identify and examine the relatively small but important percentage of uses that are more likely to be fair.

Cleary: be careful that you’re not dealing with a piece at a time posting, though.

Dow: tech can be used to flag more likely infringements and less likely ones—identify same file and make takedown more efficient.
Posted in copyright, dmca

PTO/NTIA: Maria Pallante

Current Copyright Office Initiatives on Digital Issues

Introduction: Shira Perlmutter, Chief Policy Officer and Director for International Affairs, USPTO

The Copyright Office is covering many of the key issues addressed/not addressed in the Green Paper with studies etc., supporting and providing input into those issues.

Speaker: Maria Pallante, Register of Copyrights and Director of the United States Copyright Office

No focused attention from the executive since the WIPO treaties of the mid-90s. Important to have neutral, inclusive and informed policy. Bob Goodlatte: new copyright challenges—internet enables making available both authorized and not.  Questions of copyright ownership of historical works. Statutory damages are a concern; old laws are difficult to apply today. Copyright Office struggles to meet needs of its customers, the public.

Congress isn’t committed to a legislative package yet—we’re in no way close to that. But Congress has a clear role in copyright policy. More and more people are affected by copyright law; need to consider constitutional purposes. Copyright owners’ control can’t be absolute, and so we have things to reconcile. The public performance right is of paramount importance in the digital space. There are no criminal remedies for its infringement as there are for reproduction/distribution, and that’s a gap; there should be a way to craft remedies. But there also needs to be room for private performance.  Need orphan works policies both for isolated cases and for mass digitization.  Further roundtables will convene in the spring, with drafts of legislative proposals to be released.  Need to address the state of compulsory licenses, repeal some, consider new forms of collective/blanket licensing, review existing consent decrees.  Will be studying music licensing.

Highlights from hearings: in May, case study for consensus building with members of Copyright Principles Project. Innovation in America; the Role of Technology; the Role of Voluntary Agreements. The Rise of Innovative Business Models. Theme: innovation.

All the following comes from the various witnesses: Basic structure of the Act is sound; we need balanced changes to existing provisions.  Recordation of transfers is a priority, maybe tied to remedies. Encouraging registration for better information.  Remain tech-neutral. Copyright should foster certainty for businesses. These themes were also brought out in questions, not just written testimony.

More opinions from witnesses: Copyright owners lock down with restrictions; just trying to comply with current statute is expensive/cost-prohibitive.  Fair use for employees at institutions is too indefinite even if the statute in theory helps them. Fair use was never intended to be relied on so much and is overused—but digital technology has changed how students learn/interact. Lack of clarity around reasonable/ordinary personal use has contributed to disrespect for/misunderstanding of law.  Keep in mind noneconomic goals of copyright—won’t disseminate unless copyright owner feels safe.  Fair use is important, but DRM gets in the way. Voluntary initiatives illustrate the importance of multistakeholder, market-driven solutions in addressing piracy.

So, what’s next?  Scope of rights, fair use, and DMCA will all be considered next year. One or more hearings on Copyright Office itself.
Posted in copyright

PTO/NTIA: remix

Legal Framework for Remixes

Moderator: Michael Shapiro, Senior Counsel for Copyright, USPTO

Panelists:

David Carson, International Federation of the Phonographic Industry

The popular image of an industry that forcefully asserts rights and goes to court at the drop of a hat is not the reality today, after a baptism of fire with online pirates decimating sales. We’ve remade ourselves and focused on licensing rights so consumers can experience recordings in just about any way they want, ideally in ways that make us money, because we are a business. We aren’t out to sue people. The days of suing users are behind us.  We’re trying to give people the ability to do what they want in a way that doesn’t harm our rights and compensates our artists when they use our creative efforts.

Generally, we are licensing UGC. YouTube licenses are exemplary: they permit YT to make UGC with sound recordings available.  Google called the licensing solution, powered by Content ID, a win-win-win for copyright owners, YT, and users: a new source of revenue for the first two, and it allows users to remix without independently seeking licenses. 

Commercial sound recording remixes, by contrast, should be a negotiation with rights clearances and payments.

Prof. Peter DiCola, Northwestern University Law School

Book, Creative License, with Kembrew McLeod.  Based on over 100 interviews with musicians who’ve been sampled and who sample; attorneys, industry professionals, and scholars. Many competing interests in sampling. Details the sample clearance process. Some great successes (Suzanne Vega and DNA, with an initially unauthorized remix), but there are significant barriers, especially for independent labels and musicians. Inefficiencies: transaction costs; difficulty of negotiating across generations; royalty stacking problems. Even advocates of the status quo agree: if you sample multiple works, the stacked royalty demands can exceed 100%, making the work impossible to license at any price.  Collage-based music with 15-20 samples is impossible, everyone agrees. Some are untroubled by this, but the fact is that even superlawyers can’t get it done.

Jay Rosenthal, National Music Publishers’ Association

Has negotiated 100s of digital sampling deals in prior life, represented Salt N’ Pepa etc.  We support fair use exceptions as legitimate defenses, but don’t believe that fair use should be expanded beyond accepted contours or believe in compulsory licensing because of the various ways in which samples are used. Copyright law shouldn’t have primary goal of ease; should be supporting interests of creators. Shouldn’t promote class warfare between old artists and new artists. Congress should incentivize collaboration, including licensing.  Doesn’t believe there’s a problem with digital sampling. After 20 years, contractual deal points are relatively easy to negotiate. Businesses exist to help get clearance, get quotes, for new artists too. Easier than ever to find authorship/ownership information.  You can find publisher/songwriter if you really want. Cost of samples has never been lower; buyer’s market. Often not flat fees, but sharing percentage. If you have lots of samples, it’s hard, but it is done—idea that it’s undoable is untrue.

Takes exception to the idea that Public Enemy’s views on digital sampling are majority in hip-hop. Other rappers like Salt N’ Pepa concluded that unauthorized sampling is morally wrong/violates Golden Rule.  Would clear all samples.  “What a Man” had 60/40 basis.

No compelling reason to change the broad framework with the de minimis/copyrightability test. Having loopholes in copyright law that allow remixers to use other artists’ music for free is the antithesis of progress. But there are solutions. Market-based.  NMPA’s deal with YouTube over UGC, thousands of publishers.  UGC is a big part of this debate; it is being put on a paid basis.  The Creative Commons approach is also viable.  Microlicensing is also a solution for less use/less money.  Much better for the ecosystem to promote collaboration between new and older artists rather than them not asking permission, not paying, and not attributing. (I wonder what the Impressionists would have said about collaborating with the older generation.)

Josh Schiller, Boies, Schiller & Flexner LLP

Represented Richard Prince in the court of appeals (i.e., that district court ruling was Not My Fault). Appropriation art as a recognized artform. Used photos as raw ingredients; you could call them samples; he’s called himself a kind of DJ.  The court found most of them transformative; we believe the remainder will be so recognized too.

Importance of 2d Circuit’s decision: recognized that a work of art can be transformative without needing to look solely at an explanation the artist may provide. The concern for “legitimate” fair uses—there’s no such word in the statute. It lists a number of examples.  You have to look at each work for transformativeness.  Even using the entire image, when you’re dealing with art, can be fair.  Prince is inspired by many things; he shouldn’t be required to say magic words to get a transformativeness ruling.  Satire/parody need not be obvious.

The issue is not lack of clarity, but that fair use is case by case and copyright applies to so many industries that fair use must be considered within its context.

Prof. Rebecca Tushnet, Organization for Transformative Works

501(c)(3) nonprofit founded to protect and defend noncommercial transformative works and their creators.  Scope: 42 million hits on our website each week by people accessing fanworks, and we aren’t anywhere near the largest site for fanworks.  Creative works exist in an ecosystem, and in that ecosystem, noncommercial works are the equivalent of the wetlands—a rich source of diversity that can’t be replaced by systems of top-down control.  In this environment, fair use has an important disciplinary effect on the biggest copyright owners whose works are most often used in remix. It deters them from making the most outrageous claims and allows people who are caught up in automated enforcement mechanisms to assert their rights. If they find an organization like ours, fair use allows creators to fight back when copyright owners try to suppress critical and transformative uses like Jonathan McIntosh’s Buffy vs. Edward. Robust fair use supports a culture of free speech and reasonable balance as against a culture of suppression of speech and the resulting disrespect for copyright.

Licensing is no substitute for fair use, as fair use decisions from across the courts of appeal have recognized. Fair use exists to protect works that copyright owners wouldn’t license, as we’ve seen again and again with the licensing schemes offered as exemplars—both on YouTube and Amazon’s Kindle Worlds there are substantial content restrictions that fall most heavily on the most critical and most transformative uses. Fair use also exists to protect works that simply shouldn’t be controlled by copyright owners because of the substantial new meaning and positive externalities they bring into the world—positive externalities being the term for value that isn’t captured by the creators themselves in terms of monetary return and thus can’t simply be transferred over to existing copyright owners. In a licensing-only world that value would be misdirected and destroyed. Licensing schemes also support monopolization of the channels of communication, since only giants like Amazon and Google have the clout to negotiate broad licenses and use that to keep people locked into their platforms against the competition.

A final note, given the composition of this panel: under most circumstances, music isn’t a good model for the rest of copyright. The legal regime and the business models it has encouraged are so complex and specific that we should most likely look elsewhere, unless we’re prepared to adopt compulsory licensing across the board. And I think Mr. Schiller’s comments also bore this out.

Q: what is a remix?  Collective works/derivative works/compilations?  Jay Rosenthal distinguished between remix and mashups.

Rosenthal: from a music standpoint, a song that is basically a recreation that would come under the compulsory license is one type of derivative work, allowed by statute.  Beyond that, a song with samples v. a mashup with lots of samples are effectively the same thing from a legal standpoint. Is it harder for Girl Talk to license, if he tried? Yes. But nevertheless fundamentally the same. Might be different for other forms of art.

Q: do we have a cultural production problem?  If it’s uncertainty, where’s the evidence?

RT: lightning strike effects of getting a takedown notice, which often leads the person to withdraw completely; fear at the institutional level so that schools are unwilling to use remix even though it’s really good for teaching.

DiCola: trouble with the move to the commercial world; pushing samplers underground also inhibits licensing.  Lost revenue is a shame too.

Carson: the music industry shares that goal.  The instinct is to cut a deal, or do it on an automated basis.  There are always exceptions—recording artists who don’t want their work sampled.  Or a record company might want nothing to do with a particular product.  What if Nazis put a work on their website?  One of our poster children for European gov’ts about the necessity of controlling uses—offensive uses made of our works—is the brief YT phenomenon (which we do take down) of In Memoriam tributes to Adolf Hitler, using popular sound recordings such as the theme from Titanic. We want to stop that.  So sometimes licenses won’t work.

Rosenthal: moral rights. Our YT deal resulted from class action on behalf of independent publishers. Have ongoing license with cooperation and collaboration, working on database. Idea is not to sue out of business or stop them from making fair use/derivative works—we want licenses.

Creativity: lived through the age of hip hop. I’ve never known a producer, reaching out through a company to clear a sample and getting a “no,” to stop work and go home. They go on to the next one.

DiCola: doesn’t disagree about substitution being possible, but let’s talk about the places where there are barriers to understanding the system. There’s no example of someone with 20 samples getting a license. That kind of work can’t be licensed.

Rosenthal: I’ve done them on prorated basis.  Shouldn’t compel us to change a whole licensing system.

RT: noncommercial speech works differently; 16-year-olds inventing remix in their bedrooms don’t take these business routes; diversity/chilling effects are disproportionate for women/minorities.  Types of creativity differ: example of Gone With the Wind, where the owners were perfectly willing to license certain types of content but not Alice Randall’s depictions of homosexuality and miscegenation.  It’s like saying that newspapers under a censorship regime are still full of content, so free speech must be unaffected by censorship.

Q: fair use changes in the courts?

Schiller: court recognized that observers matter. There are readily available artistic opinions that speak both to transformativeness and to market substitution.  Commerciality doesn’t mean market substitution.

Q: statutory license/Canada’s UGC exception?

RT: Look at it!  Canada’s market seems to be functioning well; SOCAN even just cut a deal with YouTube. Protects against the lightning strike.

Rosenthal: sometimes it’s tough to understand what’s noncommercial. Many clients early in their careers are trying to turn themselves into viable marketplace forces, but aren’t making money. Brings intent to the fore, and whether user is trying to get into a commercial marketplace v. hobby/fun.  On fair use: Beastie Boys case is very worth following.

DiCola: an issue of control.  YouTube license is worth paying attention to but the advantage of a statutory scheme is that it’s public and transparent, and Content ID isn’t.  (Preach!)  When a YT clip has more than one work, how does the revenue get split?  The parties might know, but we don’t. Public scheme has benefits of understanding (and also, I would add, the benefit of allowing competitors to get the same deal).

Carson: statutory licenses have a lot of baggage. Many licensors and licensees aren’t particularly pleased with them.  Will always be cases where you want to say “no, you can’t use my work for that purpose,” and a statutory license doesn’t permit that.

Peter Menell: mashups are astoundingly popular, outside any real market. Disservice to copyright if we can’t bring it within a market of some source. Rosenthal hasn’t convinced me. Generation of remixers won’t just shift from one source to another if they get told “no, you can’t use X, find a Y instead.” You’re encouraging them to hate and defy copyright law. Mechanical license worked pretty well. Could be way to go.

RT’s reaction: this is a way to “hide the wiring” of copyright for ordinary people.
Posted in copyright, fanworks, music, presentations

PTO/NTIA hearing: first sale

The First Sale Doctrine in the Digital Age

Moderator: Karyn Temple Claggett, Associate Register of Copyrights and Director of Policy & International Affairs, United States Copyright Office

Previous Copyright Office study concluded that first sale only covers distribution and thus doesn’t apply in digital context where reproductions are involved. Expansion wouldn’t serve underlying purposes of doctrine—right to transfer tangible property—and would risk harm. That was 2001; much has changed in legal and business environment. Increased market for digital goods and increased consumer expectation of what they can do with lawfully purchased digital goods.

Panelists:

Emery Simon, BSA, The Software Alliance

The issue for us is not first sale, but licensing. Nature of licensing is changing and marketplace is changing. Copyright is important to us as foundation of our business, which is licensing.  Software is uniquely license based though other industries are looking at it.  Licensing changes everything. We are in transition from distribution to license to access through the cloud or elsewhere. Licensing is under pressure—in Europe, UsedSoft, SAP, Adobe—all of which say that a transaction labeled a license was actually a sale. Those decisions require policing by original licensor, and that’s hard to do.

Goal of license: meet consumer expectations and control secondary markets. Two keys to licensing model: (1) clarity on what user gets, (2) respect for user.  We try to account for customer reaction.

John Ossenmacher, ReDigi

We built a technological mechanism to verify digital goods/ownership and to allow for what we believe is lawful transfer from buyer to seller without making copies. We didn’t intend to go to battle.  Clarity: in our legal research, there’s never been a method of delivery specified in first sale.  The issue is the balance of power between different parties, and will that ultimately effect a good result?  There can be a lawful exchange of digital goods between consumers; the tech exists today. (Now I’m thinking about bitcoin and unique identifiers that prevent double-dipping.)  Can provide stronger protection than existed in the analog world.

Allan Adler, Association of American Publishers

Books are unglamorous. Ten years ago, ebooks were viewed as a flash in the pan.  Hyperbole about how quickly readers were going to adopt them.  Reading community has not yet cast its full bet.  AAP is engaged in analog and electronic versions.  We come just at the time at which people are looking at licensing and deciding it needs to be cut back.  All publishers I represent are producing their works in electronic format and following the traditional software model of using licenses. Should it not be allowed to be treated in the traditional way of other software? Markets move more quickly than regulatory regimes. Hard to imagine copyright has hindered innovation in this field. People are now reading books through their telephones. Market continues to surprise us, and we need to be nimble.

Sherwin Siy, Public Knowledge

Two issues: licensing; can you transfer a digital file.  Step back to first sale, not just as a restriction on distribution right. Its origins are the desire to balance the rights of a copyright owner with consumer’s control over her tangible physical property. This includes things like right to publicly display and distribute, but also includes a lot more than that: the ability to use the thing—read the book, listen to the song.  Private display/performance.  These aren’t usually an issue for tangible goods because there’s no §106 right. But with digital goods those do become an issue.  We talk about advantages of digital media, but there are also restrictions from the nature of digital technology. Statute hasn’t kept up with the fact that mere use and transfers of ownership and private performances/displays involve reproductions, and copyright owners have noticed.

Prof. John Villasenor, University of California, Los Angeles

Challenge is to write language that would allow a digital first sale doctrine without creating gaping loopholes exploited to the detriment of rightholders—short term loan problem. He’s also doubtful about the tech: he doesn’t doubt that ReDigi is a good system, but the history of digital security solutions is that equally smart people find ways to get past them. Not optimistic that we’d be able to effectively prevent people from making a copy of what they were supposed to delete.  (I’m sure the DMCA can prevent that, right?)

Q: why is digital first sale or sharing important?

Adler: secondary market in books has always been used books.  Books whose actual condition and therefore their value has deteriorated over time. That’s missing entirely with ebooks.  (As a frequent buyer of used books, I resent this blanket description. No one stands over the reseller making the book deteriorate, no matter how pristine its condition.)  An ebook will be identical to a brand new version.

Ossenmacher: that’s not accurate. First sale is fundamentally about control. Deterioration isn’t required.  There are physical books in excellent condition for resale.

Siy: Digital copies do degrade, and their preservation comes only from copying—the medium itself lasts much less long than paper. There are games/software we have today only because people were able to hold onto copies and archive them (either because copyright owners weren’t around any more or because of a claim of fair use).  New markets created by copyright owners aren’t sufficient. Secondary markets offer new business opportunities for people who are thinking differently—rental markets for videotapes, textbooks—a vastly underserved market from the publishers themselves.  Also nonpocketbook issues: privacy. If you don’t have a secondary market, every copy can be known and you know who’s reading what. Also prevents diffusion of information.

Simon: sure, secondary markets are good. But we regulate secondary markets in used cars and lots of areas.  Having a good doesn’t mean transferring it is without dangers.  We license software with rights and responsibilities; creates privity issues. We worry about downstream users’ rights and responsibilities—we do updates, services. We negotiate for all those things. What rights/responsibilities do downstream users have?  It’s not a simple question of transferring possession.  Now that you’ve done that, now what.

Ossenmacher: privacy comes up a lot. There’s something lost when we move to digital that is decoupled from first sale. Now my transaction leaves all sorts of footprints. Anything digital is a privacy challenge.

Q: assuming a secondary market is a good thing, is this something that requires a legislative solution?  You can share some ebooks from B&N, Nook.  (In my experience, this is limited to the point of unusability; the few sharable books are only sharable once and never again.)

Ossenmacher: not realistic. Note that cars today are very software dependent. Huge progress in having secondary market support primary market. Piracy is the issue that we all want to prevent, but if you give people something of value, won’t people protect that good as something more valuable?  A digital good with zero economic value because it can’t be transferred isn’t as good as a good with a secondary market. 

Siy: There is room for additional actors—the market is restrained now. Markets are created by individuals trading, and that doesn’t happen now. Limits suppliers/sources to a few players, and becomes a poorer market.

Villasenor: premature to conclude that market won’t evolve—only short experience with digital access. We’ve seen interesting developments in ebook loaning, Ultraviolet with the movie industry allowing family members/household members have shared access to content. It’s early days.  (In fairness, I doubt he’s a huge fan of §1201 either, but it certainly wasn’t enacted in ‘wait and see’ mode.)  I challenge proponents to come up with good statutory language—how we can solve the short term loan problem.

Adler: secondary markets do benefit users, but don’t benefit the author or the publisher or the rightsholder, other than exposure of the work.  (Pity the Ford Motor Co., losing so much from resales!)  Students think that textbooks are priced too high, but can always sell them back to the bookstore, and the next person benefits by buying more cheaply, but in those transactions the author and publisher don’t benefit. We’d like to see new business models that continue to benefit author and publisher.  Hard to see why copyright owners would entertain the notion of digital first sale as anything but destructive if they can’t get compensated.

Siy: everyone was talking about markets’ flexibility. Tech for digital content has been around for a while. Adoption speed has a lot to do with the legal issues. It’s a bit strange to try to make the law adjust for existing markets when law is so much more slow. Rely on intelligent self-interested actors to build viable markets when the law accounts for various interests.

Pirates are not waiting for the first sale doctrine to change. They aren’t going to claim “oh I got it used.” 

Villasenor: it’s too soon to conclude that licensing won’t work as content owners respond to demands. Plenty to criticize, but they are becoming more flexible.

Q: If we acknowledge that digital first sale would only apply to copies that you own, wouldn’t we also have to expand it to cover copies you lawfully possess if there’s no such thing as ownership digitally? What exactly do you own? Music files, software, ebooks?

Simon: we’ve been in the licensing business from the beginning. You can lease a car or own a car, rent or buy an apartment. Terminology traps us. We are moving from transfer of physical goods to licensing access. Can have all sorts of restrictions on leasing apartment, such as subleasing, that are different from sale. Don’t think of first sale, but think of reality of marketplace as contractual relationships governed by licenses, and the licenses should be respected or you destroy viable markets of the future.  (Interesting analogy, though the landlord/tenant example suggests a huge role for regulators, including restrictions on transfer restrictions and on termination of transfers.)

Ossenmacher: can’t contract around first sale; contracting around legal rights is dangerous. There is a place for everything, but when I lease my apartment I know it’s a lease. When I buy a home I know I’m buying it. Clarity is indeed important. Society needs clear rules. 50-page incomprehensible EULAs are not that.

Q: does the average consumer realize she doesn’t “own” her ebook or music file?

Ossenmacher: our userbase provides much evidence that they believe they own their files and don’t understand the putative difference between buying a physical book and buying an ebook.  (I note that the buttons always say “buy it now,” not “license it now.”)

Simon: true, there is this expectation. People behave differently depending on how they acquired something. But exploiting that creates more confusion. The marketplace is correcting the confusion: I don’t think I own any of the movies I get from Netflix.  That’s the way a lot of works are moving.  Subscription models generally will do away with that transitional expectation gap. Fact that it exists doesn’t mean it should be exploited.

Villasenor: Dangerous to legislatively/judicially upend contracts. Vernor v. Autodesk: if you have a license, then that’s the end of the story. Alarming that the ECJ thinks differently.  But there could be more clarity.  (One theme here and the previous panel is how differently we are used to treating copyright—so differently that we don’t even notice how wildly different the rules are from, say, proving some other kind of injury when a tort has occurred, or regulating transactions in consumers’ interests.)

Adler: contract law has existed for a long time. Fraud, etc. Education is easily done. Reputations live or die by licenses; if consumers find them untrustworthy, they’ll find other vendors. Books are highly competitive. Library ebook lending: main players in popular works all have very different policies. They are evolving, and fast.

Siy: consumers need a clear idea of what’s happening. That’s why we talk about the sale/license distinction; it has real legal consequences. This isn’t a matter of “upending” contracts—the question is the expectations of the contracting parties—which affects whether it is a sale or a license. Even this crowd doesn’t really read the ToS of Amazon. 

The issue of privity is at the heart of the lease/sale distinction. Bobbs-Merrill was exactly about preventing control because the terms didn’t run with the chattels. When I buy a dud at a retailer, I don’t necessarily blame the manufacturer.

Ossenmacher: the copyright owners don’t want there to be a secondary market, period. If there is going to be one, licensing provides control for the copyright owner.  Question is whether that is good or bad. That’s the only reason they say it’s not a sale.

Licensing is fickle. Record industry talks about licensing two ways: when they speak to artists/bands, they say “it’s a sale” on iTunes because the royalty rate to artists is much higher for licenses; to us they say “it’s a license.”

Q from a person from SIIA: how does this work?

Ossenmacher: ReDigi 2.0: no reproduction involved, access from the cloud. A transaction—exchange of access keys without copying/moving files.

Art Brodsky (sp?): Even if a library gets a Harper Collins book, 26 checkouts; Random House, $85 for a “license” when you or I could download it for $10. Are the publishers going to let us own anything? Is ownership over?

Adler: not eliminating ability to own—consumers will ultimately have a choice (unless they are libraries, of course). Examples are companies trying to meet what they heard from libraries: replicate traditional library lending in digital environment—attempted to take into account differences for replacement copies for physical books. You get what you ask for—difficult for consumers to ask for all the new bells and whistles but still want the business model not to change from what they liked. (In the absence of copyright, we’d probably call that supply and demand.) An ebook is a different kind of product.

Q from audience: ownership is not a matter of convenience.  Very few outlets for producing digital books; libraries are unlikely to want restrictions on number of loans.  Contrast with what Internet Archive does with lending to show what consumers actually want.

Corynne McSherry, EFF: Talking about licensing is required.  Can talk about the statute all we want, but EULAs will take it away if they can.  There is empirical research on who reads EULAs: almost nobody.  People don’t understand them.  Eye to purposes of copyright: promote innovation. Hearing a lot about consumers and expectations, but not hearing a lot about innovation. This is important because many of these licenses include restrictions not just on first sale but also fair use etc. This inhibits the freedom to tinker. People I represent don’t just want access; they want to mess with them, spurring further innovation. When we talk about first sale/licensing, we need to talk about how we protect that.

Andrew Schur (sp?), Owners Rights Initiative: Embedded software in cars, refrigerators. All kinds of goods now software heavy. How do you deal with reselling these? Does the consumer own the entire bundle? Consumer doesn’t have a choice when all cars come with software.

Villasenor: he doesn’t think that carmakers are restricting resale in contracts (though he admits he doesn’t know).

Q: but what if you need updates?

A: you aren’t bound if you haven’t agreed.

Simon: this is fearmongering.  Nobody would do that for cars.  I have software updates that are automatic and a map update I paid for. That’s fine, that’s how markets work.  (Yeah, uh, Omega v. Costco suggests not.)

Siy: The solution is not to say “don’t worry, no one will sue”—that’s what they said in Kirtsaeng, and the Court was not impressed.

Simon: people sue over all kinds of things. But positing that refrigerators/cars justify secondary markets in software is just silly.

Q: authors get no benefits from secondary markets?  Silly. Willingness to pay is affected by ability to resell—whether that’s a house, a car, or a book (also my ability to lend to friends, family). 

Adler: we didn’t say that there was no benefit. We’re talking about compensation. There are also wild cards. Kirtsaeng: 40 years of national exhaustion flipped to international exhaustion, which would have to be under discussion in any change in first sale. A court recently decided that publishers can’t always determine the price at which they offer their goods, because sometimes retailers (Amazon) will be able to tell them what the books will sell for regardless of whether publishers agree.

Brandon Butler, AU WCL: Librarians scoff at the idea that digital lasts forever, unlike paper. It’s exactly the opposite. Paper will last forever if you leave it alone; music bought across different platforms and computers disappears faster than you imagine. Don’t think that digital doesn’t degrade, doesn’t need replacement.

Villasenor: we’re in a transition period. When you have stuff on personal devices, that degrades. Cloud-based systems can last effectively forever.  In 200 years, this recording will be viewable. Challenge in providing access to that, but that’s not digital first sale.

Adler: we aren’t saying digital will last forever. But practically when someone buys a physical book used they expect the possibility of some deterioration. Resale markets for ebooks: people won’t find it acceptable if ‘this program no longer does the following things/has the following functions.’ The only ones they’ll buy are ones that work just like the new one.
Posted in contracts, copyright

PTO/NTIA Green Paper Hearing, statutory damages

Department of Commerce Public Meeting: Copyright Policy, Creativity, and Innovation in the Digital Economy

United States Patent and Trademark Office – Madison Auditorium

Live webcast available at: https://new.livestream.com/uspto/copyright

The Appropriate Calibration of Statutory Damages: Individual File Sharers and Secondary Liability

Moderator: Darren Pogoda, Attorney-Advisor for Copyright, Office of Policy and International Affairs, USPTO

Panelists:

David Sohn, Center for Democracy & Technology

[sorry, missed this because I went to the wrong building!]

Steven Tepp, Sentinel Worldwide

Statutory damages are as old as copyright itself, contemplated by TRIPS, necessary because actual damages are often conjectural/difficult to prove; deters for-profit businesses from encouraging widespread infringement (note how that last rationale is slipped in despite being very new)!  Digital age makes statutory damages more important than ever: drive thriving marketplace by giving copyright owners a measure of security against misappropriation. The more rampant piracy becomes, the harder it is for legitimate actors to compete.  (Michael Carrier’s empirical work tells a different story.)

Sandra Aistars, Copyright Alliance

Challenges exist for effective enforcement for all types of creators, and in ensuring that the public understands and respects the law. Overly politicized enforcement/unscrupulous attorneys have created PR problem.  Need to find common ground. Tendency is to look at big corporations, but copyright exists for all sizes of creators—need to understand effects on small businesses and individual authors. Statutory damages are often the only legal recourse for an individual or small business to address infringement—their availability is often the threshold for an individual deciding whether or not to pursue a claim, given the costs of bringing action in federal court. Members can’t obtain legal assistance where statutory damages are not an option.  Deterrence and compensation are important, as well as difficult nature of proving value of copyright/loss caused by infringement, where work is uploaded and available to the entire internet.  Where only direct loss provable is license fee, that is an invitation to infringe without consequence.  (Interesting that this is the general tort regime.)  Profits can also be inadequate because profits may be too small, or too hard to calculate in terms of attribution to the infringement.  Broad range of damages is justified, flexibly applied.

Beyond these, any statutory damages scheme needs to preserve creators’ right to say no. Merely compensating lost license revenue is little more than compulsory license.  Need to resort to statutory damages isn’t because they’ve suffered no actual damages but is because harm to creator/community is broader than what can be proven and may also include noneconomic damages where the infringement is unusual in some way.

Prof. Peter Menell, University of California at Berkeley School of Law

Simplistic views of history: to say that statutory damages are well established misses a lot of context.  Current system derives from ASCAP/BMI’s problems decades ago, and we now live in a completely different era. Congress wasn’t thinking in 1999 about the enforcement problems that were going to arise a year later.  Panel is focused too narrowly: this issue is nested within a larger section about keeping rights meaningful.  Any solution must be multifaceted. The issue we’re trying to solve is enforcement, and that needs to be viewed holistically. Statutory damages are only a means.

In the internet age, we want a copyright system that garners public approval. That’s been lost, and statutory damages played a significant role in their disproportion.  We ought to be concerned. Judges are seeing very peculiar cases—selected based on incentives created, and statutory damages thus bring bizarre and unfortunate cases, inundating the courts. Porn litigation.  We ought to care about the harm being done.

We ought to channel consumers into authorized markets. That’s the longterm goal for most players in the system. Statutory damages were thought to be successful, but the last decade showed they didn’t work for the recording industry, which backed away.

To what extent is this system promoting tech and creative advances? Statutory damages aren’t helping. Distinguish between noncommercial and smaller players and big commercial players; think about orphan works which has solvable problems; then the large-scale enforcement problems—even there, the system is out of whack. Aggregating $150,000 across hundreds and thousands produces obscene numbers; we can think about how to scale statutory damages.

Markham Erickson, Internet Association

Statutory damages need to be thought of in the context of primary and secondary liability. While statutory damages are old, secondary liability is completely judge-made.  In other parts of the law, the auto industry isn’t held liable for consumers speeding, even though they know those cars will be so used.  Increasingly we’re seeing more litigation around primary infringement—Cablevision, DishHopper, Aereo—that would traditionally have been secondary cases.  Because of the scale, nascent tech can’t come to market because the threat of damages is so out of proportion.  It’s hard for counsel to talk publicly about clients who’ve been suppressed. Key question: deterrence of what? Legitimate noninfringing uses; good faith, objectively reasonable belief in noninfringing use. Reasonable minds will always differ at the margins, and the statute should encourage litigation to clarify matters, as produced Grokster and Betamax.

Q: How do we conduct cost benefit analysis, if statutory damages are indeed chilling innovation? How do we measure this?

Tepp: Doesn’t accept the premise.  We have a multitude of very successful online services (… that various entities have tried to sue out of existence). Other side: to what extent do statutory damages deter piracy and allow licensed services to move forward with confidence they won’t be undercut. Legit services are most vulnerable to piracy because they pay for their content.  The system is working well.

Erickson: certainly we want to encourage licensed services and take down clearly infringing services. But there are gray areas: services that are operating in good faith are exposed to statutory damage regime out of whack.  Look at cloud locker services: there is no possible way that every piece of content can be licensed. As long as users are allowed to upload, Amazon’s not in a position to determine whether they are infringing. Our companies want to allow users to store their content and space-shift it.  Google and Amazon have taken the risk, but they’re big companies that can tolerate litigation. Tepp’s view is not practical.

Aistars: Legitimate cloud businesses like Amazon can be compared to business models employing functions more clearly intended to drive infringing content, like Megaupload.  That’s where you see cases being brought, not against staple articles of commerce.

Sohn: takes the issue to be more calibration than existence of statutory damages. Can we minimize costs by focusing statutory damages more appropriately on real bad actors while imposing less risk on entities navigating uncertain copyright regime?  Hard area to quantify, because deterrence is a hard thing to prove (though Michael Carrier did try!). But it’s also very hard to prove what infringement has been deterred. Can’t have it both ways: assume that statutory damages deter infringement, then say deterrence of legit activity has to be proved.  When asking what behavior has been deterred, apply same standard.

Menell: most investments are best thought of ex ante.  Many entrepreneurs don’t want to run these risks, and we don’t want them to have to—we want them to be able to make better guesses.  Cloud services: a decade ago, Michael Robertson tried to introduce a cloud service! Resulted in one of the poster child statutory damage awards, in which the record company took over. And now we accept the same result! In an ideal system, we don’t get this damage because people can make informed judgments. We can’t make informed judgments with long drawn out cases and unpredictable juries. Make it easier to assess risks before we get into bringing in lawyers.

Erickson: uncertainty is part of law—if we lurch too far to delineating what’s ok, you do tend to lock in innovation in unhelpful ways. One appropriate measure to allow courts to work as they should: scale down insane awards. Company that thinks it has a lawful service but knows it might well be sued can test that without destroying the company.

Menell: doesn’t disagree with that premise. We’ve found ourselves here just by the peculiarities of our constitution: SCt decided that juries decide, and that creates uncertainty. Moving towards a system where beyond a certain range you have to prove some measure of damages would be good—we have sentencing guidelines in other areas—other ways to better correlate to actual damages. Statutory damages is in part about combating underenforcement, but in these big scenarios we don’t have underenforcement—someone is going to go after Aereo, which is not a small nightclub/bar which was the target of statutory damages.  Think about risk settings distinctly.

Tepp: we’re being told that there’s so much uncertainty that we can’t have statutory damages, but also being told to embrace uncertainty, which I took as a reference to fair use. That’s your prerogative, but let’s not import policy debates over scope of exclusive rights into discussion of statutory damages, only available to adjudicated infringers.  (This is the worst argument I’ve heard today, though the day is young.  Does he counsel his clients not to worry about the potential damage awards when considering a litigation or pre-litigation approach?  No, let’s not make this personal: this is a disingenuous argument because legal analysis does not work this way.  Of course damages and substance interact, and among other things they interact on willingness to litigate out what the actual boundaries of the exclusive rights are.)  Range of statutory damages is intentionally wide. Nothing but anecdotal evidence of outsized statutory damages awards.  There are checks on awards: timely registration requirement.

Q: requiring statutory damages to more closely track actual harm in some cases. Natural counter is: how could Congress/guidelines reconcile that with fact that a lot of aboveboard copyright owners face significant obstacles identifying infringers and providing evidence of harm for something like P2P filesharing. Would identifying actual harm be possible/fair/strain on judicial resources?  Wouldn’t it be hard to prove ownership, registration, etc. of thousands of works in a sharer’s library, instead of a sampling as we saw in Jammie Thomas?

Sohn: There are a variety of ways to do it. One could imagine a regime where higher damages require a showing that there are substantial damages, even without proving them specifically.  Distinguish probably harmless infringement from infringement that is probably causing a bunch of harm.  Could be a presumption of minimum range without a threshold showing of harm. Point would be to have a middle ground, without full on proof of damages we believe too difficult to show.

Aistars: courts are already serving that function. Only cases where there truly is some greater societal harm see the larger damages awards; even default judgments against filesharing users tend to be on the lower end. Need to look at wide variety of creators relying on statutory damages and their deterrent effect, not just the business models of music and movies; need to look at newspapers, photographers. Such a proof requirement is completely unmanageable for small businesses and overlooks noneconomic damages that individuals and small businesses often pursue infringement claims for. They have no track record of licensing; there may be no directly provable profits; photographer whose work was used without permission by clothing designer in large department store.

Menell: on the music side, if we started afresh we wouldn’t build a system built around massive statutory damages. We have experience that doesn’t work. Small claims, parking ticket style system would be much better.  Recalcitrants could get ramped up.  Using fed courts to resolve disputes: already much more than most of these works are worth. We need some other system. But not for the P2P network itself, which can scale.

Samuelson: fewer than 14% of WIPO countries have statutory damages, most being post-Soviet states; the statutory damage states have many limits, such as Canada’s cap on noncommercial infringement damages and judicial discretion to reduce statutory damages to meet a just award; many countries don’t allow damages per infringed work, which is particularly important in the secondary liability context (Google Books, statutory damages exposure in the billions).  Other countries have 2x/3x guidelines. There are a number of things to look at for limitations that make them more just. Not arguing for repeal, but more limits.

Tom Sinder (sp?): Korea didn’t have statutory damages, and that didn’t deter infringement.  4 jury trials of individual filesharers, in which plaintiffs introduced reasonable royalty evidence: what would the license fee have been to do what D did, which was the whole economic value of the copyright—that means the award was compensatory.  (No it doesn’t!  No one thinks that Thomas’s making available worked like American Idol’s official website’s making available.  There was no chance she’d distribute on that scale.) Do you think these awards were excessive?

Sohn: yes.

Sinder: but what if it’s compensatory—what you’d have had to pay?

Sohn: for individual behavior, you want the amount to reflect the damages.  The real focus we should have is on tricky questions of copyright law—it’s a problem to have a regime that suggests that if they make a wrong interpretation the consequences are $100,000.

Tepp: we are naturally more sympathetic to a single mother, but she can impose just as much harm on the copyright owner for millions of people to download—the harm may be just that great.  (Hm, I hadn’t noticed these songs losing their economic value entirely.)

Q: Themes like lessening risk—maybe have discussions on secondary liability and orphan works then revisit statutory damages.  If we fixed those, what would be left?

Erickson: that would be helpful, but primary infringement is a growing issue too.

Menell: enforcement can be thought of up front.

Aistars: along with enforcement, there’s room for public enforcement to reduce harm to individuals and small businesses—small claims process.  Voluntary stakeholder process including necessary players making enforcement less burdensome for all players (all!) whether we represent small creators or internet innovators likewise burdened by enforcement challenges. (I can’t think who’s left out of that either/or.)

Posted in damages | Leave a comment

Trademark twofer

Kettler Capitals Iceplex ad, courtesy of Zach Schrag:

After Charbucks, is the Kettler Klassic dilutive?

Posted in dilution, trademark | Leave a comment

A story about creativity and standing on the shoulders of midgets

From Dave Freeman, via Simon de la Rouviere:

Let me tell you a little story about innovation and creativity. Years ago, I worked on a wiki-based project to find the first instance of ideas/techniques in video games (like the first game to use cameras as weapons, or the first game to have stealth as a play element). It excited me to dig to give credit to those who laid the foundations of ideas that we now take for granted. I couldn’t wait to show the world how creative and innovative these unknown game designers/developers were.

I went into it with much passion and excitement, but unexpectedly, it turned out that there were almost no “firsts”. Every time someone put up a game that was the first to do/contain something, there was another earlier game put up to replace it with a SLIGHTLY less sophisticated, or SLIGHTLY different version of the same thing. The gradient was so smooth and constant that eventually, the element we were focusing on lost meaning. It became an unremarkable point to address at all. We ended up constantly overwriting people’s work with smaller, less passionate articles, containing a bunch of crappy games that only technically were the first to do something in the crudest manner. Sometimes only aesthetically.

After a lot of time sunk into this project, I came to the conclusion that I was mistaken about innovation/creativity. It would have been a better project to track the path of ideas/techniques than to try to find the first instance of an idea/technique. I held innovation so highly for years before that, but after this project, I saw just how small it was. How it was but a tiny extension of the thoughts of millions before it. A tiny mutation of a microscopic speck that laid on top of a mountain. It was a valuable experience that helped me very much creatively.

Posted in copying | Leave a comment