Saturday, August 09, 2008

IPSC 2008: Copyright/First Amendment and Trademarks/Reputation

Copyright and the First Amendment: Comrades, Combatants or Uneasy Allies?
Abstract
Joseph Bauer
Notre Dame Law School

Standard reactions to copyright/speech conflicts: minimizing the conflict by harmonization (same goals); internal mechanisms of copyright (idea/expression, fair use, limited duration, merger/scènes à faire, others). The growth in the scope and duration of copyright makes that a harder sell. There are various reasons why each of these doctrines fails. He is particularly concerned with the chilling effects of the fuzzy boundaries of these limits on copyright.

There is still a problem, despite claims by copyright maximalists. He thinks that true compelling interest analysis should be applied at least in some cases. Antitrust, like the First Amendment, offers ways of thinking about what alternatives a defendant had to particular infringing content. Would the alternatives have been effective in conveying the speaker’s viewpoint without copying expression?

This perhaps turns into a question of debating and defining “traditional contours” after Eldred.

Trademarks/Reputation

Marks of Rectitude: Fair Trade, Brand-based Regulation and New Global Governance
Abstract
Margaret Chon
Seattle University School of Law

(Came in a bit late, unfortunately.) There’s a significant fair trade premium for coffee sold in the US and EU. There are a lot of middlepeople between the coffee grower and the drinker—for every $3 spent on a latte, only $.02 gets to the farmer. (Coffee is the second-largest traded commodity in the world, after oil.)

Fairtrade Labelling Organizations International is a nonprofit, multi-stakeholder organization uniting 20 labelling initiatives in 21 countries. They aspire to set fair trade standards worldwide. They set several generic standards for producers and traders. The standards cover farmers and laborers. FLO only works with associations or cooperatives, though, not individual farmers, which is a source of critique.

Chon is interested in competition among intermediary certifiers. There’s a competing coffee certification process focused on environmental issues and provenance, and another one focused on ensuring a higher fair trade premium for farmers. There are vast differences in the kinds of standards they care about, and that consumers might or might not know about—the use of GMOs; whether the certifier is independent of the industry; etc.

Dinwoodie: there are two different modes, passive and active construction of consumer understanding of marks. There is market competition for standards; what does the consumer understand from all these certification marks? There’s also a genericism issue attached to the term “Fair Trade”—we might have a feel-good association attached to the term, but we don’t know what it represents.

Certification marks are much less specific than traditional “source or origin” functions of TMs. They imply more objectivity: but we don’t know how consumers understand that. OTOH, certification marks are arguably much more specific than merchandise licensing, esp. in terms of quality control. The solidity of meaning is contextual.

Consumers may not be aware of specific standards, but they may have different awareness of the certification process or outcomes. That awareness may also be low, but it could be affected differently. US law, she thinks, insufficiently regulates the stringency/honesty of certification processes.

Similarities between GIs and fair trade: they guide consumers to different consumption paradigms, and they make a more specific connection to local culture and local values. They can also link with traditional knowledge. Do they form an alternative to the hegemonic trade system? Given the percentage of fair trade coffee traded, it’s not a huge impact.

Third-party certifiers are regulatory entrepreneurs, doing work necessary in a fragmented global structure. Development policy hasn’t focused on TMs as a source of development. Certification marks can provide institutional standards promoting tech transfer, which may have occurred with forestry certifications.

Eric Goldman: Chicken-and-egg problem—certification marks are a supply-side solution, used to further normative goals. But we can’t ignore the demand side—there isn’t necessarily an organic demand for fair trade coffee. But then the certification agency has to market to consumers to convince them of the need to demand fair trade.

Chon: That goes to the passive/active construction of marks. Most of these agencies devote most of their resources to producers, and have limited marketing budgets. And they have an ideological dilemma: are they marketers, or are they helping farmers? “Organic” is a good example of where consumers have taken up a term without much marketing by USDA.

Q: The Idaho potato organization has done a lot of marketing to convince suppliers and consumers of the value of the certification mark.

Chon: Consumer education is a huge deal. Middlepeople can also help—Starbucks can sell its coffee as fair trade coffee.

Justin Hughes: A difference between GIs and fair trade marks is that there isn’t a clutter problem for GIs. We expect tens of thousands of GIs and it doesn’t feel like clutter. Will the market cause fair trade marks to shake out a few dominant ones?

Chon: Democratic, grassroots approaches aren’t efficient, and the groups are trying to respond to decentralized constituencies.

"Smithers, release the hounds": Adopting a new normative framework and analysis (safe harbor?) for dealing with copyright infringement by electronic agents
Abstract
Eran Kahana
Datacard Group

It’s clear that courts are much more likely to enforce browsewrap in a B2B situation than in a consumer-to-business situation, so his analysis is confined to B2B. The “hounds” in his title are bots that are released and return to their bot masters with content (the “duck”), which the masters then post on their own site.

He argues that human-centric analysis of bots is flawed. The bad cases include discussions of things like “awareness” on the part of a bot, or the location of a Terms of Use, or the color and font thereof. His argument: you should look at whether (1) a bot was used (2) in a wrong way, with what counts as “wrong” being the appropriate subject of debate.

Internet Archive v. Shell: IA argued that Shell failed to state a claim for breach of contract because it only learned of the ToU after it copied the pages and no human was ever aware of the ToU—he thinks those are weak arguments. Field v. Google is better in how it treats the bot, but it is still human-centric in relying on fair use.

Remedy: dispose of notions of consent and fairness to the bot master. The solution: adopt technical specs for safe harbors for bot designers. Roommates.com provides a useful guide. If a bot designer adheres to the standards, then the designer doesn’t need to monitor each site visited. Reciprocally, drafters of ToU must follow drafting conventions. A compliant bot encountering noncompliant ToU would still be immune from suit for browsing, while a noncompliant bot would not be immune.

The drafting conventions must mirror the terms the bot recognizes, like “do not archive.” Future bots might recognize Boolean logic, synonyms, and antonyms. The more sophisticated the intelligent bot, the more flexibility there is in drafting. Drafters could allow some indexing and not others.
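(To make the proposal concrete, here’s a minimal sketch of how a compliant bot might check a ToU drafted in such a convention. Everything in it is hypothetical: the directive names like “archive” and “index,” the “action: allow/deny” format, and the function names are invented for illustration, not an actual standard from the talk.)

# Hypothetical sketch of the proposal: a ToU drafted in a controlled
# vocabulary that a compliant bot parses before acting. The directive
# names ("archive", "index", "scrape") and the "action: allow/deny"
# format are invented here for illustration, not an actual standard.

RECOGNIZED_ACTIONS = {"archive", "index", "scrape"}

def parse_tou(tou_text):
    """Extract machine-readable directives like 'archive: deny'."""
    rules = {}
    for line in tou_text.splitlines():
        action, sep, decision = line.partition(":")
        action, decision = action.strip().lower(), decision.strip().lower()
        if sep and action in RECOGNIZED_ACTIONS and decision in {"allow", "deny"}:
            rules[action] = decision
    return rules  # noncompliant lines are simply ignored

def bot_may(action, tou_text):
    """A compliant bot checks the ToU before each action; absent a
    compliant directive, mere browsing stays within the safe harbor."""
    return parse_tou(tou_text).get(action, "allow") == "allow"

site_tou = "archive: deny\nindex: allow"
print(bot_may("index", site_tou))    # True
print(bot_may("archive", site_tou))  # False

The reciprocity is visible in the sketch: a noncompliant ToU produces no recognized directives, so a compliant bot that browses anyway would remain immune under the proposal.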

Pasquale: Niva Elkin-Koren argued “let the crawlers crawl.” Let the Crawlers Crawl: On Virtual Gatekeepers and the Right to Exclude Indexing, 26 Dayton Law Review 180 (2001). Smaller entities need to be indexed, and larger ones don’t. A person trying to build a new search engine is stymied by Google’s prohibition on crawling its site.

Goldman: robots.txt is much of the solution that’s necessary. There are some automated robots, but when people configure scrapers, there’s almost always a human factor. Is it fair to say that they couldn’t review every user agreement?

Kahana: Bad actors will never use this standard—e.g., competitors who don’t want to configure scrapers to comply with ToU. Robots.txt is a good beginning, but it won’t help with your desire to ban competitors from scraping while allowing Google. It could be made much richer.
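(For reference, today’s vocabulary is narrow: robots.txt can already single out named agents, though compliance is voluntary and the format says nothing about what may be done with fetched content, which is roughly the richness Kahana has in mind. A minimal check using Python’s standard urllib.robotparser; the bot names are made up for the example.)

# What robots.txt can already express: per-agent allow/deny rules,
# checked with Python's standard-library parser. Bot names are made up.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Googlebot
Disallow:

User-agent: CompetitorBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())
print(rp.can_fetch("Googlebot", "http://example.com/page"))      # True
print(rp.can_fetch("CompetitorBot", "http://example.com/page"))  # False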

The Economics of Reputational Information
Abstract
Eric Goldman
Santa Clara University School of Law

How do we help consumers make better decisions? Goldman defines reputational info as info about an actor’s past behavior that will help predict the actor’s future performance. Examples: word of mouth, recommendation letters and references, job evaluations and student evaluations—these are unmediated, going directly from giver to receiver. Mediated: credit scores, investment ratings, GPAs, product reviews and ratings on Amazon, ratemyprofessor.com, possibly voting systems like Digg and PageRank.
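(Since PageRank comes up here as a mediated system, a toy sketch of how it turns links, treated as votes, into reputation scores. The graph, damping factor, and iteration count are illustrative simplifications of the published algorithm.)

# Toy PageRank: each page's score is built from the scores of the
# pages linking to it, so links act as weighted votes. Simplified
# from the published algorithm; graph and parameters are illustrative.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each node to the list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    ranks = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_ranks = {node: (1.0 - damping) / n for node in nodes}
        for node, outlinks in links.items():
            share = ranks[node] * damping
            targets = outlinks or nodes  # dangling nodes vote for everyone
            for target in targets:
                new_ranks[target] += share / len(targets)
        ranks = new_ranks
    return ranks

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))  # "c" gets the most link-votes, hence the top score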

Hypotheses: (1) Anomalies in reputational information supply and demand, whether over or under, hinder markets. (2) Inconsistent regulation of reputational information should be examined for unwarranted dichotomies. Credit scores are heavily regulated, but GPAs are not, nor are Google searches. There may be good reasons for difference, but examining the family of reputational information might give us good lessons for changes.

One specific topic: the undersupply of reputational information. People have first-hand reputational information that remains non-public—our individual opinions of movies, banks, etc. Every single person has some such info. Why? Costs in time, vendor retribution, norms against public criticism, privacy, legal risks. Thus, the market doesn’t work as well as it could.

His solutions are designed to be provocative: We could make more information about consumer decisions public information—automate disclosure of when Mark Lemley switches banks, publish customer lists. Or, we could increase channels for anonymous distribution of reputational information. (He earlier referred to the problem of pollution of the data stream; I expect that will come up as to this solution.) We could recalibrate the legal consequences of sharing reputational info—make it harder for plaintiffs to win; make it easier for evaluatees to fight back; we could protect intermediaries facilitating production of reputational information, as §230 does. Government funding of reputational information production: his least favorite solution.

Q: Undersupply is an instance of the general problem that a market in information is inherently imperfect.

Goldman: Absolutely, which puts gov’t involvement on the table.

Lemley: Wants to push on the premise of undersupply. Striking to offer that argument when we now have more info than ever before. Maybe the market worked much worse in the past, but our personal behavioral decisions on the basis of nonpublic info may be a sufficient proxy for public info. If enough people leave the bank or don’t go see the movie, the aggregate effect sends a signal the market can internalize.

Goldman: Yes—there is feedback just from choice. His concern is that’s not enough. Lemley’s Facebook page gives much better info about movies than Lemley’s movie attendance.

Me: This is getting into the tension, which also hampers copyright analysis, between analyzing behavior in terms of incentives and analyzing it in terms of tastes. Because producing such info has costs, we sometimes ask what the barriers are—if we want more, we try to lower the costs or increase the benefits. But some people just don’t want to talk: should we try to change their tastes?

Goldman: And that structures the question—do we have enough people providing the information, or not enough?

Elkin-Koren: These proposed solutions will affect the other problems you haven’t discussed—corruption and manipulation of information. Every solution you offer may increase that problem.

Goldman: Absolutely. Fixing undersupply is linked to avoiding/correcting oversupply.
