Monday, March 28, 2011

Mad Men part 3

Panel Two: Online Advertising and Privacy
Moderator: Bryan Choi, Yale ISP

Jonathan Mayer, DoNotTrack.Us Project, Stanford University
Stateful tracking (something stored on your device—tagging) and stateless tracking (techniques that don’t require storing anything on the device but nonetheless allow you to identify it—fingerprinting). There are many stateful mechanisms in the browser, incl. HTTP cookies, HTTP authentication, etc. So many options that it’s hard to counteract all of them, or all potential ways to fingerprint a browser (looking at plug-ins, clock skew, etc.). Cruder way of preventing both: block the content that would tag or fingerprint the browser.
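The distinction Mayer draws can be sketched in a few lines of Python (a minimal sketch; the attribute names and values are illustrative, not an actual fingerprinting script):

```python
import hashlib
import uuid

def stateful_tag():
    """Stateful tracking: a random ID is stored on the device (e.g., in a
    cookie). Clearing the stored state destroys the tag."""
    return str(uuid.uuid4())

def stateless_fingerprint(attributes):
    """Stateless tracking: an ID is derived from observable traits (plug-ins,
    clock skew, screen size, ...). Nothing is stored, so there is nothing to
    clear; the ID is stable for as long as the traits are."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Illustrative browser traits (hypothetical values)
browser = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/4.0",
    "plugins": "Flash 10.2;Java 1.6",
    "timezone_offset": "-300",
    "screen": "1280x800x24",
}

# The fingerprint is deterministic: same traits, same ID, no storage needed.
assert stateless_fingerprint(browser) == stateless_fingerprint(dict(browser))
```

This is why blocking tools have to cover both cases: deleting cookies defeats the first function but does nothing against the second.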

Trouble with lists of allowed sites: both underinclusive and overinclusive; you must trust a third party; they require updating; they break stuff (if you can’t use the “like” button on Facebook, no one will use the tech). TRUSTe allowed companies like Acxiom, but consumers weren’t in a good position to evaluate the merits.

Opt out: the biggest is from the Network Advertising Initiative, 65 companies that you can opt out of in a click. It’s not comprehensive—a fraction of the several hundred companies engaged in third-party tracking online. There’s also an updating issue: making sure opt-out cookies don’t expire or get cleared when you clear your cookies. Some browser extensions try to deal with that—e.g., Google’s Keep My Opt-outs.

Greatest weakness in cookie model: opt-out cookie is a promise not to target ads, not a promise not to track.

Do Not Track: the design objectives were universal coverage, no updating, one click. You need to observe suspicious behavior and monitor ad distributions: if there’s really no tracking, then browsers should not behave as if there were. Gave a demo of catching someone fingerprinting—the system they developed is enforceable.

Dan Christen, Microsoft Corporation
IE9 has various privacy features. Tracking Protection Lists come from multiple organizations; you can choose which third-party list to use. You can turn it on or off easily (for example, if content you want to see is blocked). You can personalize it and see what sites are tracking you. IE9 does implement the header approach for Do Not Track: an HTTP header and a DOM property, sent when a Tracking Protection List or personalized tracking protection is enabled. The W3C has scheduled a workshop on possible further standardization of a Do Not Track signal. Mozilla and Stanford submitted another proposal to the IETF on Do Not Track headers.
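A minimal server-side sketch of the header approach described here (the `DNT: 1` field name and value follow the header proposals; the function name is my own):

```python
def wants_do_not_track(headers):
    """Return True if the request carries a Do Not Track preference.
    Under the header approach, the browser adds 'DNT: 1' to every request
    when the user has enabled the feature; the matching DOM property
    (navigator.doNotTrack) exposes the same state to page scripts."""
    return headers.get("DNT", "").strip() == "1"

# A tracker that honors the signal would skip profiling for such requests.
print(wants_do_not_track({"DNT": "1"}))   # True
print(wants_do_not_track({}))             # False
```

The appeal for browser vendors is that this requires no list maintenance at all: one header, sent with everything, interpreted (or ignored) by the recipient.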

A lot of activity on the browser side; Mozilla and IE are both pushing the header approach. Process should be allowed to continue to form a standard with wide participation.

Joseph Turow, Annenberg School for Communication, University of Pennsylvania
Words used in public discussion to describe what’s going on: queasy, icky, creepy. When lawmakers invoke the ick factor as a reason, society has a problem. We have a situation in which policymakers and industry are not converging on the fact that there is a problem. Ad executives sense that policymakers haven’t worked through the issue well enough to present a succinct, logical argument about harm; they say, what’s the problem, given that we have COPPA, HIPAA, and Gramm-Leach-Bliley? To them, the concern is just psychological, not real. A continual theme in the literature is that the public has been misled. Executives say the antidote to customer distaste is anonymity.

But look at what anonymity means in practice. We have to discuss what’s actually taking place. What is at stake even with anonymity is social discrimination via reputation silos. Issue is ability to control identity, sense of self, and notion that other people are defining them without their knowledge or control. Personalization goes beyond whether people buy. Ads and discounts are status symbols; they alert people to their social position. So Lincoln is offering an ad to heavy NYT readers—if they click on the ad, they get the rest of the year of the NYT free. How do I get that ad?

In the future, these calculations of our marketing value may become routine parts of the info exchanged about people through the media system. Whether they know your name or not will be irrelevant. Technologies developed for advertising allow targeting individuals with personalized news and entertainment. This is already happening—targeting TV ads to subscribers via cable—and the logic is becoming more urgent to advertisers and publishers. It’s not just the impact on individuals, but on media ecosystems: what magazines, newspapers, and TV shows are. The “church/state” wall between editorial and advertising is falling apart. Ads will be packaged with news tailored both to the advertiser’s goals and to the targeted individual.

So will people get angry about the segmentation/discrimination, or will they just learn to live with it (as they have largely accepted skyrocketing inequality elsewhere)? Industry’s real hope is that people will not use do-not-track lists. A few people who know will use them; so what. Public ignorance of the potential implications will not alleviate the long-term dangers. We need information respect based on information reciprocity. Behavioral targeting is just one facet of this—3% of online activity. We need to look at what publishers do with our data generally.

If companies want to use information about individuals, let people know where those notions originated and negotiate about them. Permission may well raise the cost of using that information. But the payback will be in veering away from divisive marketing.

Julia Kernochan Tama, Venable LLP
Consumers may be unaware of data collection/third party involvement, or worried about choice. But consumers may also value free content and relevant ads.

Self-regulatory principles, from major players: education, transparency, consumer control, data security, consumer notification for change in how data is treated, special procedures for treatment of sensitive data, and accountability for implementing the other principles. Transparency requires multiple mechanisms for clearly disclosing data collection and use practices, and control provides for mechanisms to give users the ability to choose whether data is collected/used and transferred to a non-affiliate. Thus, the “advertising option icon” (I heard a lot about this at the ANA last week too).  Consumers can go to aboutads.info to learn more about targeting and go to the opt-out page.

Accountability: BBB and Direct Marketing Association have accountability/monitoring programs. They identify noncompliant entities, follow up with them, and refer uncorrected noncompliance to the government, since being a member of these organizations while not complying with their requirements is deceptive.

Next steps: consumer and business education, promote industry participation, keep developing flexibly in response to technological change—mobile platforms, international integration, treatment of sensitive data. Flexibility is the strength of self-regulation.

Lee Tien, Electronic Frontier Foundation
We believe that most activity online is First Amendment activity: speaking, reading, association. Strongly protected against government regulation. The right to engage in these activities, including to do so anonymously, is very important. This does not depend on whether the information is “sensitive” in the sense of being medical. EFF is not focused on advertising, though we recognize it’s an incentive for collecting information—it’s not necessarily what needs to be controlled. Our concern is the surveillance. There is tremendous risk to privacy from government because of the accumulation of repositories of information either specifically or generally about preferences (to which the government may then seek access). Thus, he wants to think about when the info loses business value and should be destroyed, because it may never lose value to the government. So let’s work on preserving business value without creating a civil liberties sinkhole.

If we think we’re anonymous, we will say things we wouldn’t otherwise say. That’s a First Amendment freedom. When people are misinformed about the true anonymity of their activities, they make mistakes. It’s meaningless to talk about informed consent; people know what they do, but they don’t know what they do does. Though EFF supports Do Not Track to make clear how big the problem is, how many people will use it?

“Voluntary consent”—is this redundant? Fourth Amendment cases show that it is really hard to talk about the conditions under which consent was obtained. Who is really making the decision? A lot of times, it’s your browser.

Reidentification: 33 bits of information are sufficient to identify you uniquely—there are only about 7 billion people in the world. Knowing your hometown, if it has 100,000 people, is worth about 16 bits; a ZIP code can narrow things much further. It’s not the data you’re looking at that matters, it’s all the data out there—many bits of data can be put together to pinpoint you.
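Tien’s arithmetic checks out directly (the 7-billion figure is the rough world population he cites):

```python
import math

WORLD = 7_000_000_000  # roughly 7 billion people

def bits_to_identify(population):
    """Bits needed to single out one person in a population."""
    return math.log2(population)

def bits_gained(anonymity_set):
    """Information gained by a fact that narrows you down to
    `anonymity_set` people (e.g., a hometown of 100,000)."""
    return math.log2(WORLD / anonymity_set)

print(round(bits_to_identify(WORLD), 1))   # 32.7 -- i.e., "33 bits"
print(round(bits_gained(100_000), 1))      # 16.1 -- the hometown figure
```

Each independent fact adds its bits, which is why a handful of individually innocuous attributes (ZIP code, birth date, browser configuration) can together exceed the ~33-bit threshold.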

Mayer: there’s a void of information about behavioral ads. We don’t know how much more valuable it is. We don’t know how much more effective it is. We don’t know how widespread it is. The numbers tossed around earlier (yesterday) are from a small number of sources. What he’s seen suggests that behavioral advertising is more profitable, but not much more; this is status quo and might change.

We need the right incentives. Do not track could be a generative technology, allowing others to build on top of it—means of enriching ad value without compromising ad quality. You can do interest-based targeting without tracking. The user or the browser could list/transmit their interests, instead of a unique identifier. This could create value without privacy problems.
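A sketch of the alternative Mayer describes (the category map, site names, and function names are all hypothetical): the interest profile is computed on the device, and only coarse categories, not a unique identifier, are sent with the ad request.

```python
# Hypothetical local lookup, shipped with the browser rather than
# maintained server-side from a tracking profile.
CATEGORY_MAP = {
    "espn.com": "sports",
    "nytimes.com": "news",
    "kayak.com": "travel",
}

def local_interests(history):
    """Compute interest categories on the device from browsing history."""
    return sorted({CATEGORY_MAP[s] for s in history if s in CATEGORY_MAP})

def build_ad_request(history):
    """Only coarse categories leave the device -- no unique ID, so the
    ad server can target interests without linking requests to a person."""
    return {"interests": local_interests(history)}

print(build_ad_request(["espn.com", "kayak.com", "example.org"]))
# {'interests': ['sports', 'travel']}
```

The design choice here is where the profile lives: the targeting signal still exists, but the linkable history never leaves the user’s machine.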

Turow: we’re at the beginning of the new world. Other kinds of tracking/utilities will develop. The real change will happen when TV—real TV—gets into the picture. People will be looked at individually and in household terms; tech already exists to send different ads/offers to people.

Tama: we know that behavioral advertising adds more value; the internet is largely ad-supported, and companies need to experiment with other ways of supporting content. It’s all in flux. What would happen if you ended OBA tomorrow? No one knows, but there’s a concern that content and innovation would dry up. Senator McCaskill: we need to exercise caution so we don’t kill the goose that laid the golden egg.

Tien: we have a general problem larger than this one: our reliance on new media entities for services—ISPs see everything you do and thus threaten your privacy.

Christen: Value of each additional bit from the browser perspective will evolve over time as the tech evolves; the key is consumer choice.

Mayer: the goose is not at risk. OBA is relatively new; the goose is older. Even if you killed OBA, the goose would live (the state of advertising a few years ago).

Q: Talk about the distinction between opting out of use and opting out of collection? Those self-regulatory principles talk about opting out of collecting and using information “for behavioral advertising”—does that prohibition modify “collecting” information, or only “using”? One way to satisfy the requirement is not to collect, but another way is to collect for another purpose. Won’t at least some industry members interpret this in the most favorable way for themselves?

Tama: the principles are about use in OBA. What if you place a cookie—can the company still collect info for analytics and other ad delivery purposes? Yes, those activities are fundamental to the current model, and typically don’t raise the same privacy concerns. Frequency capping: I don’t want to get the same ad 200 times and the company doesn’t want to show that to me either. Capping doesn’t reveal anything about me, just allows efficiency.

Mayer: Many industry participants are interested in doing the minimum necessary to avoid government intervention. Back in the 90s, FTC got interested and industry formed NAI, which languished for a decade. Now the FTC is back, and so is NAI. The clickthrough rate on the advertising icon is .0035% and the overall opt-out rate is .00014%. When surveys indicate that the majority of users would like to opt out, we have a real market failure in users expressing preferences. We want to cut through the ambiguities in language and the weird ad icon interfaces and provide a mechanism so that users can easily find a clear policy statement. We want to design this with the consumer in the center; the incentives on the other side are misaligned.

Tama: self-regulatory effort is still being rolled out. She’s only seen a couple of the icons in the wild and the industry is working hard to increase awareness. Opt-out rates can mean that consumers aren’t concerned enough about it to make a change. There are ongoing developments like Google’s Chrome browser and we need to see how we can fit new tech into existing self-regulation processes.

Tien: disagrees about what can fairly be inferred from low opt-out rates. There is an attempt to frame discrepancies in behavior as lack of concern for privacy, but that’s really hard to believe given some of the specific things research has shown consumers don’t know. One of the most important: consumers don’t understand in the first place what a privacy policy is—they think the existence of a privacy policy means that the company is not sharing information. Given that, they may act unconcerned because of false beliefs. (It’s not as if they’re aware of the details of who’s serving the ads; they can easily believe that, as in the age of print, it’s the Washington Post that intermediates between them and the advertiser.)

Christen: it’s early to look at response rates—he hasn’t even seen one of those icons yet. It will take time to do education. (RT: Of course we know a lot about how asterisks and other disclosures work, or don’t, already; the icon is not the first teeny addition to the main text of an ad to which consumers have been exposed.)

Q: What are the international implications? So much of this is independent of boundaries.

Christen: that’s why we recommend tools for users to protect themselves.

Tama: her group wants to figure out how compliance with US self-regulation should count for European data protection rules. The ideal is for the self-regulatory standard to become the standard everywhere. (Yes, I bet it would.)

Panel Three: Youth-Oriented Online Advertising
Moderator: Seeta Peña Gangadharan, Yale ISP

Mary Engle, Federal Trade Commission
FTC is the enforcement agency for deception and unfairness in advertising. Disclaimer: these views are her own and not necessarily those of the Commission or any individual commissioner. The FTC has long been interested in protecting children against unfair/deceptive ads—it brought a case against free distribution of razor blades with Sunday papers as dangerous to kids/pets. In the 90s, advertisers encouraged kids to call 900 numbers that cost $2/minute without the kids knowing the cost; ads showed toys performing in ways they couldn’t in real life.

Now, kids have huge purchasing power and spend 7½ hours/day with media. Digital natives are facile with technology, and there’s a temptation to forget that tech prowess doesn’t mean similar levels of emotional maturity. They are impressionable, bad at assessing risks, bad at delayed gratification, and often naïve about the intentions of others. Some practices may not cross legal lines but are still appropriate targets for self-regulation.

COPPA: the Children’s Online Privacy Protection Act regulates collection of personal information on websites directed at children under 13, or where there is actual knowledge that children under 13 are users; verifiable parental consent is required. The FTC is currently reviewing the rule, which hasn’t been amended since the rise of social media. One issue on the table is the definition of “personal information,” which right now includes name, address, city, etc., but not a lot of the kinds of info collected via behavioral advertising. The statute gives some flexibility: personal info is any info that allows online or physical contact with an individual. Another issue: geolocation data, now that young children have smartphones.

2009 behavioral ad report: teens are most likely to be visiting general audience sites. Issues include protecting teens from making mistakes that will haunt them forever. Facebook may have some idea of the age of a visitor, but most sites don’t—what do we do then? Some sites can be expected to do vigorous age identification—buying wine online, for example—but not all. Also, teens deserve autonomy as well as protection—they have free speech rights as speakers and to access to information.

Kathryn Montgomery, American University
The marketing we acted on to get COPPA seems very rudimentary now, but advocates got regulatory action even though the industry said it would never happen. Kidscom.com advertised as a safe place online, but sold all sorts of stuff—this was the kind of thing that led to COPPA. Ads aren’t supposed to come into a child’s room and collect information from that child. We need rules of the game for marketing to kids under 13.

Digital media help children developmentally: exploring identities, self-expression, relating to peers, autonomy. This isn’t just about serving standard ads online. Food and beverage companies are at the forefront of innovative marketing techniques. Example: Doritos’ award-winning Asylum 626 campaign. “The more information you gave us at registration, the creepier the experience.” (This is a quote from the ad, not from Montgomery.) They used Facebook Connect to pull two of the teen’s friends and “put” them in the asylum, allowed the teen to pick which one to save, then invited the teen’s entire social network to try to “save” them. Then they “forced” the user to take the position of torturer in order to finish the experience (and users needed to buy Doritos to get codes to unlock the final level). The idea was engagement with brands (and, apparently, with the position of torturer)—connecting brand identity with the teen’s own identity. Brands are measuring consumers’ emotional connections with brands. Note the immersive, cross-platform nature of new media: virtual/gaming reality putting the user in a subjective state, inducing “flow” and making them more inclined to accept the ad message.

Personalization: one-to-one tailoring of ads. User generated content: youth aren’t just viewing ads but creating them and distributing them—they take ownership of the ad content. Very different from traditional understanding of ads for children. Marketing fully integrated into social and personal relationships and daily lives of young people, and this is only the beginning.

What do we do? We need a regulatory framework—a combination of self-regulation and government rules. Pay particular attention to adolescents, who are not made invulnerable by their cognitive abilities. Teen brains aren’t fully developed; they have emotional vulnerabilities to this kind of marketing, particularly around health, privacy, and socialization into a new marketing system. Media literacy is important, but not sufficient. We need a dialogue about fair marketing principles. And we don’t just want to look at kids: privacy safeguards are for all consumers.

Wayne Keeley, Children's Advertising Review Unit
CARU is a self-regulatory unit of the BBB, formed in 1974. It addresses children under 12 and, for privacy, under 13. The guidelines operate as a safe harbor for FTC purposes. 50-100 cases/year, all transparent. CARU monitors thousands of commercials and hundreds of websites per year. 30-40% of cases are COPPA-related; the rest involve general advertising. It also prescreens storyboards and rough cuts for advertisers, catching issues early.

Leslie Harris, Center for Democracy and Technology
How do we take advantage of current concern to get real results? In some ways COPPA is remarkable: it has never been challenged, and there’s a great deal of industry alignment with the law. That’s no accident: there was very strenuous debate about whether to include teens, driven by civil liberties groups, librarians, reproductive rights groups, etc. (allied with industry). Taking teens out of the bill limited the constitutional questions; COPPA didn’t try to boil the ocean—it didn’t produce a full code of conduct, but was directed only at sites directed to kids or sites with reason to know/actual knowledge that kids were there.

COPPA has had some salutary effects, like increased parental involvement with kids online, but also some unintended consequences. Very few sites are directed to kids—large brands dominate; smaller players/innovations have been pushed aside because there is in fact a cost to compliance. COPPA also led us to think that parental consent is the gold standard and that consent solves all problems. Consent is no more a gold standard than opt-out is—it puts all the burden of figuring out what to do on the parent. People think they’re being given a safety seal, but we have not spent enough time on norm-setting. We need to ask on the other side of the transaction: what should the norms for kids be? Not just what the obligations for parents should be. Should we ever use behavioral ads on kids under 13?

FTC needs a more robust idea of what counts as an unfair practice, especially as new law is likely to be unworkable. CARU is inadequate, given the interconnections in the online environment. How long should companies hold on to their data? Self-regulation should create norms, and then over time the FTC can start enforcing them. Unfairness and deception can be more powerful than currently interpreted.

People believed that age verification would develop and make things easier. The jury has now come back: we don’t have effective age verification, which is why COPA was struck down. Every entity looking into this has come to the same conclusion. (Compare what’s going on in the UK with internet content filtering implemented by the major mobile provider, O2.)

For teens: does it make sense as a matter of public policy to provide new protections for 17-year-olds that we won’t make available to 18-year-olds? Or to possibly vulnerable 80-year-olds? Some possibility of a comprehensive privacy law; the US stands almost alone in the developed world in having no ground rules generally for fair information practices. Risk that we will go back to doing what we’ve done all along—attacking the bright shiny object, so we have a privacy act specific to home video renting instead of a real comprehensive law. Her biggest nightmare: do not track for kids.

Montgomery, in response to a question about teens who say they don’t care about ads: teens don’t always know what’s affecting them. They can feel invulnerable and feel that they’re not being influenced, but ads aren’t presenting themselves as boring things you click on; they’re entertainment vehicles to be involved with. It’s our responsibility to ensure that young people understand more about the environment and have some safeguards against deceptive/unfair practices. Ads directed to that age group are mainly pushing unhealthy products high in fat and sugar. The food industry has not wanted to pay any attention to self-regulation for that group, as opposed to kids under 12.

Harris: some people in the industry fear that self-regulation norms for teens will be enacted into law and that’s why they’re reluctant to act.

Engle: we definitely can’t rely on self-perception of whether ads work. Even if half of ad dollars are wasted, half isn’t.

Michael Rand, Baruch: where do parents and teachers come in?

Montgomery: Does not support parental consent for teens. There is a role for education, but media literacy is weak in the US; it’s not in all schools. People who do media literacy are ill-equipped to deal with the complexity of things like Asylum 626. Research hasn’t shown that media literacy training works over the long term. Need multiple strategies, including self-regulation, education, and government. Young people may otherwise have no sense of what their privacy is worth.

Keeley: Canada has media literacy as part of core curriculum; we should look at that.

Engle: the FTC instituted an ad literacy program aimed at tweens. Admongo.gov is an interactive video game allowing kids to spot ads where they appear and ask questions about them. In partnership with Scholastic, the FTC created a curriculum for teachers that any school can use. It remains to be seen how it will work, but the game is very engaging.

Q: we all tried to get comprehensive legislation in the 90s; the industry was opposed even to protecting children. Told us “we’ll never give you teens” because that’s the most lucrative market, and that’s still true. Global marketing research: children and teens are ground zero for research being done everywhere—to inculcate not just the ad modalities but the data collection, which can’t be separated out. If you want to see what online marketing to kids is really like, go here.

Harris: raising all boats and getting self-regulatory safe harbors to develop fair information practices for industry segments will get a better result than constantly segmenting populations.

Montgomery: we’d call for measures within a larger framework addressing the special vulnerabilities of teens, who may not be as familiar with privacy issues.

Tien: FTC unfairness jurisdiction: If we imagine a situation where in the next year nothing happens on the legislative level, then is there a way for the FTC to more actively use the notion of unfairness?

Engle: traditionally we’ve looked for economic injury or physical harm for harm that can’t reasonably be avoided by consumers (part of the unfairness standard)—can harm to privacy be included? We had a case against Sears when Sears offered rewards for consumers who agreed to have their online behavior tracked, but what was disclosed in small print was that it would look at online buying behavior and other sensitive information. FTC alleged deception for failure to adequately disclose what info was collected; but suppose they hadn’t—could we allege that collecting the info was unfair? Sears didn’t do anything with the info—thought it would be cool to have, but couldn’t figure out what to do with it. Hard to bring that case as an unfairness case. Active debate in the commission now about boundaries of harm.

Spyware: brought a case against a remotely installed keylogger tracking everything, including passwords. Experts could testify about potential financial harm and even risks from stalkers, but the harm should be cognizable even before those consequences materialize.

Harris: in looking at fact patterns, it is frustrating to think that the FTC can’t stop the manipulation of kids to disclose a lot of information. Goes back to kidvid days when the FTC tried to regulate sugary food ads for kids as unfair.

Engle: Congress took away our ability to regulate such ads for 14 years, and then codified the unfairness policy statement but said we couldn’t rely solely on public policy to show unfairness.

Montgomery: example of heavy constraints placed on FTC by industry.

Q: In Japan, age verification is done offline.

Keeley: there are a lot of models; in Europe they go up to 16 and the industry is looking at what can be done.

Harris: she’d oppose any parental consent for teenagers. Japan does have a new law, but she doesn’t think it’s particularly strong though it’s hard to criticize given that the US has no law. Credit card verification: anyone could be holding that credit card. It’s hard to talk about age verification without talking about identity, which is a bigger conversation.
