I knew Emily Bazelon from her work in Slate and on the Slate Political Gabfest podcast and always thought she was fantastic. When I heard she had written a book on bullying, I made it a point to read it, but never got around to it until now (thanks, summer break).

It’s every bit as good as her other work, and deftly avoids the trap that a lot of non-education writers fall into when writing about education for the first time: overgeneralizing from anecdotes and small sample sizes.

One section stuck out to me in particular. Bazelon jumps through hoops to finally secure a meeting with employees at Facebook to talk about how they handle cyberbullying complaints. Here’s the key excerpt:

…I found Willner among the rows of tables where his reps scrolled through the reports of bullying, harassment, and hate speech. They sat across from the Safety team (for suicidal content, child exploitation, and underage users) and near the Authenticity team (for complaints of fake accounts). The Authenticity reps, I noticed, had clear bright lines to follow: accounts that aren’t attached to real names and email addresses must come down, period. The Safety reps, meanwhile, were backed by Microsoft software called PhotoDNA, which Facebook has piloted because it can identify images based on their digital signature, even if they’ve been cropped or changed. For the twenty to thirty warnings about suicidal posts that Facebook averaged per week, the site had a partnership with a suicide prevention center, which could pop up via IM chat on the page of a potentially suicidal user.

The Hate and Harassment reps, however, could not rely on clear calls or technological whizbangery as they slogged through their reports. “Bullying is hard,” Willner said. “It’s slippery to define, and it’s even harder when it’s writing instead of speech. Tone of voice disappears.” Willner’s team tried to come up with algorithms the site could run to determine whether a post was meant to harass and disturb, but hadn’t found anything that worked. Context was everything. He gave me an example from a recent abuse report he’d seen complaining about a status update that said, “He got her pregnant.” Who was it about? What had the poster intended? Looking at the words on the screen, there was no way for Willner’s team to tell. 

“Is it knowable whether that’s harassment or not?” he asked. 

For all of the ways in which technology has made the unthinkable possible, what a shame that there is no way for a bot to root out cyberbullying across these platforms. (I will leave it to the more skeptical out there to ask whether some of the more nefarious networking apps would even want to eliminate this.)
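To make the rep’s problem concrete: any text-only classifier, however sophisticated, is a function of the words alone, so it has to return the same verdict for the same post in every context. Here’s a toy sketch in Python (my own illustration, not Facebook’s actual system):

```python
# Toy illustration (my invention, not Facebook's pipeline): a text-only
# classifier can depend only on the string it is handed.

def classify(text: str) -> str:
    # Stand-in for any model, from a keyword list to a neural net.
    hostile_words = {"loser", "ugly", "worthless"}
    return "flag" if any(w in text.lower() for w in hostile_words) else "allow"

post = "He got her pregnant."

# Context 1: a friend sharing happy family news.
# Context 2: a rumor posted to shame a classmate.
# Identical input forces an identical output, so the classifier
# gives the same answer in both cases:
print(classify(post))  # "allow" -- right in one context, wrong in the other
```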

But the difficulty the Facebook rep describes in determining what is or isn’t bullying goes much deeper than the basic questions he asks (“Who was it about? What had the poster intended?”). Even if we imagine this as a real-world interaction, perhaps a comment overheard in a hallway, would a stranger be able to tell whether it amounted to bullying? In other words, would it meet the now widely accepted definition of being (1) aggressive, (2) marked by an imbalance of power, and (3) repeated?

Probably not. So identifying bullying in general requires that the identifier (for lack of a better word) be fully aware of the social power dynamics of the situation and the participants, as well as their history. That is incredibly difficult to know if you are not on the ground, building an understanding of peer dynamics in your building over the long term.
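Put in code terms, the three-part test itself is trivial to write down; what’s hard is filling in its inputs, every one of which is social context that never appears in the post itself. A hedged sketch (the field names are mine, not any real moderation schema):

```python
from dataclasses import dataclass

# Hypothetical sketch: the three-part definition of bullying is easy to
# encode, but each input below lives outside the text of any single post.
@dataclass
class Incident:
    is_aggressive: bool     # requires knowing the poster's intent
    power_imbalance: bool   # requires knowing the peer hierarchy
    is_repeated: bool       # requires history across posts, apps, hallways

def is_bullying(incident: Incident) -> bool:
    # The rule is one line; gathering its inputs is the hard part,
    # and that is the part adults on the ground are positioned to do.
    return (incident.is_aggressive
            and incident.power_imbalance
            and incident.is_repeated)
```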

Preventing cyberbullying will always be easier than responding to or punishing it after it has already happened (though both are incredibly difficult). Kids will simply always be ahead of even the most tech-savvy adults. (There’s an analogy here with professional athletes: as the leagues tighten anti-doping testing, athletes find new and more inventive ways to subvert it.)

The Facebook rep’s comments about the impossibility of detecting cyberbullying algorithmically (and in the two years since Bazelon’s book, I think most youth cyber-interaction has actually moved off of Facebook and onto darker, harder-to-monitor platforms) are disheartening in one sense and yet empowering in another: the very reason technology can’t stop bullying is the same reason adults on the ground with kids can. We’re there to see, to listen, and to (try to) understand.
