The unfortunate human element in news sites

Journalists could use a bit of robot DNA.

You’ve heard, I guess, about the software bots that the Associated Press is using to write thousands of corporate earnings reports? Writerbots have been around for a while — here’s NPR on what was then called StatsMonkey in 2010. And they’re coming on strong lately; see Slate on the LA Times’ Quakebot.

I’m not too upset by this trend. Indeed, I’d like to see more of it.

Take the AP’s earnings reports, like this one on GE. They do serve users: they condense and standardize the information from the corporate filing and provide a comparison with analysts’ estimates. As a former business editor, I know that dragging earnings briefs out of reporters was an onerous task. I presume the AP bot will never insist that it has more important stories to work on.

There are risks. The bot is only as good as the information fed into it; no chance (I’m guessing) of it spotting a typo. And corporate earnings are notorious for hiding bad news. If, as the AP promises, the bot frees up human reporters to spend more time examining reports for those hidden delights, that’s great. But if I were a corporate spokesperson, I’d think the odds had now shifted in my favor.

On the whole, though, using journalism bots to generate the kind of items that humans were producing by rote anyway won’t change much for users. And we’d be better off if we could automate even more.

Take, for example, the statements of reporters on social media. CNN reassigned a reporter the other day after her tweet referred to some people as “scum” — an epithet she meant to aim only at a few individuals, but didn’t quite articulate clearly enough. A small-town reporter lost his job after commenting on Facebook about the children of a local auto dealer. Then there was the reader representative who called out a commenter for not having “the stones to make your comments using your real name.”

These people should have known better. I know the last one should have known better, because I trained him. And one of our first rules for commenting — or for any participation in social media by journalists — was to remember that you are never just replying to one person. You are talking past that person to the entire audience: anyone who might find the post. Look at your social media activity that way and you realize that the audience doesn’t know your context; that the audience expects you (fairly or not) to be in your work role all the time; and that the audience judges your comments on their own merits, not against the meanness of the person you were responding to.

If only we could automate these things without becoming annoying. I could be rich if I could create a comment robot, sketched after this list, that would

  • Thank commenters for participating.
  • Repeat bits of the story in response to users who didn’t bother to read it before commenting.
  • Link to previous stories to answer users who complain that “you only report bad news about …” or “you never covered this when it happened to …”
  • Point out that while, yes, this is a story about a kitten on a trampoline, the site also has stories about major news events.
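Here is a toy sketch of that robot in Python. Everything in it is invented for illustration: the trigger phrases, the reply templates, the URLs, and the cheerful assumption that keyword matching can classify comments (a real bot would need something much smarter).

```python
import re

# Hypothetical comment robot: match a comment against canned patterns
# and return a polite templated reply. All patterns and replies invented.

CANNED_REPLIES = [
    # (pattern matched against the comment, reply template)
    (re.compile(r"only report bad news|never covered", re.I),
     "Thanks for reading! We have in fact covered this before: {related_links}"),
    (re.compile(r"slow news day|why is this news", re.I),
     "Thanks for commenting! Yes, this one is light; major news is here: {front_page_url}"),
]

# Fallback: thank the commenter and repeat the story for those who skipped it.
DEFAULT_REPLY = "Thanks for participating! From the story: {summary}"

def reply_to_comment(comment, summary, related_links, front_page_url):
    """Return a polite canned reply for a single comment."""
    for pattern, template in CANNED_REPLIES:
        if pattern.search(comment):
            return template.format(related_links=related_links,
                                   front_page_url=front_page_url)
    return DEFAULT_REPLY.format(summary=summary)

print(reply_to_comment(
    "You never covered this when it happened to my neighborhood!",
    summary="The council approved the budget 7-2.",
    related_links="example.com/archive",
    front_page_url="example.com"))
```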

Or maybe we just need automated Twitter editors. We can already type in complete URLs and have them automatically condensed to shortened forms. Why not add the capability to detect and block expressions of anger and sarcasm?
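A minimal sketch of such an editor, assuming a hand-built wordlist is good enough (it is not, really; detecting sarcasm reliably is an open research problem, and every word, phrase, and verdict below is invented):

```python
import re

# Naive "Twitter editor": block obvious anger, hold possible sarcasm.
# Wordlists are illustrative only; this is a speed bump, not a solution.

ANGER_WORDS = {"scum", "idiot", "moron", "pathetic", "disgrace"}
SARCASM_TELLS = ("oh sure", "yeah right", "nice job", "/s")

def review_tweet(text):
    """Return a verdict before the tweet goes out."""
    lowered = text.lower()
    words = set(re.findall(r"[a-z']+", lowered))
    if words & ANGER_WORDS:
        return "BLOCKED: possible anger. Wait an hour, then re-read it."
    if any(tell in lowered for tell in SARCASM_TELLS):
        return "HELD: possible sarcasm. The audience won't have your context."
    return "OK to post."

print(review_tweet("These people are scum."))           # BLOCKED
print(review_tweet("Oh sure, great plan, city hall."))  # HELD
print(review_tweet("Council passes budget, 7-2."))      # OK to post
```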

Heck, I’d settle for a Facebook function that, whenever a reporter posted, would flash a map of the friends and friends of friends who could potentially see the smart remark, to disabuse reporters of the notion that what they’re doing is private.
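Even the back-of-the-envelope version makes the point. All the numbers below are invented, and the overlap factor is a crude stand-in for real graph data that only the platform has:

```python
# Rough "who can actually see this?" estimate to flash before posting.
# Friend counts and the overlap discount are made up for illustration.

def potential_audience(friends, avg_friends_of_friends, overlap=0.3):
    """Rough count of people one and two hops out from the poster."""
    # Discount second-hop reach because friends share friends.
    second_hop = friends * avg_friends_of_friends * (1 - overlap)
    return friends + int(second_hop)

print(potential_audience(friends=400, avg_friends_of_friends=350))
# 98400 -- not exactly a private conversation
```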

While we’re at it, how about a bot to automate the annotation of corrections in online stories? On many news sites, it’s easy to rewrite a story completely and leave behind nary a trace of the original errors. When I was an online editor, we had clear guidelines on how errors should be fixed; a prime rule was that any substantive fix had to be accompanied by a notice to users. But whether those notices were written, and how they were worded, depended on the post’s writer and possibly an editor, and there is a very human tendency to play down our mistakes.

It would be interesting to see a site adopt the kind of version tracking that Wikipedia offers, so users could see exactly how a story changed. It could be tough for a bot to figure out which changes were “substantive,” but at the very least it could alert users that changes were made. I would be fascinated to read the comments that kind of version tracking would bring forth.
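A minimal sketch of that bot, using Python’s standard difflib. Deciding what counts as “substantive” is the hard part, so this version punts: it only detects that the wording changed at all and appends a generic notice. The notice’s wording and the word-level diff are illustrative choices, not any site’s standard.

```python
import difflib

# Hypothetical correction-annotation bot: diff two versions of a story
# and, if the wording changed, return a generic editor's note.

def change_notice(old_text, new_text):
    """Return an editor's note if the story text changed, else None."""
    old_words, new_words = old_text.split(), new_text.split()
    if old_words == new_words:
        return None  # whitespace-only edits need no notice
    diff = difflib.unified_diff(old_words, new_words, lineterm="")
    changed = sum(1 for line in diff
                  if line.startswith(("+", "-"))
                  and not line.startswith(("+++", "---")))
    return ("Editor's note: this story was updated after publication "
            "({} words changed). See the version history for details."
            .format(changed))

old = "The council approved the budget 7-2 on Tuesday."
new = "The council approved the budget 6-3 on Tuesday."
print(change_notice(old, new))
```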

I’m sure reporters would be equally fascinated … or something. Perhaps we’d better make sure the comment bot is ready first.