Careful what you type, Mr. President. Photographer: Andrew Harrer/Bloomberg

There are perfectly serious, and honorable, reasons for the U.S. Secret Service to want to analyze social networks using tools with the "ability to detect sarcasm and false positives." The plan has already been dubbed a "sarcasm detector" and condemned as evidence of government snooping, but what the security agency really wants is to avoid situations like the recent arrest of a 14-year-old Dutch girl for tweeting that she was an al-Qaeda terrorist.

The truly dangerous part of this is not even that the Secret Service wants the new system to be compatible with Internet Explorer 8, which I think antivirus software should quarantine if found outside a museum. It's the agency's apparent willingness to rely on computer technology when it comes to natural language. Though developers would have us believe their linguistic tools are quite advanced, these tools should not be trusted to perform anything but the most rudimentary tasks.

The generally accepted level of accuracy for sentiment analysis -- a branch of computational linguistics that determines the positive or negative slant of a piece of text -- is about 65 percent, though some developers claim higher rates. The French company Spotter, which numbers the European Union, McDonald's and Coca-Cola among its clients, said last year it could identify sentiment with 80 percent accuracy. I'm not going to dispute these claims, but for Secret Service purposes, even 80 percent is far from good enough. I don't think the agency would knowingly accept a 20 percent risk of a false negative.

The Stanford Natural Language Processing Group has a website where one can see a sentiment analysis algorithm at work. The simple demo program attempts to distinguish between positive and negative movie reviews. It is actually more sophisticated than much of the sentiment analysis software on the market now, because it works with entire sentences and contexts, not just separate words. I used the sentence "This movie was as vibrant, exciting and fun as a dead mouse on your doorstep" to test the algorithm, and my "review" was assessed as mildly positive.

The algorithm correctly spotted that my sentence contained a comparison and built a two-branch decision tree. It then ranked the meaningful words on a positivity scale, giving two pluses each ("very positive") to "exciting" and "fun" and one plus ("positive") to "vibrant." "Dead" got a minus ("negative"). The program then scored each branch, added the results together and arrived at its final, mildly positive verdict.
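To see why that arithmetic can go wrong, here is a minimal sketch -- not the Stanford group's actual model, just a deliberately naive word-scoring approach with a made-up four-word lexicon -- of what happens when sentiment is reduced to adding up word scores:

    # Hypothetical polarity lexicon, for illustration only
    LEXICON = {"vibrant": 1, "exciting": 2, "fun": 2, "dead": -1}

    def word_level_sentiment(sentence):
        # Sum per-word polarity scores, ignoring sentence structure and irony
        tokens = sentence.lower().replace(",", " ").split()
        return sum(LEXICON.get(token, 0) for token in tokens)

    review = "This movie was as vibrant, exciting and fun as a dead mouse on your doorstep"
    print(word_level_sentiment(review))  # prints 4: the sarcastic pan scores as "positive"

The flattering adjectives swamp the lone negative word, so the sarcastic review comes out positive -- roughly the mistake the far more elaborate Stanford demo made with the same sentence.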

Although commercial software using machine learning can get much more sophisticated, it still treats language as math. That approach has so far killed all efforts at truly adequate machine translation. The Russian company ABBYY, developer of the popular FineReader text-recognition software, has spent 19 years and $80 million on its natural-language system, Compreno, based on insanely detailed semantic maps, but has so far been unable to market it as translation software. In April, it was offered only as an intelligent search system for corporate networks.

The publicly available translation programs, such as those offered by Google and Bing, require special simplification skills from users to produce accurate results. Ideally, to get a decent translation out of these algorithms, one should have a command of both languages. Neither, of course, can handle irony. There is simply no way a computer program based on mathematical principles can grasp what Russian President Vladimir Putin meant when he wished "bon appetit" to the G7 leaders who met without him in Brussels. The question he was asked -- "How do you feel about the G7 leaders sitting around the table in Brussels as we speak?" -- implied in Russian that they were partaking of a meal, but no program can catch that implication, without which Putin's deadpan reply just sounds silly.

Developers prefer to skip over such subtleties when they make major announcements, such as Microsoft's recent promise to provide simultaneous translation for voice conversations on Skype. Like other machine efforts to process natural language, this, too, will inevitably be as much a source of confusion as of increased understanding.

The Secret Service cannot hope to successfully monitor social networks the old-fashioned way, using human intelligence to interpret messages. Computer analysis, however, can provide only a false sense of security. Although any effort to cut down on "false positives" will be appreciated by posters who sometimes resort to sarcasm (probably about 99.99 percent of all social network users), it won't achieve the agency's goals in any meaningful sense. Perhaps it would be better off concentrating its efforts on other forms of intelligence than combing social networks for advance announcements of terrorist attacks -- or even for signs of pernicious attitudes.

To contact the writer of this article: Leonid Bershidsky at lbershidsky@bloomberg.net.

To contact the editor responsible for this article: Mark Gilbert at magilbert@bloomberg.net.