Tuesday, March 13, 2018

Today, Adrian Chen Makes A Case That Russiagate Is A Red Herring

This blogger is skeptical of the Russiagate mania. Instead, the refrain of "Follow the money" from the Watergate investigation resonates all these years later. Further, along that money-line, Spiro T. Agnew was forced to resign as Vice President of the United States on October 10, 1973, not for Watergate-connected activity but as part of a plea bargain on charges of bribery, tax fraud, extortion, and conspiracy. Those charges were brought not by a Special Counsel but by George Beall, the US Attorney for the District of Maryland. Agnew accepted a plea bargain that required his resignation as VPOTUS as well as a plea of no contest to a single charge of tax evasion in return for a $10,000 fine, no jail time, and the later payment of $13,551.47 in back taxes and interest. If Special Counsel Robert Mueller III follows the money and finds enough evidence of bribery, tax fraud, extortion, and conspiracy, the result might be a plea bargain that lets the current occupant of the Oval Office avoid jail time. If this is a (fair & balanced) political possibility, so be it.

[x New Yorker]
A So-Called Expert’s Uneasy Dive Into The Trump-Russia Frenzy
By Adrian Chen


Whenever the Internet Research Agency is in the news, I get a sinking feeling in my stomach. I was one of the first US journalists to report extensively on the St. Petersburg-based “troll farm,” which was named in the indictment that Robert Mueller, the special counsel investigating Russian interference in the 2016 election, issued last Friday. As a result, I am often portrayed as an expert on the Internet Research Agency and Russian online propaganda. In this, I am not alone. The endless unfurling of the Trump-Russia story has occasioned an explosion in the number of experts in “information warfare,” “online influence operations,” “disinformation,” and the like. One reason for this is that the Russians’ efforts tend to be framed as a kind of giant machine, in which talking points generated by the Kremlin are “amplified” through a network of bots, fake Facebook pages, and sympathetic human influencers. The machine, we are told, is so sophisticated that only an expert, well-versed in terms such as “exposure,” “feedback loops,” and “active measures,” can peer into the black box and explain to the layperson how it works.

The thing is, I don’t really want to be an expert on the Internet Research Agency and Russian online propaganda. I agree with my colleague Masha Gessen that the whole issue has been blown out of proportion. In the Times Magazine article that supposedly made me an authority, I detailed some of the Agency’s disturbing activities, including its attempts to spread false reports of a terrorist attack in Louisiana and to smear me as a neo-Nazi sympathizer. But, if I could do it all over again, I would have highlighted just how inept and haphazard those attempts were. That the Agency is now widely seen as a savvy, efficient manipulator of American public opinion is, in no small part, the fault of experts. They may derive their authority from perceived neutrality, but in reality they—we—have interests, just like everyone else. And, when it comes to the Trump-Russia story, those interests are often best served by fuelling the fear of Kremlin meddling. Information-security consultants might see a business opportunity in drawing attention to a problem to which they (for a fee) can offer a solution. Think-tank fellows may seek to burnish their credentials by appearing in news articles—articles written by journalists who, we all know, face many different kinds of pressures to promote sensational claims. (How viral is the headline “Russian Internet Propaganda Not That Big a Deal”?) Even academic researchers, to secure funding, must sometimes chase the latest trends.

But couldn’t I be the sort of expert who tries to downplay the problem, offering a counterweight to others’ opinions? This might be appealing if the issue were being hashed out in obscure scholarly journals, rather than in an atmosphere in which every skeptical utterance about Trump-Russia becomes pro-Trump propaganda. Rob Goldman, Facebook’s vice-president for advertising, learned this lesson the hard way. Late last Friday, he argued on Twitter that, because the majority of the Internet Research Agency’s Facebook ads were purchased after the election, the group’s goal must have been not to elect Donald Trump but “to divide America by using our institutions, like free speech and social media, against us.” Perhaps Goldman hoped that, by portraying the Russians’ machinations as nonpartisan, he could appear to take the problem of online disinformation seriously without offending Trump’s supporters. But Goldman’s caution backfired. Trump triumphantly retweeted him, writing, “The Fake News Media never fails. Hard to ignore this fact from the Vice President of Facebook Ads, Rob Goldman!” In the next few days, Goldman was pilloried by the President’s critics; many pointed out that, according to the Mueller indictment, the Agency’s specific aim was to undermine Hillary Clinton and boost Trump. Goldman later apologized to his company in an internal message.

You can see how wielding my expertise has always felt like a lose-lose proposition. Either I could stay silent and allow the conversation to be dominated by those pumping up the Russian threat, or I could risk giving fodder to Trump and his allies. So, last week, when the Agency once again became the focus of the Trump-Russia story, I ignored the many media requests in my in-box and wrote a couple of short articles instead, including one about a brief telephone conversation I’d had with the alleged executive director of the Agency, Mikhail Burchik. Then, on Monday afternoon, I received an e-mail from a booker for “All In with Chris Hayes,” on MSNBC. They wanted to have me on to talk about Burchik. Figuring, naïvely, that in discussing this one development I’d be able to avoid dealing with knottier questions, I agreed.

The segment began innocuously enough. Hayes asked me about an appearance I had made on the Longform podcast, in 2015, in which I mentioned offhand that many of the accounts I had followed while reporting my Times Magazine story had switched from posting negative information about Obama to positive information about Trump. The Agency, I’d suggested with a laugh, must be pursuing “some kind of really opaque strategy of electing Donald Trump to undermine the US.” The fact that I was considering this possibility struck me at the time as a worrying sign that I had internalized the paranoia that defines Russian propaganda itself, which sees in every bad thing that happens to Russia the hidden hand of the United States. Both the Trump campaign and the idea of a Russian troll operation to elect him seemed like a joke back then, and I said as much to Hayes.

The last question was the one I had hoped to avoid. “It seems like, in some ways, it’s a remarkably effective model,” Hayes said, referring to the Agency’s operation. “You don’t have to pull off some enormous thing. You just have to kind of be in people’s consciousness enough, constantly, in this sort of irritant way, with ninety people you’re paying, running an operation that doesn’t cost that much money. It does seem like a good bang for your buck.” I disagreed. I said I didn’t think that what amounted to a social-media marketing campaign—one whose supposed architects had a rudimentary grasp of the English language—could sow so much discord on its own. One could argue that ninety people is about what it would take to run the digital operation of a modern Presidential campaign—to shift votes in a candidate’s favor. But numbers tell only a part of the story. In the indictment, Mueller’s team reveals that the Agency didn’t discover the idea of targeting “purple states” until June, 2016, when a Texas-based conservative activist introduced them to the term. Cambridge Analytica this is not.

The morning after the Hayes interview, I woke up to find that a journalist named Aaron Maté had clipped the video and tweeted it, along with the comment “OMG, a sober/informed Russia take on MSNBC!” (Last April, Maté argued in The Intercept that Rachel Maddow, the network’s most popular host and a strong advocate of the notion that the Trump campaign colluded with Russia, was leading her viewers on “a fruitless quest.”) The clip, which I retweeted, spread faster than anything I’d written or said about the Agency since the original article. Within a few minutes, I had been retweeted by Julian Assange, the founder of WikiLeaks, who relentlessly promotes skepticism about Russian influence. (WikiLeaks, of course, played a role of its own in the 2016 election.) After Assange, various right-wing social-media influencers piled on, including Jack Posobiec, a pusher of the Pizzagate conspiracy. Some current and former employees of RT, the Kremlin-backed news network, picked the clip up, too. It was also shared by many journalists and liberals who cast it as a welcome bit of reason amid the rising frenzy. Still, I could feel my words slipping away, becoming the foundation for someone else’s shakily constructed argument. The fact that I had been given the rare opportunity to share an opinion on national television seemed pretty much cancelled out by the ways its online audience had put it to use.

But maybe I ought to look at the episode from the point of view of the information-warfare experts. They would see me as involved in an “influence campaign” to propagate my view that Russian influence wasn’t all that influential. According to the standards commonly used by such experts—namely, social-media metrics—my campaign was stunningly successful: my tweet about the Hayes segment garnered seventeen hundred retweets and thirty-nine hundred likes, for a total of more than 1.2 million “impressions”; Maté’s video clip has received more than a hundred and ninety thousand views.

If the metrics testified to my enormous influence, why did I feel so powerless? This question illustrates the problem with treating the spread of information as primarily a numbers game. It removes any agency from the equation, seeming to hand control of Americans’ thoughts and opinions to a roomful of young Russians in St. Petersburg; it ignores people’s tendency to share information that they already agree with; and it sees evidence, in the spread of that information among self-interested groups, of some grand design by a mastermind propagandist. I clicked through the profiles of the hundreds of people sharing my tweet and found a nearly incomprehensible whirl of agendas, egos, grudges, and strategies. I was supposed to be influencing this?

At a certain point, it occurred to me that my reluctance to opine about the Internet Research Agency was a product of the same logic that I was arguing against. I was judging my words and beliefs by how they might appear to the perfect propaganda machine, whether pro-Trump or anti-Russia. My caution gave the machine far more credit than it deserved. # # #

[Adrian Chen joined The New Yorker as a staff writer in 2016. Previously, he was a staff writer at Gawker, from 2009 to 2013. His stories on Internet culture and technology have appeared in the Times Magazine, Wired, MIT Technology Review, The Nation, and New York magazine. He is a founder of IRL Club, a live event series about the Internet, and a former contributor to the Onion News Network, the Onion’s first online video series. His story for Gawker exposing a notorious Internet troll won a 2013 Mirror Award from Syracuse University’s S. I. Newhouse School of Public Communications. Chen received a BA (sociology) from Reed College (OR).]

Copyright © 2018 The New Yorker/Condé Nast Digital



Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2018 Sapper's (Fair & Balanced) Rants & Raves