By DAN MCLAUGHLIN
We have long known that Media Matters for America is a sleazy outfit that exists in large part not to compete in the marketplace of ideas but to drive conservatives out of it. That’s been its modus operandi in the past, in agitating for advertiser boycotts of figures such as Rush Limbaugh and Tucker Carlson, whom Media Matters wants off the airwaves. Now, it has come after Elon Musk over the content-moderation policies of X (formerly Twitter). It appears to have picked on the wrong guy.
On Monday, Musk’s company filed suit against Media Matters and one of its writers, Eric Hananoki, in federal court in the Northern District of Texas. The suit alleges that the campaign by Media Matters and Hananoki to convince sponsors to abandon X involved misleading the advertisers about the possibility that users might see the sponsors’ ads running adjacent to posts with extremist content by neo-Nazis, white nationalists, antisemites, and the like. If X can prove what it alleges in its complaint, Media Matters may be in for the same sort of long, draining, expensive, and embarrassing battle that Fox News faced in the lawsuit by Dominion Voting Systems. And Musk, one of the world’s very richest men and a billionaire many times over, has the deep pockets and the motivation to keep at it for a very long time.
The Staged Experiment
Upon buying Twitter and rebranding it as X, Musk did a lot of initial good in breaking the social-media industry’s groupthink consensus in favor of one-sided, government-influenced suppression of speech, but the jury is still out on whether he can make his own vision for the company work for X users and the market. (Full disclosure: I’ve been a verified user of the platform now for long enough, with a large enough audience, that I make a very modest amount of money from the ad-revenue-sharing program that Musk introduced to induce widely followed users to contribute to the platform.)
Part of Musk’s initial pledge to make X a more unfettered free-speech platform with less-biased moderation was that he reopened the platform to a lot of people who had previously been banned, and he reduced heavy-handed banning policies. Inevitably, that endeavor has produced mixed results and let a lot of vile people back in the door.
Musk thinks like an engineer. In order to reassure big corporate advertisers, who provide the bulk of X’s revenue, he has touted technical solutions to keep their ads from appearing to X users next to extremist speech. Those include both generalized restrictions on what kinds of posts can carry ads and specific tools that let advertisers prevent their ads from running near posts using specified words or phrases.
Media Matters set out to prove that, in spite of these touted safeguards, X users were in fact seeing ads for blue-chip companies paired with extremist speech. The specific charge by X in its lawsuit is that Media Matters took extraordinary steps to game X’s system in order to evade all safeguards for placing big corporate sponsors’ ads next to extremist content.
According to X — which claims to have tracked what happened with an internal technical investigation — Media Matters was able to produce its examples of ads pairing with extremist content only by rigging its test with user behavior so atypical that no other user would encounter the same ad/content pairings. The alleged conduct is something on the order of a researcher forcing a lab rat to drink water until its stomach burst in order to prove that water is dangerous.
As the complaint’s allegations detail:
First, Media Matters accessed accounts that had been active for at least 30 days, bypassing X’s ad filter for new users. Media Matters then exclusively followed a small subset of users consisting entirely of accounts in one of two categories: those known to produce extreme, fringe content, and accounts owned by X’s big-name advertisers. The end result was a feed precision-designed by Media Matters for a single purpose: to produce side-by-side ad/content placements that it could screenshot in an effort to alienate advertisers.
But this activity still was not enough to create the pairings of advertisements and content that Media Matters aimed to produce. Media Matters therefore resorted to endlessly scrolling and refreshing its unrepresentative, hand-selected feed, generating between 13 and 15 times more advertisements per hour than viewed by the average X user, repeating this inauthentic activity until it finally received pages containing the result it wanted: controversial content next to X’s largest advertisers’ paid posts. [Emphasis in original]
How unrepresentative was this?
X’s internal user data tells the story of just how far Media Matters went to manufacture an inorganic user experience strictly aimed at creating an interaction between controversial content and big-name advertisers that was seen only by the Media Matters account and then published broadly. . . .
Media Matters set its account to follow only 30 users (far less than the average number of accounts followed by a typical active user, 219), severely limiting the amount and type of content featured on its feed. . . . The representation put forth by Media Matters constituted 0.0000009090909 percent of impressions served on the day in question. Most or all of these pairings were not seen by literally anyone besides Media Matters’ own manipulated account, and no authentic user of the platform has been confirmed to have seen any of these pairings.
So, Media Matters proved that it was possible for an X user who used the platform for the sole purpose of seeing corporate ads matched with extremist content to see those pairings, if the user tried hard enough. It could have published findings showing this in order to argue that Musk’s safeguards were not 100 percent foolproof. That is not remotely what it did.
Here, instead, is how Media Matters portrayed its experiment in a pair of articles by Hananoki. The two key publications are a November 16 article titled “As Musk endorses antisemitic conspiracy theory, X has been placing ads for Apple, Bravo, IBM, Oracle, and Xfinity next to pro-Nazi content” and a November 17 article titled “X is placing ads for Amazon, NBA Mexico, NBCUniversal, and others next to content with white nationalist hashtags” (emphasis added). In each case, the headline sets the tone by promising to show that such placements are an ongoing occurrence rather than something that only Media Matters saw and that was generated in an artificial experiment.
A sampling of what Media Matters wrote:
As X owner Elon Musk continues his descent into white nationalist and antisemitic conspiracy theories, his social media platform has been placing ads for major brands like Apple, Bravo (NBCUniversal), IBM, Oracle, and Xfinity (Comcast) next to content that touts Adolf Hitler and his Nazi Party. . . .
During all of this Musk-induced chaos, corporate advertisements have also been appearing on pro-Hitler, Holocaust denial, white nationalist, pro-violence, and neo-Nazi accounts. . . .
We recently found ads for Apple, Bravo, Oracle, Xfinity, and IBM next to posts that tout Hitler and his Nazi Party on X. . . .
But as hateful rhetoric flourishes on X, the platform’s remaining advertisers are especially affected. [Emphasis added]
In other words, the thrust of the articles was to present these ad placements as an ongoing and recurring problem that Media Matters “found” rather than events it staged in an experiment designed to test the vulnerability of the system. The screenshots presented in the articles gave no indication of their provenance, suggesting to the ordinary reader that these were simply spotted by Media Matters personnel or sent to them by other users.
According to X’s complaint, which is consistent with the way the articles present the screenshots, Media Matters went out of its way to avoid transparency by conducting all of its experiments through a private account that could not be seen by other users:
Media Matters omitted in its entirety its process of manufacturing these ad pairings. It did not include in its article that it created a user that only followed 30 accounts that either belonged to fringe figures or major national brands. Neither readers nor advertisers had any way of knowing that the entire feed was orchestrated to generate the remarkably rare combinations. Media Matters also omitted mentioning in its entirety its excessive scrolling and refreshing, allowing users to believe (falsely) that the “report” was produced under circumstances that were organic and unmanipulated. . . .
Media Matters’ image choice in its smear also functioned to hide the true nature of its report. All images selected contained only the ad and the controversial content, with all other posts absent from view. . . . Media Matters at no point includes images with any information about the account that was exposed to these images; the cropped nature of Media Matters’ deceptive screenshots leaves its profile picture out of frame. [Emphasis in original]
Consistent with its past practice, Media Matters trumpeted the specific advertisers affected, in an open effort to get them to cancel business with X. Apple, IBM, Comcast, and NBCUniversal all appear to have canceled or suspended advertisements — results that Media Matters publicized and celebrated. In the usual case of defamation or its commercial cousin, business disparagement (the main claim raised here by X), it can be difficult to meet the demanding threshold for proving “special damages” directly traced to the statement. Here, the Media Matters reports appear to have been the direct and proximate cause of those losses by X, and to have been written with the aim of causing them.
X has chosen its venue well. The Northern District of Texas, and in particular its Fort Worth division, has a conservative bench, a conservative jury pool, and a relatively fast-moving civil docket — all bad news for a left-wing organization defending a politically charged civil suit. Judge Mark Pittman, to whom the case was assigned, is a Trump appointee. Nor is the district the sort that tends to dispose of cases on motions to dismiss rather than allow them to go forward into discovery. As we shall see, Texas law, while hardly unique on this point, provides some fairly clear guidance in favor of the legal claims brought by X.
Media Matters is also facing a pincer movement, as Texas attorney general Ken Paxton has launched an investigation of Media Matters’s conduct. X’s lawyers include veterans of Paxton’s office. X should be able to get jurisdiction over Media Matters in Texas, given the nationwide nature of its publication and the impact on X’s business in the state (it may have more difficulty getting jurisdiction over Hananoki). Media Matters will likely try to get the case heard somewhere else based on the locations of witnesses and events, but the ongoing probe by Paxton may fortify X’s arguments for keeping the case where it was filed. It may likewise be an uphill battle if Media Matters wants the court to apply the law of some jurisdiction other than Texas. All of these, however, represent uncertainties for both litigants.
The practical challenges of the case will be another matter: Both sides will be highly motivated to seek all manner of discovery ranging well beyond the specific merits of the case. Media Matters may use the opportunity to grill Musk about his public statements and changes to the content-moderation policies. X will probably retaliate by delving into how Media Matters influences corporations, how it is validated as a source by media companies, and what ties it has with the government. Media Matters is well funded by left-wing donor networks (it has received significant funding from Arabella Advisors), but it is unlikely that it can match Musk’s capacity to pay lawyers to do this for years on end.
The Legal Merits
Does X have a case? Assuming that it can prove the facts alleged in its complaint, and that those facts will be judged under Texas law, it would seem likely that the case can survive a motion to dismiss and get to trial.
A pair of defamation suits against Dateline NBC provide examples of how these kinds of cases can go. In 1993, NBC settled a lawsuit filed by General Motors after a Dateline program about allegedly unsafe GM pickup trucks featured a test in which a crash caused a truck to catch fire. NBC insisted that its report was accurate: It showed a real GM pickup truck, it really did catch fire, and (said NBC) GM’s pickups really were prone to that sort of fire. What NBC didn’t tell viewers was that it had rigged the truck with remote-controlled incendiary model-rocket engines to ensure ignition. What deceived the viewers was the rigged nature of the test.
In 2014, a federal appeals court ruled against NBC in a case involving another Dateline segment, this one portraying an insurance broker as tricking or scaring senior citizens into buying annuities. The program showed actual footage of the broker, but the Tenth Circuit concluded that omitting his more cautionary statements could mislead the viewer and support a defamation claim under Colorado law. NBC came back and presented a more detailed defense with complete recordings of the broker’s seminar, and three years later, the Tenth Circuit threw the case out — but only after a fuller review concluded that the program was “substantially true” when considered in its full context.
Media Matters may argue here that its reports were in some sense literally true: It did manage to get the ads paired with extremist content, as reflected in the screenshots, and this proved that it was possible for this to happen. But then, Dateline tried that same argument, and the fact that it hid the rocket engines from its audience was its downfall. The thrust of X’s lawsuit is the concealment of the rigged nature of the test and the use of that test to convey a false impression about the likelihood that X users would encounter ads from these companies paired with extremist content. That likelihood is precisely the important part for advertisers. To say that Media Matters “found” these ad pairings is akin to saying that a cop who plants drugs in your car “found” the drugs there. It’s like saying you “found” pornography on Instagram after posting it to your own account.
Under Texas law, a defamation or business-disparagement case can be based on a report that uses literally true words or images if the report omits facts, or juxtaposes them in misleading ways, in order to create a false impression. The leading case is the Texas Supreme Court’s decision in Turner v. KTRK Television, Inc. (2000). Turner involved a television report about Sylvester Turner, who was then running for mayor of Houston (a job he holds today); his campaign dropped like a rock after the report, and he lost the race. The report concerned his legal representation of a man who committed insurance fraud by loading up on insurance policies while under criminal investigation and then faking his own death. Turner had prepared the man’s will.
The report claimed that Turner was “deeply involved” in the fraud and created the impression of his culpability by stating a series of true facts, but in misleading ways. For example, its presentation compressed the timeline of events, portrayed Turner as scheming to get a friend named administrator of the estate without mentioning that the friend had already been named as executor of the will, and stated (truthfully) that a court had removed Turner from the ensuing litigation for “conflict of interest” without mentioning that the “conflict” arose from the legal rule that a lawyer can’t appear in a case where he is also likely to be a witness. The court (in an opinion joined by then-justices Greg Abbott and Alberto Gonzales) explained the legal standard:
Because a publication’s meaning depends on its effect on an ordinary person’s perception, courts have held that under Texas law a publication can convey a false and defamatory meaning by omitting or juxtaposing facts, even though all the story’s individual statements considered in isolation were literally true or non-defamatory. . . .
Just as the substantial truth doctrine precludes liability for a publication that correctly conveys a story’s “gist” or “sting” although erring in the details, these cases permit liability for the publication that gets the details right but fails to put them in the proper context and thereby gets the story’s “gist” wrong.
This is consistent with a broader principle of law that I have written about on many occasions: The law of fraud and false statements, which appears in different guises in the civil and criminal law, is centrally concerned with materiality and deception. In other words, it’s not a game of “gotcha” to find false statements; the point is to punish those who actually convince others of something false (or at least say things likely to do so), on an important matter that might change their behavior, where the audience doesn’t have its own access to the truth. It is common throughout different areas of false-statement and fraud law to rule that literally true statements can be misleading and fraudulent because they omitted crucial context. It is also common to read statements and documents as a whole, in light of the evidence available to the ordinary reader, in order to assess their message. Rigged tests and deceptive editing of actual words are in the heartland of these doctrines.
Texas cases show a variety of ways in which this rule (allowing suits for things such as “defamation as a whole” and “libel by implication”) has been applied in defamation and business disparagement suits: