In May, several French and German social media influencers received a strange proposal.
A London-based public relations agency wanted to pay them to promote news on behalf of a client, and supplied a polished three-page document detailing what to say and on which platforms to say it.
But it asked the influencers to push not beauty products or vacation packages, as usual, but falsehoods disparaging Pfizer-BioNTech’s Covid-19 vaccine. Stranger still, the agency, Fazze, claimed a London address where there is no evidence any such company exists.
Some recipients posted screenshots of the offer. Exposed, Fazze scrubbed its social media accounts. That same week, Brazilian and Indian influencers posted videos echoing Fazze’s script to hundreds of thousands of viewers.
The scheme appears to be part of a secretive industry that security analysts and American officials say is exploding: disinformation for hire.
Private firms, straddling traditional marketing and the shadowy world of geopolitical influence operations, sell services once conducted mainly by intelligence agencies.
They sow discord, meddle in elections, seed false narratives and push viral conspiracies, mostly on social media. And they offer their clients something precious: deniability.
“Disinfo-for-hire actors hired by the government or related parties are on the rise and should be taken seriously,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab, calling it “a booming industry.”
Similar campaigns have recently been found to promote India’s ruling party, Egyptian foreign policy goals, and political figures in Bolivia and Venezuela.
Mr. Brookie’s organization tracked one such operation amid a mayoral race in Serra, a small Brazilian city, where an ideologically promiscuous Ukrainian firm boosted several rival political parties.
In the Central African Republic, two separate operations flooded social media with dueling pro-French and pro-Russian disinformation, as the two powers vie for influence in the country.
A seemingly organic wave of anti-American posts in Iraq was traced to a public relations firm that was separately accused of faking anti-government sentiment in Israel.
Most come from firms whose legitimate services resemble those of a bottom-rate marketer or email spammer.
Job postings and the LinkedIn profiles of employees linked to Fazze describe it as a subsidiary of a Moscow-based company called Adnow. Some Fazze web domains are registered as Adnow’s property, as the German outlets Netzpolitik and ARD Kontraste first reported. Third-party reviews portray Adnow as a struggling ad service provider.
European officials say they are investigating who hired Adnow. Parts of Fazze’s anti-Pfizer talking points resemble promotional materials for Russia’s Sputnik V vaccine.
Disinformation for hire, though only sometimes effective, is growing more sophisticated as practitioners iterate and learn. Experts say it is becoming more common in every part of the world, outpacing operations conducted directly by governments.
The result is an accelerating rise of polarizing conspiracies, phony citizen groups and fabricated public sentiment that is deteriorating our shared reality beyond even the lows of recent years.
The trend emerged after the Cambridge Analytica scandal of 2018, experts say. Cambridge, a political consulting firm linked to members of Donald J. Trump’s 2016 presidential campaign, was found to have harvested data on millions of Facebook users.
The controversy drew attention to methods common among social media marketers. Cambridge used its data to target hyper-specific audiences with tailored messages. It tested what resonated by tracking likes and shares.
The episode taught a generation of consultants and opportunists that there was big money in social media marketing for political ends, all disguised as organic activity.
Some newcomers eventually reached the same conclusion as Russian operatives had in 2016: disinformation performs especially well on social platforms.
At the same time, the backlash to Russia’s influence operations seemed to leave governments wary of being caught meddling directly – even as it demonstrated the power of such operations.
“Unfortunately, there is tremendous market demand for disinformation,” said Brookie, “and many places across the ecosystem that are more than ready to meet that demand.”
Commercial firms conducted disinformation-for-hire campaigns in at least 48 countries last year – nearly double the year before, according to a University of Oxford study. The researchers identified 65 companies offering such services.
Last summer, Facebook removed a network posing as Bolivian citizen groups and journalistic fact-checking organizations. It said the pages, which promoted falsehoods supporting the country’s right-wing government, were fake.
Stanford University researchers traced the content to CLS Strategies, a Washington-based communications firm that had registered as a consultant to the Bolivian government. The firm had done similar work in Venezuela and Mexico.
A spokesman pointed to the company’s statement last year saying its regional chief had been placed on leave, but disputed Facebook’s assertion that the work qualified as foreign interference.
New technologies make it possible for almost anyone to get involved. Programs churn out fake accounts with hard-to-trace profile photos. Instant metrics help hone effective messaging. So does access to users’ personal data, which can easily be purchased in bulk.
The campaigns are seldom as sophisticated as those of government hackers or specialized companies like the Kremlin-supported Internet Research Agency.
But they come cheap. In countries that mandate transparency in campaign finance, firms report charging tens of thousands of dollars for campaigns that also include traditional consulting services.
That layer of deniability lets governments spread disinformation more aggressively, at home and abroad, than might otherwise be worth the risk. Some contractors, when caught, have claimed to have acted without their clients’ knowledge, or only to win future business.
Platforms have stepped up efforts to root out coordinated disinformation. Analysts especially credit Facebook, which publishes detailed reports on the campaigns it disrupts.
Still, some argue that social media companies also play a role in worsening the threat. Engagement-boosting algorithms and design elements, research finds, often privilege divisive and conspiratorial content.
Political norms have also shifted. A generation of populist leaders like Rodrigo Duterte of the Philippines has risen in part through manipulation on social media. Once in office, many institutionalize these methods as instruments of governance and external relations.
In India, dozens of government-run Twitter accounts have shared posts from India Vs Disinformation, a website and set of social media feeds that purport to fact-check news about India.
India Vs Disinformation is actually the product of a Canadian communications company called Press Monitor.
Nearly all of its posts seek to discredit or spin reports unfavorable to Prime Minister Narendra Modi’s government, including on the country’s severe Covid-19 toll. An associated site promotes pro-Modi narratives under the guise of news articles.
A report by the Digital Forensic Research Lab examining the network called it “an important case study” of the emergence of “disinformation campaigns in democracies.”
A Press Monitor representative who identified himself only as Abhay described the report as completely inaccurate.
He said only that his company was wrongly identified as being based in Canada. Asked why the company lists a Toronto address and a Canadian tax registration, describes itself as “part of Toronto’s thriving technology ecosystem,” and why he was reached at a Toronto phone number, he said that he did business in many countries. He did not respond to an email seeking clarification.
A LinkedIn profile for Abhay Aggarwal identifies him as the Toronto-based chief executive of Press Monitor and says the company’s services are used by the Indian government.
A set of pro-Beijing operations hints at the field’s capacity for rapid evolution.
Since 2019, Graphika, a digital research firm, has tracked a network it nicknamed “Spamouflage” for its early reliance on spamming social platforms with content echoing Beijing’s line on geopolitical issues. Most of those posts drew little or no engagement.
In recent months, however, the network has developed hundreds of accounts with elaborate personas. Each has its own profile and posting history that can seem authentic. They appeared to hail from many different countries and walks of life.
Graphika traced the accounts back to a content farm in Bangladesh that created them in bulk and probably sold them to a third party.
The network pushes sharp criticism of Hong Kong democracy activists and American foreign policy. By coordinating without appearing to do so, it created the impression of organic shifts in public opinion – and often attracted attention.
The posts were amplified by a large media network in Panama, prominent politicians in Pakistan and Chile, Chinese-language YouTube pages, the left-wing British commentator George Galloway and a number of Chinese diplomatic accounts.
A separate pro-Beijing network, uncovered by a Taiwanese investigative outlet called The Reporter, operated hundreds of Chinese-language websites and social media accounts.
Posing as news outlets and civic groups, they promoted Taiwan’s reunification with mainland China and vilified Hong Kong’s protesters. The report linked the pages to a Malaysia-based startup that offered web users payments in Singapore dollars to promote the content.
But governments may find that outsourcing such shadowy work carries its own risks, Mr. Brookie said. For one, the firms are harder to control and might veer into unwanted messages or tactics.
For another, companies built around deception may be just as likely to turn that energy on their clients, padding budgets and billing for work that never happens.
“The bottom line is that grifters will be crawling online,” he said.