The bombing killed at least three people, including a child, Ukrainian President Volodymyr Zelenskiy said in a public statement. Images of bloodied, heavily pregnant women fleeing the rubble, hands cradling their bellies, sparked outrage around the world. Among the most recognizable was Mariana Vishegirskaya, a Ukrainian fashion and beauty influencer. Photos of her walking down a hospital staircase in polka-dot pajamas after the attack, captured by an Associated Press photographer, circulated widely. The online expressions of support for the expectant mother quickly turned into attacks on her Instagram account, according to two contractors who directly moderate content from the conflict on Facebook and Instagram. They spoke to Reuters on condition of anonymity, citing non-disclosure agreements that bar them from discussing their work publicly. The beauty influencer's case is just one example of how Meta's content policies and enforcement mechanisms enabled pro-Russian propaganda during the invasion of Ukraine, the moderators told Reuters. Russian officials seized on the images, setting them side by side with her glossy Instagram photos in an effort to convince viewers that the attack had been faked. On state television and social media, and in the chamber of the U.N. Security Council, Moscow falsely claimed that Vishegirskaya had worn make-up and multiple outfits in an elaborately staged hoax orchestrated by Ukrainian forces. Swarms of comments accusing the influencer of being an actor and of faking the attack appeared under her old Instagram posts in which she posed with makeup tubes, the moderators said. At the height of the onslaught, comments containing false claims about the woman accounted for most of the material in one moderator's content queue, which would normally hold a mix of posts suspected of violating Meta's myriad policies, that person recalled.
"The posts were vile," and appeared to be orchestrated, one of the moderators told Reuters. But many fell within the company's rules, that person said, because they did not directly reference the attack. "I could not do anything about them," the moderator said. Reuters was unable to reach Vishegirskaya. Meta declined to comment on its handling of the activity involving Vishegirskaya, but said in a statement to Reuters that multiple teams were working on the issue. "We have separate, expert teams and outside partners that review misinformation and inauthentic behavior, and we have been applying our policies to counter that activity forcefully throughout the war," the statement said. Meta policy chief Nick Clegg separately told reporters on Wednesday that the company was considering new steps to address misinformation and hoaxes from Russian government pages, without providing further details. Russia's Ministry of Digital Development, Communications and Mass Media and the Kremlin did not respond to requests for comment. A representative of Ukraine also did not respond to a request for comment.

"SPIRIT OF THE POLICY"

Based at a moderation hub of several hundred people reviewing content from Eastern Europe, the two contractors are foot soldiers in Meta's battle to police content from the conflict. They are among tens of thousands of low-paid workers at outsourcing firms around the world that Meta contracts to enforce its rules. The tech giant has sought to position itself as a responsible steward of online speech during the invasion, which Russia calls a "special operation" to disarm and "denazify" its neighbor. Just days into the war, Meta imposed restrictions on Russian state media and took down a small network of coordinated fake accounts that it said were trying to undermine trust in the Ukrainian government.
It later said it had cracked down on another Russia-based network that was falsely reporting people for violations such as hate speech or bullying, while also thwarting attempts by previously removed networks to return to the platform. Meanwhile, the company sought to carve out space for users in the region to express their fury over the Russian invasion and to issue calls to arms in ways Meta would normally prohibit. In Ukraine and 11 other countries across Eastern Europe and the Caucasus, it created a series of temporary "spirit of the policy" exemptions to its rules barring hate speech, violent threats and more; the changes were intended to honor the general principles of those policies rather than their literal wording, according to Meta instructions to moderators seen by Reuters. For example, it permitted "dehumanizing speech against Russian soldiers" and calls for the death of Russian President Vladimir Putin and his ally, Belarusian President Alexander Lukashenko, unless those calls were considered credible or contained additional targets, according to the instructions. The changes became a flashpoint for Meta, drawing pressure both from within the company and from Moscow, which opened a criminal case against the firm after a March 10 Reuters report made the carve-outs public. Russia also banned Facebook and Instagram inside its borders, with a court accusing Meta of "extremist activity." Meta walked back elements of the exemptions after the Reuters report. It first limited them to Ukraine alone, then canceled one altogether, according to documents reviewed by Reuters, Meta's public statements, and interviews with two Meta staffers, the two moderators in Europe and a third moderator who handles English-language content in another region and who had seen the guidance. The documents offer a rare lens into how Meta interprets its policies, known as its Community Standards. The company says its system is neutral and rules-based.
Critics say it is often reactive, driven as much by business considerations and news cycles as by principle. It is a complaint that has dogged Meta in other global conflicts, including Myanmar, Syria and Ethiopia. Social media researchers say the approach allows the company to escape accountability for how its policies affect the 3.6 billion users of its services. The shifting guidance on Ukraine has generated confusion and frustration among moderators, who say they have on average 90 seconds to decide whether a given post violates policy, as first reported by the New York Times. Reuters independently confirmed those frustrations with the three moderators. After Reuters reported the exemptions on March 10, Meta policy chief Nick Clegg said in a statement the following day that Meta would allow such speech only in Ukraine. Two days later, Clegg told employees the company was reversing altogether the exemption that had allowed users to call for the deaths of Putin and Lukashenko, according to a March 13 internal company post seen by Reuters. In late March, the company extended the remaining Ukraine-only exemptions through April 30, the documents show. Reuters is the first to report that extension, which allows Ukrainians to continue engaging in certain kinds of violent and dehumanizing speech that would normally be out of bounds. Inside the company, writing on an internal social platform, some Meta employees expressed frustration that Facebook was allowing Ukrainians to make statements that would have been deemed out of bounds for users posting about earlier conflicts in the Middle East and elsewhere, according to copies of the messages viewed by Reuters. "This policy seems to say that hate speech and violence are okay if they target the 'right' people," one employee wrote, in one of about 900 comments on a post about the changes.
Meta, meanwhile, issued no guidance to moderators that would strengthen their ability to take down posts promoting false narratives about Russia's invasion, such as denials that civilian deaths have occurred, the people told Reuters. The company declined to comment on its guidance to moderators.

DENYING VIOLENT TRAGEDIES

In theory, Meta did have a rule that should have enabled moderators to tackle the mobs of commenters directing baseless vitriol at Vishegirskaya, the pregnant beauty influencer. She survived the Mariupol hospital bombing and delivered her baby, the Associated Press reported. Meta's harassment policy prohibits users from "posting content about a violent tragedy, or victims of violent tragedies that include claims that a violent tragedy did not occur," according to the Community Standards published on its website. The company invoked that rule when it removed posts by the Russian Embassy in London that pushed false claims about the Mariupol bombing after the March 9 attack. But because the rule is narrowly defined, two of the moderators said, it could be used only sparingly to battle the online hate campaign against the beauty influencer that followed. Posts that explicitly alleged the bombing was staged were eligible for removal, but comments such as "you're such a good actress" were considered too vague and had to be left standing, even when the subtext was clear, they said. Guidance from Meta enabling moderators to take context into account and enforce the spirit of that policy could have helped, they added. Meta declined to comment on whether the rule applied to the comments on Vishegirskaya's account. At the same time, even explicit posts proved elusive to Meta's enforcement systems. A week after the bombing, versions of the Russian Embassy posts were still circulating on at least eight official Russian accounts on Facebook, including those of its embassies in Denmark, Mexico and Japan, according to Israeli watchdog organization FakeReporter.
One showed a red "fake" label laid over the Associated Press photos from Mariupol, …