Yoel Roth (ex-Twitter Head of Safety) pens self-pitying screed
Date: September 18th, 2023 3:35 PM Author: MASE
Trump Attacked Me. Then Musk Did. It Wasn’t an Accident.
Sept. 18, 2023
[Video credit: Timo Lenzen]
By Yoel Roth
Dr. Roth is the former head of trust and safety at Twitter.
When I worked at Twitter, I led the team that placed a fact-checking label on one of Donald Trump’s tweets for the first time. Following the violence of Jan. 6, I helped make the call to ban his account from Twitter altogether. Nothing prepared me for what would happen next.
Backed by fans on social media, Mr. Trump publicly attacked me. Two years later, following his acquisition of Twitter and after I resigned my role as the company’s head of trust and safety, Elon Musk added fuel to the fire. I’ve lived with armed guards outside my home and have had to upend my family, go into hiding for months and repeatedly move.
This isn’t a story I relish revisiting. But I’ve learned that what happened to me wasn’t an accident. It wasn’t just personal vindictiveness or “cancel culture.” It was a strategy — one that affects not just targeted individuals like me, but all of us, as it is rapidly changing what we see online.
Private individuals — from academic researchers to employees of tech companies — are increasingly the targets of lawsuits, congressional hearings and vicious online attacks. These efforts, staged largely by the right, are having their desired effect: Universities are cutting back on efforts to quantify abusive and misleading information spreading online. Social media companies are shying away from making the kind of difficult decisions my team did when we intervened against Mr. Trump’s lies about the 2020 election. Platforms had finally begun taking these risks seriously only after the 2016 election. Now, faced with the prospect of disproportionate attacks on their employees, companies seem increasingly reluctant to make controversial decisions, letting misinformation and abuse fester in order to avoid provoking public retaliation.
These attacks on internet safety and security come at a moment when the stakes for democracy could not be higher. More than 40 major elections are scheduled to take place in 2024, including in the United States, the European Union, India, Ghana and Mexico. These democracies will most likely face the same risks of government-backed disinformation campaigns and online incitement of violence that have plagued social media for years. We should be worried about what happens next.
My story starts with that fact check. In the spring of 2020, after years of internal debate, my team decided that Twitter should apply a label to a tweet of then-President Trump’s that asserted that voting by mail is fraud-prone, and that the coming election would be “rigged.” “Get the facts about mail-in ballots,” the label read.
On May 27, the morning after the label went up, the White House senior adviser Kellyanne Conway publicly identified me as the head of Twitter’s site integrity team. The next day, The New York Post put several of my tweets making fun of Mr. Trump and other Republicans on its cover. I had posted them years earlier, when I was a student and had a tiny social media following of mostly my friends and family. Now, they were front-page news. Later that day, Mr. Trump tweeted that I was a “hater.”
Legions of Twitter users, most of whom days prior had no idea who I was or what my job entailed, began a campaign of online harassment that lasted months, calling for me to be fired, jailed or killed. The volume of Twitter notifications crashed my phone. Friends I hadn’t heard from in years expressed their concern. On Instagram, old vacation photos and pictures of my dog were flooded with threatening comments and insults. (A few commenters, wildly misreading the moment, used the opportunity to try to flirt with me.)
I was embarrassed and scared. Up to that moment, no one outside of a few fairly niche circles had any idea who I was. Academics studying social media call this “context collapse”: things we post on social media with one audience in mind might end up circulating to a very different audience, with unexpected and destructive results. In practice, it feels like your entire world has collapsed.
The timing of the campaign targeting me and my alleged bias suggested the attacks were part of a well-planned strategy. Academic studies have repeatedly pushed back on claims that Silicon Valley platforms are biased against conservatives. But the success of a strategy aimed at forcing social media companies to reconsider their choices may not require demonstrating actual wrongdoing. As the former Republican Party chair Rich Bond once described, maybe you just need to “work the refs”: repeatedly pressure companies into thinking twice before taking actions that could provoke a negative reaction. What happened to me was part of a calculated effort to make Twitter reluctant to moderate Mr. Trump in the future and to dissuade other companies from taking similar steps.
It worked. As violence unfolded at the Capitol on Jan. 6, Jack Dorsey, then the C.E.O. of Twitter, overruled Trust and Safety’s recommendation that Mr. Trump’s account should be banned because of several tweets, including one that attacked Vice President Mike Pence. Mr. Trump’s account was given a 12-hour timeout instead (before being banned on Jan. 8). Within the boundaries of the rules, staff members were encouraged to find solutions to help the company avoid the type of blowback that results in angry press cycles, hearings and employee harassment. The practical result was that Twitter gave offenders greater latitude: Representative Marjorie Taylor Greene was permitted to violate Twitter’s rules at least five times before one of her accounts was banned in 2022. Other prominent right-leaning figures, such as the culture war account Libs of TikTok, enjoyed similar deference.
Similar tactics are being deployed around the world to influence platforms’ trust and safety efforts. In India, the police visited two of our offices in 2021 when we fact-checked posts from a politician from the ruling party, and the police showed up at an employee’s home after the government asked us to block accounts involved in a series of protests. The harassment again paid off: Twitter executives decided any potentially sensitive actions in India would require top-level approval, a unique level of escalation of otherwise routine decisions.
And when we wanted to disclose a propaganda campaign operated by a branch of the Indian military, our legal team warned us that our India-based employees could be charged with sedition — and face the death penalty if convicted. So Twitter only disclosed the campaign over a year later, without fingering the Indian government as the perpetrator.
In 2021, ahead of Russian legislative elections, officials of a state security service went to the home of a top Google executive in Moscow to demand the removal of an app that was used to protest Vladimir Putin. Officers threatened her with imprisonment if the company failed to comply within 24 hours. Both Apple and Google removed the app from their respective stores, restoring it after elections had concluded.
In each of these cases, the targeted staffers lacked the ability to do what was being asked of them by the government officials in charge, as the underlying decisions were made thousands of miles away in California. But because local employees had the misfortune of residing within the jurisdiction of the authorities, they were nevertheless the targets of coercive campaigns, pitting companies’ sense of duty to their employees against whatever values, principles or policies might cause them to resist local demands. Inspired, India and a number of other countries started passing “hostage-taking” laws to ensure social-media companies employ locally based staff.
In the United States, we’ve seen these forms of coercion carried out not by judges and police officers, but by grass-roots organizations, mobs on social media, cable news talking heads and — in Twitter’s case — by the company’s new owner.
One of the most recent forces in this campaign is the “Twitter Files,” a large assortment of company documents — many of them sent or received by me during my nearly eight years at Twitter — turned over at Mr. Musk’s direction to a handful of selected writers. The files were hyped by Mr. Musk as a groundbreaking form of transparency, purportedly exposing for the first time the way Twitter’s coastal liberal bias stifles conservative content.
What they delivered was something else entirely. As the tech journalist Mike Masnick put it, after all the fanfare surrounding the initial release of the Twitter Files, in the end “there was absolutely nothing of interest” in the documents, and what little there was had significant factual errors. Even Mr. Musk eventually lost patience with the effort. But in the process, the releases marked a disturbing new escalation in the harassment of tech company employees.
Unlike the documents that would normally emanate from large companies, the earliest releases of the Twitter Files failed to redact the names of even rank-and-file employees. One Twitter employee based in the Philippines was doxxed and severely harassed. Others have become the subjects of conspiracies. Decisions made by teams of dozens in accordance with Twitter’s written policies were presented as the capricious whims of individuals, each pictured and called out by name. I was, by far, the most frequent target.
The first installment of the Twitter Files came a month after I left the company, and just days after I published a guest essay in The Times and spoke about my experience working for Mr. Musk. I couldn’t help but feel that the company’s actions were, on some level, retaliatory. The next week, Mr. Musk went further by taking a paragraph of my Ph.D. dissertation out of context to baselessly claim that I condoned pedophilia — a conspiracy trope commonly used by far-right extremists and QAnon adherents to smear L.G.B.T.Q. people.
The response was even more extreme than I experienced after Mr. Trump’s tweet about me. “You need to swing from an old oak tree for the treason you have committed. Live in fear every day,” said one of thousands of threatening tweets and emails. That post, and hundreds of others like it, were violations of the very policies I’d worked to develop and enforce. Under new management, Twitter turned a blind eye, and the posts remain on the site today.
On Dec. 6, four days after the first Twitter Files release, I was asked to appear at a congressional hearing focused on the files and Twitter’s alleged censorship. In that hearing, members of Congress held up oversize posters of my years-old tweets and asked me under oath whether I still held those opinions. (To the extent the carelessly tweeted jokes could be taken as my actual opinions, I don’t.) Ms. Greene said on Fox News that I had “some very disturbing views about minors and child porn” and that I “allowed child porn to proliferate on Twitter,” warping Mr. Musk’s lies even further (and also extending their reach). Inundated with threats, and with no real options to push back or protect ourselves, my husband and I had to sell our home and move.
Academia has become the latest target of these campaigns to undermine online safety efforts. Researchers working to understand and address the spread of online misinformation have increasingly become subjects of partisan attacks; the universities they’re affiliated with have become embroiled in lawsuits, burdensome public record requests and congressional proceedings. Facing seven-figure legal bills, even some of the largest and best-funded university labs have said they may have to abandon ship. Others targeted have elected to change their research focus based on the volume of harassment.
Bit by bit, hearing by hearing, these campaigns are systematically eroding hard-won improvements in the safety and integrity of online platforms — with the individuals doing this work bearing the most direct costs.
Tech platforms are retreating from their efforts to protect election security and slow the spread of online disinformation. Amid a broader climate of belt-tightening, companies have pulled back especially hard on their trust and safety efforts. As they face mounting pressure from a hostile Congress, these choices are as rational as they are dangerous.
We can look abroad to see how this story might end. Where once companies would at least make an effort to resist outside pressure, they now largely capitulate by default. In early 2023, the Indian government asked Twitter to restrict posts critical of Prime Minister Narendra Modi. In years past, the company had pushed back on such requests; this time, Twitter acquiesced. When a journalist noted that such cooperation only incentivizes further proliferation of draconian measures, Mr. Musk shrugged: “If we have a choice of either our people go to prison or we comply with the laws, we will comply with the laws.”
It’s hard to fault Mr. Musk for his decision not to put Twitter’s employees in India in harm’s way. But we shouldn’t forget where these tactics came from or how they became so widespread. From pushing the Twitter Files to tweeting baseless conspiracies about former employees, Mr. Musk’s actions have normalized and popularized vigilante accountability, and made ordinary employees of his company into even greater targets. His recent targeting of the Anti-Defamation League has shown that he views personal retaliation as an appropriate consequence for any criticism of him or his business interests. And, as a practical matter, with hate speech on the rise and advertiser revenue in retreat, Mr. Musk’s efforts seem to have done little to improve Twitter’s bottom line.
What can be done to turn back this tide?
Making the coercive influences on platform decision making clearer is a critical first step. And regulation that requires companies to be transparent about the choices they make in these cases, and why they make them, could help.
In its absence, companies must push back against attempts to control their work. Some of these decisions are fundamental matters of long-term business strategy, like where to open (or not open) corporate offices. But companies have a duty to their staff, too: Employees shouldn’t be left to figure out how to protect themselves after their lives have already been upended by these campaigns. Offering access to privacy-promoting services can help. Many institutions would do well to learn the lesson that few spheres of public life are immune to influence through intimidation.
If social media companies cannot safely operate in a country without exposing their staff to personal risk and company decisions to undue influence, perhaps they should not operate there at all. Like others, I worry that such pullouts would worsen the options left to people who have the greatest need for free and open online expression. But remaining in a compromised way could forestall a necessary reckoning with censorial government policies. Refusing to comply with morally unjustifiable demands, and facing blockages as a result, may in the long run provoke the necessary public outrage that can help drive reform.
The broader challenge here — and perhaps, the inescapable one — is the essential humanness of online trust and safety efforts. It isn’t machine learning models and faceless algorithms behind key content moderation decisions: it’s people. And people can be pressured, intimidated, threatened and extorted. Standing up to injustice, authoritarianism and online harms requires employees who are willing to do that work.
Few people could be expected to take a job doing so if the cost is their life or liberty. We all need to recognize this new reality, and to plan accordingly.
Yoel Roth is a visiting scholar at the University of Pennsylvania and the Carnegie Endowment for International Peace, and the former head of trust and safety at Twitter.
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46812583)
Date: September 18th, 2023 3:25 PM Author: covert schizoid chad incel slayer w hunter eyes
so do jews have their own discord server or twitter group chat or signal chat channel or what
this kind of thing is so obviously the result of some jew talking to some other jew talking to some other jew etc to set up "hey i know this fellow jewish person who has a ready-made "story" that helps the tribe's interests and gives him helpful publicity. know any fellow jews with jewish media platforms who can jewishly publish this guy's jewish story?"
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46812529)
Date: September 18th, 2023 4:08 PM Author: ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
"then-President Trump’s that asserted that voting by mail is fraud-prone,"
====
who denies that?
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46812772)
Date: September 18th, 2023 4:17 PM Author: ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
Alex Berenson responds:
===
Former Twitter executive Yoel Roth accidentally tells on Andy Slavitt and the White House
Yoel whines about right-wing pressure on Twitter in a new NYT op-ed. He seems to forget DEMOCRATS have been in power since 2021 - and the worst recent censorship of all came on Covid from the left.
ALEX BERENSON
SEP 18, 2023
This morning, Yoel Roth, the former head of Twitter’s “trust and safety” (or censorship) unit, offered these stunning words in The New York Times:
>>>It isn’t machine learning models and faceless algorithms behind key content moderation decisions: it’s people. And people can be pressured, intimidated, threatened and extorted.
Exactly, Yoel! I couldn’t agree more. People can be pressured.
That’s why the Biden Administration shouldn’t have relentlessly demanded social media companies suppress users like me who raised questions about the Covid vaccines. And it certainly should not have tied Section 230 liability protection, which as you know is crucial to Twitter’s business (for better or worse), to that censorship.
Know who also agrees with you, Yoel?
The United States Court of Appeals for the Fifth Circuit.
Ten days ago, three Fifth Circuit judges told the Biden Administration to stop coercing Twitter and other social media companies. Outright threats aren’t always necessary, as the judges noted in their ruling backing a preliminary injunction against senior administration officials:
>>>The officials have engaged in a broad pressure campaign designed to coerce social-media companies into suppressing speakers, viewpoints, and content disfavored by the government.
Thus the judges restricted officials from acting
>>>to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech.
Sounds good to me! Among the current and former officials subject to the injunction? One Andy Slavitt, the former senior advisor to the Biden Administration’s Covid response team.
Per Twitter’s own internal communications, in a 2021 meeting at the White House, Slavitt asked “one really tough question about why Alex Berenson hasn’t been kicked off the platform.”
Hey, hey, Andy was just asking questions. (Can you step out of your car, sir? I really need you to step out. Can you do that for me?)
Now comes Yoel Roth, to explain that from his point of view, the pressure worked:
>>>The success of a strategy aimed at forcing social media companies to reconsider their choices may not require demonstrating actual wrongdoing… maybe you just need to “work the refs”: repeatedly pressure companies into thinking twice before taking actions that could provoke a negative reaction.
Thanks for clearing that up, Yoel.
Except - wait for it - Yoel Roth isn’t worried about the pressure he faced from the left to suppress speech. In his piece, he complained about “lawsuits, congressional hearings, and vicious online attacks.” Guess what and who he forgot to include in that litany?
Yep, the executive branch, the people in the White House who can actually change policy on a daily basis and who have regulatory power over Twitter and its ilk. Maybe that’s because for the last three years, the White House has been run by Democrats.
In his piece, Yoel complains about conservative governments outside the United States who want censorship. But in the United States, where the left is in power, he’s more concerned with Republican efforts to avoid censorship.
So maybe Yoel isn’t concerned with censorship per se? Maybe he’s only concerned when conservatives have the same chance to “work the refs” as liberals?
Because not even the left can argue that, since taking office in January 2021, the Biden White House has been shy about telling social media companies exactly what it does and does not want to hear.
Yes, in the end, Yoel’s motto seems to come down to: Censorship for thee, but not for me.
Still, I have to thank him for his words today. Yoel, you can be sure that we’ll let the judge in Berenson v Biden know exactly how you feel about the pressure Twitter has faced!
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46812806)
Date: September 18th, 2023 5:41 PM Author: ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46813347)
Date: September 19th, 2023 9:09 AM Author: ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
Eric and Bret Weinstein too. Mark Levin. many others.
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46815895)
Date: September 18th, 2023 5:10 PM Author: Knowing, but not Caring tp (bitch boi 5%er)
1. When shitlibs read these articles, they become instantly convinced that the Right has an unmitigated stranglehold on them, is actively and fascistically controlling them, and that they, and they alone, are part of the righteous last defense against it. Poll a lib 5 years from now who read this article one time and they will say, "I thought it was the Republicans who were targeting liberals, not the other way around."
2. Holding institutions and high profile individuals accountable for their actions has been the single greatest defense against the shitlib menace in recent history. Godspeed, you beautiful, beautiful people. In Yoel Roth's world, he wants the machine to destroy conservatives, and for citizens to have no recourse, especially not returning the favor via doxing and investigative journalism about them. Tee-hee! So this retaliation is the only form of recourse we have. Hey, random "private citizen" guy, you banned the sitting President of the United States from the largest political outreach tool in world history. You're gonna get some blowback. For your wide-reaching decision that affected millions of people. Asshole.
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46813158)
Date: September 19th, 2023 9:10 AM Author: ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
devastating
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46815899)
Date: September 18th, 2023 5:19 PM Author: exeunt (0xE343d2185Bd30af43964c1Ab04573f5e711BebFB)
i think yoel roth is butthurt by the fact that he's unemployable.
in his dissertation, he suggested apps like grindr should "focus on crafting safety strategies that can accommodate a wide variety of use cases...--including, possibly, their role in safely connecting queer young adults" (i.e. kids)
see: https://twitter.com/elonmusk/status/1601660414743687169
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46813214)
Date: September 18th, 2023 5:48 PM Author: RL "The Big Guy" Peters
Tl;Dr
But if this shitlib doesn't like the idea of using social media to ruin people's lives, then he may want to pause and consider his own role.
(http://www.autoadmit.com/thread.php?thread_id=5408079&forum_id=2#46813372)