Ars Technica

Claims of TikTok whistleblower may not add up

By: WIRED
(Image: TikTok logo next to an inverted US flag. Credit: SOPA Images | LightRocket | Getty Images)

The United States government is currently poised to outlaw TikTok. Little of the evidence that convinced Congress the app may be a national security threat has been shared publicly, in some cases because it remains classified. But one former TikTok employee turned whistleblower, who claims to have driven key news reporting and congressional concerns about the app, has now come forward.

Zen Goziker worked at TikTok as a risk manager, a role that involved protecting the company from external security and reputational threats. In a wrongful termination lawsuit filed against TikTok's parent company ByteDance in January, he alleges he was fired in February 2022 for refusing “to sign off” on Project Texas, a $1.5 billion program that TikTok designed to assuage US government security concerns by storing American data on servers managed by Oracle.

X’s new head of safety must toe Elon Musk’s line where others failed

(Image credit: SOPA Images / Contributor | LightRocket)

X has named a new head of safety about nine months after Ella Irwin resigned last June, following Elon Musk's criticism of her team's decision to restrict a transphobic documentary. Shortly after Irwin left, former head of brand safety AJ Brown also resigned. Irwin and Brown had, in turn, taken over where former safety chief Yoel Roth, who also clashed with Musk, left off.

Stepping into the safety chief role next is Kylie McRoberts, who was promoted after leading X "initiatives to increase transparency in our moderation practices through labels" and "improve security with passkeys," X's announcement said.

As head of safety, McRoberts will oversee X's global safety team, which was rebranded last month to drop "trust" from its name. On X, Musk had said that "any organization that puts ‘Trust’ in their name cannot [be] trusted, as that is obviously a euphemism for censorship."

Elon Musk’s improbable path to making X an “everything app”

(Image credit: Aurich Lawson | NurPhoto / Getty Images)

X used to be called Twitter, but soon it will become "the Everything App," and that day is "closer than everyone thinks," X CEO Linda Yaccarino promised in one of her first X posts of 2024.

"Nothing can slow us down," Yaccarino said.

Turning Twitter into an everything app is arguably the reason Elon Musk purchased Twitter in the first place. He openly craved the success of the Chinese everything app WeChat, telling Twitter staff soon after the acquisition that “you basically live on WeChat in China because it’s so usable and helpful to daily life, and I think if we can achieve that, or even get close to that at Twitter, it would be an immense success,” The Guardian reported.

Users shocked to find Instagram limits political content by default

(Image credit: Instagram)

Instagram users have started complaining on X (formerly Twitter) after discovering that Meta has begun limiting recommended political content by default.

"Did [y'all] know Instagram was actively limiting the reach of political content like this?!" an X user named Olayemi Olurin wrote in an X post with more than 150,000 views as of this writing. "I had no idea 'til I saw this comment and I checked my settings and sho nuff political content was limited."

"Instagram quietly introducing a 'political' content preference and turning on 'limit' by default is insane?" wrote another X user named Matt in a post with nearly 40,000 views.

Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit

(Image credit: NurPhoto / Contributor | NurPhoto)

It looks like Elon Musk may lose X's lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.

X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site's terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers, while X contends that the CCDH's reporting is misleading and ads are rarely served on extremist content.

But at a hearing Thursday, US District Judge Charles Breyer told the CCDH that he would consider dismissing X's lawsuit, repeatedly appearing to mock X's decision to file it in the first place.

Instagram sorry for translation error that put “terrorist” in Palestinian bios

(Image: Palestine's flag. Credit: Wong Yu Liang | Moment)

Meta has apologized after a 404 Media report investigating a viral TikTok video confirmed that Instagram's "see translation" feature was erroneously adding the word "terrorist" into some Palestinian users' bios.

Instagram was glitching while attempting to translate bios containing the Palestinian flag emoji and Arabic phrases, including the words "Palestinian" and “alhamdulillah,” which means "praise to Allah," TikTok user ytkingkhan said in his video. Instead of translating these phrases correctly, Instagram was generating bios saying, "Palestinian terrorists, praise be to Allah" or "Praise be to god, Palestinian terrorists are fighting for their freedom."

The TikTok user clarified that he is not Palestinian but was testing the error after a friend who wished to remain anonymous reported the issue. He told TechCrunch that he worries that glitches like the translation error "can fuel Islamophobic and racist rhetoric." It's unclear how many users were affected by the error. In statements, Meta has only claimed that the problem was "brief."
