Lateo.net - RSS feeds galore (to add one: @ me)


Tesla settles with Black worker after $3.2 million verdict in racism lawsuit

Tesla cars sit in a parking lot at the company's factory in Fremont, California on October 19, 2022. (credit: Getty Images | Justin Sullivan)

Tesla has settled with a Black former factory worker who won a $3.2 million judgment in a racial discrimination case, a court filing on Friday said.

Both sides were challenging the $3.2 million verdict in a federal appeals court but agreed to dismiss the case in the Friday filing. The joint stipulation for dismissal said that "the Parties have executed a final, binding settlement agreement that fully resolves all claims."

Tesla presumably agreed to pay Owen Diaz some amount less than $3.2 million, ending a case in which Diaz was once slated to receive $137 million. As we've previously written, a jury in US District Court for the Northern District of California ruled that Tesla should pay $137 million to Diaz in October 2021.


Tesla must face racism class action from 6,000 Black workers, judge rules

Tesla factory in Fremont, California, on September 18, 2023. (credit: Getty Images | Justin Sullivan)

Tesla must face a class-action lawsuit from nearly 6,000 Black people who allege that they faced discrimination and harassment while working at the company's Fremont factory, a California judge ruled.

The tentative ruling from Alameda County Superior Court "certifies a class defined as the specific approximately 5,977 persons self-identified as Black/African-American who worked at Tesla during the class period from November 9, 2016, through the date of the entry of this order to prosecute the claims in the complaint."

The tentative ruling was issued Tuesday by Judge Noël Wise. Tesla can contest the ruling at a hearing on Friday, but tentative rulings are generally finalized without major changes.


Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images

Generations from Gemini AI from the prompt, "Paint me a historically accurate depiction of a medieval British king." (credit: @stratejake / X)

On Thursday morning, Google announced it was pausing its Gemini AI image-synthesis feature in response to criticism that the tool was inserting diversity into its images in a historically inaccurate way, such as depicting multi-racial Nazis and medieval British kings with unlikely nationalities.

"We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon," wrote Google in a statement Thursday morning.

As more people on X began to pile on Google for being "woke," the Gemini generations inspired conspiracy theories that Google was purposely discriminating against white people and offering revisionist history to serve political goals. Beyond that angle, as The Verge points out, some of these inaccurate depictions "were essentially erasing the history of race and gender discrimination."


What is happening with the Substack platform and Nazi content?

Substack, a platform for publishing and distributing texts, including newsletters, has been at the center of a controversy in the United States for several weeks. At issue is the site's lack of moderation, including against extremist speech. The problem was identified long ago, but it has grown with a rise in extremist content.

An $11 million loss: Elon Musk's antisemitic tweet will cost him dearly

Accustomed to provocation and radical stances, Elon Musk amplified an antisemitic post on November 15, prompting many companies and political organizations to pull their advertising from X (formerly Twitter). Several documents reveal the scale of the losses, which further isolate the social network.


People think white AI-generated faces are more real than actual photos, study says

Eight images used in the study; four of them are synthetic. Can you tell which ones? (Answers at bottom of the article.) (credit: Nightingale and Farid (2022))

A study published Monday in the peer-reviewed journal Psychological Science found that AI faces generated with three-year-old technology, particularly those depicting white individuals, were perceived as more real than photographs of actual faces, reports The Guardian. The finding did not extend to images of people of color, likely because the AI models were trained predominantly on images of white individuals, a well-known bias in machine learning research.

In the paper, titled "AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones," researchers from Australian National University, the University of Toronto, the University of Aberdeen, and University College London coined the term "hyperrealism" to describe the phenomenon in which people judge AI-generated faces to be more real than actual human faces.

In their experiments, the researchers presented white adults with a mix of 100 AI-generated and 100 real white faces, asking them to identify which were real and to rate their confidence in each decision. Across the 124 participants, AI faces were judged to be human 66 percent of the time, compared with 51 percent for real faces. The trend did not hold for images of people of color, where both AI and real faces were judged human about 51 percent of the time, regardless of the participant's race.


Channel calling for aborting Black pregnancies temporarily restricted by YouTube

(credit: Anadolu Agency / Contributor | Anadolu)

YouTube has removed one video and demonetized the channel of YouTube influencer Cynthia G after finding that the account repeatedly violated YouTube policies; over the past two years, it posted videos calling for the abortion of Black pregnancies that accumulated tens of thousands of views.

The decision came after an Ars reader asked Ars to investigate why these videos do not violate YouTube's community guidelines.

The video that YouTube removed was titled "If Aborting Black Males Isn't The Solution, What Is?" It was posted in November 2021 and, as of last week, still qualified for ad monetization. In the video, Cynthia G said that "a lot of people" considered the "solution" to be "something horrible that is genocidal" and provided a racist justification, saying that the only way to counter Black male violence is to "eliminate" Black men.


4chan users manipulate AI tools to unleash torrent of racist images

(credit: Aurich Lawson | Getty Images)

Despite leading AI companies' attempts to block users from turning AI image generators into engines of racist content, many 4chan users are still turning to these tools to "quickly flood the Internet with racist garbage," 404 Media reported.

404 Media uncovered one 4chan thread where users recommended various AI tools, including Stable Diffusion and DALL-E, but specifically linked to Bing AI's text-to-image generator (which is powered by DALL-E 3) as a "quick method." After finding the right tool—which could also be a more old-school photo-editing tool like Photoshop—users are instructed to add incendiary captions and share the images on social media to create a blitz of racist images online.

Make captions "funny, provocative," the thread instructs users, and use "redpilling message[s] (Jews involved in 9/11)" that are "easy to understand."

