Lateo.net - Assorted RSS feeds (to add one: @ me)

Ars Technica

Indiana Jones and the Great Circle is a new first-person Nazi-whipping journey

CGI Harrison Ford just can't believe he's getting roped into another globe-trotting adventure. (credit: Bethesda/Machine Games)

Almost two years ago to the day, Bethesda announced that its Machine Games subsidiary was working on a new Indiana Jones game, one with "an original story." Now we can see what Indiana Jones and the Great Circle is going to look like, thanks to a gameplay trailer shown during Microsoft's Developer Direct event, and when it's arriving: "2024." You can now wishlist it on Steam and the Xbox store; it's exclusive to those platforms.

Gameplay reveal trailer for Indiana Jones and the Great Circle.

While the game has Harrison Ford's likeness, it's not Ford voicing your character. Troy Baker, the original voice of Joel in The Last of Us, picks up the role of the archaeologist.

From the trailer, Great Circle looks a lot like the modern Wolfenstein games that Machine Games made—and that's a good thing. The New Order and The New Colossus excelled at making you feel more like a human action hero than a shooting tank. Those games have a knack for first-person platforming, stunts, and cinematic moments that are nowhere near as painful as in many shooters, and they strike a fine balance between immersing you as a player and letting your character have a personality of his own.


Elon Musk reverses Twitter ban of Sandy Hook shooting-denier Alex Jones

Infowars founder Alex Jones speaks to the media outside Waterbury Superior Court on September 21, 2022, during one of his Sandy Hook defamation trials. (credit: Getty Images | Joe Buglewicz)

Elon Musk has allowed conspiracy theorist Alex Jones back on the social network formerly named Twitter, despite saying that he "vehemently" disagrees with Jones' claims that the Sandy Hook Elementary School shooting was a hoax.

Musk restored the @RealAlexJones account after polling X users. With almost 2 million votes, about 70 percent of users supported reinstating Jones, who was banned by Twitter in 2018.

"I vehemently disagree with what he said about Sandy Hook, but are we a platform that believes in freedom of speech or are we not? That is what it comes down to in the end. If the people vote him back on, this will be bad for X financially, but principles matter more than money," Musk wrote on Saturday. Musk also spoke with Jones about his Sandy Hook comments in a live interview on X.


1960s chatbot ELIZA beat OpenAI’s GPT-3.5 in a recent Turing test study

An artist's impression of a human and a robot talking. (credit: Getty Images | Benj Edwards)

In a preprint research paper titled "Does GPT-4 Pass the Turing Test?", two researchers from UC San Diego pitted OpenAI's GPT-4 AI language model against human participants, GPT-3.5, and ELIZA to see which could most successfully trick participants into thinking it was human. But along the way, the study, which has not been peer-reviewed, found that human participants correctly identified other humans in only 63 percent of the interactions—and that a 1960s computer program surpassed the AI model that powers the free version of ChatGPT.

Even with limitations and caveats, which we'll cover below, the paper presents a thought-provoking comparison between AI model approaches and raises further questions about using the Turing test to evaluate AI model performance.

British mathematician and computer scientist Alan Turing first conceived the Turing test as "The Imitation Game" in 1950. Since then, it has become a famous but controversial benchmark for determining a machine's ability to imitate human conversation. In modern versions of the test, a human judge typically talks to either another human or a chatbot without knowing which is which. If the judge cannot reliably tell the chatbot from the human a certain percentage of the time, the chatbot is said to have passed the test. The threshold for passing the test is subjective, so there has never been a broad consensus on what would constitute a passing success rate.
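To make that pass criterion concrete, here is a minimal sketch (in Python, with invented verdicts and variable names, not data or code from the study) of how a chatbot's pass rate could be tallied from a judge's per-session calls:

    # Minimal sketch: tally a Turing-test "pass rate" from a judge's verdicts.
    # The verdict list is invented for illustration; it is not the study's data.
    verdicts = ["human", "machine", "human", "human", "machine"]  # judge's call after each session with the chatbot

    passes = sum(1 for v in verdicts if v == "human")
    pass_rate = passes / len(verdicts)

    threshold = 0.5  # subjective cutoff; as noted above, there is no agreed-upon passing rate
    print(f"Judged human in {pass_rate:.0%} of sessions; passes at a {threshold:.0%} threshold: {pass_rate >= threshold}")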

