Lateo.net - RSS feeds galore (to add one: @ me)

From the day before yesterday: Ars Technica

How new tech is making geothermal energy a more versatile power source

The Nesjavellir Geothermal Power Station. Geothermal power has long been popular in volcanic countries like Iceland, where hot water bubbles from the ground. (credit: Gretar Ívarsson/Wikimedia Commons)

Glistening in the dry expanses of the Nevada desert is an unusual kind of power plant that harnesses energy not from the sun or wind, but from the Earth itself.

Known as Project Red, it pumps water thousands of feet into the ground, down where rocks are hot enough to roast a turkey. Around the clock, the plant sucks the heated water back up to power generators. Since last November, this carbon-free, Earth-borne power has been flowing onto a local grid in Nevada.

Geothermal energy, though it’s continuously radiating from Earth’s super-hot core, has long been a relatively niche source of electricity, largely limited to volcanic regions like Iceland where hot springs bubble from the ground. But geothermal enthusiasts have dreamed of sourcing Earth power in places without such specific geological conditions—like Project Red’s Nevada site, developed by energy startup Fervo Energy.

A supernova caused the BOAT gamma-ray burst, JWST data confirms

Artist's visualization of GRB 221009A showing the narrow relativistic jets—emerging from a central black hole—that gave rise to the brightest gamma-ray burst yet detected. (credit: Aaron M. Geller/Northwestern/CIERA/ITRC&DS)

In October 2022, several space-based detectors picked up a powerful gamma-ray burst so energetic that astronomers nicknamed it the BOAT (Brightest Of All Time). Now they've confirmed that the GRB came from a supernova, according to a new paper published in the journal Nature Astronomy. However, they did not find evidence of heavy elements like platinum and gold one would expect from a supernova explosion, which bears on the longstanding question of the origin of such elements in the universe.

As we've reported previously, gamma-ray bursts are extremely high-energy explosions in distant galaxies lasting from mere milliseconds to several hours. There are two classes of gamma-ray bursts. Most (70 percent) are long bursts lasting more than two seconds, often with a bright afterglow. These are usually linked to galaxies with rapid star formation. Astronomers think that long bursts are tied to the deaths of massive stars collapsing to form a neutron star or black hole (or, alternatively, a newly formed magnetar). The baby black hole would produce jets of highly energetic particles moving near the speed of light, powerful enough to pierce through the remains of the progenitor star, emitting X-rays and gamma rays.

Gamma-ray bursts lasting less than two seconds (about 30 percent) are deemed short bursts, and they usually originate in regions with very little star formation. Astronomers think these bursts are the result of mergers between two neutron stars, or of a neutron star merging with a black hole, producing a "kilonova." That hypothesis was confirmed in 2017, when the LIGO collaboration picked up the gravitational-wave signal of two neutron stars merging, accompanied by the powerful gamma-ray burst associated with a kilonova.
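
Since the dividing line between the two classes is simply duration, the taxonomy reduces to a threshold rule. Here is a toy sketch of that rule (our illustration, not any astronomer's actual pipeline; the duration used for GRB 221009A is approximate):

```python
def classify_grb(duration_s: float) -> str:
    """Toy classifier using the conventional two-second dividing line.

    Long bursts (~70 percent) are linked to massive-star collapse; short
    bursts (~30 percent) to compact-object mergers that produce kilonovae.
    """
    return "long (likely collapse)" if duration_s > 2.0 else "short (likely merger)"

# GRB 221009A's prompt emission lasted on the order of hundreds of seconds,
# placing it firmly in the long-burst class.
print(classify_grb(300.0))  # long (likely collapse)
print(classify_grb(0.5))    # short (likely merger)
```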

Three episodes in, the Fallout TV series absolutely nails it

Like the games, the show depicts a Vault Dweller making her way out into the Wasteland. (credit: Amazon)

Amazon has had a rocky history with big, geeky properties making their way onto Prime Video. The Wheel of Time wasn’t for everyone, and I have almost nothing good to say about The Lord of the Rings: The Rings of Power.

Fallout, the first season of which premiered this week, seems to break that bad streak. All the episodes are online now, but I’ve only watched three so far. I love it.

I’ve spent hundreds of hours playing the games that inspired it, so I can only speak to that experience; I don’t know how well it will work for people who never played the games. But as a video game adaptation, it’s up there with The Last of Us.

Man pleads guilty to stealing former coworker’s identity for 30 years

(credit: Malte Mueller | fStop)

A high-level Iowa hospital systems administrator, Matthew Kierans, has admitted to stealing a coworker's identity and posing as William Donald Woods for more than 30 years, The Register reported.

On top of using Woods' identity to commit crimes and rack up debt, Kierans' elaborate identity theft scheme led to Woods' incarceration after Kierans accused his victim of identity theft and Los Angeles authorities failed to detect which man was the true William Donald Woods. Kierans could face up to 32 years in prison, The Register reported, and must pay a $1.25 million fine.

According to a proposed plea agreement with the US Attorney's Office for the Northern District of Iowa, Kierans met Woods "in about 1988" when they worked together at a hot dog stand in New Mexico. "For the next three decades," Kierans used Woods' "identity in every aspect of his life," including when obtaining "employment, insurance, a social security number, driver's licenses, titles, loans, and credit," as well as when paying taxes. Kierans even got married and had a child using Woods' name.

Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods

Billie Eilish attends the 2024 Vanity Fair Oscar Party hosted by Radhika Jones at the Wallis Annenberg Center for the Performing Arts on March 10, 2024, in Beverly Hills, California. (credit: Getty Images)

On Tuesday, the Artist Rights Alliance (ARA) announced an open letter critical of AI signed by over 200 musical artists, including Pearl Jam, Nicki Minaj, Billie Eilish, Stevie Wonder, Elvis Costello, and the estate of Frank Sinatra. In the letter, the artists call on AI developers, technology companies, platforms, and digital music services to stop using AI to "infringe upon and devalue the rights of human artists." A tweet from the ARA added that AI poses an "existential threat" to their art.

Visual artists began protesting generative AI after the first mainstream AI image generators emerged in 2022. As generative AI research has since expanded into other forms of creative media, that protest has spread to professionals in other creative domains, such as writers, actors, filmmakers—and now musicians.

"When used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities, our music and our livelihoods," the open letter states. It alleges that some of the "biggest and most powerful" companies (unnamed in the letter) are using the work of artists without permission to train AI models, with the aim of replacing human artists with AI-created content.

Apple wouldn’t let Jon Stewart interview FTC Chair Lina Khan, TV host claims

The Daily Show host Jon Stewart's interview with FTC Chair Lina Khan. The conversation about Apple begins around 16:30 in the video.

Before the cancellation of The Problem with Jon Stewart on Apple TV+, Apple forbade the inclusion of Federal Trade Commission Chair Lina Khan as a guest and steered the show away from confronting issues related to artificial intelligence, according to Jon Stewart.

This isn't the first we've heard of this rift between Apple and Stewart. When the Apple TV+ show was canceled last October, reports circulated that he told his staff that creative differences over guests and topics were a factor in the decision.

The New York Times reported that both China and AI were sticking points between Apple and Stewart. Stewart confirmed the broad strokes of that narrative in a CBS Mornings interview after it was announced that he would return to The Daily Show.

Medicare forced to expand forms to fit 10-digit bills—a penny shy of $100M

By Beth Mole

An opened prescription bottle with pills spilling onto a background of US currency. (credit: Getty | YinYang)

In a disturbing sign of the times, Medicare this week implemented a change to its claims-processing system that adds two extra digits to money amounts, expanding the fields from eight digits to 10. The change now allows for billing and payment totals of up to $99,999,999.99, or a penny shy of $100 million.

In a notice released last month, the Centers for Medicare & Medicaid Services (CMS) explained the change, writing, "With the increase of Part B procedures/treatments exceeding the $999,999.99 limitation, CMS is implementing the expansion of display screens for monetary amount fields related to billing and payment within [the Fiscal Intermediary Shared System (FISS)] to accept and process up to 10 digits ($99,999,999.99)."

The FISS is the processing system used by hospitals and doctors' offices to process Medicare claims.
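
To make the change concrete, here is a minimal sketch (our illustration, not CMS code) of why $999,999.99 is the ceiling of an eight-digit money field and $99,999,999.99 the ceiling of a ten-digit one; the field width counts digits only, two of which are cents:

```python
from decimal import Decimal

def fits_field(amount: Decimal, digits: int) -> bool:
    """Check whether a dollar amount fits a fixed-width money field."""
    cents = int(amount * 100)  # exact, since Decimal avoids float rounding
    return 0 <= cents < 10 ** digits

print(fits_field(Decimal("999999.99"), 8))     # True: the old ceiling
print(fits_field(Decimal("1000000.00"), 8))    # False: overflows 8 digits
print(fits_field(Decimal("99999999.99"), 10))  # True: a penny shy of $100M
```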

OpenAI holds back wide release of voice-cloning tech due to misuse concerns

An illustration of text-to-speech and speech synthesis applications. (credit: Getty Images)

Voice synthesis has come a long way since 1978's Speak & Spell toy, which once wowed people with its state-of-the-art ability to read words aloud using an electronic voice. Now, using deep-learning AI models, software can create not only realistic-sounding voices, but also convincingly imitate existing voices using small samples of audio.

Along those lines, OpenAI just announced Voice Engine, a text-to-speech AI model for creating synthetic voices based on a 15-second segment of recorded audio. It has provided audio samples of the Voice Engine in action on its website.

Once a voice is cloned, a user can input text into the Voice Engine and get an AI-generated voice result. But OpenAI is not ready to widely release its technology yet. The company initially planned to launch a pilot program for developers to sign up for the Voice Engine API earlier this month. But after more consideration about ethical implications, the company decided to scale back its ambitions for now.

Eternal Sunshine of the Spotless Mind and the philosophy of self, identity, and memory

Eternal Sunshine of the Spotless Mind stars Jim Carrey in one of his most powerful dramatic roles. (credit: Focus Features)

Last week, the 2004 cult classic Eternal Sunshine of the Spotless Mind marked its 20th anniversary, prompting many people to revisit the surreal sci-fi psychological drama about two ex-lovers who erase their memories of each other—only to find themselves falling in love all over again. Eternal Sunshine was a box office success and earned almost universal praise upon its release. It's still a critical favorite today and remains one of star Jim Carrey's most powerful and emotionally resonant dramatic roles. What better time for a rewatch and in-depth discussion of the film's themes of memory, personal identity, love, and loss?

(Spoilers for the 2004 film below.)

Director Michel Gondry and co-writer Pierre Bismuth first came up with the concept for the film in 1998, based on a conversation Bismuth had with a female friend who, when he asked, said she would absolutely erase her boyfriend from her memory if she could. They brought on Charlie Kaufman to write the script, and the three men went on to win an Oscar for Best Original Screenplay for their efforts. The title alludes to a 1717 poem by Alexander Pope, "Eloisa to Abelard," inspired by the tragic love between medieval philosopher Peter Abelard and Héloïse d'Argenteuil and the former lovers' differing perspectives, exchanged in letters later in life, on what had happened between them.

“We’ve done our job”: Baldur’s Gate 3 devs call off DLC and step away from D&D

Sometimes your infernal-engine-powered heart just isn't in it. (credit: Larian Studios/Hasbro)

Swen Vincke, director of the colossal entity that is Baldur's Gate 3, is not leaving the door open to future expansions of that already fully packed game.

At this week's Game Developers Conference (GDC), Vincke made it clear during a talk and in interviews that Larian Studios is not going to make any major new content for Baldur's Gate 3 (BG3)—nor start work on Baldur's Gate 4, nor make anything, really, inside the framework of Dungeons & Dragons' Fifth Edition (5e).

Not that Vincke or his team are bitter. Their hearts just aren't in it. They had actually started work on BG3 downloadable content and gave some thought to Baldur's Gate 4, Vincke told IGN. "But we hadn’t really had closure on BG3 yet and just to jump forward on something new felt wrong." On top of that, the team had new ideas that didn't fit D&D 5e, which "is not an easy system to put into a video game," Vincke said.

The restored Star Trek Enterprise-D bridge goes on display in May

The Enterprise-D bridge recreation, seen in London in 2002. (credit: Peter Bischoff/Getty Images)

More than a decade has gone by since three Star Trek: The Next Generation fans first decided to restore the bridge from the Enterprise-D. Plans for the restored bridge morphed from opening it up for non-commercial uses like weddings or educational events into building a fully fledged museum, and now that museum is almost ready to open. Backers of the project on Kickstarter have been notified that Sci-Fi World Museum will open to them in Santa Monica, California, on May 27, with general admission beginning in June.

It's not actually the original set from TNG, as that was destroyed while filming Star Trek: Generations, when the saucer section crash-lands on Veridian III. But three replicas were made, overseen by Michael Okuda and Herman Zimmerman, the show's set designers. Two of those welcomed Trekkies at Star Trek: The Experience, an attraction in Las Vegas until it closed in 2008.

The third spent time in Hollywood, then traveled to Europe and Asia for Star Trek: World Tour before it ended up languishing in a warehouse in Long Beach. It's this third globe-trotting Enterprise-D bridge that—like the grit that gets an oyster to create a pearl—now finds a science-fiction museum accreted around it. Well, mostly—the chairs used by Riker, Troi, Data, and some other bits were salvaged from the Las Vegas exhibit.

Choose your side in a civil war with House of the Dragon’s dueling S2 trailers

This short teaser for S2 of HBO's House of the Dragon lets you choose between two full trailers.

It's been a long wait for the second season of HBO's House of the Dragon, in which House Targaryen descends into civil war over the heir to the Iron Throne. It's set to premiere in June, and HBO is ramping up its marketing with a rather clever twist: not one official trailer, but two, each presenting the perspective of one side in the bloody conflict. And we get to choose which trailer we'd like to view—although if you're like us, you'll elect to watch both.

(Spoilers for the first season below.)

As I've written previously, HBO's House of the Dragon debuted in 2022 with a solid, promising pilot episode, and the remainder of the season lived up to that initial promise. The series is set nearly 200 years before the events of Game of Thrones and chronicles the beginning of the end of House Targaryen's reign. The primary source material is Fire and Blood, a fictional history of the Targaryen kings written by George R.R. Martin. As book readers know, those events culminated in a civil war and the extinction of the dragons—at least until Daenerys Targaryen came along.

Lifesaving gene therapy for kids is world’s priciest drug at $4.25M

By Beth Mole

A mother with her twin 6-year-old boys who have metachromatic leukodystrophy, a genetic disease that leaves them unable to move. Photo taken on September 3, 2004. (credit: Getty | John Ewing/Portland Press Herald)

In a medical triumph, the US Food and Drug Administration on Monday approved a gene therapy that appears to trounce a rare, tragic disease that progressively steals children's ability to talk, move, and think, leading to a vegetative state and death. Many of those who begin to slip away in infancy die by age 5. But with the new therapy, 37 children in an initial trial were all still alive at age 6. Most could still talk, walk on their own, and perform normally on IQ tests, something unseen in untreated children. Some of the earliest children treated have now been followed for up to 12 years—and they continue to do well.

But the triumph turned bittersweet on Wednesday, when the company behind the therapy set the US price of the drug, called Lenmeldy, at $4.25 million, making it the most expensive drug in the world. That is $310,000 higher than what experts calculated to be the maximum fair price for the lifesaving treatment; the nonprofit Institute for Clinical and Economic Review (ICER) last October gave a range of $2.29 million to $3.94 million.

The price raises questions about whether state, federal, and private health insurance plans will be able to shoulder the costs. "Unless states have allocated appropriately for it, and looked at the drug pipeline, they may not be prepared for what could be significant cost spikes," Edwin Park, a research professor at the McCourt School of Public Health at Georgetown University, told CNN.

Darkness rises in an age of light in first trailer for Star Wars: The Acolyte

Amandla Stenberg stars as a former padawan turned dangerous warrior in Star Wars: The Acolyte.

A long time ago, in a galaxy far, far away, the Galactic Republic and its Jedi masters symbolized the epitome of enlightenment and peace. Then came the inevitable downfall and outbreak of war as the Sith, who embraced the Dark Side of the Force, came to power. Star Wars: The Acolyte is a forthcoming new series on Disney+ that will explore those final days of the Republic as the seeds of its destruction were sown—and the streaming platform just dropped the first trailer.

The eight-episode series was created by Leslye Headland, who co-created Russian Doll with Natasha Lyonne and Amy Poehler. It's set at the end of the High Republic Era, about a century before the events of The Phantom Menace. Apparently Headland rather cheekily pitched The Acolyte as "Frozen meets Kill Bill," which is an intriguing combination. She drew on wuxia martial arts films for inspiration, much like George Lucas was originally inspired by Westerns and the samurai films of Akira Kurosawa.

(Some spoilers for the prequel trilogy below.)

Thomas Stafford, who flew to the Moon and docked with Soyuz, dies at 93

Apollo commander Tom Stafford (left) with Soyuz commander Alexei Leonov during the Apollo-Soyuz mission in July 1975. (credit: NASA)

Former NASA astronaut Thomas Stafford, a three-star Air Force general known for a historic handshake in space with a Soviet cosmonaut nearly 50 years ago, died Monday in Florida. He was 93.

Stafford was perhaps the most accomplished astronaut of his era who never walked on the Moon. He flew in space four times, helping pilot the first rendezvous with another crewed spacecraft in orbit in 1966 and taking NASA's Apollo lunar landing craft on a final test run before Neil Armstrong and Buzz Aldrin set foot on the Moon in 1969.

By his own account, one of the greatest moments in Stafford's career came in 1975, when he commanded the final Apollo mission—not to the Moon but to low-Earth orbit—and linked up with a Russian Soyuz spacecraft carrying two Soviet cosmonauts. The Apollo-Soyuz Test Project (ASTP) planted the seeds for a decades-long partnership in space between the United States and Russia, culminating in the International Space Station, where US and Russian crews still work together despite a collapse in relations back on Earth.

Apple may hire Google to power new iPhone AI features using Gemini—report

(credit: Benj Edwards)

On Monday, Bloomberg reported that Apple is in talks to license Google's Gemini model to power AI features like Siri in a future iPhone software update coming later in 2024, according to people familiar with the situation. Apple has also reportedly conducted similar talks with ChatGPT maker OpenAI.

The potential integration of Google Gemini into iOS 18 could bring a range of new cloud-based (off-device) AI-powered features to Apple's smartphone, including image creation or essay writing based on simple prompts. However, the terms and branding of the agreement have not yet been finalized, and the implementation details remain unclear. The companies are unlikely to announce any deal until Apple's annual Worldwide Developers Conference in June.

Gemini could also bring new capabilities to Apple's widely criticized voice assistant, Siri, which trails newer AI assistants powered by large language models (LLMs) in understanding and responding to complex questions. Rumors of Apple's own internal frustration with Siri—and potential remedies—have been kicking around for some time. In January, 9to5Mac revealed that Apple had been conducting tests with a beta version of iOS 17.4 that used OpenAI's ChatGPT API to power Siri.

Report: Sony stops producing PSVR2 amid “surplus” of unsold units

PSVR2 (left) next to the original PSVR. (credit: Kyle Orland / Ars Technica)

It looks like Sony's PlayStation VR2 is not living up to the company's sales expectations just over a year after it first hit the market. Bloomberg reports that the PlayStation-maker has stopped producing new PSVR2 units as it tries to clear out a growing backlog of unsold inventory.

Bloomberg cites "people familiar with [Sony's] plans" in reporting that PSVR2 sales have "slowed progressively" since its February 2023 launch. Sony has produced "well over 2 million" units of the headset, compared to what tracking firm IDC estimates as just 1.69 million unit shipments to retailers through the end of last year. The discrepancy has caused a "surplus of assembled devices... throughout Sony’s supply chain," according to Bloomberg's sources.

IDC estimates a quarterly low of 325,000 PSVR2 units shipped in the usually hot holiday season, compared to a full 1.3 million estimated holiday shipments for Meta's then-new Quest 3 headset, which combined with other Quest products to account for over 3.7 million estimated sales for the full year.

Elon Musk’s xAI releases Grok source and weights, taunting OpenAI

An AI-generated image released by xAI during the open-weights launch of Grok-1. (credit: xAI)

On Sunday, Elon Musk's AI firm xAI released the base model weights and network architecture of Grok-1, a large language model designed to compete with the models that power OpenAI's ChatGPT. The open-weights release through GitHub and BitTorrent comes as Musk continues to criticize (and sue) rival OpenAI for not releasing its AI models in an open way.

Announced in November, Grok is an AI assistant similar to ChatGPT that is available to X Premium+ subscribers who pay $16 a month to the social media platform formerly known as Twitter. At its heart is a mixture-of-experts LLM called "Grok-1," clocking in at 314 billion parameters. As a reference, GPT-3 included 175 billion parameters. Parameter count is a rough measure of an AI model's complexity, reflecting its potential for generating more useful responses.

xAI is releasing the base model of Grok-1, which is not fine-tuned for a specific task, so it is likely not the same model that X uses to power its Grok AI assistant. "This is the raw base model checkpoint from the Grok-1 pre-training phase, which concluded in October 2023," writes xAI on its release page. "This means that the model is not fine-tuned for any specific application, such as dialogue," meaning it's not necessarily shipping as a chatbot. But it will do next-token prediction, meaning it will complete a sentence (or other text prompt) with its estimation of the most relevant string of text.
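
Next-token prediction, as described above, is just repeated sampling from the model's probability distribution over its vocabulary. A minimal sketch of the loop (the `model` callable is a hypothetical stand-in, not xAI's actual release code):

```python
import numpy as np

def complete(prompt_ids: list[int], model, max_new_tokens: int = 50,
             temperature: float = 0.8) -> list[int]:
    """Feed tokens in, sample the next token, append it, and repeat.

    `model(ids)` is assumed to return a NumPy array of logits (one score
    per vocabulary entry) for the next token.
    """
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = model(ids) / temperature
        logits -= logits.max()   # numerical stability before exponentiating
        probs = np.exp(logits)
        probs /= probs.sum()     # softmax: scores -> probabilities
        ids.append(int(np.random.choice(len(probs), p=probs)))
    return ids
```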

Bill Skarsgård takes revenge from beyond the grave in The Crow trailer

Bill Skarsgård takes on the role of Eric Draven in the Lionsgate reboot of The Crow.

The 1994 cult classic film The Crow turns 30 this spring, so it's as good a time as any to drop the first trailer for the long-in-development reboot directed by Rupert Sanders (Snow White and the Huntsman, Ghost in the Shell). Bill Skarsgård takes on the starring role made famous by the late Brandon Lee.

(Spoilers for the original 1994 film below.)

Based on a 1989 limited comic series by James O'Barr, The Crow was directed by Alex Proyas. The film starred Brandon Lee as Eric Draven, a rock musician in crime-ridden Detroit. He and his fiancée, Shelly Webster (Sofia Shinas), are brutally murdered on Devil's Night by a gang of thugs on the orders of a crime boss named Top Dollar (Michael Wincott). A year later, Eric is resurrected, dons black-and-white face paint, and proceeds to take his bloody revenge before returning to his grave. Alas, Lee was accidentally killed by a prop gun during the final days of shooting; the film was completed with the help of Lee's stunt double (Chad Stahelski, who launched the John Wick franchise) and some clever special effects.

An EV that charges 30% faster? Volvo and Breathe think their tech can do it

Volvo's electric powertrains are going to get a bit smarter with Breathe's new real-time battery-management system. (credit: Volvo)

Would you like an electric vehicle that can charge up to 30 percent faster than the current breed? If so, you're not alone—Volvo Cars thinks that's a desirable outcome, too, which is why the carmaker has invested in and partnered with a British startup called Breathe Battery Technologies. Consequently, Volvo will be the first automaker to add Breathe's new battery-management technology to its EVs, although before too long you should see Breathe's tech show up in other EVs, as well as in consumer tech devices.

A spinoff from Imperial College London, Breathe wants to add some extra brainpower to battery management.

"The frustration that everyone feels is that cell manufacturers brute force and empirically test batteries until they die," explained Ian Campbell, CEO of Breathe. "They ship the data sheet alongside those batteries that has some numbers baked in, that says "control it according to this A4 piece of paper," and that significantly underutilizes the complex electrochemistry and materials in the system that they built and shipped."

Study: Conflicting values for Hubble Constant not due to measurement error

This image of NGC 5468, a galaxy located about 130 million light-years from Earth, combines data from the Hubble and James Webb space telescopes. (credit: NASA/ESA/CSA/STScI/A. Riess (JHU))

Astronomers have made new measurements of the Hubble Constant, a measure of how quickly the Universe is expanding, by combining data from the Hubble Space Telescope and the James Webb Space Telescope. Their results confirmed the accuracy of Hubble's earlier measurement of the Constant's value, according to their recent paper published in The Astrophysical Journal Letters, with implications for a long-standing discrepancy in values obtained by different observational methods known as the "Hubble tension."

There was a time when scientists believed the Universe was static, but that changed with Albert Einstein's general theory of relativity. Alexander Friedmann published a set of equations in 1922 showing that the Universe might actually be expanding, with Georges Lemaître later making an independent derivation to arrive at that same conclusion. Edwin Hubble confirmed this expansion with observational data in 1929. Prior to this, Einstein had been trying to modify general relativity by adding a cosmological constant in order to get a static universe from his theory; after Hubble's discovery, legend has it, he referred to that effort as his biggest blunder.

As previously reported, the Hubble Constant is a measure of the Universe's expansion expressed in units of kilometers per second per megaparsec. So, each second, every megaparsec of the Universe expands by a certain number of kilometers. Another way to think of this is in terms of a relatively stationary object a megaparsec away: Each second, it gets a number of kilometers more distant.
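
A quick worked example of those units, using a value of roughly 73 km/s/Mpc from Cepheid-calibrated measurements (the exact number is the crux of the Hubble tension):

```python
H0 = 73.0             # km/s per megaparsec, approximate local-measurement value
distance_mpc = 100.0  # a galaxy 100 megaparsecs away

recession_speed = H0 * distance_mpc   # km/s
print(f"{recession_speed:.0f} km/s")  # 7300 km/s: every second, the galaxy
                                      # recedes another ~7,300 km
```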

Image-scraping Midjourney bans rival AI firm for scraping images

A burglar with a flashlight and papers in a business office—exactly like scraping files from Discord. (credit: Getty Images)

On Wednesday, Midjourney banned all employees from image synthesis rival Stability AI from its service indefinitely after it detected "botnet-like" activity suspected to be a Stability employee attempting to scrape prompt and image pairs in bulk. Midjourney advocate Nick St. Pierre tweeted about the announcement, which came via Midjourney's official Discord channel.

Prompts are the written instructions (like "a cat in a car holding a can of beer") used by generative AI models such as Midjourney and Stability AI's Stable Diffusion 3 (SD3) to synthesize images. Having prompt and image pairs could potentially help with the training or fine-tuning of a rival AI image generator model.

Bot activity that took place around midnight on March 2 caused a 24-hour outage for the commercial image generator service. Midjourney linked several paid accounts to a Stability AI data-team employee trying to "grab prompt and image pairs" and then decided to ban all Stability AI employees from the service indefinitely. It also announced a new policy: "aggressive automation or taking down the service results in banning all employees of the responsible company."

Op-ed: Charges against journalist Tim Burke are a hack job

(credit: natasaadzic/Getty)

Caitlin Vogus is the deputy director of advocacy at Freedom of the Press Foundation and a First Amendment lawyer. Jennifer Stisa Granick is the surveillance and cybersecurity counsel with the ACLU’s Speech, Privacy, and Technology Project. The opinions in this piece do not necessarily reflect the views of Ars Technica.

Imagine a journalist finds a folder on a park bench, opens it, and sees a telephone number inside. She dials the number. A famous rapper answers and spews a racist rant. If no one gave her permission to open the folder and the rapper’s telephone number was unlisted, should the reporter go to jail for publishing what she heard?

If that sounds ridiculous, it’s because it is. And yet, add in a computer and the Internet, and that’s basically what a newly unsealed federal indictment accuses Florida journalist Tim Burke of doing when he found and disseminated outtakes of Tucker Carlson’s Fox News interview with Ye, the artist formerly known as Kanye West, in which Ye goes on the first of many antisemitic diatribes.

Matrix multiplication advancement could lead to faster, more efficient AI models

When you do math on a computer, you fly through a numerical tunnel like this—figuratively, of course. (credit: Getty Images)

Computer scientists have discovered a new way to multiply large matrices faster than ever before by eliminating a previously unknown inefficiency, reports Quanta Magazine. This could eventually accelerate AI models like ChatGPT, which rely heavily on matrix multiplication to function. The findings, presented in two recent papers, have led to what is reported to be the biggest improvement in matrix multiplication efficiency in over a decade.

Multiplying two rectangular number arrays, known as matrix multiplication, plays a crucial role in today's AI models, including speech and image recognition, chatbots from every major vendor, AI image generators, and video synthesis models like Sora. Beyond AI, matrix math is so important to modern computing (think image processing and data compression) that even slight gains in efficiency could lead to computational and power savings.

Graphics processing units (GPUs) excel at handling matrix multiplication tasks because of their ability to process many calculations at once. They break large matrix problems down into smaller tiles and solve those tiles concurrently.
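
The "smaller tiles" idea is classic blocked (tiled) matrix multiplication. A plain-Python sketch of the tiling follows; real GPU kernels compute many tiles in parallel, and NumPy's "@" operator calls optimized BLAS routines instead:

```python
import numpy as np

def blocked_matmul(A: np.ndarray, B: np.ndarray, tile: int = 64) -> np.ndarray:
    """Multiply A (m x k) by B (k x n) one tile at a time.

    Each output tile accumulates products of A-tiles and B-tiles: exactly
    the decomposition GPUs exploit by working on many tiles concurrently.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A, B = np.random.rand(128, 256), np.random.rand(256, 64)
assert np.allclose(blocked_matmul(A, B), A @ B)
```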

Devs left with tough choices as Warner Bros. ends all Adult Swim Games downloads

A plucky, likable creature under the looming threat of consumption by an interconnected menacing force of nature in one of Adult Swim Games' titles. (credit: Adult Swim Games)

Warner Bros. Discovery appears set to pull at least 16 titles published by its Adult Swim Games subsidiary from game marketplaces, and it has told the affected developers that it will neither transfer the games back to them nor offer other means of selling them in the future.

Ars reported Wednesday on the plight of Small Radios Big Televisions, a Steam and PlayStation game made by a solo developer who received a notice from Warner Bros. Discovery (WBD) that it was "retiring" his game within 60 days.

In a comment on that Ars post, Matt Kain, developer of Adult Swim Games' Fist Puncher, noted that they had received the same "retired" notice from WBD. "When we requested that Warner Bros simply transfer the game over to our studio's Steam publisher account so that the game could stay active, they said no. The transfer process literally takes a minute to initiate (look up 'Transferring Applications' in the Steamworks documentation), but their rep claimed they have simply made the universal decision not to transfer the games to the original creators," Kain wrote.

Anthropic’s Claude 3 causes stir by seeming to realize when it was being tested

A 3D rendering of a toy robot with a light bulb over its head in front of a brick wall. (credit: Getty Images)

On Monday, Anthropic prompt engineer Alex Albert caused a small stir in the AI community when he tweeted about a scenario related to Claude 3 Opus, the largest version of a new large language model launched on Monday. Albert shared a story from internal testing of Opus where the model seemingly demonstrated a type of "metacognition" or self-awareness during a "needle-in-the-haystack" evaluation, leading to both curiosity and skepticism online.

Metacognition in AI refers to the ability of an AI model to monitor or regulate its own internal processes. It's similar to a form of self-awareness, but calling it that is usually seen as too anthropomorphizing, since there is no "self" in this case. Machine-learning experts do not think that current AI models possess a form of self-awareness like humans. Instead, the models produce humanlike output, and that sometimes triggers a perception of self-awareness that seems to imply a deeper form of intelligence behind the curtain.

In the now-viral tweet, Albert described a test to measure Claude's recall ability. It's a relatively standard test in large language model (LLM) testing that involves inserting a target sentence (the "needle") into a large block of text or documents (the "haystack") and asking if the AI model can find the needle. Researchers do this test to see if the large language model can accurately pull information from a very large processing memory (called a context window), which in this case is about 200,000 tokens (fragments of words).
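
A minimal sketch of how such an evaluation can be constructed (our illustration with a made-up needle, not Anthropic's actual test harness):

```python
import random

def build_haystack(filler_docs: list[str], needle: str, seed: int = 0) -> str:
    """Hide one target sentence at a random position inside a pile of filler."""
    rng = random.Random(seed)
    docs = filler_docs[:]
    docs.insert(rng.randrange(len(docs) + 1), needle)
    return "\n\n".join(docs)

needle = "The hidden fact: the best pizza topping is pineapple."
haystack = build_haystack(["Filler paragraph about something else."] * 1000, needle)
prompt = haystack + "\n\nQuestion: What does the hidden fact say about pizza?"
# The model passes if its answer recovers the needle from the huge context.
```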

The AI wars heat up with Claude 3, claimed to have “near-human” abilities

The Anthropic Claude 3 logo. (credit: Anthropic)

On Monday, Anthropic released Claude 3, a family of three AI language models similar to those that power ChatGPT. Anthropic claims the models set new industry benchmarks across a range of cognitive tasks, even approaching "near-human" capability in some cases. It's available now through Anthropic's website, with the most powerful model being subscription-only. It's also available via API for developers.

Claude 3's three models represent increasing complexity and parameter count: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. Sonnet powers the Claude.ai chatbot now for free with an email sign-in. But as mentioned above, Opus is only available through Anthropic's web chat interface if you pay $20 a month for "Claude Pro," a subscription service offered through the Anthropic website. All three feature a 200,000-token context window. (The context window is the number of tokens—fragments of a word—that an AI language model can process at once.)
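
As a rough rule of thumb (our approximation, not Anthropic's tokenizer), an English token averages around four characters, so you can ballpark whether a document fits the window:

```python
def rough_token_count(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate; real tokenizers vary by language and content."""
    return int(len(text) / chars_per_token)

book = "word " * 150_000                   # ~750,000 characters of filler text
print(rough_token_count(book))             # ~187,500 tokens
print(rough_token_count(book) <= 200_000)  # True: fits a 200,000-token window
```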

We covered the launch of Claude in March 2023 and Claude 2 in July that same year. Each time, Anthropic fell slightly behind OpenAI's best models in capability while surpassing them in terms of context window length. With Claude 3, Anthropic has perhaps finally caught up with OpenAI's released models in terms of performance, although there is no consensus among experts yet—and the presentation of AI benchmarks is notoriously prone to cherry-picking.

Cops called after parents get tricked by AI-generated images of Wonka-like event

A photo of "Willy's Chocolate Experience" (inset), which did not match AI-generated promises, shown in the background. (credit: Stuart Sinclair)

On Saturday, event organizers shut down a Glasgow-based "Willy's Chocolate Experience" after customers complained that the unofficial Wonka-inspired event, which took place in a sparsely decorated venue, did not match the lush AI-generated images listed on its official website (archive here). According to Sky News, police were called to the event, and "advice was given."

"What an absolute shambles of an event," wrote Stuart Sinclar on Facebook after paying 35 pounds per ticket for himself and his kids. "Took 2 minutes to get through to then see a queue of people surrounding the guy running it complaining ... The kids received 2 jelly babies and a quarter of a can of Barrs limeade."

The Willy's Chocolate Experience website, which promises "a journey filled with wondrous creations and enchanting surprises at every turn," features five AI-generated images (likely created with OpenAI's DALL-E 3) that evoke a candy-filled fantasy wonderland inspired by the Willy Wonka universe and the recent Wonka film. But in reality, Sinclair was met with a nearly empty location with a few underwhelming decorations and a tiny bouncy castle. In one photo shared by Sinclair, a rainbow arch leads to a single yellow gummy bear and gum drop sitting on a bare concrete floor.

Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora

Tyler Perry in 2022. (credit: Getty Images)

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI's recently announced AI video generator Sora can do.

"I have been watching AI very closely," Perry said in the interview. "I was in the middle of, and have been planning for the last four years... an $800 million expansion at the studio, which would’ve increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I’m seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me."

OpenAI, the company behind ChatGPT, revealed a preview of Sora's capabilities last week. Sora is a text-to-video synthesis model, and it uses a neural network—previously trained on video examples—that can take written descriptions of a scene and turn them into high-definition video clips up to 60 seconds long. Sora caused shock in the tech world because it appeared to surpass other AI video generators in capability dramatically. It seems that a similar shock also rippled into adjacent professional fields. "Being told that it can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing," Perry said in the interview.

Stability announces Stable Diffusion 3, a next-gen AI image generator

Stable Diffusion 3 generation with the prompt: studio photograph closeup of a chameleon over a black background. (credit: Stability AI)

On Thursday, Stability AI announced Stable Diffusion 3, an open-weights next-generation image-synthesis model. Like its predecessors, it reportedly generates detailed, multi-subject images, with improved quality and accuracy in rendering text. The brief announcement was not accompanied by a public demo, but Stability is opening a waitlist today for those who would like to try it.

Stability says that its Stable Diffusion 3 family of models (which take text descriptions called "prompts" and turn them into matching images) ranges in size from 800 million to 8 billion parameters. That size range lets different versions of the model run locally on a variety of devices, from smartphones to servers. Parameter count roughly corresponds to model capability in terms of how much detail it can generate. Larger models also require more VRAM on GPU accelerators to run.
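
A back-of-the-envelope sketch of why parameter count drives VRAM requirements, counting weights only and ignoring activations (the byte sizes are standard numeric precisions, not Stability's published requirements):

```python
def weight_memory_gb(params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the weights at a given precision."""
    return params * bytes_per_param / 1024**3

for params, label in [(800e6, "smallest SD3"), (8e9, "largest SD3")]:
    # fp16/bf16 uses 2 bytes per parameter; fp32 would double these numbers.
    print(f"{label}: ~{weight_memory_gb(params, 2):.1f} GB of VRAM for weights")
# smallest SD3: ~1.5 GB; largest SD3: ~14.9 GB
```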

Since 2022, we've seen Stability launch a progression of AI image-generation models: Stable Diffusion 1.4, 1.5, 2.0, 2.1, XL, XL Turbo, and now 3. Stability has made a name for itself by providing a more open alternative to proprietary image-synthesis models like OpenAI's DALL-E 3, though not without controversy over the use of copyrighted training data, bias, and the potential for abuse. (This has led to lawsuits that remain unresolved.) Stable Diffusion models have been open-weights and source-available, which means they can be run locally and fine-tuned to change their outputs.

Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images

Generations from Gemini AI from the prompt, "Paint me a historically accurate depiction of a medieval British king." (credit: @stratejake / X)

On Thursday morning, Google announced it was pausing its Gemini AI image-synthesis feature in response to criticism that the tool was inserting diversity into its images in a historically inaccurate way, such as depicting multi-racial Nazis and medieval British kings with unlikely nationalities.

"We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon," wrote Google in a statement Thursday morning.

As more people on X began to pile on Google for being "woke," the Gemini generations inspired conspiracy theories that Google was purposely discriminating against white people and offering revisionist history to serve political goals. Beyond that angle, as The Verge points out, some of these inaccurate depictions "were essentially erasing the history of race and gender discrimination."

Will Smith parodies viral AI-generated video by actually eating spaghetti

The real Will Smith eating spaghetti, parodying an AI-generated video from 2023. (credit: Will Smith / Getty Images / Benj Edwards)

On Monday, Will Smith posted a video on his official Instagram feed that parodied an AI-generated video of the actor eating spaghetti that went viral last year. With the recent announcement of OpenAI's Sora video synthesis model, many people have noted the dramatic jump in AI-video quality over the past year compared to the infamous spaghetti video. Smith's new video plays on that comparison by showing the actual actor eating spaghetti in a comical fashion and claiming that it is AI-generated.

Captioned "This is getting out of hand!", the Instagram video uses a split screen layout to show the original AI-generated spaghetti video created by a Reddit user named "chaindrop" in March 2023 on the top, labeled with the subtitle "AI Video 1 year ago." Below that, in a box titled "AI Video Now," the real Smith shows 11 video segments of himself actually eating spaghetti by slurping it up while shaking his head, pouring it into his mouth with his fingers, and even nibbling on a friend's hair. 2006's Snap Yo Fingers by Lil Jon plays in the background.

In the Instagram comments section, some people expressed confusion about the new (non-AI) video, saying, "I'm still in doubt if second video was also made by AI or not." In a reply, someone else wrote, "Boomers are gonna loose [sic] this one. Second one is clearly him making a joke but I wouldn’t doubt it in a couple months time it will get like that."

Reddit sells training data to unnamed AI company ahead of IPO

(credit: Reddit)

On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site's content, according to people familiar with the matter. The move comes as the social media platform nears the introduction of its initial public offering (IPO), which could happen as soon as next month.

Reddit initially revealed the deal, which is reportedly worth $60 million a year, to potential investors of its anticipated IPO earlier in 2024, Bloomberg said. The Bloomberg source speculates that the contract could serve as a model for future agreements with other AI companies.

After an era in which AI companies used training data without expressly seeking any rightsholder's permission, some tech firms have more recently begun entering deals under which some of the content used to train AI models similar to GPT-4 (which runs the paid version of ChatGPT) comes under license. In December, for example, OpenAI signed an agreement with German publisher Axel Springer (publisher of Politico and Business Insider) for access to its articles. Previously, OpenAI struck deals with other organizations, including the Associated Press, and it is reportedly in licensing talks with CNN, Fox, and Time, among others.

OpenAI collapses media reality with Sora, a photorealistic AI video generator

Snapshots from three videos generated using OpenAI's Sora.

On Thursday, OpenAI announced Sora, a text-to-video AI model that can generate 60-second-long photorealistic HD video from written descriptions. While it's only a research preview that we have not tested, it reportedly creates synthetic video (but not audio yet) at a fidelity and consistency greater than any text-to-video model available at the moment. It's also freaking people out.

"It was nice knowing you all. Please tell your grandchildren about my videos and the lengths we went to to actually record them," wrote Wall Street Journal tech reporter Joanna Stern on X.

"This could be the 'holy shit' moment of AI," wrote Tom Warren of The Verge.

Canada declares Flipper Zero public enemy No. 1 in car-theft crackdown

By Dan Goodin

A Flipper Zero device. (credit: https://flipperzero.one/)

Canadian Prime Minister Justin Trudeau has identified an unlikely public enemy No. 1 in his new crackdown on car theft: the Flipper Zero, a $200 piece of open source hardware used to capture, analyze and interact with simple radio communications.

On Thursday, the Innovation, Science and Economic Development Canada agency said it will “pursue all avenues to ban devices used to steal vehicles by copying the wireless signals for remote keyless entry, such as the Flipper Zero, which would allow for the removal of those devices from the Canadian marketplace through collaboration with law enforcement agencies.” A social media post by François-Philippe Champagne, the minister of that agency, said that as part of the push “we are banning the importation, sale and use of consumer hacking devices, like flippers, used to commit these crimes.”

Report: Sam Altman seeking trillions for AI chip fabrication from UAE, others

OpenAI Chief Executive Officer Sam Altman walks on the House side of the US Capitol on January 11, 2024, in Washington, DC. (credit: Kent Nishimura/Getty Images)

On Thursday, The Wall Street Journal reported that OpenAI CEO Sam Altman is in talks with investors to raise between $5 trillion and $7 trillion for AI chip manufacturing, according to people familiar with the matter. The funding seeks to address the scarcity of graphics processing units (GPUs) crucial for training and running large language models like those that power ChatGPT, Microsoft Copilot, and Google Gemini.

The figure reflects the enormous capital required to spin up new semiconductor manufacturing capacity. "As part of the talks, Altman is pitching a partnership between OpenAI, various investors, chip makers and power providers, which together would put up money to build chip foundries that would then be run by existing chip makers," writes the Wall Street Journal in its report. "OpenAI would agree to be a significant customer of the new factories."

Google debuts more powerful “Ultra 1.0” AI model in rebranded “Gemini” chatbot

A promotional image for Google Gemini AI products. (credit: Google)

On Thursday, Google announced that its ChatGPT-like AI assistant, previously called Bard, is now called "Gemini," renamed to reflect the underlying AI language model Google launched in December. Additionally, Google has launched its most capable AI model, Ultra 1.0, for the first time as part of "Gemini Advanced," a $20/month subscription feature.

Google's naming scheme, and how to access the new model, takes some untangling. To tease out the nomenclature, think of an AI app like Google Bard as a car brand that can swap out different engines under the hood. It's an AI assistant—an application of an AI model with a convenient interface—that can use different AI "engines" to work, as the sketch below illustrates.
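
In software terms, the analogy is an interface with swappable backends. A hedged sketch (the class and method names are ours, purely illustrative):

```python
from typing import Protocol

class LanguageModel(Protocol):
    """The 'engine': anything that can turn a prompt into a response."""
    def generate(self, prompt: str) -> str: ...

class Assistant:
    """The 'car': a chat front end whose model engine can be swapped out."""
    def __init__(self, engine: LanguageModel) -> None:
        self.engine = engine

    def ask(self, prompt: str) -> str:
        return self.engine.generate(prompt)

# Bard's history in this analogy: the same app with successively better
# engines, e.g. Assistant(LaMDA()) -> Assistant(PaLM2()) -> Assistant(GeminiPro())
```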

When Bard launched in March 2023, it used a large language model called LaMDA as its engine. In May 2023, Google upgraded Bard to utilize its PaLM 2 language model. In December, Google upgraded Bard yet again to use its Gemini Pro AI model. It's important to note that when Google first announced Gemini (the AI model), the company said it would ship in three sizes that roughly reflected its processing capability: Nano, Pro, and Ultra (with larger being "better"). Until now, Pro was the most capable version of the Gemini model publicly available.

Don’t wear Apple Vision Pro while piloting a self-driving Tesla, officials warn

A mock-up of a person in a car wearing the Apple Vision Pro headset. (credit: Getty Images / Apple / Benj Edwards)

The recent launch of the Apple Vision Pro mixed-reality headset has inspired a number of social media stunts, including a viral video of someone wearing the headset while piloting a Tesla Cybertruck set to self-driving mode. On Monday, this prompted US Secretary of Transportation Pete Buttigieg to issue a warning on social media, as reported by the BBC and The New York Times.

"Reminder—ALL advanced driver assistance systems available today require the human driver to be in control and fully engaged in the driving task at all times," Buttigieg wrote on the social media platform X.

The Apple Vision Pro's mixed-reality features combine elements of stereoscopic VR with camera passthrough so users can see the world around them while they use the device. This has led to people experimenting with wearing the goggles while walking around in public and filming the results for TikTok and YouTube.

Meta will label AI-generated content from OpenAI and Google on Facebook, Instagram

The Meta logo superimposed over a pixelated face in the background. (credit: Meta / Getty Images)

On Tuesday, Meta announced its plan to start labeling AI-generated images from other companies like OpenAI and Google, as reported by Reuters. The move aims to enhance transparency on platforms such as Facebook, Instagram, and Threads by informing users when the content they see is digitally synthesized media rather than an authentic photo or video.

Coming during a US election year that is expected to be contentious, Meta's decision is part of a larger effort within the tech industry to establish standards for labeling content created using generative AI models, which are capable of producing fake but realistic audio, images, and video from written prompts. (Even non-AI-generated fake content can potentially confuse social media users, as we covered yesterday.)

Meta President of Global Affairs Nick Clegg made the announcement in a blog post on Meta's website. "We’re taking this approach through the next year, during which a number of important elections are taking place around the world," wrote Clegg. "During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve."

Read 8 remaining paragraphs | Comments

Mathematicians finally solved Feynman’s “reverse sprinkler” problem

Light-scattering microparticles reveal the flow pattern for the reverse (sucking) mode of a sprinkler, showing vortices and complex flow patterns forming inside the central chamber. Credit: K. Wang et al., 2024

A typical lawn sprinkler features various nozzles arranged at angles on a rotating wheel; when water is pumped in, they release jets that cause the wheel to rotate. But what would happen if the water were sucked into the sprinkler instead? In which direction would the wheel turn then, or would it even turn at all? That's the essence of the "reverse sprinkler" problem that physicists, Richard Feynman among them, have grappled with since the 1940s. Now, applied mathematicians at New York University think they've cracked the conundrum, per a recent paper published in the journal Physical Review Letters—and the answer challenges conventional wisdom on the matter.

“Our study solves the problem by combining precision lab experiments with mathematical modeling that explains how a reverse sprinkler operates,” said co-author Leif Ristroph of NYU’s Courant Institute. “We found that the reverse sprinkler spins in the ‘reverse’ or opposite direction when taking in water as it does when ejecting it, and the cause is subtle and surprising.”

Ristroph's lab frequently addresses these kinds of colorful real-world puzzles. For instance, back in 2018, Ristroph and colleagues fine-tuned the recipe for the perfect bubble based on experiments with soapy thin films. (You want a circular wand with a 1.5-inch perimeter, and you should gently blow at a consistent 6.9 cm/s.) In 2021, the Ristroph lab looked into the formation processes underlying so-called "stone forests" common in certain regions of China and Madagascar. These pointed rock formations, like the famed Stone Forest in China's Yunnan Province, are the result of solids dissolving into liquids in the presence of gravity, which produces natural convective flows.

Read 10 remaining paragraphs | Comments

SIM-swapping ring stole $400M in crypto from a US company, officials allege

Enlarge (credit: Wong Yu Liang | Moment)

The US may have uncovered the nation's largest "SIM swap" scheme yet, charging a Chicago man and co-conspirators with allegedly stealing $400 million in cryptocurrency by targeting more than 50 victims, including one company, across more than a dozen states.

A recent indictment alleged that Robert Powell—using online monikers "R," "R$," and "ElSwapo1"—was the "head of a SIM swapping group" called the “Powell SIM Swapping Crew.” He allegedly conspired with Indiana man Carter Rohn (aka "Carti" and "Punslayer") and Colorado woman Emily Hernandez (allegedly aka "Em") to gain access to victims' devices and "carry out fraudulent SIM swap attacks" between March 2021 and April 2023.

SIM-swap attacks occur when someone fraudulently induces a wireless carrier to "reassign a cell phone number from the legitimate subscriber or user’s SIM card to a SIM card controlled by a criminal actor," the indictment said. Once the swap occurs, the bad actor can defeat multi-factor authentication protections and access online accounts to steal data or money.
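A toy model shows why the swap is so damaging: an SMS one-time code is delivered to whoever currently controls the phone number, not to the legitimate subscriber's device. The sketch below is a deliberately simplified, hypothetical illustration written for this article, not code from the indictment or any real carrier:

```python
# Hypothetical, simplified model of why a SIM swap defeats SMS-based
# multi-factor authentication: the one-time code follows the phone number,
# and the carrier's number-to-SIM mapping decides who receives it.
import secrets

carrier_db = {"+1-555-0100": "SIM-victim"}  # phone number -> active SIM

def sim_swap(number: str, new_sim: str) -> None:
    """The fraudulent step: the carrier reassigns the number to a new SIM."""
    carrier_db[number] = new_sim

def send_otp(number: str) -> tuple[str, str]:
    """A bank 'sends' a code via SMS; it lands on whichever SIM holds the number."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    return carrier_db[number], code  # (receiving SIM, one-time code)

print(send_otp("+1-555-0100"))   # before the swap, the victim's SIM gets the code
sim_swap("+1-555-0100", "SIM-attacker")
print(send_otp("+1-555-0100"))   # after it, the attacker's SIM does: MFA defeated
```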

Read 14 remaining paragraphs | Comments

OpenAI and Common Sense Media partner to protect teens from AI harms and misuse

Boy in Living Room Wearing Robot Mask

Enlarge (credit: Getty Images)

On Monday, OpenAI announced a partnership with the nonprofit Common Sense Media to create AI guidelines and educational materials targeted at parents, educators, and teens. It includes the curation of family-friendly GPTs in OpenAI's GPT store. The collaboration aims to address concerns about the impacts of AI on children and teenagers.

Known for its reviews of films and TV shows aimed at parents seeking appropriate media for their kids to watch, Common Sense Media recently branched out into AI and has been reviewing AI assistants on its site.

"AI isn’t going anywhere, so it’s important that we help kids understand how to use it responsibly," Common Sense Media wrote on X. "That’s why we’ve partnered with @OpenAI to help teens and families safely harness the potential of AI."

Read 8 remaining paragraphs | Comments

Masters of the Air: Imagine a bunch of people throwing up, including me

Photograph showing two stars of the show standing in front of a B-17

Enlarge / Our two main heroes so far, Buck and Bucky. Or possibly Bucky and Buck. I forget which is which. (credit: Apple)

I'm writing this article under duress because it's not going to create anything new or try to make the world a better place—instead, I'm going to do the thing where a critic tears down the work of others rather than offering up their own creation to balance the scales. So here we go: I didn't like the first two episodes of Masters of the Air, and I don't think I'll be back for episode three.

The feeling that the show might not turn out to be what I was hoping for has been growing in my dark heart since I caught the first trailer a month or so ago—it looked both distressingly digital and also maunderingly maudlin, with Austin Butler's color-graded babyface peering out through a hazy, desaturated cloud of cigarette smoke and 1940s World War II pilot tropes. Unfortunately, the show at release made me feel exactly the way I feared it might—rather than recapturing the magic of Band of Brothers or the horror of The Pacific, Masters so far has the depth and maturity of a Call of Duty cutscene.

World War Blech

After two episodes, I feel I've seen everything Masters has to offer: a dead-serious window into the world of B-17 Flying Fortress pilots, wholly lacking any irony or sense of self-awareness. There's no winking and nodding to the audience, no joking around, no historic interviews with salt-and-pepper veterans to humanize the cast. The only thing allowed here is wall-to-wall jingoistic patriotism—the kind where there's no room for anything except God, the United States of America, and bombing the crap out of the enemy. And pining wistfully for that special girl waiting at home.

Read 10 remaining paragraphs | Comments

Dungeons & Dragons turns 50 this year, and there’s a lot planned for it

Enlarge / The three rulebooks for "fantastic medieval wargames" that started it all, released at some point in late January 1974, as seen in Dungeons & Dragons Art & Arcana: A Visual History. (credit: Wizards of the Coast/Ten Speed Press)

"We have just fromed [sic] Tactical Studies Rules, and we wish to let the wargaming community know that a new line of miniature rules is available."

With this letter to wargaming zine publisher Jim Lurvey, Gary Gygax, one of the founders of what would become TSR, announced that a January 1974 release for Dungeons & Dragons was forthcoming. This, plus other evidence compiled by Jon Peterson (as pointed out by the Grognardia blog), points to the last Sunday of January 1974 as the best date for the "anniversary" of D&D. The first sale was in "late January 1974," Gygax later wrote, and on that last Sunday of the month, he invited potential customers to drop by his house in the afternoon to try the game out.

You could argue over whether a final draft, printing, announcement, sale, or first session counts as the true "birth" of D&D, but we have to go with something, and Peterson's reasoning seems fairly sound. Gygax's memory, plus a documented session at his own house, make as good a peg as any for celebrating this thing that has shaped a seemingly infinite number of other things.

Read 5 remaining paragraphs | Comments

I abandoned OpenLiteSpeed and went back to good ol’ Nginx

Ish is on fire, yo.

Enlarge / Ish is on fire, yo. (credit: Tim Macpherson / Getty Images)

Since 2017, in what spare time I have (ha!), I help my colleague Eric Berger host his Houston-area weather forecasting site, Space City Weather. It’s an interesting hosting challenge—on a typical day, SCW does maybe 20,000–30,000 page views to 10,000–15,000 unique visitors, which is a relatively easy load to handle with minimal work. But when severe weather events happen—especially in the summer, when hurricanes lurk in the Gulf of Mexico—the site’s traffic can spike to more than a million page views in 12 hours. That level of traffic requires a bit more prep to handle.
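Some quick back-of-the-envelope arithmetic shows how large that swing is (averages only; real traffic is far burstier, so peak load is worse than these numbers suggest):

```python
# Rough comparison of the two traffic regimes described above
SECONDS_PER_DAY = 86_400

normal_rate = 30_000 / SECONDS_PER_DAY   # ~0.35 page views/second on a calm day
spike_rate = 1_000_000 / (12 * 3_600)    # ~23 page views/second during an event

print(f"normal: {normal_rate:.2f} views/s, severe weather: {spike_rate:.1f} views/s "
      f"(roughly {spike_rate / normal_rate:.0f}x)")
```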

Hey, it's Space City Weather! (credit: Lee Hutchinson)

For a very long time, I ran SCW on a backend stack made up of HAProxy for SSL termination, Varnish Cache for on-box caching, and Nginx for the actual web server application—all fronted by Cloudflare to absorb the majority of the load. (I wrote about this setup at length on Ars a few years ago for folks who want some more in-depth details.) This stack was fully battle-tested and ready to devour whatever traffic we threw at it, but it was also annoyingly complex, with multiple cache layers to contend with, and that complexity made troubleshooting issues more difficult than I would have liked.
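As a purely illustrative sketch of why stacked caches complicate troubleshooting, picture each layer as a wrapper that may answer from its own cache before a request ever reaches the web server. This is a toy model written for this article, not the actual SCW configuration:

```python
# Toy model of a layered hosting stack (CDN -> TLS terminator -> cache ->
# web server). With caches at multiple layers, it's not obvious which layer
# actually answered a given request, or how stale its copy is.

class Layer:
    def __init__(self, name: str, inner=None, caches: bool = False):
        self.name, self.inner, self.caches = name, inner, caches
        self.cache: dict[str, str] = {}

    def handle(self, path: str) -> str:
        if self.caches and path in self.cache:
            return f"{self.name} (cached): {self.cache[path]}"
        body = self.inner.handle(path) if self.inner else f"page for {path}"
        if self.caches:
            self.cache[path] = body
        return body

# Cloudflare -> HAProxy -> Varnish -> Nginx, two of which keep caches
stack = Layer("cloudflare", caches=True,
              inner=Layer("haproxy",
                          inner=Layer("varnish", caches=True,
                                      inner=Layer("nginx"))))

print(stack.handle("/forecast"))  # misses every cache, served by nginx
print(stack.handle("/forecast"))  # answered at the edge; but is it fresh?
```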

So during some winter downtime two years ago, I took the opportunity to jettison some complexity and reduce the hosting stack down to a single monolithic web server application: OpenLiteSpeed.

Read 32 remaining paragraphs | Comments

Google’s latest AI video generator can render cute animals in implausible situations

Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model.

Enlarge / Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model. (credit: Google)

On Tuesday, Google announced Lumiere, an AI video generator that it calls "a space-time diffusion model for realistic video generation" in the accompanying preprint paper. But let's not kid ourselves: It does a great job of creating videos of cute animals in ridiculous scenarios, such as using roller skates, driving a car, or playing a piano. Sure, it can do more, but it is perhaps the most advanced text-to-animal AI video generator yet demonstrated.

According to Google, Lumiere utilizes unique architecture to generate a video's entire temporal duration in one go. Or, as the company put it, "We introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model. This is in contrast to existing video models which synthesize distant keyframes followed by temporal super-resolution—an approach that inherently makes global temporal consistency difficult to achieve."

In layperson's terms, Google's tech is designed to handle both the space (where things are in the video) and time (how things move and change throughout the video) aspects simultaneously. So, instead of assembling a video from many separately generated chunks or frames, it can create the entire clip, from start to finish, in one smooth process.
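Here's a schematic contrast between the two approaches in toy NumPy form. It illustrates the general idea only; it is not Google's architecture or code, and stand-in arithmetic replaces the actual diffusion model:

```python
# Schematic contrast: generate a whole video as one space-time volume vs.
# generate sparse keyframes and interpolate the frames in between.
# Stand-in arithmetic replaces the real neural networks.
import numpy as np

T, H, W = 80, 64, 64  # frames, height, width

def one_pass(noise: np.ndarray) -> np.ndarray:
    """Lumiere-style idea: a single pass sees all T frames at once,
    so temporal consistency can be enforced globally."""
    return noise.mean(axis=0, keepdims=True) + 0.1 * noise

def keyframes_then_upsample(noise: np.ndarray) -> np.ndarray:
    """Prior approach: synthesize every 8th frame, then fill the gaps
    via temporal super-resolution (here, plain linear interpolation)."""
    keys = noise[::8]                                  # sparse keyframes
    idx = np.arange(T) / 8.0
    lo = np.clip(np.floor(idx).astype(int), 0, len(keys) - 1)
    hi = np.clip(lo + 1, 0, len(keys) - 1)
    frac = (idx - lo).reshape(-1, 1, 1)
    return keys[lo] * (1 - frac) + keys[hi] * frac     # in-between frames

video_noise = np.random.randn(T, H, W)
print(one_pass(video_noise).shape)                 # (80, 64, 64) in one go
print(keyframes_then_upsample(video_noise).shape)  # (80, 64, 64) stitched
```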

Read 8 remaining paragraphs | Comments

AI-generated puffy pontiff image inspires new warning from Pope Francis

A cropped portion of an AI-generated image of Pope Francis wearing a puffy coat that went viral in March 2023.

Enlarge / A cropped portion of an AI-generated image of Pope Francis wearing a puffy coat that went viral in March 2023. (credit: @skyferrori on Twitter / Getty Images (background))

After a realistic AI-generated image of Pope Francis in a puffy coat went viral on social media last year, the Pope himself apparently took notice, reports Reuters. In a message for the 58th World Day of Social Communications, Francis writes, "We need but think of the long-standing problem of disinformation in the form of fake news, which today can employ 'deepfakes,' namely the creation and diffusion of images that appear perfectly plausible but false (I too have been an object of this)."

The Pope also warns about audio messages that "use a person’s voice to say things which that person never said." "The technology of simulation behind these programs can be useful in certain specific fields," he continues, "but it becomes perverse when it distorts our relationship with others and with reality."

In March 2023, a Twitter user named "skyferrori" used the Midjourney v5 image synthesis service to create a convincing fake photo of Pope Francis wearing a long white puffer coat and posted it on Twitter. It quickly went viral and today stands at over 197,000 likes and 28.1 million views. Many people thought it was a real photo, and it was notable at the time as one of the first AI-generated images to fool a large audience online.

Read 3 remaining paragraphs | Comments

Avatar: The Last Airbender trailer has the element-bending action we crave

The live-action series Avatar: The Last Airbender hits Netflix on February 22, 2024.

You know the premiere date for Netflix's live-action adaptation of Avatar: The Last Airbender is drawing nigh because the streaming giant just released an official trailer featuring moments drawn from the original animated series and lots of snazzy element-bending action, plus several adorable shots of Appa. We have high hopes for this series.

As we reported previously, the original animated series was created by Michael Dante DiMartino and Bryan Konietzko. It was set in an Asian-inspired world where certain chosen individuals have the ability to telekinetically manipulate one of four elements (earth, air, water, and fire)—a practice known as "bending." Each generation, there is one Avatar who can bend all four elements and is thus responsible for maintaining harmony among the four elemental nations, as well as serving as a link between the physical and spirit worlds.

A 12-year-old Air Nomad boy named Aang is the current Avatar, but he hid in a state of suspended animation for a century because he was afraid of taking on that huge responsibility. Two Water Tribe siblings, Katara and Sokka, eventually revive Aang, who finds that the Fire Nation has wiped out most of the Air Nomads in his absence. Katara and Sokka join Aang, an airbender, on his quest to master bending the remaining three elements. Their mission is hampered by the banished Fire Nation Prince Zuko, who, with the help of his uncle Iroh, seeks to capture Aang to restore his honor in the eyes of his father, Fire Lord Ozai.

Read 4 remaining paragraphs | Comments

Inventor of NTP protocol that keeps time on billions of devices dies at age 85

A photo of David L. Mills taken by Raul654 on April 27, 2005.

Enlarge / A photo of David L. Mills taken by Raul654 on April 27, 2005. (credit: Raul654 / Benj Edwards / Getty Images)

On Thursday, Internet pioneer Vint Cerf announced that Dr. David L. Mills, the inventor of Network Time Protocol (NTP), died peacefully at age 85 on January 17, 2024. The announcement came in a post on the Internet Society mailing list after Mills' daughter, Leigh, informed Cerf of her father's death.

"He was such an iconic element of the early Internet," wrote Cerf.

Dr. Mills created the Network Time Protocol (NTP) in 1985 to address a crucial challenge in the online world: the synchronization of time across different computer systems and networks. In a digital environment where computers and servers are located all over the world, each with its own internal clock, there's a significant need for a standardized and accurate timekeeping system.
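For a sense of what time synchronization involves on the wire, here is a minimal SNTP-style query in Python. Real NTP, and Mills' reference implementation, layer statistical filtering across multiple servers and gradual clock discipline on top of this bare exchange, so treat this as a sketch rather than a production client:

```python
# Minimal SNTP-style query: send a 48-byte request to an NTP server and
# read back the server's transmit timestamp. Real NTP clients add round-trip
# compensation, filtering, and gradual clock adjustment.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2_208_988_800  # seconds between 1900 (NTP) and 1970 (Unix) epochs

def sntp_time(server: str = "pool.ntp.org") -> float:
    packet = b"\x1b" + 47 * b"\0"  # first byte: LI=0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    # Transmit timestamp (seconds since 1900) sits at bytes 40-43, big-endian
    (seconds,) = struct.unpack("!I", data[40:44])
    return seconds - NTP_EPOCH_OFFSET  # as a Unix timestamp

print("server vs. local clock:", sntp_time() - time.time(), "seconds")
```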

Read 6 remaining paragraphs | Comments

Indiana Jones and the Great Circle is a new first-person Nazi-whipping journey

Indiana Jones in front of an alcove in a ruin.

Enlarge / CGI Harrison Ford just can't believe he's getting roped into another globe-trotting adventure. (credit: Bethesda/Machine Games)

Almost two years ago to the day, Bethesda told everyone its Machine Games subsidiary was working on a new Indiana Jones game, one with "an original story." Now we can see what Indiana Jones and the Great Circle is going to look like, thanks to a gameplay trailer shown during Microsoft's Developer Direct event, and we know when it's arriving: "2024." You can now wishlist it on Steam and the Xbox store; it's exclusive to those platforms.

Gameplay reveal trailer for Indiana Jones and the Great Circle.

While the game has Harrison Ford's likeness, it's not Ford voicing your character. Troy Baker, the original voice of Joel in The Last of Us, picks up the role of the archaeologist.

From the trailer, Great Circle looks a lot like the modern Wolfenstein games that Machine Games made—and that's a good thing. The New Order and The New Colossus excelled at making you feel more like a human action hero than a shooting tank. They've got a knack for first-person platforming, stunts, and cinematic moments that are nowhere near as painful as in many shooters, and they excel at balancing player immersion with letting your character have a personality.

Read 4 remaining paragraphs | Comments
