Ars Technica

Nginx core developer quits project in security dispute, starts “freenginx” fork

Multiple forks being held by hands (credit: Getty Images)

A core developer of Nginx, currently the world's most popular web server, has quit the project, stating that he no longer sees it as "a free and open source project… for the public good." His fork, freenginx, is "going to be run by developers, and not corporate entities," writes Maxim Dounin, and will be "free from arbitrary corporate actions."

Dounin is one of the earliest and still most active coders on the open source Nginx project and one of the first employees of Nginx, Inc., a company created in 2011 to commercially support the steadily growing web server. Nginx is now used on roughly one-third of the world's web servers, ahead of Apache.

A tricky history of creation and ownership

Nginx Inc. was acquired by Seattle-based networking firm F5 in 2019. Later that year, two of Nginx's leaders, Maxim Konovalov and Igor Sysoev, were detained and interrogated in their homes by armed Russian state agents. Sysoev's former employer, Internet firm Rambler, claimed that it owned the rights to Nginx's source code, as it was developed during Sysoev's tenure at Rambler (where Dounin also worked). While neither the criminal charges nor Rambler's ownership claim appears to have gone anywhere, the implications of a Russian company's intrusion into a popular open source piece of the web's infrastructure caused some alarm.


I abandoned OpenLiteSpeed and went back to good ol’ Nginx

Ish is on fire, yo. (credit: Tim Macpherson / Getty Images)

Since 2017, in what spare time I have (ha!), I help my colleague Eric Berger host his Houston-area weather forecasting site, Space City Weather. It’s an interesting hosting challenge—on a typical day, SCW does maybe 20,000–30,000 page views to 10,000–15,000 unique visitors, which is a relatively easy load to handle with minimal work. But when severe weather events happen—especially in the summer, when hurricanes lurk in the Gulf of Mexico—the site’s traffic can spike to more than a million page views in 12 hours. That level of traffic requires a bit more prep to handle.

Hey, it's Space City Weather! (credit: Lee Hutchinson)

For a very long time, I ran SCW on a backend stack made up of HAProxy for SSL termination, Varnish Cache for on-box caching, and Nginx for the actual web server application—all fronted by Cloudflare to absorb the majority of the load. (I wrote about this setup at length on Ars a few years ago for folks who want some more in-depth details.) This stack was fully battle-tested and ready to devour whatever traffic we threw at it, but it was also annoyingly complex, with multiple cache layers to contend with, and that complexity made troubleshooting issues more difficult than I would have liked.
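For readers who haven't run a layered setup like that, here is a minimal sketch of how the three pieces chain together, with HAProxy terminating TLS and passing plain HTTP to Varnish, which caches in front of Nginx. The ports, file paths, and backend names here are illustrative assumptions, not SCW's actual configuration:

    # haproxy.cfg -- terminate TLS, hand plain HTTP to Varnish
    frontend https_in
        bind *:443 ssl crt /etc/haproxy/certs/site.pem
        default_backend varnish

    backend varnish
        server varnish1 127.0.0.1:6081 check

    # default.vcl (Varnish) -- on-box cache in front of the web server
    vcl 4.0;
    backend nginx {
        .host = "127.0.0.1";
        .port = "8080";
    }

    # nginx server block -- the actual web server, reachable only locally
    server {
        listen 127.0.0.1:8080;
        server_name spacecityweather.com;
        root /var/www/scw;
    }

Every hop in that chain is another config file to keep in sync and another cache to rule out when something misbehaves, which is exactly the complexity being described.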

So during some winter downtime two years ago, I took the opportunity to jettison some complexity and reduce the hosting stack down to a single monolithic web server application: OpenLiteSpeed.
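As the article's title gives away, the OpenLiteSpeed experiment didn't stick. For a rough sketch of the kind of single-server Nginx setup the piece points back toward, where Nginx itself handles TLS termination and on-box caching and so absorbs the jobs HAProxy and Varnish used to do, something like the following works; the cache zone name, paths, and PHP-FPM backend are assumptions for illustration, not the author's published config:

    # one Nginx server doing TLS, caching, and serving
    # (cache zone, paths, and the FastCGI backend are illustrative)
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=scw:100m inactive=60m;

    server {
        listen 443 ssl http2;
        server_name spacecityweather.com;

        ssl_certificate     /etc/letsencrypt/live/spacecityweather.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/spacecityweather.com/privkey.pem;

        root  /var/www/scw;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/run/php/php-fpm.sock;

            # the on-box cache layer that used to be Varnish's job
            fastcgi_cache scw;
            fastcgi_cache_valid 200 301 302 10m;
        }
    }

One process, one config file, and one cache to reason about when troubleshooting.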

