A daemon that scans program outputs for repeated patterns and takes action. Designed for ease of configuration and hackability.
Easy configuration
Few concepts to grasp, no hidden config. Easy to adapt to your needs
Flexible configuration
From IP bans to service restarts: react to any log, and do anything in response
Performant
Save your CPU cycles for your services. Written in Rust with a focus on performance
The Supreme Court today decided that Internet service providers cannot be held liable for their customers’ copyright infringement unless they take specific steps that cause users to violate copyrights. The court ruled unanimously in favor of Internet provider Cox Communications, though two justices did not agree with the majority’s reasoning.
The ruling effectively means that ISPs do not have to conduct mass terminations of Internet users accused of illegally downloading or uploading pirated files. If the court had ruled otherwise, ISPs could have been compelled to strictly police their networks for piracy in order to avoid billion-dollar court verdicts under the Digital Millennium Copyright Act (DMCA). //
The court decided today that a service is tailored to infringement if it is not capable of “substantial” or “commercially significant” noninfringing uses. The court cited Sony’s 1984 victory in the Betamax case, in which justices found that the Betamax was capable of noninfringing uses and that Sony’s sale of it did not constitute contributory infringement. Sony’s win in 1984 thus contributed to its loss today.
The Federal Communications Commission yesterday announced it will no longer approve consumer-grade routers made outside of the US, citing a President Trump directive on reducing the use of foreign technology for national security reasons. The action will prevent foreign-made routers from being imported into or sold in the US.
Routers already approved for sale in the US can continue to be sold, and consumers can keep using any router they’ve previously obtained, the FCC said. But the FCC will not approve new device models made at least partly outside the US unless the Department of Defense or Department of Homeland Security determines that the router does not pose national security risks.
The prohibition applies to both US and foreign companies that produce routers outside the US. Foreign production includes “any major stage of the process through which the device is made, including manufacturing, assembly, design, and development.”
“This action means that new models of foreign-produced routers will no longer be eligible for marketing or sale in the US,” FCC Chairman Brendan Carr wrote on X.
It may well be that IP addresses are simply the wrong starting place to fulfil these desires relating to compliance, security, customisation and performance: "You cannot get to where you want to go to from where you appear to be!"
A simple way to compile the "reverse list" of all RIR records that maps all assigned IP addresses to the names of the organisations that were allocated or assigned these addresses by an RIR is to extract the reg-id values, perform a whois lookup on any of the number objects listed in this stats file with that reg-id value, and extract the organisation name attribute from the whois response.
I've scripted a process to perform this reverse mapping to run every 24 hours, and the combined extended daily statistics report can be found at:
https://www.potaroo.net/bgp/stats/nro/delegated-nro-extended-org
The format used in this report appends the organisation name as an additional field to each record of an assigned number resource, where the organisation names used in this report are the names recorded in the RIRs' databases.
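The extraction step described above can be sketched roughly as follows. It assumes the pipe-separated extended delegated stats format with the opaque reg-id in the eighth field; the whois attribute names checked (`OrgName`, `org-name`, `descr`) vary across RIRs and are illustrative, not a definitive list.

```python
import subprocess

def org_for_object(obj, whois_host):
    # One whois lookup per number object; which attribute carries the
    # organisation name differs per RIR, so several candidates are tried.
    out = subprocess.run(["whois", "-h", whois_host, obj],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() in ("orgname", "org-name", "descr"):
            return value.strip()
    return None

def reverse_map(stats_lines, lookup):
    # Append an organisation name to each record, doing only one
    # lookup per distinct reg-id (the opaque id in field 8).
    names, out = {}, []
    for line in stats_lines:
        fields = line.rstrip().split("|")
        if line.startswith("#") or len(fields) < 8:
            out.append(line.rstrip())  # header/summary lines: keep as-is
            continue
        reg_id = fields[7]
        if reg_id not in names:
            names[reg_id] = lookup(fields[3]) or "unknown"
        out.append(line.rstrip() + "|" + names[reg_id])
    return out

# Demo with a stub lookup; org_for_object would be the real one.
demo = reverse_map(
    ["arin|US|ipv4|192.0.2.0|256|20000101|assigned|regid-1"],
    lambda start: "Example Networks")
print(demo[0])  # arin|US|ipv4|192.0.2.0|256|20000101|assigned|regid-1|Example Networks
```

Caching by reg-id matters here: the stats file has millions of records but far fewer distinct organisations, so one lookup per reg-id keeps the nightly run tractable.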
Much of the reason for this apparent contradiction between the addressed device population of the IPv4 Internet and the actual count of connected devices, which is of course many times larger, is that through the 1990s the Internet rapidly changed from a peer-to-peer architecture to a client/server framework. Clients can initiate network transactions with servers but are incapable of initiating transactions with other clients. Servers are capable of completing connection requests from clients, but cannot initiate such connections with clients. Network Address Translators (NATs) are a natural fit for this client/server model, where pools of clients share a smaller pool of public addresses, and only require the use of an address once they have initiated an active session with a remote server. NATs are the reason why a pool in excess of 30 billion connected devices can be squeezed into a far smaller pool of some 3 billion advertised IPv4 addresses. Services and applications that cannot work behind NATs are no longer useful in the context of the public Internet, and as a result are no longer used. In essence, what we did was to drop the notion that an IP address is uniquely associated with a device's identity, and the resultant ability to share addresses across clients largely alleviated the immediacy of the IPv4 addressing problem for the Internet.
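The address sharing described above can be sketched as a toy port-mapping table: many private clients funnel through one public address, and a mapping exists only once a client initiates an outbound session. The class and all names are illustrative, not any real NAT implementation.

```python
import itertools

class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.ports = itertools.count(40000)  # next free public port
        self.table = {}                      # (priv_ip, priv_port) -> pub_port
        self.reverse = {}                    # pub_port -> (priv_ip, priv_port)

    def outbound(self, priv_ip, priv_port):
        """Client initiates a session: allocate (or reuse) a public port."""
        key = (priv_ip, priv_port)
        if key not in self.table:
            pub = next(self.ports)
            self.table[key] = pub
            self.reverse[pub] = key
        return (self.public_ip, self.table[key])

    def inbound(self, pub_port):
        """A reply arrives: only ports with an existing mapping get through."""
        return self.reverse.get(pub_port)  # None == unsolicited, dropped

nat = Nat("203.0.113.1")
print(nat.outbound("192.168.1.10", 51000))  # ('203.0.113.1', 40000)
print(nat.inbound(40000))                   # ('192.168.1.10', 51000)
print(nat.inbound(40001))                   # None: no client asked for this
```

The last line is the crux of the client/server argument: traffic nobody asked for has no mapping and is simply dropped, which is why peer-initiated services struggle behind NATs.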
However, the pressure of this inexorable growth in the number of deployed devices connected to the Internet implies that even NATs cannot absorb these growth pressures forever. //
There is a larger question about the underlying networking paradigm in today’s public network. IPv6 attempts to restore the 1980s networking paradigm of a true peer-to-peer network where every connected device is capable of sending packets to any other connected device. However, today’s networked environment regards such unconstrained connectivity as a liability. Exposing an end client device to unconstrained reachability is regarded as unnecessarily foolhardy, and today’s network paradigm relies on client-initiated transactions. This is well-suited to NAT-based IPv4 connectivity, and the question regarding the long-term future of an IPv6 Internet is whether we want to bear the costs of maintaining end-client unique addressing plans, or whether NATs in IPv6 might prove to be a more cost-effective service platform for the client side of client/server networks. //
AWS's destiny isn't to lose to Azure or Google. It's to win the infrastructure war and lose the relevance war. To become the next Lumen — the backbone nobody knows they're using, while the companies on top capture the margins and the mindshare.
The cables matter. But nobody's writing blog posts about them. ®
As a web developer, I am thinking again about my experience with the mobile web on the day after the storm, and the following week. I remember trying in vain to find out info about the storm damage and road closures—watching loaders spin and spin on blank pages until they timed out. Once in a while, pages would finally load or partially load, and I could actually click a second or third link. We had a tiny bit of service but not much. At one point we drove down our main street to find service, eventually finding cars congregating in a closed fast-food parking lot, where there were a few bars of service!
When I was able to load some government and emergency sites, problems with loading speed and website content became very apparent. We tried to find out the situation with the highways on the government site that tracks road closures. I wasn’t able to view the big, slow-loading interactive map and got a pop-up with an API failure message. I wish the main closures had been listed more simply, so I could have seen that the highway was completely closed by a landslide. //
During the outages, many people got information from the local radio station’s ongoing broadcasts. The best information I received came from an unlikely place: a simple bulleted list in a daily email newsletter from our local state representative. Every day that newsletter listed food and water, power and gas, shelter locations, road and cell service updates, etc.
I was struck by how something as simple as text content could have such a big impact.
With the best information coming from a simple newsletter list, I found myself wishing for faster-loading, more direct websites, especially ones with this sort of info. At that time, even a plain text site with barely any styles or images would have been better.
wallabag is a self-hostable application for saving web pages: Save and classify articles. Read them later. Freely.
Jou (Mxyzptlk)
Re: The real reason nobody wants to use it
Not sure why they thought that would be a good idea.
Actually I think multiple addresses are a good idea.
1. The FE80::/10 is the equivalent of the old 169.254, always active, used for "same link" things; to some extent it replaces ARP, and prevents ARP storms by design. Has the MAC coded into the address.
2. The FEC0::/10 (usually subnetted into /64s), similar to 192.168.x.x, but no "default gateway" for Internet desired, only routes to other LAN destinations.
3. The FC00::/7 (usually subnetted into /64s), similar to 10.x.x.x, but no "default gateway" for Internet desired, only routes to other LAN destinations.
4. The FD00::/8 DO NOT USE (usually subnetted into /64s), similar to 172.16.x.x, but no "default gateway" for Internet desired, only routes to other LAN destinations. This got removed from the standard somewhere in the last 20 years and replaced by FC00::/7, which includes FD00::/8, so better to avoid it.
5. The FF00::/8 is multicast, similar to the 224.x.x.x range.
6. Finally the actual Internet address, usually 2001:whatever-first-64-bits:your-pseudo-static-part. Depending on the provider your prefix might be a /56 or /48 as well. The pseudo-static part is, on many devices, optionally generated with privacy extensions, so the addresses are random and change over time even if your provider does not force a disconnect-reconnect. How much "privacy" that offers is a discussion for another decade.
Normal homes have 1 and 6. Über-nerd homes or companies with a somewhat clean IPv6 adoption have 1, plus 2 or 3 (not both, please!), and 6 to organize their WAN/LANs. Enlightened nerds include 5 too.
2 and 3 have the advantage that they are DEFINITELY not to be used for the Internet, no gateway to the Internet, and therefore safe for the LAN. I am a nerd, but don't give a s, so I have 1 and 6, and my fd address is there for historic reasons since I played with IPv6 over a decade ago; it is not actively in use.
My gripe is a lot of the things around it which make IPv6 a hassle, especially when your prefix from 6 changes: all your adapters, and I mean ALL ACROSS YOUR WHOLE LAN, have to automatically follow suit. Which means: when connected to the Internet, a lot of formerly static IPv4 configuration cannot be static any more, unless your provider gives you a fixed IPv6 prefix.
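The scope taxonomy in the comment above can be double-checked with Python's standard ipaddress module, which knows the link-local, multicast, unique-local and global ranges. The returned labels are informal and mine, not the module's:

```python
import ipaddress

def scope(addr):
    # Classify an address into the scopes discussed in the thread.
    ip = ipaddress.ip_address(addr)
    if ip.is_link_local:
        return "link-local (fe80::/10)"
    if ip.is_multicast:
        return "multicast (ff00::/8)"
    if ip.is_private:
        return "unique local / non-global (fc00::/7 etc.)"
    if ip.is_global:
        return "global unicast"
    return "other"

print(scope("fe80::1"))          # link-local (fe80::/10)
print(scope("fd12:3456::1"))     # unique local / non-global (fc00::/7 etc.)
print(scope("ff02::1"))          # multicast (ff00::/8)
print(scope("2606:4700::1111"))  # global unicast
```

The order of the checks matters: fe80::/10 also falls inside the module's private ranges, so link-local must be tested before `is_private`.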
Kurgan
Re: The real reason nobody wants to use it
My gripe is a lot of the things around it which make IPv6 a hassle, especially when your prefix from 6 changes: all your adapters, and I mean ALL ACROSS YOUR WHOLE LAN, have to automatically follow suit. Which means: when connected to the Internet, a lot of formerly static IPv4 configuration cannot be static any more, unless your provider gives you a fixed IPv6 prefix.
This is one of the worst parts of it. And even if your provider gives you a static assignment, what happens when you change provider? Or if you fail over on a multi-WAN connection? Or even try to load-balance on a multi-WAN connection?
The only way IPv6 can be used with the same (or even better) flexibility as v4 is when you own your v6 addresses and use a dynamic routing protocol, which is not what a small business usually does. A home user even less.
Then there are the security nightmares v6 can give you. I can't even imagine how many ways of abusing it are simply yet to be discovered, apart from the obvious ones, like the fact that even if you don't use v6 to connect to the Internet, your LAN has FE80 addresses all around and you have to firewall the hell out of it unless you want someone who penetrated the LAN to use them to move laterally almost for free.
Nanashi
Re: The real reason nobody wants to use it
fec0::/10 is long deprecated, and it's a bit odd to tell us to avoid fd00::/8 in favor of fc00::/7 when the latter includes the former. fc00::/8 is intended for /48s assigned by some central entity (but none has been set up, since there doesn't seem to be a pressing need for one) and fd00::/8 is for people to select their own random /48s from, so if you want to use ULA then you'll be picking a /48 from fd00::/8.
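Picking your own random /48 from fd00::/8, as described above, can be sketched like this. RFC 4193 actually suggests deriving the 40-bit Global ID from a hash of a timestamp and an EUI-64; the plain randomness used here is a common shortcut, not the letter of the RFC:

```python
import ipaddress
import secrets

def random_ula_48():
    # fd (8 bits, with the L bit set) + 40-bit random Global ID
    # + 80 zero bits for the subnet and interface parts -> a /48.
    global_id = secrets.randbits(40)
    prefix_int = (0xfd << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

net = random_ula_48()
print(net)                                               # a random fdxx:xxxx:xxxx::/48
print(net.subnet_of(ipaddress.IPv6Network("fd00::/8")))  # True
```

The point of the randomness is collision avoidance: two sites that later merge or build a VPN between them are very unlikely to have picked the same /48.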
It's not exactly hard to hand out a new prefix to everything. Your router advertises the new subnet, and every machine across your whole LAN receives it and automatically configures a new IP from it.
Anything that assumes your IPs are never going to change is already broken. Maybe we should focus a teeny bit of the energy we spend complaining about it into fixing the brokenness?
//
Most of your first questions can be broadly answered by a mix of "you advertise a /64 from the prefix that the provider gives you" and "you can use multiple addresses". And it doesn't sound like your use of v4 is very flexible if it can't handle your IPs changing sometimes.
less than half of all netizens use IPv6 today.
To understand why, know that IPv6 also suggested other, rather modest, changes to the way networks operate.
"IPv6 was an extremely conservative protocol that changed as little as possible," APNIC chief scientist Geoff Huston told The Register. "It was a classic case of mis-design by committee."
And that notional committee made one more critical choice: IPv6 was not backward-compatible with IPv4, meaning users had to choose one or the other – or decide to run both in parallel.
For many, the decision of which protocol to use was easy because IPv6 didn't add features that represented major improvements.
"One big surprise to me was how few features went into IPv6 in the end, aside from the massive expansion of address space," said Bruce Davie... //
Davie said many of the security, plug-and-play, and quality of service features that didn't make it into IPv6 were eventually implemented in IPv4, further reducing the incentive to adopt the new protocol. "Given the small amount of new functionality in v6, it's not so surprising that deployment has been a 30 year struggle," he said. //
While IPv6 didn't take off as expected, it's not fair to say it failed.
"IPv6 wasn't about turning IPv4 off, but about ensuring the internet could continue to grow without breaking," said John Curran, president and CEO of the American Registry for Internet Numbers (ARIN).
"In fact, IPv4's continued viability is largely because IPv6 absorbed that growth pressure elsewhere – particularly in mobile, broadband, and cloud environments," he added. "In that sense, IPv6 succeeded where it was needed most, and must be regarded as a success." //
APNIC's Huston, however, thinks that IPv6 has become less relevant to the wider internet.
"I would argue that we actually found a far better outcome along the way," he told The Register. "NATS forced us to think about network architectures in an entirely different way."
That new way is encapsulated in a new technology called Quick UDP Internet Connections (QUIC), which doesn't require client devices to always have access to a public IP address.
"We are proving to ourselves that clients don't need permanent assignment of IP address, which makes the client side of network far cheaper, more flexible, and scalable," he said.
“Really Simple Licensing” makes it easier for creators to get paid for AI scraping. //
Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.
Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.
Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.
And now, with that redesign having been functional and stable for a couple of years and a few billion page views (really!), we want to invite you all behind the curtain to peek at how we keep a major site like Ars online and functional. This article will be the first in a four-part series on how Ars Technica works—we’ll examine both the basic technology choices that power Ars and the software with which we hook everything together.
Thirty years ago today, Netscape Communications and Sun Microsystems issued a joint press release announcing JavaScript, an object scripting language designed for creating interactive web applications. The language emerged from a frantic 10-day sprint at pioneering browser company Netscape, where engineer Brendan Eich hacked together a working internal prototype during May 1995.
While the JavaScript language didn’t ship publicly until that September and didn’t reach a 1.0 release until March 1996, the descendants of Eich’s initial 10-day hack now run on approximately 98.9 percent of all websites with client-side code, making JavaScript the dominant programming language of the web. It’s wildly popular; beyond the browser, JavaScript powers server backends, mobile apps, desktop software, and even some embedded systems. According to several surveys, JavaScript consistently ranks among the most widely used programming languages in the world. //
The JavaScript partnership secured endorsements from 28 major tech companies, but amusingly, the December 1995 announcement now reads like a tech industry epitaph. The endorsing companies included Digital Equipment Corporation (absorbed by Compaq, then HP), Silicon Graphics (bankrupt), and Netscape itself (bought by AOL, dismantled). Sun Microsystems, co-creator of JavaScript and owner of Java, was acquired by Oracle in 2010. JavaScript outlived them all. //
Confusion about its relationship to Java continues: The two languages share a name, some syntax conventions, and virtually nothing else. Java was developed by James Gosling at Sun Microsystems using static typing and class-based objects. JavaScript uses dynamic typing and prototype-based inheritance. The distinction between the two languages, as one Stack Overflow user put it in 2010, is similar to the relationship between the words “car” and “carpet.” //
The language now powers not just websites but mobile applications through frameworks like React Native, desktop software through Electron, and server infrastructure through Node.js. Somewhere around 2 million to 3 million packages exist on npm, the JavaScript package registry.
Ping & traceroute
If you suspect a network problem between the monitoring system and your server, it’s helpful to have a traceroute from your NTP server to the monitoring system. You can traceroute to the monitoring network using 139.178.64.42 and 2604:1380:2:6000::15.
You can ping or traceroute from the monitoring network using HTTP, with:
curl http://trace.ntppool.org/traceroute/8.8.8.8
curl http://trace.ntppool.org/ping/8.8.8.8

for /l %i in (1,1,254) do @ping 192.168.1.%i -w 10 -n 1 | find "Reply"
This will ping all addresses from 192.168.1.1 to 192.168.1.254 once each, wait 10 ms for a reply (more than enough time on a local network), and show only the addresses that replied.
All of your MX record, DNS, blacklist and SMTP diagnostics in one integrated tool. Input a domain name or IP Address or Host Name. Links in the results will guide you to other relevant tools and information. And you'll have a chronological history of your results.
If you already know exactly what you want, you can force a particular test or lookup. Try some of these examples:
It’s always DNS
Amazon said the root cause of the outage was a race condition in the software running the DynamoDB DNS management system. The system monitors the stability of load balancers by, among other things, periodically creating new DNS configurations for endpoints within the AWS network. A race condition is an error that makes a process dependent on the timing or sequence of events that are variable and outside the developers’ control. The result can be unexpected behavior and potentially harmful failures.
In this case, the race condition resided in the DNS Enactor, a DynamoDB component that constantly updates domain lookup tables in individual AWS endpoints to optimize load balancing as conditions change. As the enactor operated, it “experienced unusually high delays needing to retry its update on several of the DNS endpoints.” While the enactor was playing catch-up, a second DynamoDB component, the DNS Planner, continued to generate new plans. Then, a separate DNS Enactor began to implement them.
The timing of these two enactors triggered the race condition, which ended up taking down the entire DynamoDB service.
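The failure mode described in this account can be illustrated with a small deterministic model: two "enactors" apply DNS plans to a shared record, and a delayed enactor finishes last, overwriting a newer plan with a stale one. The component names follow the description above; the version-check guard at the end is an illustrative fix, not necessarily what Amazon deployed.

```python
record = {}  # the "DNS table": endpoint -> (plan generation, addresses)

def enact(plan_gen, addrs):
    """Apply a plan without checking whether a newer one already landed."""
    record["dynamodb"] = (plan_gen, addrs)

# Enactor A picks up plan #1 but is delayed; Enactor B applies plan #2 first.
enact(2, ["10.0.0.7"])        # Enactor B: newer plan lands first
enact(1, ["10.0.0.3"])        # Enactor A: stale plan lands late, and wins
print(record["dynamodb"][0])  # 1 -- the table now serves the stale plan

def enact_safe(plan_gen, addrs):
    """One possible guard: refuse to apply a plan older than what's installed."""
    if record.get("dynamodb", (0,))[0] <= plan_gen:
        record["dynamodb"] = (plan_gen, addrs)

enact_safe(3, ["10.0.0.9"])
enact_safe(2, ["10.0.0.7"])   # the stale update is ignored this time
print(record["dynamodb"][0])  # 3
```

The bug is invisible as long as the enactors stay in step; only the unusual delay described in the report makes the "last writer wins" assumption fail.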
The NTP Pool DNS Mapper tries mapping user IP addresses to their DNS servers (and vice-versa). It also tracks which servers support EDNS-SUBNET and how well that actually matches the client IP (as seen by the HTTP server). You can see a demo at mist.ntppool.org.
It's done to research how to improve the DNS system used by the NTP Pool and potentially other similar applications.
How can I help?
Thank you for asking! The easiest way to help is to get us more data.
If you have a website, you can have your users help by adding one of the following two code snippets to your site.
It would be somewhat understandable that Autopilot stops working because Eight Sleep’s backend is down, but not being able to even adjust the temperature locally is ridiculous and completely unacceptable for such a high-end (and expensive) product.
A person on X wrote: “Would be great if my bed wasn’t stuck in an inclined position due to an AWS outage. Cmon now.” //
Eight Sleep users will be relieved to hear that the company is working to make their products usable during Internet outages. But many are also questioning why Eight Sleep didn’t implement local control sooner. This isn’t Eight Sleep’s first outage, and users can also experience personal Wi-Fi problems. And there’s an obvious user benefit to being able to control their bed’s elevation and temperature without the Internet or if Eight Sleep ever goes out of business.
For Eight Sleep, though, making flagship features available without its app while still making enough money isn’t easy. Without forcing people to put their Eight Sleep devices online, it would be harder for Eight Sleep to convince people that Autopilot subscriptions should be mandatory. //
mygeek911
“AWS ate my sleep” wasn’t on my Bingo card.