It may well be that IP addresses are simply the wrong starting place to fulfil these desires relating to compliance, security, customisation and performance: "You cannot get to where you want to go to from where you appear to be!"
A simple way to compile the "reverse list" of all RIR records that map all assigned IP addresses to the names of the organisations that were allocated or assigned these addresses by an RIR is to extract the reg-id values, perform a whois lookup on any of the number objects listed in this stats file with that reg-id value, and extract the organisation name attribute from the whois response.
I've scripted a process that performs this reverse mapping every 24 hours, and the combined extended daily statistics report can be found at:
https://www.potaroo.net/bgp/stats/nro/delegated-nro-extended-org
The format used in this report appends the organisation name as an additional field to each record of an assigned number resource, where the organisation names used in this report are the names recorded in the RIRs' databases.
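As a rough sketch of that process (not the script behind the report above; the local file names, the whois-server mapping, and the organisation attributes checked are assumptions for illustration, and the attribute names differ between RIRs), the lookup and append steps might look like this in Python:

import subprocess

# Illustrative mapping from the registry field of the stats file to a whois server.
WHOIS = {"arin": "whois.arin.net", "ripencc": "whois.ripe.net",
         "apnic": "whois.apnic.net", "afrinic": "whois.afrinic.net",
         "lacnic": "whois.lacnic.net"}

def org_name_for(resource, whois_host):
    """Whois one number resource and pull out an organisation-name style attribute."""
    out = subprocess.run(["whois", "-h", whois_host, resource],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() in ("orgname", "org-name", "owner", "descr"):
            return value.strip()
    return ""

org_by_regid = {}  # reg-id -> organisation name, looked up once per reg-id
with open("delegated-nro-extended") as stats, \
     open("delegated-nro-extended-org", "w") as report:
    for record in stats:
        fields = record.rstrip("\n").split("|")
        if len(fields) < 8 or fields[2] not in ("ipv4", "ipv6", "asn"):
            continue  # skip the version header and per-registry summary lines
        registry, start, reg_id = fields[0], fields[3], fields[7]
        if reg_id not in org_by_regid and registry in WHOIS:
            org_by_regid[reg_id] = org_name_for(start, WHOIS[registry])
        report.write("|".join(fields + [org_by_regid.get(reg_id, "")]) + "\n")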
Much of the reason for this apparent contradiction between the addressed device population of the IPv4 Internet and the actual count of connected devices, which is of course many times larger, is that through the 1990s the Internet rapidly changed from a peer-to-peer architecture to a client/server framework. Clients can initiate network transactions with servers but are incapable of initiating transactions with other clients. Servers can complete connection requests from clients, but cannot initiate such connections with clients. Network Address Translators (NATs) are a natural fit to this client/server model, where pools of clients share a smaller pool of public addresses, and only require the use of an address once they have initiated an active session with a remote server. NATs are the reason why a pool in excess of 30 billion connected devices can be squeezed into a far smaller pool of some 3 billion advertised IPv4 addresses. Services and applications that cannot work behind NATs are no longer useful in the context of the public Internet and, as a result, are no longer used. In essence, what we did was to drop the notion that an IP address is uniquely associated with a device's identity, and the resultant ability to share addresses across clients largely alleviated the immediacy of the IPv4 addressing problem for the Internet.
However, the pressures of this inexorable growth in the number of deployed devices connected to the Internet imply that even NATs cannot absorb these growth pressures forever. //
There is a larger question about the underlying networking paradigm in today’s public network. IPv6 attempts to restore the 1980s networking paradigm of a true peer-to-peer network where every connected device is capable of sending packets to any other connected device. However, today’s networked environment regards such unconstrained connectivity as a liability. Exposing an end client device to unconstrained reachability is regarded as being unnecessarily foolhardy, and today’s network paradigm relies on client-initiated transactions. This is well-suited to NAT-based IPv4 connectivity, and the question regarding the long-term future of an IPv6 Internet is whether we want to bear the costs of maintaining end-client unique addressing plans, or whether NATs in IPv6 might prove to be a more cost-effective service platform for the client side of client/server networks. //
AWS's destiny isn't to lose to Azure or Google. It's to win the infrastructure war and lose the relevance war. To become the next Lumen — the backbone nobody knows they're using, while the companies on top capture the margins and the mindshare.
The cables matter. But nobody's writing blog posts about them. ®
As a web developer, I am thinking again about my experience with the mobile web on the day after the storm, and the following week. I remember trying in vain to find out info about the storm damage and road closures—watching loaders spin and spin on blank pages until they timed out trying to load. Once in a while, pages would finally load or partially load, and I could actually click a second or third link. We had a tiny bit of service but not much. At one point we drove down our main street to find service, eventually finding cars congregating in a closed fast-food parking lot, where there were a few bars of service!
When I was able to load some government and emergency sites, problems with loading speed and website content became very apparent. We tried to find out the situation with the highways on the government site that tracks road closures. I wasn’t able to view the big, slow-loading interactive map and got a pop-up with an API failure message. I wish the main closures had been listed more simply, so I could have seen that the highway was completely closed by a landslide. //
During the outages, many people got information from the local radio station’s ongoing broadcasts. The best information I received came from an unlikely place: a simple bulleted list in a daily email newsletter from our local state representative. Every day that newsletter listed food and water, power and gas, shelter locations, road and cell service updates, etc.
I was struck by how something as simple as text content could have such a big impact.
With the best information coming from a simple newsletter list, I found myself wishing for faster-loading, more direct websites, especially ones with this sort of info. At that time, even a plain text site with barely any styles or images would have been better.
wallabag is a self hostable application for saving web pages: Save and classify articles. Read them later. Freely.
Jou (Mxyzptlk) Silver badge
Re: The real reason nobody wants to use it
Not sure why they thought that would be a good idea.
Actually I think multiple addresses are a good idea.
-
1. The FE80::/10 is the equivalent of the former 169.254, always active, used for "same link" things; to some extent it replaces ARP, and prevents ARP storms by design. Has the MAC coded into the address.
-
2. The FEC0::/10 (usually subnetted into /64s), similar to 192.168.x.x, but no "default gateway" for Internet desired, only clear other LAN destination routes.
-
3. The FC00::/7 (usually subnetted into /64s), similar to 10.x.x.x, but no "default gateway" for Internet desired, only clear other LAN destination routes.
-
4. The FD00::/8 DO NOT USE (usually subnetted into /64s), similar to 172.16.x.x, but no "default gateway" for Internet desired, only clear other LAN destination routes. This got removed from the standard somewhere in the last 20 years and replaced by FC00::/7, which includes FD00::/8, therefore better avoid.
-
5. The FF00::/8 is multicast, similar to the 224.x.x.x range.
-
6. Finally the actual internet address, usually 2001:whatever-first-64-bits:your-pseudo-static-part. Depending on the provider your prefix might be a /56 or /48 as well. The your-pseudo-static-part is, on many devices, optionally generated with privacy extensions, so the addresses are random and change over time even if your provider does not force a disconnect/reconnect. How much "privacy" that offers is a discussion for another decade.
Normal homes have 1 and 6. Über-Nerd homes or companies with somewhat clean ipv6 adoption have 1, 2 or 3 (not both, please!), and 6 to organize their WAN/LANs. Enlightened Nerds include 5 too.
2 and 3 have the advantage that they are DEFINITELY not to be used for internet, no gateway to the internet, and therefore safe for LAN. I am a nerd, but don't give a s, so I have 1 and 6, and my fd address is there for historic reasons since I played with ipv6 over a decade ago, but it is not actively in use.
My gripe is with a lot of the things around it which make ipv6 a hassle: especially when your prefix from 6 changes, all your adapters, and I mean ALL ACROSS YOUR WHOLE LAN, have to automatically follow suit. Which means: when connected to the Internet, a lot of formerly static ipv4 configuration cannot be static any more - unless your provider gives you a fixed ipv6 prefix.
Kurgan Silver badge
Re: The real reason nobody wants to use it
My gripe is with a lot of the things around it which make ipv6 a hassle: especially when your prefix from 6 changes, all your adapters, and I mean ALL ACROSS YOUR WHOLE LAN, have to automatically follow suit. Which means: when connected to the Internet, a lot of formerly static ipv4 configuration cannot be static any more - unless your provider gives you a fixed ipv6 prefix.
This is one of the worst parts of it. And even if your provider gives you a static assignment, what happens when you change provider? Or if you fail over on a multi-WAN connection? Or even try to load balance on a multi-WAN connection?
The only way IPv6 can be used with the same (or even better) flexibility as v4 is when you own your v6 addresses and use a dynamic routing protocol, which is not what a small business usually does. A home user even less so.
Then there are the security nightmares v6 can give you. I can't even imagine how many ways of abusing it are simply yet to be discovered, apart from the obvious ones like the fact that even if you don't use v6 to connect to the internet, your LAN has FE80 addresses all around and you have to firewall the hell out of it unless you want someone who has penetrated the LAN to use them to move laterally almost for free.
Nanashi
Re: The real reason nobody wants to use it
fec0::/10 is long deprecated, and it's a bit odd to tell us to avoid fd00::/8 in favor of fc00::/7 when the latter includes the former. fc00::/8 is intended for /48s assigned by some central entity (but none has been set up, since there doesn't seem to be a pressing need for one) and fd00::/8 is for people to select their own random /48s from, so if you want to use ULA then you'll be picking a /48 from fd00::/8.
It's not exactly hard to hand out a new prefix to everything. Your router advertises the new subnet, and every machine across your whole LAN receives it and automatically configures a new IP from it.
Anything that assumes your IPs are never going to change is already broken. Maybe we should focus a teeny bit of the energy we spend complaining about it into fixing the brokenness?
//
Most of your first questions can be broadly answered by a mix of "you advertise a /64 from the prefix that the provider gives you" and "you can use multiple addresses". And it doesn't sound like your use of v4 is very flexible if it can't handle your IPs changing sometimes.
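For what it's worth, picking "your own random /48 from fd00::/8" is easy to sketch. The snippet below is an illustration rather than the exact RFC 4193 procedure (which derives the 40-bit Global ID by hashing a timestamp and an EUI-64); drawing the Global ID from a CSPRNG is the usual practical shortcut:

import secrets

def random_ula_48() -> str:
    """Pick a random ULA /48 from fd00::/8: the fd prefix plus a 40-bit Global ID."""
    global_id = secrets.randbits(40)
    prefix48 = (0xfd << 40) | global_id        # 48 bits: 0xfd then the Global ID
    groups = [(prefix48 >> shift) & 0xffff for shift in (32, 16, 0)]
    return ":".join(f"{g:x}" for g in groups) + "::/48"

print(random_ula_48())   # e.g. fd3a:91c2:7b5e::/48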
less than half of all netizens use IPv6 today.
To understand why, know that IPv6 also suggested other, rather modest, changes to the way networks operate.
"IPv6 was an extremely conservative protocol that changed as little as possible," APNIC chief scientist Geoff Huston told The Register. "It was a classic case of mis-design by committee."
And that notional committee made one more critical choice: IPv6 was not backward-compatible with IPv4, meaning users had to choose one or the other – or decide to run both in parallel.
For many, the decision of which protocol to use was easy because IPv6 didn't add features that represented major improvements.
"One big surprise to me was how few features went into IPv6 in the end, aside from the massive expansion of address space," said Bruce Davie... //
Davie said many of the security, plug-and-play, and quality of service features that didn't make it into IPv6 were eventually implemented in IPv4, further reducing the incentive to adopt the new protocol. "Given the small amount of new functionality in v6, it's not so surprising that deployment has been a 30 year struggle," he said. //
While IPv6 didn't take off as expected, it's not fair to say it failed.
"IPv6 wasn't about turning IPv4 off, but about ensuring the internet could continue to grow without breaking," said John Curran, president and CEO of the American Registry for Internet Numbers (ARIN).
"In fact, IPv4's continued viability is largely because IPv6 absorbed that growth pressure elsewhere – particularly in mobile, broadband, and cloud environments," he added. "In that sense, IPv6 succeeded where it was needed most, and must be regarded as a success." //
APNIC's Huston, however, thinks that IPv6 has become less relevant to the wider internet.
“I would argue that we actually found a far better outcome along the way," he told The Register. "NATs forced us to think about network architectures in an entirely different way."
That new way is encapsulated in a new technology called Quick UDP Internet Connections (QUIC), which doesn't require client devices to always have access to a public IP address.
"We are proving to ourselves that clients don't need permanent assignment of IP address, which makes the client side of network far cheaper, more flexible, and scalable," he said.
“Really Simple Licensing” makes it easier for creators to get paid for AI scraping. //
Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.
Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.
Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.
And now, with that redesign having been functional and stable for a couple of years and a few billion page views (really!), we want to invite you all behind the curtain to peek at how we keep a major site like Ars online and functional. This article will be the first in a four-part series on how Ars Technica works—we’ll examine both the basic technology choices that power Ars and the software with which we hook everything together.
Thirty years ago today, Netscape Communications and Sun Microsystems issued a joint press release announcing JavaScript, an object scripting language designed for creating interactive web applications. The language emerged from a frantic 10-day sprint at pioneering browser company Netscape, where engineer Brendan Eich hacked together a working internal prototype during May 1995.
While the JavaScript language didn’t ship publicly until that September and didn’t reach a 1.0 release until March 1996, the descendants of Eich’s initial 10-day hack now run on approximately 98.9 percent of all websites with client-side code, making JavaScript the dominant programming language of the web. It’s wildly popular; beyond the browser, JavaScript powers server backends, mobile apps, desktop software, and even some embedded systems. According to several surveys, JavaScript consistently ranks among the most widely used programming languages in the world. //
The JavaScript partnership secured endorsements from 28 major tech companies, but amusingly, the December 1995 announcement now reads like a tech industry epitaph. The endorsing companies included Digital Equipment Corporation (absorbed by Compaq, then HP), Silicon Graphics (bankrupt), and Netscape itself (bought by AOL, dismantled). Sun Microsystems, co-creator of JavaScript and owner of Java, was acquired by Oracle in 2010. JavaScript outlived them all. //
Confusion about its relationship to Java continues: The two languages share a name, some syntax conventions, and virtually nothing else. Java was developed by James Gosling at Sun Microsystems using static typing and class-based objects. JavaScript uses dynamic typing and prototype-based inheritance. The distinction between the two languages, as one Stack Overflow user put it in 2010, is similar to the relationship between the words “car” and “carpet.” //
The language now powers not just websites but mobile applications through frameworks like React Native, desktop software through Electron, and server infrastructure through Node.js. Somewhere around 2 million to 3 million packages exist on npm, the JavaScript package registry.
Ping & traceroute
If you suspect a network problem between the monitoring system and your server, it’s helpful to have a traceroute from your NTP server to the monitoring system. You can traceroute to the monitoring network using 139.178.64.42 and 2604:1380:2:6000::15.
You can ping or traceroute from the monitoring network using HTTP, with:
curl http://trace.ntppool.org/traceroute/8.8.8.8
curl http://trace.ntppool.org/ping/8.8.8.8
for /l %i in (1,1,254) do @ping 192.168.1.%i -w 10 -n 1 | find "Reply"
This will ping all addresses from 192.168.1.1 to 192.168.1.254 one time each, wait 10ms for a reply (more than enough time on a local network) and show only the addresses that replied.
All of your MX record, DNS, blacklist and SMTP diagnostics in one integrated tool. Input a domain name or IP Address or Host Name. Links in the results will guide you to other relevant tools and information. And you'll have a chronological history of your results.
If you already know exactly what you want, you can force a particular test or lookup. Try some of these examples:
It’s always DNS
Amazon said the root cause of the outage was a bug in the software running the DynamoDB DNS management system. The system monitors the stability of load balancers by, among other things, periodically creating new DNS configurations for endpoints within the AWS network. The bug was a race condition: an error that makes a process dependent on the timing or sequence of events that are variable and outside the developers’ control. The result can be unexpected behavior and potentially harmful failures.
In this case, the race condition resided in the DNS Enactor, a DynamoDB component that constantly updates domain lookup tables in individual AWS endpoints to optimize load balancing as conditions change. As the enactor operated, it “experienced unusually high delays needing to retry its update on several of the DNS endpoints.” While the enactor was playing catch-up, a second DynamoDB component, the DNS Planner, continued to generate new plans. Then, a separate DNS Enactor began to implement them.
The timing of these two enactors triggered the race condition, which ended up taking out the entire DynamoDB service.
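As a generic illustration only (a toy sketch of a lost-update race, not AWS's code; the names and version numbers are invented), the failure pattern amounts to two workers writing a shared configuration without checking whether a newer version is already live:

import threading
import time

applied_plan = 0  # version of the DNS configuration currently live at an endpoint

def enactor(name, plan_version, delay):
    """Apply a DNS plan without checking whether a newer plan is already live."""
    global applied_plan
    time.sleep(delay)              # the slow enactor is stuck retrying its updates
    applied_plan = plan_version    # blindly overwrite whatever is currently applied
    print(f"{name} applied plan {plan_version}")

# One enactor is delayed and still holds an old plan; the other has the newest plan.
slow = threading.Thread(target=enactor, args=("slow-enactor", 41, 0.2))
fast = threading.Thread(target=enactor, args=("fast-enactor", 42, 0.0))
slow.start(); fast.start()
slow.join(); fast.join()

print(f"live plan: {applied_plan}")  # 41 -- the stale plan overwrote the newer one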
The NTP Pool DNS Mapper tries mapping user IP addresses to their DNS servers (and vice-versa). It also tracks which servers support EDNS-SUBNET and how well that actually matches the client IP (as seen by the HTTP server). You can see a demo at mist.ntppool.org.
It's done to research how to improve the DNS system used by the NTP Pool and potentially other similar applications.
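As a rough illustration of the "how well it matches" check (not the mapper's own code; the addresses are documentation examples), testing whether the client IP seen by the HTTP server falls inside the EDNS-SUBNET prefix reported on the DNS side takes one call to Python's ipaddress module:

import ipaddress

def ecs_matches_client(client_ip: str, ecs_prefix: str) -> bool:
    """True if the HTTP client address falls inside the EDNS Client Subnet prefix."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(ecs_prefix, strict=False)

print(ecs_matches_client("203.0.113.7", "203.0.113.0/24"))   # True: ECS reflects the client
print(ecs_matches_client("198.51.100.9", "203.0.113.0/24"))  # False: ECS does not match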
How can I help?
Thank you for asking! The easiest way to help is to help get more data.
If you have a website you can have your users help by adding one of the following two code snippets to your site.
It would be somewhat understandable that Autopilot stops working because Eight Sleep’s backend is down but not being able to even adjust the temperature locally is ridiculous and completely unacceptable for such a high-end (and expensive) product.
A person on X wrote: “Would be great if my bed wasn’t stuck in an inclined position due to an AWS outage. Cmon now.” //
Eight Sleep users will be relieved to hear that the company is working to make their products usable during Internet outages. But many are also questioning why Eight Sleep didn’t implement local control sooner. This isn’t Eight Sleep’s first outage, and users can also experience personal Wi-Fi problems. And there’s an obvious user benefit to being able to control their bed’s elevation and temperature without the Internet or if Eight Sleep ever goes out of business.
For Eight Sleep, though, making flagship features available without its app while still making enough money isn’t easy. Without forcing people to put their Eight Sleep devices online, it would be harder for Eight Sleep to convince people that Autopilot subscriptions should be mandatory. //
mygeek911 Ars Scholae Palatinae, Subscriptor++
“AWS ate my sleep” wasn’t on my Bingo card.
One company, a California-based startup named Muon Space, is partnering with SpaceX to bring Starlink connectivity to low-Earth orbit. Muon announced Tuesday it will soon install Starlink terminals on its satellites, becoming the first commercial user, other than SpaceX itself, to use Starlink for in-flight connectivity in low-Earth orbit. //
Putting a single Starlink mini-laser terminal on a satellite would keep the spacecraft connected 70 to 80 percent of the time, according to Greg Smirin, Muon’s president. There would still be some downtime as the laser reconnects to different Starlink satellites, but Smirin said a pair of laser terminals would allow a satellite to reach 100 percent coverage. //
SpaceX’s mini-lasers are designed to achieve link speeds of 25Gbps at distances up to 2,500 miles (4,000 kilometers). These speeds will “open new business models” for satellite operators who can now rely on the same “Internet speed and responsiveness as cloud providers and telecom networks on the ground,” Muon said in a statement. //
Live video from space has historically been limited to human spaceflight missions or rocket-mounted cameras that operate for a short time.
One example of that is the dazzling live video beamed back to Earth, through Starlink, from SpaceX’s Starship rockets. The laser terminals on Starship operate through the extreme heat of reentry, returning streaming video as plasma envelops the vehicle. This environment routinely causes radio blackouts for other spacecraft as they reenter the atmosphere. With optical links, that’s no longer a problem.
“This starts to enable a whole new category of capabilities, much the same way as when terrestrial computers went from dial-up to broadband,” Smirin said. “You knew what it could do, but we blew through bulletin boards very quickly to many different applications.”
Announced on September 24, Cloudflare’s Content Signals Policy is an effort to use the company’s influential market position to change how content is used by web crawlers. It involves updating millions of websites’ robots.txt files. //
Historically, robots.txt has simply included a list of paths on the domain flagged as either “allow” or “disallow.” It was technically not enforceable, but it became an effective honor system because there are advantages to it for the owners of both the website and the crawler: Website owners could dictate access for various business reasons, and it helped crawlers avoid working through data that wouldn’t be relevant. //
The Content Signals Policy initiative is a newly proposed format for robots.txt that intends to do that. It allows website operators to opt in or out of consenting to the following use cases, as worded in the policy:
- search: Building a search index and providing search results (e.g., returning hyperlinks and short excerpts from your website’s contents). Search does not include providing AI-generated search summaries.
- ai-input: Inputting content into one or more AI models (e.g., retrieval augmented generation, grounding, or other real-time taking of content for generative AI search answers).
- ai-train: Training or fine-tuning AI models.
Cloudflare has given all of its customers quick paths for setting those values on a case-by-case basis. Further, it has automatically updated robots.txt on the 3.8 million domains that already use Cloudflare’s managed robots.txt feature, with search defaulting to yes, ai-train to no, and ai-input blank, indicating a neutral position.
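As an illustration of what those defaults look like on disk (the Content-Signal directive name and values follow the policy wording above; the surrounding layout is a guess rather than Cloudflare's generated output), a robots.txt matching them would read roughly:

Content-Signal: search=yes, ai-train=no

User-Agent: *
Allow: /

Leaving ai-input off the Content-Signal line corresponds to the blank, neutral position described above.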
Scientists at the University of California, San Diego, and the University of Maryland, College Park, say they were able to pick up large amounts of sensitive traffic largely by just pointing a commercial off-the-shelf satellite dish at the sky from the roof of a university building in San Diego.
In its paper, Don't Look Up: There Are Sensitive Internal Links in the Clear on GEO Satellites [PDF], the team describes how it performed a broad scan of IP traffic on 39 GEO satellites across 25 distinct longitudes and found that half of the signals it picked up contained cleartext IP traffic.
This included unencrypted cellular backhaul data sent from the core networks of several US operators, destined for cell towers in remote areas. Also found was unprotected internet traffic heading for in-flight Wi-Fi users aboard airliners, and unencrypted call audio from multiple VoIP providers.
According to the researchers, they were able to identify some observed satellite data as corresponding to T-Mobile cellular backhaul traffic. This included text and voice call contents, user internet traffic, and cellular network signaling protocols, all "in the clear," but T-Mobile quickly enabled encryption after learning about the problem.
More seriously, the team was able to observe unencrypted traffic for military systems including detailed tracking data for coastal vessel surveillance and operational data of a police force.
In addition, they found retail, financial, and banking companies all using unencrypted satellite communications to link their internal networks at various sites. The researchers were able to see unencrypted login credentials, corporate emails, inventory records, and information from ATM cash dispensers.