Discover and continuously monitor every SSL/TLS certificate in your network for expiration and revocation to avoid PKI-related downtime and risk.
There is no such thing as traceroute.
I used to deliver network training at work. It was freeform; I was given wide latitude to design it as I saw fit, so I focused on things I had seen people struggling with: clearly explaining VLANs in a less abstract manner than most literature, for instance, as well as actually explaining how QoS queuing works, which very few people understand properly.
One of the "chapters" in my presentation was about traceroute, and it more or less said "Don't use it, because you don't know how, and almost nobody you'll talk to does either, so try your best to ignore them." This is not just my opinion, it's backed up by people much more experienced than me. For a good summary I highly recommend this presentation.
But as good as that deck is, I always felt it left out a crucial piece of information: Traceroute, as far as the industry is concerned, does not exist.
Look it up. There is no RFC. There are no ports for traceroute, no rules in firewalls to accommodate it, no best practices for network operators. Why is that?
Traceroute has no history
First off: Yes, there is a traceroute RFC. It's RFC1393, it's 31 years old, and to my knowledge nothing supports it. The RFCs are jam-packed with brilliant ideas nobody implemented. This is one of them. The traceroute we have is completely unrelated to this.
Unsurprisingly, however, it's a good description of how a traceroute protocol should work. //
As the linked presentation explains, traceroute simply no longer works in the modern world, at least not "as designed" - and it no longer can work that way, for several reasons not the least that networks have been abstracted in ways it did not anticipate.
There are now things like MPLS, which operate by encapsulating IP - in other words, putting a bag over a packet's head, throwing it in the back of a van, driving it across town and letting it loose so it has no idea how far it's traveled. Without getting much further into how that works: It is completely impossible for it to satisfy the expectations of traceroute.
This "tool" works purely at layer 3, so it's impossible for it to adapt to the sort of "layer 12-dimensional-chess" type shenanigan that MPLS does - and there are other problems, but they're all getting ahead of reality, since traceroute never even worked correctly as intended, and there's no reason it would.
Traceroute, you see, is "clever," which is an engineering term that means "fragile." When programmers discover something "clever," any ability they may have had to assess its sustainability or purpose-fit often goes out the window, because it's far more important to embrace the "cleverness" than to solve a problem reliably. //
I can't count how many times this happened, but I do remember after about four years of doing this, I had come up with a method for getting more accurate latency stats: just ping -i .1. Absolutely hammer the thing with pings while you have the customer test their usual business processes, and it'll be easier to see latency spikes if something is eating up too much bandwidth.
What I discovered is that running two of these in parallel would produce exactly 50% packet loss, with total reliability. I then tested and found that if I just fired up three or four normal pings, at the default interval, it would do the same thing: 30% or 40% packet loss.
There is no telling how many issues we prolonged because everyone was running their own pings simultaneously and the kernel was getting overloaded and throwing some of them out. This is a snapshot of every network support center, everywhere. It is a bad scene.
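For what it's worth, the trick itself is trivial to script. Here is a rough sketch of the "hammer it with pings and watch for spikes" approach; the target address and spike threshold are my own illustrative choices, not from the post, and on Linux iputils, ping intervals below 0.2 s generally require root:

```python
#!/usr/bin/env python3
"""Rough sketch of the "ping -i 0.1" latency-spike technique above.
Host and threshold are illustrative; sub-0.2s intervals on Linux
iputils ping generally need root."""
import re
import subprocess

HOST = "192.0.2.1"   # illustrative target (TEST-NET-1 range)
SPIKE_MS = 50.0      # flag round trips slower than this

# -i 0.1 fires ten pings a second; run it while the customer
# reproduces their usual workload and watch for spikes.
proc = subprocess.Popen(["ping", "-i", "0.1", HOST],
                        stdout=subprocess.PIPE, text=True)
try:
    for line in proc.stdout:
        m = re.search(r"time=([\d.]+)\s*ms", line)
        if m and float(m.group(1)) > SPIKE_MS:
            print("LATENCY SPIKE:", line.strip())
except KeyboardInterrupt:
    proc.terminate()
```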
yuliyp 40 days ago
I think the "The Worst Diagnostics In The World" section is a bit simplistic about what traceroute does tell you. It can tell you lots of things beyond "you can reach all the way". Specifically, it can tell you at least some of the networks and locations your packet went through and it can tell you how far it definitely got. These are extremely powerful tools as they rule out lots of problems. It's useful to be able to hand an ISP a "look, I can reach X location in your network and then the traceroute died" and they can't wonder "are you sure your firewall isn't blocking it?"
It's still a super-common tool for communicating issues between networking teams at various ASes. That the author's ISP thought they were too small to provide reasonable support to is not a strike against traceroute. Rather, it's a strike against that ISP.
Gather around the fire for another retelling of computer networking history. //
Systems Approach
A few weeks ago I stumbled onto an article titled "Traceroute isn't real," which was reasonably entertaining while also not quite right in places.
I assume the title is an allusion to birds aren’t real, a well-known satirical conspiracy theory, so perhaps the article should also be read as satire. You don’t need me to critique the piece because that task has been taken on by the tireless contributors of Hacker News, who have, on this occasion, done a pretty good job of criticism.
One line that jumped out at me in the traceroute essay was the claim "it is completely impossible for [MPLS] to satisfy the expectations of traceroute." //
Many of them hated ATM with a passion – this was the height of the nethead vs bellhead wars – and one reason for that was the “cell tax.” ATM imposed a constant overhead (tax) of five header bytes for every 48 bytes of payload (over 10 percent), and this was the best case. A 20-byte IP header, by contrast, could be amortized over 1500-byte or longer packets (less than 2 percent).
Even with average packet sizes around 300 bytes (as they were at that time) IP came out a fair bit more efficient. And the ATM cell tax was in addition to the IP header overhead. ISPs paid a lot for their high-speed links and most were keen to use them efficiently. //
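The cell-tax arithmetic is easy to check. A quick sketch of the figures quoted above (simplified to raw 48-byte cell fill, ignoring AAL5 trailer and padding details):

```python
# Back-of-the-envelope check of the "cell tax" numbers above.
ATM_HEADER, ATM_PAYLOAD = 5, 48   # bytes per cell
IP_HEADER = 20                    # bytes, no options

def atm_overhead(user_bytes: int) -> float:
    """Total ATM+IP overhead as a fraction of user payload."""
    total = user_bytes + IP_HEADER            # IP packet on the wire
    cells = -(-total // ATM_PAYLOAD)          # ceiling division
    wire = cells * (ATM_HEADER + ATM_PAYLOAD) # bytes actually sent
    return (wire - user_bytes) / user_bytes

print(f"ATM tax alone: {ATM_HEADER / ATM_PAYLOAD:.1%}")       # ~10.4%
print(f"IP header on 1500B packet: {IP_HEADER / 1500:.1%}")   # ~1.3%
print(f"IP alone on 300B payload: {IP_HEADER / 300:.1%}")     # ~6.7%
print(f"ATM+IP on 300B payload: {atm_overhead(300):.1%}")     # ~23.7%
```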
The other field that we quickly decided was essential for the tag header was time-to-live (TTL). It is the nature of distributed routing algorithms that transient loops can happen, and packets stuck in loops consume forwarding resources – potentially even interfering with the updates that will resolve the loop. Since labelled packets (usually) follow the path established by IP routing, a TTL was non-negotiable. I think we might have briefly considered something less than eight bits for TTL – who really needs to count up to 255 hops? – but that idea was discarded.
Route account
Which brings us to traceroute. Unlike the presumed reader of “Traceroute isn’t real,” we knew how traceroute worked, and we considered it an important tool for debugging. There is a very easy way to make traceroute operate over any sort of tunnel, since traceroute depends on packets with short TTLs getting dropped due to TTL expiry. //
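Since the whole trick rests on that one mechanism, a minimal sketch of classic UDP-probe traceroute may help make it concrete. This is illustrative rather than production code: it needs root for the raw ICMP socket, and the target address is a placeholder (33434 is the traditional base port):

```python
#!/usr/bin/env python3
"""Minimal sketch of the classic traceroute trick: send probes with
increasing TTLs and collect the ICMP Time Exceeded replies, each one
expiring a hop further along the path. Needs root; target is a
placeholder."""
import socket

TARGET = "192.0.2.1"   # illustrative address (TEST-NET-1)
PORT = 33434           # traditional "unlikely" UDP port
MAX_HOPS = 30

dest = socket.gethostbyname(TARGET)
for ttl in range(1, MAX_HOPS + 1):
    recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                         socket.IPPROTO_ICMP)
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                         socket.IPPROTO_UDP)
    send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    recv.settimeout(2.0)
    send.sendto(b"", (dest, PORT))
    try:
        _, addr = recv.recvfrom(512)   # ICMP Time Exceeded from hop N
        print(f"{ttl:2d}  {addr[0]}")
        if addr[0] == dest:            # Port Unreachable: we arrived
            break
    except socket.timeout:
        print(f"{ttl:2d}  *")          # hop stayed silent (it's allowed to)
    finally:
        send.close()
        recv.close()
```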
ISPs didn’t love the fact that random end users can get a picture of their internal topology by running traceroute. And MPLS (or other tunnelling technologies) gave them a perfect tool for obscuring the topology.
First of all you can make sure that interior routers don’t send ICMP time exceeded messages. But you can also fudge the TTL when a packet exits a tunnel. Rather than copying the outer (MPLS) TTL to the inner (IP) TTL on egress, you can just decrement the IP TTL by one. Hey presto, your tunnel looks (to traceroute) like a single hop, since the IP TTL only decrements by one as packets traverse the tunnel, no matter how many router hops actually exist along the tunnel path. We made this a configurable option in our implementation and allowed for it in RFC 3032. //
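Both egress behaviours are easy to model. A toy sketch (plain arithmetic, not real MPLS; hop counts are illustrative) of why the decrement-by-one option hides the tunnel from traceroute:

```python
# Toy model of the two TTL-handling options at tunnel egress
# described above (RFC 3443 calls them the uniform and pipe models).

def traverse_tunnel(ip_ttl: int, tunnel_hops: int, copy_ttl: bool) -> int:
    """Return the IP TTL after the packet exits the tunnel."""
    if copy_ttl:
        # Uniform model: IP TTL copied into the MPLS TTL at ingress,
        # decremented per label-switched hop, copied back at egress.
        return ip_ttl - tunnel_hops
    # Pipe model: the MPLS TTL is independent; the IP TTL just loses
    # one, so the whole tunnel looks like a single hop to traceroute.
    return ip_ttl - 1

for copy in (True, False):
    print(f"copy_ttl={copy}: TTL 64 -> {traverse_tunnel(64, 5, copy)}")
# copy_ttl=True:  TTL 64 -> 59  (all five interior hops visible)
# copy_ttl=False: TTL 64 -> 63  (tunnel collapsed into one hop)
```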
John Smith 19 (Gold badge)
Interesting stuff
Sorry but yes I do find this sort of stuff interesting.
Without an understanding of how we got here, how will we know where to go next?
Just a thought. //
doublelayer (Silver badge)
Responding to headlines never helps
This article's author goes to great lengths to argue against another post based on that post's admittedly bad headline. The reason for that is simple: the author has seen the "isn't real" bit of the headline and jumped to bad conclusions. It's not literal, but it's also not satire a la "birds aren't real". The article itself explains what they mean with the frequent claims that traceroute "doesn't exist":
From a network perspective, traceroute does not exist. It's simply an exploit, a trick someone discovered, so it's to be expected that it has no defined qualities. It's just random junk being thrown at a host, hoping that everything along the paths responds in a way that they are explicitly not required to. Is it any surprise that the resulting signal to noise ratio is awful?
I would have phrased this differently, without the hyperbole, because that clearly causes problems. But this response makes no point relevant to the network-administration consequences of a traceroute command that is pretty much only usable by people with a lot of knowledge about the topology of whatever networks they're tracing through, and plenty more about what the command is actually doing. Where it does respond, specifically on the viability of traceroute over MPLS, it simplifies the problem by pointing out that you can, if you desire, propagate the TTL field manually, then goes on to describe the many ways you can choose not to, ways that everyone in practice chose. It is fair to say the author of the anti-traceroute article got it wrong when they claimed that MPLS couldn't support it, but in practice "couldn't support" looks very similar to "doesn't, because they deliberately chose not to". It is similar enough that it doesn't invalidate the author's main point: traceroute is a command that is dangerous in the hands of people who don't understand why it gives them less information than they think it does. //
ColinPa (Silver badge)
It's the old problem
You get the first version out there, and see how popular it is. If it is popular you can add more widgets to it.
If you spend time up front doing all the things that, with hindsight, you should have done, you would never ship it. Another problem is that you can add all the features you think might be used to the original version, and then find they are not used, or have been superseded.
I was told: get something out there for people to try. When people come hammering on your door, add the things that multiple people want.
20 hrs
the spectacularly refined chap (Silver badge)
Re: It's the old problem
Cf the OSI network stack, which took so long to standardise that widespread adoption of IP had already filled the void it was intended to fill.
In some ways that is not ideal: 30+ years on, there is still no standard job submission protocol for IP; OSI had one from the start.
Creating a website doesn't have to be complicated or expensive. With the Publii app, the most intuitive CMS for static sites, you can create a beautiful, safe, and privacy-friendly website quickly and easily; perfect for anyone who wants a fast, secure website in a flash. //
The goal of Publii is to make website creation simple and accessible for everyone, regardless of skill level. With an intuitive user interface and built-in privacy tools, Publii combines powerful and flexible options that make it the perfect platform for anyone who wants a hassle-free way to build and manage a blog, portfolio or documentation website.
listmonk is a self-hosted, high performance one-way mailing list and newsletter manager. It comes as a standalone binary and the only dependency is a Postgres database. //
Simple API to send arbitrary transactional messages to subscribers using pre-defined templates (see the sketch after this list). Send messages as e-mail, SMS, WhatsApp messages, or any medium via Messenger interfaces.
Manage millions of subscribers across many single and double opt-in one-way mailing lists with custom JSON attributes for each subscriber. Query and segment subscribers with SQL expressions.
Use the fast bulk importer (~10k records per second), the HTTP/JSON APIs, or the simple table schema to integrate external CRMs and subscriber databases.
Write HTML e-mails in a WYSIWYG editor, Markdown, raw syntax-highlighted HTML, or just plain text.
Use the media manager to upload images for e-mail campaigns on the server's filesystem, Amazon S3, or any S3-compatible (MinIO) backend.
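For the transactional API mentioned above, something like the following is the typical shape. This is a sketch only: the host, credentials, template ID, and payload fields are placeholders, and the exact endpoint and auth scheme should be verified against your instance's API documentation.

```python
#!/usr/bin/env python3
"""Sketch of sending a transactional message through listmonk's API.
Host, credentials, template ID, and payload fields are placeholders;
check your instance's API docs for the exact endpoint and auth."""
import base64
import json
import urllib.request

LISTMONK = "http://localhost:9000"   # assumed local instance

payload = {
    "subscriber_email": "user@example.com",  # an existing subscriber
    "template_id": 3,                        # a pre-defined tx template
    "data": {"order_id": "1234"},            # variables for the template
}

req = urllib.request.Request(
    f"{LISTMONK}/api/tx",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        # BasicAuth with an API user; placeholder credentials.
        "Authorization": "Basic "
        + base64.b64encode(b"api_user:token").decode(),
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```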
SpaceX: "Small-but-meaningful updates" can boost speed from about 100Mbps to 1Gbps.
When the British government announced last week that it was transferring sovereignty of an island in the Indian Ocean to the country of Mauritius, Gareth immediately realized its online implications: the end of the .io domain suffix. In this piece, he explores how geopolitical changes can unexpectedly disrupt the digital world. His exploration of historical precedents—such as the fall of the Soviet Union and the breakup of Yugoslavia—offers valuable context for tech founders, users, and observers. //
On October 3, the British government announced that it was giving up sovereignty over a small tropical atoll in the Indian Ocean known as the Chagos Islands. The islands would be handed over to the neighboring island country of Mauritius, about 1,100 miles off the southeastern coast of Africa.
The story did not make the tech press, but perhaps it should have. The decision to transfer the islands to their new owner will result in the loss of one of the tech and gaming industry’s preferred top-level domains: .io. //
Once this treaty is signed, the British Indian Ocean Territory will cease to exist. Various international bodies will update their records. In particular, the International Organization for Standardization (ISO) will remove country code “IO” from its specification. The Internet Assigned Numbers Authority (IANA), which creates and delegates top-level domains, uses this specification to determine which top-level country domains should exist. Once IO is removed, the IANA will refuse to allow any new registrations with a .io domain. It will also automatically begin the process of retiring existing ones. (There is no official count of the number of extant .io domains.)
Officially, .io—and countless websites—will disappear. At a time when domains can go for millions of dollars, it’s a shocking reminder that there are forces outside of the internet that still affect our digital lives. //
.io has become popular with startups, particularly those involved in crypto. These are businesses that often identify with one of the original principles of the internet—that cyberspace grants a form of independence to those who use it. Yet it is the long tail of real-world history that might force on them a major change.
The IANA may fudge its own rules and allow .io to continue to exist. Money talks, and there is a lot of it tied up in .io domains. However, the history of the USSR and Yugoslavia still looms large, and the IANA may feel that playing fast and loose with top-level domains will only come back to haunt it.
Whatever happens, the warning for future tech founders is clear: Be careful when picking your top-level domain. Physical history is never as separate from our digital future as we like to think.
Published 1997
Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff
The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. //
In this paper, several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies),
SAN DIEGO — A U.S. Navy chief who wanted the internet so she and other enlisted leaders could scroll social media, check sports scores and watch movies while deployed had an unauthorized Starlink satellite dish installed on a warship and lied to her commanding officer to keep it secret, according to investigators.
Internet access is restricted while a ship is underway to maintain bandwidth for military operations and to protect against cybersecurity threats. //
She and more than a dozen other chief petty officers used it to send messages home and keep up with the news and bought signal amplifiers during a stop in Pearl Harbor, Hawaii, after they realized the wireless signal did not cover all areas of the ship, according to the investigation.
Those involved also used the Chief Petty Officer Association’s debit card to pay off the $1,000 monthly Starlink bill.
The network was not shared with rank-and-file sailors.
Marrero tried to hide the network, which she called “Stinky,” by renaming it as a printer, denying its existence and even intercepting a comment about the network left in the commanding officer's suggestion box, according to the investigation.
Tailscale is a VPN service that makes the devices and applications you own accessible anywhere in the world, securely and effortlessly. It enables encrypted point-to-point connections using the open source WireGuard protocol, which means only devices on your private network can communicate with each other.
The Benefits
Building on top of a secure network fabric, Tailscale offers speed, stability, and simplicity over traditional VPNs.
Tailscale is fast and reliable. Unlike traditional VPNs, which tunnel all network traffic through a central gateway server, Tailscale creates a peer-to-peer mesh network (called a tailnet):
SIMPLE & INEXPENSIVE WEBSITE MONITORING.
Pricing
You only pay for what you use, check by check. 1 credit = 1 check.
For example, check 10 websites every 2 minutes from 1.83€/month (up to 5.49€/m)
Requests:
200,000 = 5€
500,000 = 10€
SMS alerts cost 7,500 credits (≈ 0.10€) per message. //
30 days of one check every 3 minutes = 14,400/month
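The credit arithmetic is easy to verify against the example above; a quick sketch (tier prices from the listing, everything else illustrative):

```python
# Quick check of the credit arithmetic above (1 credit = 1 check).
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def credits_per_month(sites: int, interval_min: float) -> int:
    """Credits consumed checking `sites` every `interval_min` minutes."""
    return int(sites * MINUTES_PER_MONTH / interval_min)

print(credits_per_month(1, 3))    # 14,400: one site, every 3 minutes
print(credits_per_month(10, 2))   # 216,000: ten sites, every 2 minutes
# At the listed 200,000-credits-for-5€ tier, 216,000 checks comes to
# about 5.40€/month, in line with the "up to 5.49€/m" figure quoted
# above; cheaper bulk tiers presumably explain the "from 1.83€" bound.
```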
One of the most common pre-sales questions we get at rsync.net is:
"Why should I pay a per gigabyte rate for storage when these other providers are offering unlimited storage for a low flat rate?"
The short answer is: paying a flat rate for unlimited storage, or transfer, pits you against your provider in an antagonistic relationship. This is not the kind of relationship you want to have with someone providing critical functions.
Now for the long answer...
JCI now offers FreeBSD 11 Cloud Servers that provide significant enhancements over previous versions of FreeBSD. Under FreeBSD 11 you will be running a true virtual cloud server and not the more limited "jail" VPS. This allows completely independent server instances with on-the-fly expandability, secure root access and custom backup capability.
Choose the server from our standard FreeBSD server plans below with the memory, disk, IPs, bandwidth and backup required to support your application.
Once we engineered a selective shutdown switch into the Internet, and implemented a way to do what Internet engineers have spent decades making sure never happens, we would have created an enormous security vulnerability. We would make the job of any would-be terrorist intent on bringing down the Internet much easier.
Computer and network security is hard, and every Internet system we’ve ever created has security vulnerabilities. It would be folly to think this one wouldn’t as well. And given how unlikely the risk is, any actual shutdown would be far more likely to be a result of an unfortunate error or a malicious hacker than of a presidential order.
But the main problem with an Internet kill switch is that it’s too coarse a hammer.
Yes, the bad guys use the Internet to communicate, and they can use it to attack us. But the good guys use it, too, and the good guys far outnumber the bad guys.
Shutting the Internet down, either the whole thing or just a part of it, even in the face of a foreign military attack would do far more damage than it could possibly prevent. And it would hurt others whom we don’t want to hurt.
For years we’ve been bombarded with scare stories about terrorists wanting to shut the Internet down. They’re mostly fairy tales, but they’re scary precisely because the Internet is so critical to so many things.
Why would we want to terrorize our own population by doing exactly what we don’t want anyone else to do? And a national emergency is precisely the worst time to do it.
Just implementing the capability would be very expensive; I would rather see that money going toward securing our nation’s critical infrastructure from attack.
Welcome to Hurricane Electric's Network Looking Glass. The information provided by and the support of this service are on a best effort basis.
ping, traceroute
NetChoice often argued out of both sides of their mouth when Section 230 protections were in play. During back and forth with NetChoice counsel, Justice Gorsuch observed that NetChoice’s argument was, conveniently, both sides of the coin:
“So it’s speech for the purposes of the First Amendment, your speech, your editorial control, but when we get to Section 230, your submission is that that isn’t your speech?
So now, the cases head back to the lower courts, who've been tasked with doing their homework and using the proper framework to analyze the issues. //
anon-7lqi anon-tf71
4 hours ago
I think administratively you can declare any platform with more than 25% market share a "public square".
Public squares are obliged to allow speech that smaller venues do not have to allow.
Keeps 230 intact; focuses the law on the companies large enough to impact the public in any meaningful way.
JustCause_for_Liberty anon-7lqi
3 hours ago edited
I do not even think it's that hard. They get to declare whether they are publishers or platforms. If you are a publisher you get no protections from 230 and are subject to liability claims for all content. If you are a platform you get liability protections from 230 but lose all rights to moderate users' content, speech, and posts. If laws are broken by users, then refer those cases to law enforcement. Otherwise it's not their job.
Just FYI, their self-identification as publisher or platform is for the entirety of that service. You either have to sell the company or completely shut down the service and deploy a completely separate service afterwards to redeclare.