One company, a California-based startup named Muon Space, is partnering with SpaceX to bring Starlink connectivity to low-Earth orbit. Muon announced Tuesday it will soon install Starlink terminals on its satellites, becoming the first commercial user, other than SpaceX itself, to use Starlink for in-flight connectivity in low-Earth orbit. //
Putting a single Starlink mini-laser terminal on a satellite would keep the spacecraft connected 70 to 80 percent of the time, according to Greg Smirin, Muon’s president. There would still be some downtime as the laser reconnects to different Starlink satellites, but Smirin said a pair of laser terminals would allow a satellite to reach 100 percent coverage. //
SpaceX’s mini-lasers are designed to achieve link speeds of 25Gbps at distances up to 2,500 miles (4,000 kilometers). These speeds will “open new business models” for satellite operators who can now rely on the same “Internet speed and responsiveness as cloud providers and telecom networks on the ground,” Muon said in a statement. //
Live video from space has historically been limited to human spaceflight missions or rocket-mounted cameras that operate for a short time.
One example of that is the dazzling live video beamed back to Earth, through Starlink, from SpaceX’s Starship rockets. The laser terminals on Starship operate through the extreme heat of reentry, returning streaming video as plasma envelops the vehicle. This environment routinely causes radio blackouts for other spacecraft as they reenter the atmosphere. With optical links, that’s no longer a problem.
“This starts to enable a whole new category of capabilities, much the same way as when terrestrial computers went from dial-up to broadband,” Smirin said. “You knew what it could do, but we blew through bulletin boards very quickly to many different applications.”
Announced on September 24, Cloudflare’s Content Signals Policy is an effort to use the company’s influential market position to change how content is used by web crawlers. It involves updating millions of websites’ robots.txt files. //
Historically, robots.txt simply included a list of paths on the domain, each flagged as either “allow” or “disallow.” It was not technically enforceable, but it became an effective honor system because it offered advantages to both website owners and crawler operators: website owners could dictate access for various business reasons, and crawlers avoided wading through data that wasn’t relevant to them. //
The Content Signals Policy initiative is a newly proposed format for robots.txt intended to express those preferences. It allows website operators to opt in or out of consenting to the following use cases, as worded in the policy:
- search: Building a search index and providing search results (e.g., returning hyperlinks and short excerpts from your website’s contents). Search does not include providing AI-generated search summaries.
- ai-input: Inputting content into one or more AI models (e.g., retrieval augmented generation, grounding, or other real-time taking of content for generative AI search answers).
- ai-train: Training or fine-tuning AI models.
Cloudflare has given all of its customers quick paths for setting those values on a case-by-case basis. Further, it has automatically updated robots.txt on the 3.8 million domains that already use Cloudflare’s managed robots.txt feature, with search defaulting to yes, ai-train to no, and ai-input blank, indicating a neutral position.
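Based on that description, a managed robots.txt combining the classic allow/disallow rules with a content signal might look like the sketch below. The domain-neutral rules and paths are illustrative only; consult Cloudflare's published policy text for the exact directive syntax.

```text
# Classic robots.txt rules plus a Content-Signal line reflecting the
# defaults described above (search=yes, ai-train=no, ai-input omitted
# to indicate a neutral position). Paths are placeholders.
User-Agent: *
Content-Signal: search=yes, ai-train=no
Disallow: /private/
Allow: /
```

As with the rest of robots.txt, the signal is a declaration of the operator's preferences, not a technical enforcement mechanism.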
Scientists at the University of California, San Diego, and the University of Maryland, College Park, say they were able to pick up large amounts of sensitive traffic largely by just pointing a commercial off-the-shelf satellite dish at the sky from the roof of a university building in San Diego.
In its paper, Don't Look Up: There Are Sensitive Internal Links in the Clear on GEO Satellites [PDF], the team describes how it performed a broad scan of IP traffic on 39 GEO satellites across 25 distinct longitudes and found that half of the signals they picked up contained cleartext IP traffic.
This included unencrypted cellular backhaul data sent from the core networks of several US operators, destined for cell towers in remote areas. Also found was unprotected internet traffic heading for in-flight Wi-Fi users aboard airliners, and unencrypted call audio from multiple VoIP providers.
According to the researchers, they were able to identify some observed satellite data as corresponding to T-Mobile cellular backhaul traffic. This included text and voice call contents, user internet traffic, and cellular network signaling protocols, all "in the clear." T-Mobile quickly enabled encryption after learning about the problem.
More seriously, the team was able to observe unencrypted traffic from military and government systems, including detailed tracking data for coastal vessel surveillance and operational data from a police force.
In addition, they found retail, financial, and banking companies all using unencrypted satellite communications to link their internal networks at various sites. The researchers were able to see unencrypted login credentials, corporate emails, inventory records, and information from ATM cash dispensers.
Are your links not malicious looking enough?
This tool is guaranteed to help with that!
What is this and what does it do?
This is a tool that takes any link and makes it look malicious. It works on the idea of a redirect, much like https://tinyurl.com/. Where TinyURL makes a URL shorter, this site makes it look malicious.
Place any link in the input below, press the button, and get back a fishy (phishy, heh... get it?) looking link. The fishy link doesn't actually do anything; it will just redirect you to the original link you provided.
It’s official: AOL’s dial-up internet has taken its last bow.
AOL previously confirmed it would be pulling the plug on Tuesday (Sept. 30) — writing in a brief update on its support site last month that it “routinely evaluates” its offerings and had decided to discontinue dial-up, as well as associated software “optimized for older operating systems,” from its plans.
Dial-up is now no longer advertised on AOL’s website. As of Wednesday, former company help pages like “connect to the internet with AOL Dialer” appeared unavailable — and nostalgic social media users took to the internet to say their final goodbyes.
lukem
If you're going to use test values in your test systems, why not use test values allocated for documentation purposes that aren't expected to be used in "live" networks?
IETF RFC 5737 section 3 allocates three IPv4 CIDR ranges for documentation:
192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24.
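A quick way to check whether an address falls in one of those documentation ranges, sketched in Python with the standard ipaddress module:

```python
import ipaddress

# RFC 5737 section 3: IPv4 ranges reserved for documentation and examples.
DOC_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # TEST-NET-1
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3
]

def is_documentation_ip(addr: str) -> bool:
    """Return True if addr belongs to an RFC 5737 documentation range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DOC_RANGES)

print(is_documentation_ip("198.51.100.42"))  # True
print(is_documentation_ip("8.8.8.8"))        # False
```

Addresses in these ranges are guaranteed never to be allocated on the public internet, so using them in test fixtures avoids accidentally hammering a real host.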
September 5, 2025 at 8:02 am
A command-line utility for retrieving files using HTTP, HTTPS and FTP protocols.
This guy literally dropped a 3-hour masterclass on building a web AI business from scratch
Re: I saw similar a couple times in that timeframe ...
My recollection, because I started to make phone bill payments in those years, was that the local operating telcos (first the “Baby Bells” and then their ever-merging successors) had two types of residential service on offer: one at a nominally lower base cost plus a charge for every local call, and one at a supposedly higher base cost that allowed unlimited local calling. Both, of course, charged a king’s ransom for a domestic long-distance call. An overseas long-distance call required a cardiologist when your bill arrived.
The Web Era arrives, the browser wars flare, and a bubble bursts.
Welcome to the second article in our three-part series on the history of the Internet. //
In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Coleridge.
A decade later, in his book “Dream Machines/Computer Lib,” he described Xanadu thusly: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.
"The NYT portrayed the Marubo people as a community unable to handle basic exposure to the internet, highlighting allegations that their youth had become consumed by pornography shortly after receiving access," the plaintiffs say.
This is the kind of information that all the sites you visit, as well as their advertisers and any embedded widget, can see and collect about you.
Discover and continuously monitor every SSL/TLS certificate in your network for expiration and revocation to avoid PKI-related downtime and risk.
There is no such thing as traceroute.
I used to deliver network training at work. It was freeform; I was given wide latitude to design it as I saw fit, so I focused on things I had seen people struggling with: clearly explaining VLANs in a less abstract manner than most literature, for instance, as well as actually explaining how QoS queuing works, which very few people understand properly.
One of the "chapters" in my presentation was about traceroute, and it more or less said "Don't use it, because you don't know how, and almost nobody you'll talk to does either, so try your best to ignore them." This is not just my opinion, it's backed up by people much more experienced than me. For a good summary I highly recommend this presentation.
But as good as that deck is, I always felt it left out a crucial piece of information: Traceroute, as far as the industry is concerned, does not exist.
Look it up. There is no RFC. There are no ports for traceroute, no rules in firewalls to accommodate it, no best practices for network operators. Why is that?
Traceroute has no history
First off: Yes, there is a traceroute RFC. It's RFC1393, it's 31 years old, and to my knowledge nothing supports it. The RFCs are jam-packed with brilliant ideas nobody implemented. This is one of them. The traceroute we have is completely unrelated to this.
Unsurprisingly however, it's a good description of how a traceroute protocol should work. //
As the linked presentation explains, traceroute simply no longer works in the modern world, at least not "as designed" - and it no longer can work that way, for several reasons not the least that networks have been abstracted in ways it did not anticipate.
There are now things like MPLS, which operate by encapsulating IP - in other words, putting a bag over a packet's head, throwing it in the back of a van, driving it across town and letting it loose so it has no idea how far it's traveled. Without getting much further into how that works: it is completely impossible for it to satisfy the expectations of traceroute.
This "tool" works purely at layer 3, so it's impossible for it to adapt to the sort of "layer 12-dimensional-chess" type shenanigan that MPLS does - and there are other problems, but they're all getting ahead of reality, since traceroute never even worked correctly as intended, and there's no reason it would.
Traceroute, you see, is "clever," which is an engineering term that means "fragile." When programmers discover something "clever," any ability they may have had to assess its sustainability or purpose-fit often goes out the window, because it's far more important to embrace the "cleverness" than to solve a problem reliably. //
I can't count how many times this happened, but I do remember after about four years of doing this, I had come up with a method for getting more accurate latency stats: just ping -i .1. Absolutely hammer the thing with pings while you have the customer test their usual business processes, and it'll be easier to see latency spikes if something is eating up too much bandwidth.
What I discovered is that running two of these in parallel would produce exactly 50% packet loss, with total reliability. I then tested and found that if I just fired up three or four normal pings, at the default interval, it would do the same thing. 30% or 40% packet loss.
There is no telling how many issues we prolonged because everyone was running their own pings simultaneously and the kernel was getting overloaded and throwing some of them out. This is a snapshot of every network support center, everywhere. It is a bad scene.
yuliyp
I think the "The Worst Diagnostics In The World" section is a bit simplistic about what traceroute does tell you. It can tell you lots of things beyond "you can reach all the way". Specifically, it can tell you at least some of the networks and locations your packet went through, and how far it definitely got. These are extremely powerful facts, as they rule out lots of problems. It's useful to be able to hand an ISP a "look, I can reach X location in your network and then the traceroute died", so they can't wonder "are you sure your firewall isn't blocking it?"
It's still a super-common tool for communicating issues between networking teams at various ASes. That the author's ISP thought they were too small to provide reasonable support to is not a strike against traceroute. Rather, it's a strike against that ISP.
Gather around the fire for another retelling of computer networking history. //
Systems Approach: A few weeks ago I stumbled onto an article titled "Traceroute isn’t real," which was reasonably entertaining while also not quite right in places.
I assume the title is an allusion to birds aren’t real, a well-known satirical conspiracy theory, so perhaps the article should also be read as satire. You don’t need me to critique the piece because that task has been taken on by the tireless contributors of Hacker News, who have, on this occasion, done a pretty good job of criticism.
One line that jumped out at me in the traceroute essay was the claim "it is completely impossible for [MPLS] to satisfy the expectations of traceroute." //
Many of them hated ATM with a passion – this was the height of the nethead vs bellhead wars – and one reason for that was the “cell tax.” ATM imposed a constant overhead (tax) of five header bytes for every 48 bytes of payload (over 10 percent), and this was the best case. A 20-byte IP header, by contrast, could be amortized over 1500-byte or longer packets (less than 2 percent).
Even with average packet sizes around 300 bytes (as they were at that time) IP came out a fair bit more efficient. And the ATM cell tax was in addition to the IP header overhead. ISPs paid a lot for their high-speed links and most were keen to use them efficiently. //
The other field that we quickly decided was essential for the tag header was time-to-live (TTL). It is the nature of distributed routing algorithms that transient loops can happen, and packets stuck in loops consume forwarding resources – potentially even interfering with the updates that will resolve the loop. Since labelled packets (usually) follow the path established by IP routing, a TTL was non-negotiable. I think we might have briefly considered something less than eight bits for TTL – who really needs to count up to 255 hops? – but that idea was discarded.
Route account
Which brings us to traceroute. Unlike the presumed reader of “Traceroute isn’t real,” we knew how traceroute worked, and we considered it an important tool for debugging. There is a very easy way to make traceroute operate over any sort of tunnel, since traceroute depends on packets with short TTLs getting dropped due to TTL expiry. //
ISPs didn’t love the fact that random end users can get a picture of their internal topology by running traceroute. And MPLS (or other tunnelling technologies) gave them a perfect tool for obscuring the topology.
First of all you can make sure that interior routers don’t send ICMP time exceeded messages. But you can also fudge the TTL when a packet exits a tunnel. Rather than copying the outer (MPLS) TTL to the inner (IP) TTL on egress, you can just decrement the IP TTL by one. Hey presto, your tunnel looks (to traceroute) like a single hop, since the IP TTL only decrements by one as packets traverse the tunnel, no matter how many router hops actually exist along the tunnel path. We made this a configurable option in our implementation and allowed for it in RFC 3032. //
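The two egress behaviours described above can be illustrated with a toy model (a sketch in Python, not anyone's actual implementation; the mode names "uniform" and "pipe" follow common MPLS TTL terminology, and the hop counts are made up):

```python
def ttl_after_tunnel(ip_ttl: int, tunnel_hops: int, mode: str) -> int:
    """Toy model of the IP TTL seen when a packet exits an MPLS tunnel.

    'uniform': copy the IP TTL into the label TTL on ingress, decrement
               it per label-switched hop, copy it back on egress - so
               the tunnel's interior hops are visible to traceroute.
    'pipe':    ignore the label TTL on egress and just decrement the IP
               TTL by one - so the whole tunnel looks like a single hop.
    """
    if mode == "uniform":
        return ip_ttl - tunnel_hops
    if mode == "pipe":
        return ip_ttl - 1
    raise ValueError(f"unknown mode: {mode}")

# A hypothetical 5-hop label-switched path:
print(ttl_after_tunnel(64, 5, "uniform"))  # 59: five hops consumed
print(ttl_after_tunnel(64, 5, "pipe"))     # 63: tunnel appears as one hop
```

In pipe mode, a probe whose TTL would have expired mid-tunnel never does, which is exactly why an operator who wants to hide internal topology configures it that way.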
John Smith 19, Gold badge
Interesting stuff
Sorry but yes I do find this sort of stuff interesting.
Without an understanding of how we got here, how will we know where to go next?
Just a thought. //
doublelayer, Silver badge
Responding to headlines never helps
This article's author goes to great lengths to argue against another post based on that post's admittedly bad headline. The reason for that is simple: the author has seen the "isn't real" bit of the headline and jumped to bad conclusions. It's not literal, but it's also not satire a la "birds aren't real". The article itself explains what they mean with the frequent claims that traceroute "doesn't exist":
From a network perspective, traceroute does not exist. It's simply an exploit, a trick someone discovered, so it's to be expected that it has no defined qualities. It's just random junk being thrown at a host, hoping that everything along the paths responds in a way that they are explicitly not required to. Is it any surprise that the resulting signal to noise ratio is awful?
I would have phrased this differently, without the hyperbole, because that clearly causes problems. But this response makes no point relevant to the practical consequence: traceroute is only really usable by people with a lot of knowledge about the topology of the networks they're tracing through, and plenty more about what the command is actually doing. Where it does respond, specifically on the viability of traceroute over MPLS, it simplifies the problem by pointing out that you can, if you desire, propagate the TTL field, then goes on to describe the ways operators can choose not to, ways that many of them did choose. It is fair to say the author of the anti-traceroute article got it wrong in claiming that MPLS couldn't support traceroute, but in practice "couldn't support" looks very similar to "doesn't, because operators deliberately chose not to". That is similar enough that it doesn't invalidate the author's main point: traceroute is dangerous in the hands of people who don't understand why it gives them less information than they think it does. //
ColinPa, Silver badge
It's the old problem
You get the first version out there, and see how popular it is. If it is popular you can add more widgets to it.
If you spend time up front doing all the things that, with hindsight, you should have done, you would never ship it. Another problem is that you can add all the features you think might be used to the original version, and then find they are not used, or have been superseded.
I was told: get something out there for people to try. When people come hammering on your door, add the things that multiple people want.
20 hrs
the spectacularly refined chap, Silver badge
Re: It's the old problem
Cf. the OSI network stack, which took so long to standardise that widespread adoption of IP had already filled the void it was intended to fill.
In some ways that is not ideal: 30+ years on there is still no standard job submission protocol for IP, while OSI had one from the start.
Creating a website doesn't have to be complicated or expensive. With the Publii app, the most intuitive CMS for static sites, you can create a beautiful, safe, and privacy-friendly website quickly and easily; perfect for anyone who wants a fast, secure website in a flash. //
The goal of Publii is to make website creation simple and accessible for everyone, regardless of skill level. With an intuitive user interface and built-in privacy tools, Publii combines powerful and flexible options that make it the perfect platform for anyone who wants a hassle-free way to build and manage a blog, portfolio or documentation website.
listmonk is a self-hosted, high performance one-way mailing list and newsletter manager. It comes as a standalone binary and the only dependency is a Postgres database. //
Simple API to send arbitrary transactional messages to subscribers using pre-defined templates. Send messages as e-mail, SMS, WhatsApp, or via any other medium through Messenger interfaces.
Manage millions of subscribers across many single and double opt-in one-way mailing lists with custom JSON attributes for each subscriber. Query and segment subscribers with SQL expressions.
Use the fast bulk importer (~10k records per second), the HTTP/JSON APIs, or the simple table schema to integrate external CRMs and subscriber databases.
Write HTML e-mails in a WYSIWYG editor, Markdown, raw syntax-highlighted HTML, or just plain text.
Use the media manager to upload images for e-mail campaigns on the server's filesystem, Amazon S3, or any S3 compatible (Minio) backend.
SpaceX: "Small-but-meaningful updates" can boost speed from about 100Mbps to 1Gbps.