Are your links not malicious-looking enough?
This tool is guaranteed to help with that!
What is this and what does it do?
This is a tool that takes any link and makes it look malicious. It works on the idea of a redirect, much like https://tinyurl.com/: where TinyURL makes a URL shorter, this site makes it look malicious.
Place any link in the input below, press the button, and get back a fishy (phishy, heh... get it?) looking link. The fishy link doesn't actually do anything; it just redirects you to the original link you provided.
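For the curious, the whole trick fits in a few lines. Here's a minimal sketch of the redirect idea (not this site's actual code) using only Python's standard library; the scary-looking path, the target URL, and the port are made-up placeholders:

```python
# Minimal redirect service sketch: a scary-looking path 302s to a real URL.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping from "malicious" paths to their real destinations.
LINKS = {"/totally-not-a-virus.exe.scr": "https://example.com/"}

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = LINKS.get(self.path)
        if target:
            # 302 Found: the fishy path just bounces the browser along.
            self.send_response(302)
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404)

HTTPServer(("", 8080), Redirector).serve_forever()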
It’s official: AOL’s dial-up internet has taken its last bow.
AOL previously confirmed it would be pulling the plug on Tuesday (Sept. 30) — writing in a brief update on its support site last month that it “routinely evaluates” its offerings and had decided to discontinue dial-up, as well as associated software “optimized for older operating systems,” from its plans.
Dial-up is now no longer advertised on AOL’s website. As of Wednesday, former company help pages like “connect to the internet with AOL Dialer” appeared unavailable — and nostalgic social media users took to the internet to say their final goodbyes.
lukem
If you're going to use test values in your test systems, why not use test values allocated for documentation purposes that aren't expected to be used in "live" networks?
IETF RFC 5737 section 3 allocates three IPv4 CIDR ranges for documentation:
192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24.
September 5, 2025 at 8:02 am
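As a footnote to lukem's comment above (my sketch, not the commenter's), Python's stdlib ipaddress module makes it trivial to confirm a test value falls inside one of the RFC 5737 documentation ranges:

```python
# Check whether an address lies in an RFC 5737 documentation range.
import ipaddress

DOC_RANGES = [ipaddress.ip_network(n) for n in
              ("192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24")]

def is_documentation(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DOC_RANGES)

print(is_documentation("203.0.113.7"))   # True
print(is_documentation("8.8.8.8"))       # False
```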
A command-line utility for retrieving files using HTTP, HTTPS and FTP protocols.
This guy literally dropped a 3-hour masterclass on building a web AI business from scratch
Re: I saw similar a couple times in that timeframe ...
My recollection, because I started to make phone bill payments in those years, was that the local operating telcos (first the “Baby Bells” and then their ever-merging successors) had two types of residential service on offer: one at a nominally lower base cost plus a charge for every local call, and one at a supposedly higher base cost that allowed unlimited local calling. Both, of course, charged a king’s ransom for a domestic long-distance call. An overseas long-distance call required a cardiologist when your bill arrived.
The Web Era arrives, the browser wars flare, and a bubble bursts.
Welcome to the second article in our three-part series on the history of the Internet. //
In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Taylor Coleridge.
A decade later, in his book “Dream Machines/Computer Lib,” he described Xanadu thusly: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.
"The NYT portrayed the Marubo people as a community unable to handle basic exposure to the internet, highlighting allegations that their youth had become consumed by pornography shortly after receiving access," the plaintiffs say.
This is the kind of information that all the sites you visit, as well as their advertisers and any embedded widget, can see and collect about you.
Discover and continuously monitor every SSL/TLS certificate in your network for expiration and revocation to avoid PKI-related downtime and risk.
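Stripped of the product pitch, the core of such an expiry check is small. A minimal sketch using only Python's standard library (the hostname is a placeholder, and real monitoring would also track revocation, which this doesn't):

```python
# Report how many days remain before a server's TLS certificate expires.
import socket, ssl, time

def days_until_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # ssl.cert_time_to_seconds parses the certificate's "notAfter" field.
    return (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400

print(round(days_until_expiry("example.com"), 1))
```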
There is no such thing as traceroute.
I used to deliver network training at work. It was freeform; I was given wide latitude to design it as I saw fit, so I focused on things I had seen people struggling with - clearly explaining VLANs in a less abstract manner than most literature, for instance, as well as actually explaining how QoS queuing works, which very few people understand properly.
One of the "chapters" in my presentation was about traceroute, and it more or less said "Don't use it, because you don't know how, and almost nobody you'll talk to does either, so try your best to ignore them." This is not just my opinion; it's backed up by people much more experienced than me. For a good summary I highly recommend this presentation.
But as good as that deck is, I always felt it left out a crucial piece of information: Traceroute, as far as the industry is concerned, does not exist.
Look it up. There is no RFC. There are no ports for traceroute, no rules in firewalls to accommodate it, no best practices for network operators. Why is that?
Traceroute has no history
First off: Yes, there is a traceroute RFC. It's RFC 1393, it's 31 years old, and to my knowledge nothing supports it. The RFCs are jam-packed with brilliant ideas nobody implemented. This is one of them. The traceroute we have is completely unrelated to it.
Unsurprisingly however, it's a good description of how a traceroute protocol should work. //
As the linked presentation explains, traceroute simply no longer works in the modern world, at least not "as designed" - and it can no longer work that way, for several reasons, not least that networks have been abstracted in ways it did not anticipate.
There are now things like MPLS, which operate by encapsulating IP - in other words, putting a bag over a packet's head, throwing it in the back of a van, driving it across town and letting it loose so it has no idea how far it's traveled. Without getting much further into how that works: It is completely impossible for it to satisfy the expectations of traceroute.
This "tool" works purely at layer 3, so it's impossible for it to adapt to the sort of "layer 12-dimensional-chess" shenanigans that MPLS pulls. There are other problems too, but they're all getting ahead of reality, since traceroute never worked correctly even as intended, and there's no reason it would.
Traceroute, you see, is "clever," which is an engineering term that means "fragile." When programmers discover something "clever," any ability they may have had to assess its sustainability or purpose-fit often goes out the window, because it's far more important to embrace the "cleverness" than to solve a problem reliably. //
I can't count how many times this happened, but I do remember after about four years of doing this, I had come up with a method for getting more accurate latency stats: just ping -i .1. Absolutely hammer the thing with pings while you have the customer test their usual business processes, and it'll be easier to see latency spikes if something is eating up too much bandwidth.
What I discovered is that running two of these in parallel would produce exactly 50% packet loss, with total reliability. I then tested and found that if I just fired up three or four normal pings, at the default interval, it would do the same thing. 30% or 40% packet loss.
There is no telling how many issues we prolonged because everyone was running their own pings simultaneously and the kernel was getting overloaded and throwing some of them out. This is a snapshot of every network support center, everywhere. It is a bad scene.
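If you want to reproduce the effect described above, something like the following sketch will do it (my code, not the author's; the host and worker count are placeholders, and the output parsing assumes Linux iputils ping):

```python
# Launch several default-interval pings in parallel and compare the loss
# each one reports -- the article says this alone manufactured 30-40% "loss".
import re, subprocess
from concurrent.futures import ThreadPoolExecutor

def ping_loss(host: str, count: int = 50) -> float:
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    # iputils summary line looks like: "... 0% packet loss, time 49042ms"
    m = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    return float(m.group(1)) if m else float("nan")

host = "192.0.2.1"  # RFC 5737 documentation address; substitute a real target
with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(ping_loss, [host] * 4))
print(results)
```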
yuliyp 40 days ago
I think the "The Worst Diagnostics In The World" section is a bit simplistic about what traceroute does tell you. It can tell you lots of things beyond "you can reach all the way". Specifically, it can tell you at least some of the networks and locations your packet went through, and it can tell you how far it definitely got. These are extremely powerful tools, as they rule out lots of problems. It's useful to be able to hand an ISP a "look, I can reach X location in your network and then the traceroute died", so they can't fall back on "are you sure your firewall isn't blocking it?"
It's still a super-common tool for communicating issues between networking teams at various ASes. That the author's ISP considered them too small to be worth supporting properly is not a strike against traceroute. Rather, it's a strike against that ISP.
Gather around the fire for another retelling of computer networking history. //
Systems Approach A few weeks ago I stumbled onto an article titled "Traceroute isn’t real," which was reasonably entertaining while also not quite right in places.
I assume the title is an allusion to birds aren’t real, a well-known satirical conspiracy theory, so perhaps the article should also be read as satire. You don’t need me to critique the piece, because that task has been taken on by the tireless contributors of Hacker News, who have, on this occasion, done a pretty good job of it.
One line that jumped out at me in the traceroute essay was the claim "it is completely impossible for [MPLS] to satisfy the expectations of traceroute." //
Many of them hated ATM with a passion – this was the height of the nethead vs bellhead wars – and one reason for that was the “cell tax.” ATM imposed a constant overhead (tax) of five header bytes for every 48 bytes of payload (over 10 percent), and this was the best case. A 20-byte IP header, by contrast, could be amortized over 1500-byte or longer packets (less than 2 percent).
Even with average packet sizes around 300 bytes (as they were at that time) IP came out a fair bit more efficient. And the ATM cell tax was in addition to the IP header overhead. ISPs paid a lot for their high-speed links and most were keen to use them efficiently. //
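The arithmetic is easy to check. A quick sketch following the figures in the text (AAL5 padding and trailer are ignored, which only flatters ATM):

```python
# Back-of-the-envelope check of the "cell tax" figures quoted above.
import math

ATM_HEADER, ATM_PAYLOAD = 5, 48          # bytes per ATM cell
IP_HEADER = 20                            # bytes, no options

print(ATM_HEADER / ATM_PAYLOAD)           # ~0.104 -> "over 10 percent"
print(IP_HEADER / 1500)                   # ~0.013 -> "less than 2 percent"

# A 300-byte IP packet (typical average then) carried over ATM:
cells = math.ceil(300 / ATM_PAYLOAD)      # 7 cells
print(cells * ATM_HEADER / 300)           # ~0.117 cell tax...
print(IP_HEADER / 300)                    # ...on top of IP's own ~0.067
```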
The other field that we quickly decided was essential for the tag header was time-to-live (TTL). It is the nature of distributed routing algorithms that transient loops can happen, and packets stuck in loops consume forwarding resources – potentially even interfering with the updates that will resolve the loop. Since labelled packets (usually) follow the path established by IP routing, a TTL was non-negotiable. I think we might have briefly considered something less than eight bits for TTL – who really needs to count up to 255 hops? – but that idea was discarded.
Route account
Which brings us to traceroute. Unlike the presumed reader of “Traceroute isn’t real,” we knew how traceroute worked, and we considered it an important tool for debugging. There is a very easy way to make traceroute operate over any sort of tunnel, since traceroute depends on packets with short TTLs getting dropped due to TTL expiry. //
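That TTL-expiry trick is the entirety of the mechanism. Here is a minimal sketch of it using scapy (a third-party library; raw sockets need root, and the destination below is a placeholder documentation address). Note that nothing obliges any router along the path to answer:

```python
# Sketch of TTL-based tracing: send probes with increasing TTLs and listen
# for ICMP Time Exceeded (type 11) replies from the routers where they die.
from scapy.all import IP, UDP, ICMP, sr1

def trace(dst: str, max_hops: int = 30) -> None:
    for ttl in range(1, max_hops + 1):
        reply = sr1(IP(dst=dst, ttl=ttl) / UDP(dport=33434),
                    timeout=2, verbose=0)
        if reply is None:
            print(f"{ttl:2d}  *")                # silent hop
        elif reply.haslayer(ICMP) and reply[ICMP].type == 11:
            print(f"{ttl:2d}  {reply.src}")      # Time Exceeded en route
        else:
            print(f"{ttl:2d}  {reply.src}  (reached)")
            break

trace("203.0.113.10")  # RFC 5737 documentation address; replace as needed
```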
ISPs didn’t love the fact that random end users could get a picture of their internal topology by running traceroute. And MPLS (or other tunnelling technologies) gave them a perfect tool for obscuring the topology.
First of all you can make sure that interior routers don’t send ICMP time exceeded messages. But you can also fudge the TTL when a packet exits a tunnel. Rather than copying the outer (MPLS) TTL to the inner (IP) TTL on egress, you can just decrement the IP TTL by one. Hey presto, your tunnel looks (to traceroute) like a single hop, since the IP TTL only decrements by one as packets traverse the tunnel, no matter how many router hops actually exist along the tunnel path. We made this a configurable option in our implementation and allowed for it in RFC 3032. //
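In pseudocode terms, the choice looks something like this sketch (my rendering, not any vendor's implementation) for the router that pops the final label:

```python
# Sketch of the two egress TTL policies described above, applied when the
# last MPLS label is popped. Names are illustrative, not a real vendor API.
def egress_ip_ttl(inner_ip_ttl: int, outer_mpls_ttl: int,
                  propagate_ttl: bool) -> int:
    if propagate_ttl:
        # Copy the MPLS TTL back into the IP header: every label-switched
        # hop decrements something traceroute can see.
        return min(inner_ip_ttl, outer_mpls_ttl)
    # Don't propagate: decrement the IP TTL by one, so the whole tunnel
    # collapses into a single hop from traceroute's point of view.
    return inner_ip_ttl - 1
```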
John Smith 19
Interesting stuff
Sorry but yes I do find this sort of stuff interesting.
Without an understanding of how we got here, how will we know where to go next?
Just a thought. //
doublelayer
Responding to headlines never helps
This article's author goes to great lengths to argue against another post based on that post's admittedly bad headline. The reason for that is simple: the author has seen the "isn't real" bit of the headline and jumped to bad conclusions. It's not literal, but it's also not satire à la "birds aren't real". The article itself explains what it means by the frequent claims that traceroute "doesn't exist":
From a network perspective, traceroute does not exist. It's simply an exploit, a trick someone discovered, so it's to be expected that it has no defined qualities. It's just random junk being thrown at a host, hoping that everything along the paths responds in a way that they are explicitly not required to. Is it any surprise that the resulting signal to noise ratio is awful?
I would have phrased this differently, without the hyperbole, because that clearly causes problems. But this response makes no point relevant to the network administration consequences of a command that is really only usable by people with a lot of knowledge about the topology of the networks they're tracing through, and plenty more about what the command is actually doing. Where it does respond, specifically on the viability of traceroute over MPLS, it simplifies the problem: it points out that you can, if you desire, propagate the TTL field, then goes on to describe the many ways you can choose not to - ways that everyone chose to use. It is fair to say the author of the anti-traceroute article got it wrong in claiming that MPLS couldn't support traceroute, but in practice "couldn't support" looks very similar to "doesn't, because operators deliberately chose not to". That is similar enough that it doesn't invalidate the author's main point: traceroute is dangerous in the hands of people who don't understand why it gives them less information than they think it does. //
ColinPa
It's the old problem
You get the first version out there, and see how popular it is. If it is popular you can add more widgets to it.
If you spend time up front doing all the things that, with hindsight, you should have done, you would never ship it. Another problem is that you can add all the features you think might be used to the original version, and then find they are not used, or have been superseded.
I was told: get something out there for people to try. When people come hammering on your door, add the things that multiple people want.
the spectacularly refined chap
Re: It's the old problem
Cf. the OSI network stack, which took so long to standardise that widespread adoption of IP had already filled the void it was intended to fill.
In some ways that is not ideal: 30+ years on, there is still no standard job submission protocol for IP; OSI had one from the start.
Creating a website doesn't have to be complicated or expensive. With the Publii app, the most intuitive CMS for static sites, you can create a beautiful, safe, and privacy-friendly website quickly and easily; perfect for anyone who wants a fast, secure website in a flash. //
The goal of Publii is to make website creation simple and accessible for everyone, regardless of skill level. With an intuitive user interface and built-in privacy tools, Publii combines powerful and flexible options that make it the perfect platform for anyone who wants a hassle-free way to build and manage a blog, portfolio or documentation website.
listmonk is a self-hosted, high performance one-way mailing list and newsletter manager. It comes as a standalone binary and the only dependency is a Postgres database. //
Simple API to send arbitrary transactional messages to subscribers using pre-defined templates (see the sketch after this list). Send messages as e-mail, SMS, WhatsApp messages or any medium via Messenger interfaces.
Manage millions of subscribers across many single and double opt-in one-way mailing lists with custom JSON attributes for each subscriber. Query and segment subscribers with SQL expressions.
Use the fast bulk importer (~10k records per second) or use HTTP/JSON APIs or interact with the simple table schema to integrate external CRMs and subscriber databases.
Write HTML e-mails in a WYSIWYG editor, Markdown, raw syntax-highlighted HTML, or just plain text.
Use the media manager to upload images for e-mail campaigns on the server's filesystem, Amazon S3, or any S3 compatible (Minio) backend.
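To make the transactional API concrete, here's a hedged sketch of a call to it; the endpoint path, field names, and credentials below are from memory of listmonk's API and should be checked against the documentation for your version:

```python
# Sketch: send a transactional message via listmonk's HTTP API.
import requests

resp = requests.post(
    "http://localhost:9000/api/tx",       # assumed default listmonk address
    auth=("api_user", "api_token"),       # placeholder credentials
    json={
        "subscriber_email": "user@example.com",
        "template_id": 1,                  # a pre-defined transactional template
        "data": {"order_id": "1234"},      # variables exposed to the template
    },
    timeout=10,
)
resp.raise_for_status()
```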
SpaceX: "Small-but-meaningful updates" can boost speed from about 100Mbps to 1Gbps.
When the British government announced last week that it was transferring sovereignty of an island in the Indian Ocean to the country of Mauritius, Gareth immediately realized its online implications: the end of the .io domain suffix. In this piece, he explores how geopolitical changes can unexpectedly disrupt the digital world. His exploration of historical precedents—such as the fall of the Soviet Union and the breakup of Yugoslavia—offers valuable context for tech founders, users, and observers. //
On October 3, the British government announced that it was giving up sovereignty over a small tropical atoll in the Indian Ocean known as the Chagos Islands. The islands would be handed over to the neighboring island country of Mauritius, about 1,100 miles off the southeastern coast of Africa.
The story did not make the tech press, but perhaps it should have. The decision to transfer the islands to their new owner will result in the loss of one of the tech and gaming industry’s preferred top-level domains: .io. //
Once this treaty is signed, the British Indian Ocean Territory will cease to exist. Various international bodies will update their records. In particular, the International Organization for Standardization (ISO) will remove country code “IO” from its specification. The Internet Assigned Numbers Authority (IANA), which creates and delegates top-level domains, uses this specification to determine which top-level country domains should exist. Once IO is removed, the IANA will refuse to allow any new registrations with a .io domain. It will also automatically begin the process of retiring existing ones. (There is no official count of the number of extant .io domains.)
Officially, .io—and countless websites—will disappear. At a time when domains can go for millions of dollars, it’s a shocking reminder that there are forces outside of the internet that still affect our digital lives. //
.io has become popular with startups, particularly those involved in crypto. These are businesses that often identify with one of the original principles of the internet—that cyberspace grants a form of independence to those who use it. Yet it is the long tail of real-world history that might force on them a major change.
The IANA may fudge its own rules and allow .io to continue to exist. Money talks, and there is a lot of it tied up in .io domains. However, the history of the USSR and Yugoslavia still looms large, and the IANA may feel that playing fast and loose with top-level domains will only come back to haunt it.
Whatever happens, the warning for future tech founders is clear: Be careful when picking your top-level domain. Physical history is never as separate from our digital future as we like to think.
Published 1997
Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff
The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. //
In this paper, several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies),
SAN DIEGO — A U.S. Navy chief who wanted the internet so she and other enlisted leaders could scroll social media, check sports scores and watch movies while deployed had an unauthorized Starlink satellite dish installed on a warship and lied to her commanding officer to keep it secret, according to investigators.
Internet access is restricted while a ship is underway to maintain bandwidth for military operations and to protect against cybersecurity threats. //
She and more than a dozen other chief petty officers used it to send messages home and keep up with the news, and bought signal amplifiers during a stop in Pearl Harbor, Hawaii, after they realized the wireless signal did not cover all areas of the ship, according to the investigation.
Those involved also used the Chief Petty Officer Association’s debit card to pay off the $1,000 monthly Starlink bill.
The network was not shared with rank-and-file sailors.
Marrero tried to hide the network, which she called “Stinky,” by renaming it as a printer, denying its existence and even intercepting a comment about the network left in the commanding officer's suggestion box, according to the investigation.