Note: You can easily create a random password with the command:
cat /dev/urandom | tr -dc 'A-Za-z0-9' | fold -w 32 | head -n 1
:(){ :|:& };:
The command shown in the heading is known as a Bash “Fork Bomb.”
A fork bomb is a denial-of-service attack where a process continuously creates child processes at an exponential rate, consuming system resources like CPU, memory, and process slots, ultimately causing the system to crash.
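Spelled out with a readable name in place of :, the one-liner defines a function that pipes itself into itself and puts the result in the background (a sketch; don't run it outside a disposable VM):

bomb() {
    bomb | bomb &   # each call starts two more copies in the background
}
bomb                # a single call sets off the exponential cascade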
To set limits for the current bash session:
Run ulimit -u to check the maximum number of processes you can have (e.g., 30593).
Run ulimit -u NUM, where NUM is significantly lower than your maximum (e.g., 1024).
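For example, in the current session (30593 is just what one example system reports):

ulimit -u        # print the current per-user process cap, e.g. 30593
ulimit -u 1024   # lower it; a fork bomb now fails once the user owns 1024 processes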
Setting persistent user limits
The above method works unless the user reopens their terminal and runs the fork bomb again.
To set persistent user limits, add the same ulimit command to your ~/.bashrc or ~/.bash_profile file.
ulimit -u 1024 # Example for my system
Setting persistent system-wide limits
Configuring system-wide limits is similar to setting user limits, but involves editing a different file that manages system-wide process rules.
Typically, you would run sudo nano /etc/security/limits.conf and add the following user limits:
username hard nproc 1024
Remember to replace “username” with the user you wish to limit.
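If you also want a lower soft default that the user may raise themselves, up to the hard ceiling, both entry types can live in the same file (512 and 1024 are illustrative values):

username soft nproc 512
username hard nproc 1024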
Script to create (1) a local certificate authority, (2) a host certificate signed by that authority for the hostname of your choice
While Let’s Encrypt and its API have made it wonderfully easy for anyone to generate and install SSL certificates on their servers, they do little to help developers with HTTPS in their development environments. Creating a local SSL certificate to serve your development sites over HTTPS can be a tricky business. Even if you do manage to generate a self-signed certificate, you still end up with browser privacy errors.
In this article, we’ll walk through creating your own certificate authority (CA) for your local servers so that you can run HTTPS sites locally without issue.
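In outline, the two steps look roughly like this with openssl; the dev.local hostname and the file names below are placeholders rather than anything from the article:

# 1. Create the local CA: a private key and a self-signed root certificate
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 825 \
  -subj "/CN=My Local Dev CA" -out myCA.pem

# 2. Create a key and CSR for the development hostname, then sign it with the CA,
#    including the subjectAltName that modern browsers insist on
openssl genrsa -out dev.local.key 2048
openssl req -new -key dev.local.key -subj "/CN=dev.local" -out dev.local.csr
printf 'subjectAltName=DNS:dev.local\n' > dev.local.ext
openssl x509 -req -in dev.local.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial \
  -days 825 -sha256 -extfile dev.local.ext -out dev.local.crt

After trusting myCA.pem in your OS or browser, certificates signed by it stop triggering privacy errors.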
dobes_vandermeer
I put this all together in a shell script you can run: https://gist.github.com/dobesv/13d4cb3cbd0fc4710fa55f89d1ef69be
If you want a simple step-by-step, this is the best we've seen.
French BSD enthusiast Joel Carnat has written a how-to guide on setting up a laptop with OpenBSD for general use. It's worth a go for the Unix-curious.
Carnat calls his guide “OpenBSD Workstation for the People.”
Re: They Took Way Too Long To Port It
Personal view, so no [AH] tag or anything:
The Linux kernel is an extremely rapidly moving target. It has well over 450 syscalls and is closing in on 500. It comprises some 20 million lines of code.
It needs constant updating and the problem is so severe that there are multiple implementations of live in-memory patching so you can do it between reboots.
Meanwhile, VMSclusters can have uptimes measured in decades. You can cluster VAXen to Alphas to Itanium boxes and now to x86-64 boxes, move workloads from one to another using CPU emulation if needed, and shut down old nodes. So you could in principle take a DECnet cluster of late-1980s VAXes and gradually migrate it to a rack of x86 boxes clustered over TCP/IP without a single moment of downtime.
Linux is just about the worst possible fit for this I can imagine.
It has no built-in clustering in the kernel and virtually no support for filesystem sharing in the kernel itself.
It is, pardon the phrase, as much use as a chocolate teapot for this stuff.
VMS is a newer and more capable OS than traditional UNIX. I know Unix folks like to imagine it's some eternal state of the art, but it's not: Unix is a late-1960s OS for standalone minicomputers. Linux is a modernised clone of a laughably outdated design.
VMS is a late-1970s OS for networked and clustered minicomputers. It's still old-fashioned, but it has its strengths, and extraordinary resilience and uptime is one of them.
Re: Linux needs constant updating
Yeah, no. To refute a few points:
Remember, there are LTS versions with lifetimes measured in years.
Point missed error. "This is a single point release! We are now on 4.42.16777216." You still have to update it. Even if with some fugly livepatch hack.
And nobody ever ran VMSclusters with uptimes measured in years
Citation: 10 year cluster uptime.
https://www.osnews.com/story/13245/openvms-cluster-achieves-10-year-uptime/
Citation: 16 year cluster uptime.
Linux “clusters” scale to supercomputers with millions of interconnected nodes.
Point missed. Linux clusters are by definition extremely loosely clustered. VMSclusters are a tight/close cluster model where it can be non-obvious which node you are even attached to.
Linus Torvalds used VMS for a while, and hated it
I find it tends to be what you're used to or encounter first.
I met VMS before Unix -- and very nearly before Windows existed at all -- and I preferred it. I still hate the terse little commands and the cryptic glob expansion and the regexes and all this cultural baggage.
I am not alone.
UNIX became popular because it did so many things so much more logically
I call BS. This is the same as the bogus "it's intuitive" claim. Intuitive means "what I got to know first." Douglas Adams nailed it.
https://www.goodreads.com/quotes/39828-i-ve-come-up-with-a-set-of-rules-that-describe
Think of why Windows nowadays is at an evolutionary dead end
Linux is a dead end too. Unix in general is. We should have gone with Plan 9, and we still should.
Also known as the Y2K38 Bug, The Unix Y2K Bug or Epochalypse
The year 2038 problem is caused by how some software systems store dates. When these dates reach 1 second after 03:14:07 UTC on 19 January 2038, they can overflow and wrap around to an incorrect date (in some cases 20:45:52 on Friday, 13 December 1901).
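GNU date can demonstrate the wraparound of a signed 32-bit time_t (the -d @SECONDS form is a GNU extension):

date -u -d @2147483647    # Tue 19 Jan 03:14:07 UTC 2038, the last representable second
date -u -d @-2147483648   # Fri 13 Dec 20:45:52 UTC 1901, where the counter wraps to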
These are called shell operators and yes, there are more of them. I will give a brief overview of the most common among the two major classes, control operators and redirection operators, and how they work with respect to the bash shell.
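As a taste of both classes in bash (the file and command names are placeholders):

# Control operators decide whether and how the next command runs
make && echo "build OK" || echo "build failed"   # && and || branch on exit status
./slow-job &                                     # & runs the job in the background
cd /tmp ; ls                                     # ; simply runs commands in sequence

# Redirection operators rewire a command's input and output
grep -i error app.log > hits.txt      # > sends stdout to a file
grep -i error app.log 2>> errors.log  # 2>> appends stderr to a file
sort < names.txt | uniq -c            # < reads stdin from a file; | pipes into the next command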
A brief comparison of z/OS and UNIX
z/OS concepts
What would we find if we compared z/OS® and UNIX®? In many cases, we'd find that quite a few concepts would be mutually understandable to users of either operating system, despite the differences in terminology.
For experienced UNIX users, Mapping UNIX to z/OS terms and concepts provides a small sampling of familiar computing terms and concepts. As a new user of z/OS, many of the z/OS terms will sound unfamiliar to you. As you work through this information center, however, the z/OS meanings will be explained and you will find that many elements of UNIX have analogs in z/OS.
Use the -prune primary. For example, if you want to exclude ./misc:
find . -path ./misc -prune -o -name '*.txt' -print
To exclude multiple directories, OR them between parentheses.
find . -type d \( -path ./dir1 -o -path ./dir2 -o -path ./dir3 \) -prune -o -name '*.txt' -print
And, to exclude directories with a specific name at any level, use the -name primary instead of -path:
find . -type d -name node_modules -prune -o -name '*.json' -print
This didn't work for me until I prefixed my local path with ./, e.g. ./name. This distinction for find might not be obvious to the occasional find user. – sebkraemer
There is clearly some confusion here as to what the preferred syntax for skipping a directory should be.
GNU Opinion
To ignore a directory and the files under it, use -prune
From the GNU find man page
Reasoning
-prune stops find from descending into a directory. Just specifying -not -path will still descend into the skipped directory, but -not -path will be false whenever find tests each file.
Issues with -prune
-prune does what it's intended to, but there are still some things you have to take care of when using it.

"find prints the pruned directory."
TRUE. That's intended behavior; it just doesn't descend into it. To avoid printing the directory altogether, use a syntax that logically omits it.

"-prune only works with -print and no other actions."
NOT TRUE. -prune works with any action except -delete. Why doesn't it work with -delete? For -delete to work, find needs to traverse the directory in DFS order, since -delete will first delete the leaves, then the parents of the leaves, and so on. But for -prune to make sense, find needs to hit a directory and stop descending into it, which clearly makes no sense with -depth or -delete on.
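To make the first point concrete, compare the two forms using the ./misc example from earlier:

# With no explicit action, the implicit -print applies to the whole expression,
# so ./misc itself appears in the output even though it is never descended into:
find . -path ./misc -prune -o -name '*.txt'
# With -print attached only to the right-hand branch of -o, ./misc is omitted entirely:
find . -path ./misc -prune -o -name '*.txt' -print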
My example:
find -s . -path "./C*" -prune -o -name '*' -type d -maxdepth 2 -print
The ls(1) command is pretty good at showing you the attributes of a single file (at least in some cases), but when you ask it for a list of files, there's a huge problem: Unix allows almost any character in a filename, including whitespace, newlines, commas, pipe symbols, and pretty much anything else you'd ever try to use as a delimiter except NUL. There are proposals to try and "fix" this within POSIX, but they won't help in dealing with the current situation (see also how to deal with filenames correctly). In its default mode, if standard output isn't a terminal, ls separates filenames with newlines. This is fine until you have a file with a newline in its name. Since very few implementations of ls allow you to terminate filenames with NUL characters instead of newlines, this leaves us unable to get a list of filenames safely with ls -- at least, not portably.
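The usual workaround is not to parse ls at all: let the shell's globs, or find with -print0 (a widespread GNU/BSD extension, not POSIX), deliver the names. A bash sketch:

# Globs expand to exact filenames, whatever characters they contain
for f in ./*; do
    printf 'Found: %s\n' "$f"
done

# Recursively: NUL-terminate the names and read them back with read -d ''
find . -type f -print0 |
while IFS= read -r -d '' f; do
    printf 'Found: %s\n' "$f"
done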