# Compare two directory trees with rsync dry-runs (-n): files only on the left,
# files only on the right, and files present on both sides that differ.
rsync -rin --ignore-existing "$LEFT_DIR"/ "$RIGHT_DIR"/ | sed -e 's/^[^ ]* /L /'
rsync -rin --ignore-existing "$RIGHT_DIR"/ "$LEFT_DIR"/ | sed -e 's/^[^ ]* /R /'
rsync -rin --existing "$LEFT_DIR"/ "$RIGHT_DIR"/ | sed -e 's/^/X /'
DVDs, if taken care of properly, should last anywhere from 30 to 100 years. It turned out that Bumbray's problems weren't due to a DVD player or poor DVD maintenance. In a statement to JoBlo shared on Tuesday, WBD confirmed widespread complaints about DVDs manufactured between 2006 and 2008. The statement said:
Warner Bros. Home Entertainment is aware of potential issues affecting select DVD titles manufactured between 2006 – 2008, and the company has been actively working with consumers to replace defective discs.
Where possible, the defective discs have been replaced with the same title. However, as some of the affected titles are no longer in print or the rights have expired, consumers have been offered an exchange for a title of like-value. //
Damn Fool Idealistic Crusader noted that owners of WB DVDs can check to see if their discs were manufactured by the maligned plant by looking at the inner ring codes on the DVDs' undersides. //
evanTO Ars Scholae Palatinae
DRM makes it difficult, and in some cases impossible, for people to make legitimate backups of their own media. Not being able to legally do this, particularly as examples like this article abound, is just one more example of how US Copyright Law is broken.
If you keep critical data in your pod and require your own daily backup, then our incremental backups to external S3 storage are the best solution. They can be triggered manually or daily at night and take incremental, encrypted, deduplicated and compressed snapshots using Restic. This has the benefit that only changed files are copied and the backup doesn’t need as much space. You can also provide your own S3-based storage, which moves the data to another company for extra redundancy.
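As a minimal sketch of what such a Restic-to-S3 loop looks like (bucket name, paths, and credentials below are placeholders, not the provider's actual setup):
# illustrative: incremental, encrypted, deduplicated snapshots to S3 with Restic
export AWS_ACCESS_KEY_ID=...           # S3 credentials (placeholders)
export AWS_SECRET_ACCESS_KEY=...
export RESTIC_PASSWORD=...             # encrypts the repository
export RESTIC_REPOSITORY=s3:s3.amazonaws.com/example-pod-backups
restic init                            # one-time repository setup
restic backup /srv/pod-data            # later runs copy only changed files
restic snapshots                       # list existing snapshots
restic forget --keep-daily 7 --prune   # retention: keep a week of dailies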
Features
Create backups locally and remotely
Set a schedule for regular backups
Save time and disk space because Pika Backup does not need to copy known data again
Encrypt your backups
List created archives and browse through their contents
Recover files or folders via your file browser
Pika Backup is designed to save your personal data and does not support complete system recovery. Pika Backup is powered by the well-tested BorgBackup software.
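Since Pika Backup drives BorgBackup underneath, the features above map onto ordinary borg operations; a rough sketch, with repository path and archive names purely illustrative:
borg init --encryption=repokey /mnt/backup/repo                # create an encrypted repository
borg create --stats /mnt/backup/repo::docs-{now} ~/Documents   # deduplicated snapshot; unchanged data is not copied again
borg list /mnt/backup/repo                                     # browse existing archives
borg extract /mnt/backup/repo::docs-2024-05-01T12:00:00        # recover an archive's contents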
vaultwarden data should be backed up regularly, preferably via an automated process (e.g., cron job). Ideally, at least one copy should be stored remotely (e.g., cloud storage or a different computer). Avoid relying on filesystem or VM snapshots as a backup method, as these are more complex operations where more things can go wrong, and recovery in such cases can be difficult or impossible for the typical user. Adding an extra layer of encryption on your backups would generally be a good idea (especially if your backup also includes config data like your admin token), but you might choose to skip this step if you're confident that your master password (and those of your other users, if any) is strong.
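A sketch of that extra encryption layer (file names illustrative; dump the database cleanly first rather than tarring a live SQLite file):
# symmetric encryption of a backup archive before it leaves the machine
tar czf - data/ | gpg --symmetric --cipher-algo AES256 -o vaultwarden-backup.tar.gz.gpg
# restore later with: gpg -d vaultwarden-backup.tar.gz.gpg | tar xzf -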
Back up the vaultwarden (formerly known as bitwarden_rs) SQLite3/PostgreSQL/MySQL/MariaDB database with rclone. (Docker)
Jaycuse
I recommend having a read of the wiki
https://github.com/dani-garcia/vaultwarden/wiki/Backing-up-your-vault
I use the docker image bruceforce/bw_backup
My docker compose settings:
bw_backup:
  image: bruceforce/bw_backup
  container_name: bw_backup
  restart: unless-stopped
  init: true
  depends_on:
    - bitwarden
  volumes:
    - bitwarden-data:/data/
    - backup-data:/backup_folder/
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
  environment:
    - DB_FILE=/data/db.sqlite3
    - BACKUP_FILE=/backup_folder/bw_backup.sqlite3
    # every day at 5 AM
    - CRON_TIME=0 5 * * *
    - TIMESTAMP=false
    - UID=0
    - GID=0
Once I have the backup file I use borg backup as well.
Backing up data
By default, vaultwarden stores all of its data under a directory called data (in the same directory as the vaultwarden executable). This location can be changed by setting the DATA_FOLDER environment variable. If you run vaultwarden with SQLite (the most common setup), the SQL database is just a file in the data folder. If you run with MySQL or PostgreSQL, you will have to dump that database separately.
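For the common SQLite case, the safe way to grab that file is SQLite's online-backup command rather than a plain copy of the live database; roughly (paths illustrative):
# consistent snapshot of a live SQLite database
sqlite3 data/db.sqlite3 ".backup '/backups/db-$(date +%Y%m%d).sqlite3'"
# MySQL/PostgreSQL deployments would use mysqldump / pg_dump instead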
Today we have a quick guide on how to automate Proxmox VE ZFS offsite backups to Rsync.net. The folks at rsync.net set us up with one of their ZFS enabled trial accounts. As the name might imply, Rsync.net started as a backup service for those using rsync. Since then they have expanded to allowing ZFS access for ZFS send/receive, which makes ZFS backups very easy. In our previous article we showed server-to-server backup using pve-zsync to automate the process.
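The core of such a setup is a snapshot-plus-send pipeline over SSH; a hand-rolled sketch of the idea pve-zsync automates (dataset, user, and host names are illustrative):
# initial full replication of a snapshot to the offsite pool
zfs snapshot rpool/data/vm-100-disk-0@offsite1
zfs send rpool/data/vm-100-disk-0@offsite1 | ssh user@example.rsync.net zfs receive data1/pve/vm-100-disk-0
# subsequent runs send only the increment between snapshots
zfs snapshot rpool/data/vm-100-disk-0@offsite2
zfs send -i @offsite1 rpool/data/vm-100-disk-0@offsite2 | ssh user@example.rsync.net zfs receive data1/pve/vm-100-disk-0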
• Backup test
This is very primitive and unix-centric, but if I rsync from a source to a destination, I will do the following on both:
# find /backup | wc -l
# du -ah /backup | tail -1
... and I expect them to be identical (or nearly so) on both ends. Again, very blunt tooling here and this is after a successful, no errors rsync ... but I'm feeling good at that point.
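If you want a stronger guarantee than matching counts and sizes, content checksums on both ends are the next step up; a sketch, not part of the quoted workflow (GNU userland assumed):
# hash every file, sorted so the two lists line up; slow, but it catches silent corruption
cd /backup && find . -type f -print0 | sort -z | xargs -0 sha256sum > /tmp/backup.sha256
# copy one list to the other host, then: diff /tmp/backup.sha256 /tmp/other.sha256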
QUICK ANSWER
Back up Android devices using Google One by going to Settings > Google > All services > Backup and toggling on Backup by Google One. You can tap Back up now if you don't want to wait for your phone to back up automatically.
For C-suite execs and security leaders, discovering your organization has been breached by network intruders, your critical systems locked up, and your data stolen, and then receiving a ransom demand, is probably the worst day of your professional life.
But it can get even worse, as some execs whose organizations were infected with Hazard ransomware recently found out: after they paid the ransom in exchange for a decryptor to restore the encrypted files, the decryptor did not work. //
Headley_Grange (Silver badge)
"For C-suite execs and security leaders, discovering your organization has been breached, your critical systems locked up and your data stolen, then receiving a ransom demand, is probably the worst day of your professional life."
Third worst, surely.
Second worst is finding out that your bonus is reduced because of it.
First worst is discovering that someone can prove that it's your fault. //
lglethal (Silver badge)
Paying the Dane Geld
Pay the Geld, and you'll never get rid of the Dane...
What was true so many years ago remains true today... //
Doctor Syntax (Silver badge)
These guys are just getting ransomware a bad name. //
ThatOne (Silver badge)
Hope springs eternal
pay the extortionists – for concerns about [obvious stuff]
...Except that you're placing all your hopes on the honesty of criminals!...
Once you've paid them, why would they bother decrypting your stuff? Why wouldn't they ask for even more money, later (or immediately)? And why would they refrain from gaining some free street cred by reselling all the data they have stolen from you?
Your only hope is that they are honest, trustworthy criminals, who will strive to make sure to repair any damage they've caused, and for whom your well-being is the most important thing in the world...
I think you would be better advised to avoid clicking on that mysterious-yet-oh-so-intriguing link, but that's me. //
ChrisC (Silver badge)
Re: Hope springs eternal
It doesn't matter whether they use the same name or a different one for each victim; the point is that if word gets around that a ransomware group is ripping off people who've paid up, then people are going to be increasingly unlikely to trust any ransomware group.
And at that point, there's a fairly good chance that at least one of the "trustworthy" groups may well decide to take whatever action is needed to deal with this threat to their business model - given the nature of such groups and the dark underbelly of society in which they operate, it's not unreasonable to consider that such action may well be rather permanent to the recipients...
IDrive plans:
- Basic — free, 10 GB, no credit card required
- IDrive® Mini — $2.95 per year for one user and 100 GB, or $9.95 per year for 500 GB
- IDrive® Personal — $99.50/year ($69.65 first year), one user, multiple computers, 5 TB storage
We'd heard of SwissDisk here at rsync.net, but they rarely showed up on our radar screen. We were reminded of their existence a few days ago when their entire infrastructure failed. It's unclear how much data, if any, was eventually lost ... but my reading of their announcement makes me think "a lot".
I'm commenting on this because I believe their failure was due to an unnecessarily complex infrastructure. Of course, this requires a lot of conjecture on my part about an organization I know little about ... but I'm pretty comfortable making some guesses.
It's en vogue these days to build filesystems across a SAN and build an application layer on top of that SAN platform that deals with data as "objects" in a database, or something resembling a database. All kinds of advantages are then presented by this infrastructure, from survivability and fault tolerance to speed and latency. And cost. That is, when you look out to the great green future and the billions of transactions you handle every day from your millions of customers are all realized, the per unit cost is strikingly low.
It is my contention that, in the context of offsite storage, these models are too complex, and present risks that the end user is incapable of evaluating. I can say this with some certainty, since we have seen that the model presented risks that even the people running it were incapable of evaluating.
This is indeed an indictment of "cloud storage", which may seem odd coming from the proprietor of what seems to be "cloud storage". It makes sense, however, when you consider the very broad range of infrastructure that can be used to deliver "online backup". When you don't have stars in your eyes, and aren't preparing for your IPO filing and the "hockey sticking" of your business model, you can do sensible things like keep regular files on UFS2 filesystems on standalone FreeBSD systems.
This is, of course, laughable in the "real world". You couldn't possibly support thousands and thousands of customers around the globe, for nearly a decade, using such an infrastructure. Certainly not without regular interruption and failure.
Except when you can, I guess:
# uptime
12:48PM up 350 days, 21:34, 2 users, load averages: 0.14, 0.14, 0.16
(a live storage system, with about a thousand users, that I picked at random)
# uptime
2:02PM up 922 days, 18:38, 1 user, load averages: 0.00, 0.00, 0.00
(another system on the same network)
One of the most common pre-sales questions we get at rsync.net is:
"Why should I pay a per gigabyte rate for storage when these other providers are offering unlimited storage for a low flat rate?"
The short answer is: paying a flat rate for unlimited storage, or transfer, pits you against your provider in an antagonistic relationship. This is not the kind of relationship you want to have with someone providing critical functions.
Now for the long answer...
At long last, git is supported at rsync.net.
We wrestled with the decision to add it for some time, as we place a very, very high value on the simplicity of our systems. We have no intention of turning rsync.net into a development platform, running a single additional network service, or opening a single additional TCP port.
At the same time, there are a number of very straightforward synchronization and archival functions inherent to subversion and git that lend themselves very well to our offsite filesystem.
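In practice that amounts to treating the remote filesystem as a plain bare repository over SSH; a minimal sketch (user, host, and repo paths are illustrative):
ssh user@example.rsync.net git init --bare repos/project.git   # one-time setup on the remote side
git remote add offsite user@example.rsync.net:repos/project.git
git push offsite --all    # mirror every branch offsite
git push offsite --tags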
The master paused for one minute, then suddenly produced an axe and smashed the novice's disk drive to pieces. Calmly he said: "To believe in one's backups is one thing. To have to use them is another."
The novice looked very worried.
Every cloud service keeps full backups, which you would presume are meant for worst-case scenarios. Imagine some hacker takes over your server, or the building housing your data collapses, or something like that. But no, the actual worst-case scenario is "Google deletes your account," which means all those backups are gone, too. Google Cloud is supposed to have safeguards that don't allow account deletion, but none of them worked, apparently, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution). //
Google PR confirmed in multiple places it signed off on the statement, but a great breakdown from software developer Daniel Compton points out that the statement is not just vague, it's also full of terminology that doesn't align with Google Cloud products. The imprecise language makes it seem like the statement was written entirely by UniSuper. It would be nice to see a real breakdown of what happened from Google Cloud's perspective, especially when other current or potential customers are going to keep a watchful eye on how Google handles the fallout from this.
Anyway, don't put all your eggs in one cloud basket. //
JohnDeL Ars Tribunus Angusticlavius
And this is why I don't trust the cloud. At all.
Always, always, always have a backup on a local computer. //
rcduke Ars Scholae Palatinae
JohnDeL said:
This is why every time I hear a company talk about moving all of their functions to the cloud, I think about a total failure.
How much does Google owe this company for two weeks of lost business? Probably not enough to matter. //
murty Smack-Fu Master, in training
If you’re not backing up your cloud data at this point, hopefully this story inspires you to reconsider. If you’ve got a boss/CFO/etc that scoffs at spending money on backing up your cloud, link them to this story. ...