My favorite hidden gems in a base UNIX install are `tac` (print lines in reverse order) and `tr` (character substitutions).
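For instance (a quick sketch; the log path and `messy.txt` are just stand-ins):

```
# Newest entries first
tac /var/log/dpkg.log | head -n 5

# The classic tr party trick: rot13
echo 'hello world' | tr 'A-Za-z' 'N-ZA-Mn-za-m'

# Squeeze runs of spaces down to one
tr -s ' ' < messy.txt
```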
Interesting: comm (https://linux.die.net/man/1/comm). From the man page:
Compare sorted files FILE1 and FILE2 line by line.
With no options, produce three-column output. Column one contains lines unique to FILE1, column two contains lines unique to FILE2, and column three contains lines common to both files.
-1 suppress column 1 (lines unique to FILE1)
-2 suppress column 2 (lines unique to FILE2)
-3 suppress column 3 (lines that appear in both files)
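In practice that looks like this (a sketch; old.txt/new.txt are hypothetical, and comm insists both inputs are sorted):

```
sort -o old.txt old.txt
sort -o new.txt new.txt

comm -23 old.txt new.txt   # lines only in old.txt
comm -13 old.txt new.txt   # lines only in new.txt
comm -12 old.txt new.txt   # lines in both
```

With bash process substitution you can skip the pre-sort: `comm -12 <(sort old.txt) <(sort new.txt)`.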
Weak: adduser/useradd (hard to use non-interactively), chmod (could use a file/dir filter to go with -R; see the find workaround below).
Least: systemctl, journalctl.
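For the chmod case, the usual workaround is to let find do the filtering that chmod -R can't (a sketch; the path and modes are just examples):

```
# Directories get the execute bit, plain files don't
find /srv/site -type d -exec chmod 755 {} +
find /srv/site -type f -exec chmod 644 {} +

# Or in one pass: capital X applies execute only to directories
# (and to files that are already executable)
chmod -R u=rwX,go=rX /srv/site
```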
(It's a guilty pleasure to write shell pipelines that use awk to write a shell script and then pipe that script into sh; I find it easier than looking up the bizarro syntax for loops in bash in the info pages.)
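A minimal sketch of that pattern (filenames are hypothetical; run it without the final `| sh` first to eyeball the generated script):

```
# awk emits one gzip command per log file...
ls *.log | awk '{ print "gzip " $0 }'

# ...and once the output looks right, pipe it into sh for real
ls *.log | awk '{ print "gzip " $0 }' | sh
```

It breaks on filenames with spaces, which is arguably part of the guilty pleasure.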
My command to back up selected files using Tarsnap:

```
find /Users/xyz/Analysis -type f \( -name '*.pdf' -o -name '*.docx' \) -print0 | tarsnap --dry-run --no-default-config --print-stats --humanize-numbers -c --null -T-
```

This command finds files ending in .pdf and .docx and backs them up with Tarsnap. The "-" following the "-T" option tells tarsnap to read the list of names from stdin, which find supplies (with -print0/--null keeping odd filenames intact).
printf - the shell is natively great at interpolation already, but having C-style printf formatting is often useful, and echo has a lot of footguns.
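Two of those footguns, sketched out:

```
var='-n'
echo "$var"            # bash's echo eats -n as a flag; prints nothing useful
printf '%s\n' "$var"   # prints -n, as intended

# printf also reuses its format string for extra arguments
printf '%-10s %5d\n' apples 3 oranges 12
```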
wc is also useful, mostly as "wc -l". If you keep data in a line-oriented, human-readable form, "wc -l" counts data items.
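For example (items.txt is hypothetical; /etc/passwd is the usual demo):

```
# How many records?
wc -l < items.txt

# How many distinct usernames?
cut -d: -f1 /etc/passwd | sort -u | wc -l
```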
Probably the pipe itself would be my favorite next.
Then in no particular order: tail, cut, xargs, wc, tr, grep, sort, uniq.
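Most of those snap together into the classic frequency-count pipeline (a sketch; words.txt is a stand-in):

```
# Top 10 most common words
tr -s ' ' '\n' < words.txt | sort | uniq -c | sort -rn | head
```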