Most of my notes are for my reference, but this one is for you! Several people have asked me how to take the first (or next) steps on the path to Unix command line mastery. Here is a collection of commands that I feel are essential for a proper computer professional to know about. If you’re a regular person who would like to leverage the built-in power of your Unix operating system, it may be good to have a look at this list and see if there is anything that you might find useful.

I tried to check every listed command on FreeBSD and OSX. The entire list works on those platforms unless otherwise noted. Be advised, however, that outside of Linux, the exact syntax of the commands and their options may differ slightly (you know what to do!).

Beginners may safely skip the subsections labeled "ℱancy" which cover (even) more esoteric aspects of the topic.

Also, you may find my old Unix Beginner notes helpful. Those cover more of the major concepts, while this list looks specifically at the commands that I personally use and feel that others will generally find useful.

All of my help notes live at or use which is the short version.

Note that you can directly refer back to a specific command in this document with a URL in your browser formatted like this.

Or, once you get the hang of this stuff, you can read sections directly from a Unix command line like this.

$ CMD=cat && wget -qO- | sed -n "/== ${CMD}/,/==/p"

Here I’m using wget and sed to show me the cat command’s section. That’s pretty handy. On second thought, I’ll be using these notes myself too!

HOWTO: Be The UNIX Expert

    +--------------+
    |  Start with  |
    | Unix problem |
    +--------------+
           |
    Obviously you don't, but
    did you think you knew ---(no)---.
    what you were doing?             |
          |                          |
        (yes)                        |
          |                          V
    +----------------------+   +------------------------------+
    | man relevant_command |   | Think of way to              |
    | Search for options   |<--| express the problem          |
    | involved.            |   | using many or none of the    |
    +----------------------+   | ambiguous keywords involved: |
            |                  | mount, cat, cut, tail, kill  |
      Fix problem?----(no)---->| Search for that on the web.  |
            |                  +------------------------------+
          (yes)
            |
            V
    +-----------------------+
    | You win 0 exit codes! |
    +-----------------------+


man

This is the most important command because if you can remember it and the existence of the other commands you need, you can instantly get a thorough reminder of exactly what you need to know. man is short for "manual" and in the old days it was envisioned that the system would help you typeset the documentation and many people probably did that. But Unix people were some of the first to realize that actually getting it onto paper didn’t really improve, well, anything. Using an electronic version, you can keep it very up-to-date, do some brutally fast searching (use the slash key "/"), and zero in on what you need to know very quickly.

It is simple to use in normal cases. To see the documentation for the cal program just do this.

man cal

When you’re done with man just hit "q" to quit.

The reason these notes are mostly for you and not me is that I use man all the time. The advantage these notes have for beginners is that I am highlighting my favorite aspects of the various Unix tools while man pages are usually scrupulously thorough about every mote of functionality. It doesn’t help that man pages are written in an idiosyncratic style that is extremely compact and efficient. At first glance it looks about as useful and inviting as reading legalese in a license agreement. This can be overwhelming for beginners. But trust me, man pages are a very powerful resource. It’s good to bravely face their style quirks until you can refer to them without fear. Remember that the alternative is "dummy" "help" which says inane stuff like, "The paste icon pastes." Grr. Better to know the answer is there but your abilities are not than the other way around.
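When you can’t even remember a command’s name, the man system can search itself. Here is a small sketch using the standard -k option (the same thing as the apropos command); "calendar" is just an example keyword.

```shell
# Search all man page names and one-line descriptions for a keyword:
man -k calendar | head -5
```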

ℱancy man

If you’re a C programmer or doing some other serious business, you may need to specify the section number of the manual. For example, man stat gets you the man page of the Unix stat command. However if you’re looking for the C programming system call you need this.

man 2 stat

Normal people never need to worry about this (by definition). If you do, man man has a nice list of what the sections are.


exit

Nothing is more frustrating than being stuck in some program you want to stop using. This is also true of a command line terminal. The exit command is really a feature of the shell. Beginners don’t need to worry about what that means — just know that it will exit your shell’s session, probably closing your terminal too.

ℱancy exit

I personally like to just type "Ctrl-D" at the beginning of a new line. This exits the shell immediately. Note that as with stat, the man page for exit can be misleading. There is a C function called exit() which is what the man page talks about.

As you begin to truly understand all your options, you may find that ordinary GUI operations become more and more tedious. Here’s a very powerful trick involving the exit command. This is how I start my big clumsy web browser.

$ grep BROW ~/.bashrc
export BROWSER=/home/xed/bin/firefox/firefox
alias ff='bash -c "${BROWSER} --new-instance --P default &";exit'

The "normal" way to start a browser from the command line is pretty good. I can just type "firefox" (and with command completion, that is pretty easy — f - i - r - tab). But the problem with that is that the terminal then stays open the whole time waiting for you to finish using the browser. With the alias shown above, I type ff and hit enter — the browser starts and the terminal I launched it from disappears, thanks to the exit command.

What is so valuable about this approach? Well, if you use command line terminals a lot, as you can imagine I do, it’s likely that you have a very fast way to launch a terminal. I think that many systems come preconfigured with Ctrl+Alt+T as a way to launch one. I always define Ctrl+Shift+N to instantly launch a (new) terminal. This means that when I want to run Firefox, I press Ctrl+Shift+N, type "ff", and then enter. If you think there is a faster mouse-based way to reliably do that (and anything else that needs doing), I will happily and strongly disagree with you.


clear

Got a bunch of busy junk on the terminal screen and you want to clear it? Just use the clear command. Another way to achieve the same effect is to just press Ctrl-L. This works in many text interactive situations including the shell. The advantage is that you can clear previous lines while working on a new command line. Why use clear as a command? It is helpful if you want to put it into a script to make sure the screen is not cluttered before starting some output. I find it very helpful in scripts that print out a small report that you want to watch change.

This will keep the screen filled with a report of sensor readings updated every 10 minutes in a way that is not cluttered.

while sensors; do sleep 600 && clear; done


help

When dealing with shell built-in functions, for example exit or read, you can use the help command to get more information about them. This is way easier than reading the 46k word man page for bash which also incidentally contains that information.

ℱancy help

You can just specify a part of the word you want help with. If it is unambiguous it will work.

help he  # Help on "help"
help de  # Help on "declare"


shutdown

Sometimes you have a shell and you don’t have a GUI thing and you want to turn the system off in a polite way. The answer is to use the shutdown command. I do it like this.

sudo shutdown -h now

Or if you want it to shutdown in 5 minutes, do it like this.

sudo shutdown -h 5

The -h means "halt" as opposed to -r which means "reboot" (it will turn off, but then come back to life). You can also just use the commands halt or reboot. I tend not to do that because they are actually aliases for the systemctl command, not that there’s anything wrong with that. There are many ways to do the job. I think it’s traditional for init 0 to also shut a system down, but I’d save that until you know what you’re doing and why you’d choose that option.


passwd

I have set up Unix accounts for users who only surf the web. The one command line program they did need to use, as I stood there coaching them, was the passwd program so that they could change their password. Simply type passwd and it will ask you for your existing password. (This prevents jokers from messing with you by changing your password while you’re away from your desk.)

Here’s a thing that I find weird: normal people find it weird that when they type their password nothing happens. People have been so conditioned by GUI form boxes that if there are not dots showing up, they easily get confused. I’ve been amazed by this at least a dozen times and that’s dealing with PhD level researchers. So… when you type your private passwords that no one else is supposed to see they will not show up on the screen. Got it? Ok.

The next face palm opportunity for the person administering your account is what constitutes a smart password. We’re all familiar with password strength meters on web sites, etc. Linux also does some quick checking to see if your password is idiotic. I don’t know all the rules, but here’s a rough idea based on my experience.

  • Don’t use your own username in a password.

  • Don’t use a word sitting in the system’s spelling dictionary.

  • Don’t make it very short (less than 7 characters).

  • Don’t repeat characters or patterns too much.

  • Don’t use keyboard patterns (QWERTY, etc.).

I’ve had people sit down to type their password into a system that I and their colleagues are all counting on them to keep secure with a decent password and they get rejection after rejection. Don’t let that be you! I’m talking to you ann123!
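If inventing a password that passes all these checks feels like a chore, the system can invent one for you. Here is one quick sketch using standard tools; the character set and the length of 16 are just my arbitrary choices.

```shell
# 16 random characters drawn from letters and digits:
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo
```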

Note that if you want a terrible password, Linux is cool with that, but you must run it as root. So maybe like this for user lax.

sudo passwd lax

Note that it won’t ask you for your current password as root because it assumes that if you’re root, it can’t stop you from doing what you want anyway.

true / false / :

Some of the strangest commands in Unix don’t do anything at all. The true command just immediately exits and emits an exit code saying that the run went fine. The false command does almost the same thing, but its exit code seems to complain that something wasn’t right (but that’s what you asked for!). This is useful for scripting more often than you’d expect. It’s like Python’s pass statement which also does nothing — it allows you to fill blanks that need filling.

By the way, if you’re a normal person you probably don’t need to know about exit codes at all and you can use all of these nothing commands interchangeably.
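If you are curious what these exit codes look like, the shell keeps the most recent one in the special variable $?.

```shell
true; echo "true exited with $?"      # prints: true exited with 0
false || echo "false exited with $?"  # prints: false exited with 1
```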

One interesting use of these commands is to create a new blank file.

true > mynewblankfile

This will create an empty file called mynewblankfile. This is because true produces no output and directing that to a file (with >) is filling that file with the nothing. Use some caution because if the file already existed and had something important in it, it will be gone. In other words this technique is especially good at blanking an existing file.

A similar thing is the shell builtin :. It is basically a shell level version of true. You can even make empty files (or make files empty) with it the same way as with true. And if that’s too much, you actually don’t need any command really. Just >mynewblankfile and nothing else will do the same.

I like to use : to take random notes. Need to jot down a phone number and you just have a Unix terminal? Type : 6191234567 and nothing bad will happen. Don’t get too poetic doing that however, since the shell still tries to make sense of what you typed. If you’ve used complex syntax, you can invoke an error. A safer similar way to do the same thing is to use the shell comment character like this.

$ # Most things are 'ok' to type here with no side effects.

Or if you like hard to remember Bash tricks, you can type something and then after the fact press "escape" then "#" for the same effect.

ℱancy :

Sometimes you want some command that normally produces output to not produce output. The traditional way people do this is by redirecting the output to the "null" device which gobbles it up and that’s that. It looks like this in practice.

$ ls > /dev/null

In that command, no output is produced. There is another technique which I find easier to type — pipe your output to true or :.

$ ls|:

The colon command accepts all the output of the ls command, does nothing with it, and both processes end quietly. This works with true and false too.


time

The time command does not tell the time for you. If you want that, see the date command. The time command is a verb; it times the duration of programs for you. This is very useful when quantifying performance issues.

Let’s see how long it takes to look up the IP addresses of two domains.

$ time host -t A has address
real    0m0.017s
user    0m0.012s
sys     0m0.000s
$ time host -t A has address
real    0m0.028s
user    0m0.016s
sys     0m0.000s

Why is Google slower? Who knows. But is Google slower? Today the answer is yes.

One neat trick I really like is to use the time like a stopwatch. Type this on the command line and press enter when you’re ready to start.

$ time read

It will then sit there waiting for you to create some input (as described in help read). Simply type enter as your only input when you’re done timing. You’ll get a nice accurate report on the duration of the interval between you pressing enter the first time and the second.

ℱancy time

The "real", "user", and "sys" values get into some hairy internal details. Basically, the "real" is the wall clock time or how long you needed to wait from start to finish. This can be affected by what else is going on with your system and how long that process had to wait in any queues. The sum of the "sys" and "user" times reflects how much CPU time was actually used by this process (in kernel system mode and user mode). Here is all you ever wanted to know about that.
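A quick way to see the distinction is to time something that waits without computing. With sleep, the "real" time is about two seconds while "user" and "sys" stay near zero because the CPU did essentially nothing.

```shell
# Two seconds of wall-clock ("real") time, almost no CPU ("user"/"sys") time:
time sleep 2
```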


echo

The echo command is another shell built-in. This means it’s part of your shell rather than a stand-alone executable. Fortunately its normal use is very easy. It is like a "print" statement in other languages and it simply takes the things you specify — called the arguments — and it sends them back out to you. It echoes them.

$ echo "Repeat this back to me."
Repeat this back to me.

Since the argument gets expanded you can do helpful things like this too.

$ echo /tmp/*jpg
/tmp/forest.jpg /tmp/flower.jpg

Or you can check the value of variables.

$ echo "User $USER uses this editor: $EDITOR"
User xed uses this editor: /usr/bin/vim

ℱancy echo

Remember just a few sentences ago when I said that it was not a stand-alone executable? Well, that’s not exactly true. On many Linux systems, not only will echo be a (very important) shell built-in command, but there will also be a stand-alone C program that does the same thing. Why? Good question. Probably some subtle performance benefits. For example, here’s Bash doing a million echo commands.

time seq 1000000 | while read N; do echo $N; done
real    0m16.858s

And here’s the C stand-alone.

time seq 1000000 | xargs /usr/bin/echo
real    0m0.959s

This shows that the shell is powerful but it’s not optimized for performance. In real scripts, if you’re dumping something on the screen it’s probably fine to always use the shell built-in. For more details about these very similar commands see Bash’s help echo or the stand-alone’s man echo.

One little trick you can do with both versions of the command is the -n option which suppresses the new line. This is handy for things where you want to consolidate the output a bit. Here is a nice demo of dots being printed, perhaps to indicate some subprocess is being completed.

$ for N in {1..50}; do echo -n . ; sleep .2; done ; echo


cal

I include cal near the top of my list of commands to learn because it is so innocuous, obvious, and useful. I pretty much always use it to generate sample material to use in examples when I’m giving demonstrations. However, it is genuinely useful!

I use the -3 option a lot because I’m often curious about what day of the week a date next month lies on. So cal is easy to use even if calendars are inherently complicated. For example here is how to get a calendar of September 1752.

$ cal 9 1752
   September 1752
Su Mo Tu We Th Fr Sa
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30

The reason it looks broken is because the cal command is not. Enjoy learning about that!

ℱancy cal

It seems that modern versions of cal have a bug. If you do a which cal, you’ll find that it’s really a link to ncal. When run as cal, it makes the normal calendar but it also tries to highlight the current day, which can be annoying (e.g. when directing the output into Vim or doing a cut and paste style operation). In theory the escape codes for the highlighting are supposed to be automatically suppressed when piped, but I have found that is not always effective. Anyway, the fix is easy enough. Use this.

ncal -hb

The interesting bug is that -h suppresses the highlighting for the normal ncal style of calendar, but when you just try cal -h, you get the help page instead. So, ya, that’s not right. Still easy to work around. I’m sure they’ll sort this out soon. Normal classic cal from ancient times is still a simple cal and 99.9% of the time it does what you want anyway.


date

The date command can be easy. Type it and you get the current date and the current time. Easy and handy! Besides being good for a quick time check, it can also be good to append its output to log files.

ℱancy date

The date command seems easy, but there is immense depth in this command. First of all, some very OCD people made sure it is as accurate as can be. You can have it produce dates that are a certain time from now (or any other time you specify) in the past or future.

And the formatting of the resulting date is extremely flexible. Whatever you need labeled with a date, this command can make it correct.
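Here is a small sketch of both ideas. Note these use GNU date options; the -d date arithmetic is spelled -v on FreeBSD and OSX.

```shell
date -d 'next friday' '+%Y-%m-%d'   # a computed future date, formatted
date '+%Y-%m-%d_%H%M%S'             # a timestamp that sorts nicely in filenames
```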

less / more

The less command is a true workhorse. It is a "pager" which is a program that shows you text one page at a time. If, for example, you have a list of all the hockey games Tim Horton played in, that will most likely not fit on one screen. By using the less command you can scroll through the list as you need to.

You can use something like this.

$ less timhortons

And it will show you the first screenful of games from your file. Hit space and it will go on to the second page. Repeat until happy.

One thing helpful to know about less that will help it stick in your memory is that it is actually the second generation of pager. The classic ancient pager was called by the sensible name more as in "show me more every time I hit the space bar". The deficiency of more (which is still included for old school people; try it out) is that if you pass up your region of interest, too bad. And with more, at the end of the document, it exits.

With less however, it just stops at the bottom of the document and waits for you to navigate further, arbitrarily. To go to the top of the file, press "g"; to go to the end, "G". I think page up and page down work but I use space and "b" for "back". I also heavily use "j" and "k" (vi keys) to move down and up one line at a time.

Importantly, to quit using less, press "q". (Just like when using man which usually is configured to actually use less.)

ℱancy less

The less command is so powerful that it often does more than you’d like. If you send it an HTML file, some configurations of less can filter the HTML into readable text. If this is not what you want you can pipe the file to less - where the dash indicates you want to page the standard input. Without a hint of what kind of file it is, it won’t do anything overly ambitious with it.

I like to dress up console output with control characters to make things have nice colors. To preserve this in less use the -r (raw) option. Easy!

top / htop

A very common useful task for a Unix command shell is to find out what the hell is going wrong on the system. If the performance has come grinding to a crawl, you can find out what is responsible. The top command is universally available on all good operating systems and shows an ongoing display of the "top" processes as measured by CPU resource use. The ones near the eponymous top gobbling up 99% of your CPU will likely be the ones you want to attend to.

The top command is fine, but just as more was improved upon by less, there is an improved version called htop. It is nicer in many ways and the functionality is more powerful too. If you have it, use it.

For people who really like to know what’s going on, consider installing the dstat package. I like to SSH into a machine I’m playing a game on (from another computer) and then run dstat. This shows me in real time while I play/experiment how much network traffic, disk activity, memory, and CPU usage is being consumed. This is a great trick to get at the source of problems.

ℱancy top

Sometimes you don’t need an interactive display. Maybe you’re creating a log of activity and you want to record the top processes periodically. Maybe you just want to check how busy the CPU is before a job runs to make sure no one else is doing anything. Unfortunately top’s non-interactive mode (called the "batch" mode) is kind of sucky. But, hey, this is Unix, we can deal with it.

This shows top’s very informative header and the top 3 processes.

$ top -bn1 | head -n10

If you really just need the top process and that’s it, you can use something like this.

$ top -bn1 | sed -n 7,8p # With header labeling everything.
$ top -bn1 | sed -n 8p   # Just the top process' line.

Or if you need only the top CPU percent field.

$ top -bn1 -o %CPU | sed -n '8s@  *@\t@gp' | cut -f10

This is good for scripts and can be especially handy to see if something is running at 99% or 100%, i.e. very busy. You can become even better informed by having just the mean and standard deviation of the top 4 processes displayed.

$ top -bn1 -o %CPU | sed -n '8,11s@  *@\t@gp' | cut -f10 | \
  awk '{X+=$1;Y+=$1^2}END{print X/NR, sqrt(Y/NR-(X/NR)^2)}'

This is useful on a 4 core machine to see if all of them are being used. As you can see, you have complete control of the infinite possibilities.


ps

The ps command shows a list of your processes. It is a venerable Unix command that professionals need to know about. But normal people can stick with htop pretty much always. I’ve never ever found the ps command useful without at least some "options". Mostly I use ps -ef but there are other arcane styles too.

It’s mostly handy for processes you do not need or want which are running quietly (escaping the notice of top by being near the "bottom"). For example, check this out.

$ ps -ef | grep Mode[m]
root       500     1  0 14:39 ?   00:00:00 /usr/sbin/ModemManager

I’m using the ps command to find out if I have something called "ModemManager" running. Yup, I do. Wow. Even though it doesn’t take much in the way of resources, I am pretty sure I don’t have a SIM modem on board and I’m even more sure I’m not going to get teleported back to 1995.
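If your system has the procps tools (most Linux systems do), the pgrep and pkill commands roll the ps-pipe-grep dance into one step. A sketch; pkill is shown commented out since it actually sends the signal.

```shell
# List matching processes with PID and full command line:
pgrep -a Modem | head -5
# pkill sends a signal to every match instead of listing them:
# sudo pkill ModemManager
```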


kill

So you’ve found a process that needs to die. In the previous section, I found ModemManager running and found that its process ID is 500. This should kill it.

$ kill 500

ℱancy kill

Some processes ignore the default kill signal (SIGTERM, i.e. terminate). That’s super rude in my opinion and the next level of retribution is this.

$ kill -9 500
$ kill -SIGKILL 500

These are the same thing but one may be more self-explanatory if it’s in a script. The -9 is the traditional common way to specify SIGKILL which, unlike the default SIGTERM, cannot be caught or ignored by the process.

If it’s not your process, you might need a sudo. (And, of course, next time you reboot, it will probably come back. But that’s another topic.)


pwd

This simply Prints the Working Directory. I actually remember it as "present working directory" for some reason. I don’t use this command much because I have this reported in my shell prompt so I’m always seeing it on every command. If you are in a situation where that’s not the case, it can be useful.

ℱancy pwd

The shell usually has an environment variable that contains this directory. You can use or check it in scripts, but here’s how to just emulate the pwd command.

$ echo $PWD


cd

This command Changes the current Directory to whatever you specify. For beginners the reasonable question is, what does it mean to be "in" a directory? Basically the shell just keeps track of some directory that we can pretend you’re "in" as if each node on the directory tree were a room. The end use of this is that you can specify actions with ambiguous locations and the system can assume you must be talking about the directory you are "in". It’s a decent system, but as you become more proficient, it’s good to start appreciating that it’s all kind of fake really.

ℱancy cd

Imagine you’re working on some project. Say in this directory.

[~]$ cd X/myspecialproject/

Then someone distracts you and wants you to acquire a bunch of files to look at locally on your machine. You don’t want to muck up the pristine beauty of myspecialproject and you don’t want to keep these files once you’ve taken a look at them; /tmp is a good choice.

If you are only going to jump over to one distracting directory and come right back, you can use the special cd argument -. That looks like this.

[~/X/myspecialproject]$ cd /tmp
[/tmp]$ cd -
[~/X/myspecialproject]$ echo "Returned to the previous directory."

However, if you are going to potentially do many things in many directories, simply preserving your last used directory is not sufficient. A Bash feature I often forget about but which can be very handy is the directory stack. This allows you to save directory paths on a stack. This sounds kind of overly technical but follow along with the typical use case and you can see how easy and useful it is — if you can remember to use it!

Instead of using the normal cd /tmp to change to the /tmp directory, do this.

[~/X/myspecialproject]$ pushd /tmp
/tmp ~/X/myspecialproject
[/tmp]$ mkdir -p a/b/c && cd a/b/c && touch the_distraction

Now you can do the distracting thing with those messy files in the /tmp directory (or any directory or sequence of directories) and when you are done and ready to resume where you left off, you simply do this.

[/tmp]$ popd
[~/X/myspecialproject]$ echo "Back to myspecialproject directory"

In summary, if you want to cd with the option of returning back to your current directory, use - to return from very simple diversions and for complex diversions, bookmark your current directory by using pushd when changing to the distraction directory.
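Bash’s directory stack has one more friend worth knowing: the dirs builtin, which shows what pushd has saved (current directory first). A minimal sketch, assuming Bash:

```shell
cd /tmp
pushd /etc > /dev/null    # save /tmp, go to /etc
pushd /var > /dev/null    # save /etc, go to /var
dirs                      # prints: /var /etc /tmp
popd > /dev/null          # back to /etc
popd > /dev/null          # back to /tmp
```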

mkdir / rmdir

Like living things, computer science loves its trees! My understanding of filesystem design is that this tree aspect of how your files are organized is for your benefit only. Deeper down, they’re not really in the trees you made up. But again, it’s an ok system and allows you to pretty well keep things kind of organized. (To give you an idea of how else it could be done, I believe that file organization interfaces should be in Cartesian 3d space which our minds are naturally optimized for, not tree topologies which computer science nerds can eventually force us to get used to.) Anyway, if you need another branch in your organizational scheme, create it with mkdir. This will make a directory.

$ mkdir cats
$ mkdir cats/tigers

Actually, it will specifically make a subdirectory since the top level (the trunk of the tree) should already be established when the system starts.

ℱancy mkdir

If one of the intermediate directories is not present, you can use the -p (for "parent") option and have it create them automatically.

$ mkdir -p mammals/placental/feline/tigers/bengal/cincinnati

The rm in rmdir is for "remove" and it does what you might expect and removes a directory. The catch is that the directory must be empty. This prevents serious catastrophes where you lose a lot of important directory contents. If you really want to delete a directory and contents there are ways to do that, but rmdir is not one of them. (See the very dangerous rm -r for that.)
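Here is the whole lifecycle in one sketch: rmdir refuses the non-empty parent until the child is gone, and rmdir -p removes a directory along with its newly empty parents.

```shell
mkdir -p demo/sub
rmdir demo 2>/dev/null || echo "refused: demo is not empty"
rmdir demo/sub demo       # deepest first; equivalent to: rmdir -p demo/sub
```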


ls

You’re trying to remove a directory and it won’t let you because it is not empty; how do you see what is there? Perhaps the most fundamental and common Unix command is ls which shows you the files a directory contains. It can also list a particular file. That may seem pointless at first glance — why would you do something like this?

$ ls myfile

Nothing you didn’t already know, right? But you can use "globbing" patterns which is a bit more interesting.

$ ls myfile*
myfile1 myfile2 myfile3

ℱancy ls

The ls command is so critical to my day to day existence that I personally must have an alias for it that optimizes it for my needs. The alias I use is this.

alias v='/bin/ls -laFk --human-readable --color=auto'

Let’s break that down and look at each option.

  • -l Produces a "long" listing. This also includes owner, group, permissions, and timestamp. I always want to see this.

  • -a Shows all the files. Why would it not? Well, I guess it’s considered a "feature" but files that begin with a period are often hidden in Unix operations. So something like /home/xed/.bashrc (where I define this alias) will not show up when I run ls /home/xed/ —  this drives me crazy! So the -a cures that and stops hiding the truth from me. That to me is the most Windows-like "feature" of Unix — and therefore obviously super annoying!

  • -F This is also called --classify and it will put a little symbol on the end of the file to indicate what kind of file it is (directory? device? named pipe?). I like it and it reduces confusion for me, but for others, maybe it wouldn’t.

  • -k The default of ls is to show sizes in "blocks" which to me is damn near useless. This makes it show kilobytes which is more in line with what I can understand.

  • --human-readable Not just kB but I also want it to convert to GB if that makes sense. I’d leave this off only if I’m using ls output to do some kind of calculation. And I don’t do that often.

  • --color=auto Colorizes the output if possible. A nice feature.

Remember, if you have a fancy alias like this, you can always just type ls to get default behavior.

(Why the "v"? Well, in ancient days, Slackware used to come with that defined and I got used to it and now can’t live without it. For modern humans, I’d recommend using ll as an alias since that’s very commonly defined for modern distributions. Type alias ll to see if it is defined already on your system and to what if so.)


touch

I mentioned that the true command can be used to create new and empty files. The first thing most people who need this think to do is use the touch command. It will definitely do that job and that is that.

touch mynewfile

ℱancy touch

But to really appreciate touch you need to understand why it is named the way it is. In complicated build systems, it is common for some component B to depend on component A. If A is modified, B must be rebuilt. If neither A nor B has changed then you can mostly get away with leaving them alone. But sometimes, for strange but strangely common reasons, you want to pretend that A did get modified so that B definitely does get rebuilt. The way systems like this figure it out is by looking at the timestamps. If B is newer than A, fine. But if A becomes newer than B, then B needs a rebuild to catch up. The idea is that we "touch" A but don’t really do anything to it. This just means that its timestamp is now, which is presumably later than B’s, and B will be lined up for a rebuild.

That’s the historical proper use of that command, but it comes in handy for any kind of hanky panky with timestamps. I’ve used it, for example, when I’ve messed up my timestamps on photos I copied (i.e. timestamps changed from when the photo was taken to the inappropriate "now" of when I’m looking through them). If you want to tamper with evidence in the filesystem’s metadata, the touch command is critical!
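Both uses can be sketched concretely. The -nt ("newer than") shell test below is the same comparison a build system makes, and -d sets an arbitrary timestamp (GNU touch syntax; the date shown is invented).

```shell
cd "$(mktemp -d)"                 # scratch directory
touch A && sleep 1 && touch B     # B "built" after A: all is well
[ B -nt A ] && echo "B is up to date"
sleep 1
touch A                           # pretend A was modified
[ A -nt B ] && echo "A is newer; rebuild B"

# Timestamp repair, as in the photo example (date is hypothetical):
touch -d '2017-06-01 12:00' photo.jpg
```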

chown / chgrp

If you’ve traditionally used bad operating systems, you may not have any experience with the concept of file "ownership". In Unix, files have "owners". They also have "groups". This allows the owner to limit who can do things (see chmod) with the files. In some cases the group property can allow multiple people to do things with the file but exclude non-members.

It is very common in modern Linux systems for every user to have their own group. So if I am user xed, it is common for there to exist a group xed. This is simple but it obscures the point of groups. In fancy systems that may be different. Here I’m showing the id command for my user account on a high-quality commercial web host.

:-> [][~]$ id
uid=71849(xed) gid=1000(users) groups=1000(users)

I am user 71849 but I am in the group 1000, "users". (And only that group.) This means that I have proper control over files whose ownership property is set to 71849. If I want to change the ownership I could try the chown command. It probably won’t work because if I don’t already own the files, they’re probably off-limits. And if I do, the target owner property that is not me is probably also off-limits.

In practice, this command is most always used with sudo. A good use case is when I copy files from a system like the one shown and those files are owned by xed(71849). But on my systems, I like to be xed(11111). Since 71849 won’t even be in my /etc/passwd list of valid users on my system this command might be useful.

sudo chown xed:xed file_from_webhost

Note the syntax of xed:xed changes the user to the xed user and the group to the xed group (from the webhost’s group 1000 which is wrong here). You can leave off the :xed if the group is already fine. Now instead of being owned by a nonexistent user, it is owned by my real local account and I can now access it properly.

As you can see, I can change the group with the chown command, but if for some very strange reason you need to change only the group, you can use chgrp. In practice, I have almost never used this thanks to chown being usually sufficient.

ℱancy chown

The mischief ownership problems can cause! Most users can just stick with simple organizational strategies but if you need to dig deeper, check out my Unix permissions notes which take a deep look into some of the details of the topic.

Speaking of mischief, all of these "ch" commands take an option -R which means "recursive". If you specify a directory in the argument list, it will open that directory and change all of the contents. Recursively. This is very useful for making major changes to big archives but obviously it’s very easy to cause subtle problems with your file metadata. Subtle problems and complete hosings!
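For example, after restoring a whole archive as root, a single recursive chown hands the entire tree back to the right user (path and names hypothetical):

```shell
# Recursively set user and group on everything under the directory.
sudo chown -R xed:xed /home/xed/restored_backup
```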


Focusing even more on permissions is the chmod command. People typically pronounce the "mod" part as they do in the word "modify". I’ve heard "chuh-mod", "change-mod", and I personally say "see-aych-mod". However the command really is there to change the file mode bits. Most people think that this is about permissions (true) and only permissions (not true).

To be extra confusing for beginners there are two totally different syntaxes that can be used to specify a file’s "mode". The way I like to do things is the "advanced" way but I think of it as the simple way because I’m used to it now. It might be better said that there is a computer friendly way and a human friendly way. I like the computer friendly way.

For normal people doing normal things there is a pretty limited palette of what you’d ever need to do with this. Let’s look at some of these. The commands are followed by explanatory comments after the # (i.e. not part of the required command).

chmod 644 myfile # Normal file read/write by owner, read by all
chmod 600 myfile # Normal file read/write by owner, locked to all others
chmod 755 myprogram # Program read/write/execute by owner, read/execute by all
chmod 700 myprogram # Program read/write/execute by owner, locked to all others

The only other quirky thing is that programs that need execute permission have the same "mode" as directories which need access permission.

chmod 755 mydir # Directory read/write/access by owner, read/access by all
chmod 700 mydir # Directory read/write/access by owner, locked to all others

If you can memorize that (or look it up here) that’s 99% of your chmod chores.

ℱancy chmod

If you’re a computer science person you should appreciate that these strange numbers (644, 700, etc.) are 3 digit octal numbers requiring 3 bits each. Each of the digits represents a class of user: 1st, the user herself, 2nd, the group members, and 3rd, everyone else. And each of the three bits in each of the digits is a switch controlling 1:executable, 2:writable, and 4:readable. Doing the math on the 644 example we see that the owner has 4(readable) plus 2(writable) and the group members and everyone else have just 4(readable).

That’s how it really works and if you’re a computer science major you should embrace this as a nice example of binary encoding — but if you are not, you can safely ignore this.
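You can watch the encoding at work: GNU stat can report the mode in both octal and symbolic form (the -c format strings below are GNU stat; BSD stat differs).

```shell
cd "$(mktemp -d)"
touch demo
chmod 644 demo
stat -c '%a %A' demo    # octal and symbolic: 644 -rw-r--r--
chmod 700 demo
stat -c '%a %A' demo    # 700 -rwx------
```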

I won’t even go into the other scheme for specifying permissions. I know it but I hardly ever use it. There are some very weird cases where it does seem necessary when changing tons of files selectively. But deep down, all files have octal modes as described.

Ok, that’s not even true. I said they were 3 digit octal numbers, but the full mode is 4 digits. If you’re interested in that see my Unix permissions notes which covers a lot of very strange stuff.


Got too much stuff? (You can check on that with df.) The rm command will get rid of some (or all!) of it. Of course this is exactly like saying that a chainsaw will get rid of some trees or a stick of dynamite will get rid of some rocks or a scalpel will get rid of a tumor, etc. Sure it will, if used correctly. If you bungle it however,… Wait, that’s not right… When you bungle it! When you bungle it, you want to be relieved that you have in place a good system of backups. Seriously. Come up with and adhere to a good system of backups! Do it.

Ok, your real stuff is backed up. You’re on some throwaway system. You’re a malicious psychopath who is actively trying to destroy someone’s life. How do Unix people get rid of files? Like this.

rm moribundfile

That’s it. It’s gone. If you’re nervous you can try this.

$ rm -iv moribundfile
rm: remove regular empty file 'moribundfile'? y
removed 'moribundfile'

The -v, as with many Unix commands, stands for "verbose" which is why it reported that the file is "removed". (Normally it kills silently.) The -i is for "interactive" causing it to pause and ask you to think first and then confirm your reckless ambitions. I find this kind of pointless and I don’t use it. To me it is as useless as saying, "Are you sure? Are you sure you’re sure? Are you sure you’re sure you’re sure?" until it gets annoying. But hey, create your own illusion of safety. Many systems come with rm (and others) aliased to rm -i to force the issue. Again, I find it not helpful.

If you come from a computer environment designed for normal people, I have an important fact to tell you about: Not only do files not go in the "trash can" when using the rm command, there is no such thing! That "garbage can" or "recycle bin" stuff is a ridiculous fiction created by an office supply company which naturally saw the world through a bizarre office supply lens.

If for some bad reason you think the "trash" is a legitimate and useful innovation, simply create a directory called "Trash" and mv your files to it. Done. That’s all that is going on with desktop "trashes". There are two massive problems with that approach of course. One, sometimes you need to free up space because your drive is full and pretending to delete data but not really doing it is absurd. And two, sometimes some files really, really, really need to be deleted (…or hey, let’s give that malware another chance!). A bonus problem is bungling the "emptying" of the trash and not even reaping the putative benefits of it.
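That homemade "trash" amounts to nothing more than this (directory and file names invented):

```shell
mkdir -p ~/Trash
mv -v unwanted_file ~/Trash/    # "deleted"... but still taking up space
```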

Folks, let me stress it again: Have! Good! Backups! There is no substitute. Don’t fool yourself into false security and performance problems with a pointless illusory "trashcan".

If you want some kind of crutch to keep you from deleting things you didn’t want to delete, a far better system (in addition to sensible backups) is to always get in the habit of checking your rm arguments with ls.

Imagine I had the notion to delete these parasitic files, i.e. all the ones in this directory.

rm .cache/mozilla/firefox/h1337c28.default/cache2/entries/*

It is very, very reasonable to run this command first.

ls .cache/mozilla/firefox/h1337c28.default/cache2/entries/*

If that looks good, pull the trigger with rm.

ℱancy rm

You can pick off one file after another like a sniper as shown above. But sometimes you need a much bigger calamity to strike. Note that things can go from "oops" to "you’re fired!" very quickly when you start applying Unix’s power to getting rid of stuff.

If you need to get rid of an entire tree, including sub trees and sub sub trees, etc., you need to use the very dangerous -r "recursive" option. This will get rid of the entire mozilla tree.

rm -r .cache/mozilla/

Use -v if you want to see what files it’s finding and purging. That sounds safe, but with big deletions, that can get tedious. Having all the millions of files of a big system you’re trying to clear go scrolling by can slow things down a lot. Sometimes I start it with -v, check that it’s deleting what I intended, press Ctrl-C to interrupt it, and then restart it silently. Use good judgement there.

By the way, if you’re cool enough to be reading this, you’re warmly invited to be my "friend" on Steam where I am known as rmdashrstar. Now you know why.


The simple explanation is that the cp command copies files. The details can get somewhat complex but usually it’s as simple as doing this.

cp myfile myduplicatefile

Now you have two files with identical contents (see md5sum to prove it).

As with other commands that can make a mess of your file system, it is good to be very careful and to use the -v verbose flag.

I think the most common use I have for cp is when I’m about to mess with some configuration. Here’s an example of what I mean.

sudo cp -v /etc/ssh/sshd_config /etc/ssh/sshd_config.orig

I’m basically preserving a copy of this important (SSH server) configuration file so that if the edit I’m planning goes awry, I can restore it to its original condition (e.g. with mv).

ℱancy cp

In fact, making a backup of files I’m about to mess with is about all I ever do with cp. This may seem strange to people who might naturally assume that this command is one of the most important cornerstones of Unix. Maybe for some people it is, but I actually don’t use it much and here is why.

First of all, the whole idea of copying implies a duplication of resource requirements. So right out of the gate, it is almost an exemplar of inefficiency. You might say, what about backups? Indeed, it is definitely not efficient to lose all your work to media failure (or administration clumsiness). But for making backups I tend to always use the more serious tool, rsync. Always. After all, I’m just as likely to back up to a completely separate machine and cp just can’t keep up.

If it’s an intermediate scale between entire archives and single configuration files, I’m often keeping things backed up and organized with version control (hg, cvs, git). This leaves few jobs that need to be done with cp. When I reflect on what I’ve used cp for I’m kind of embarrassed at how clumsy the entire premise of those operations was.


What do you get when you combine the cp command and the rm command? You get the mv command! As far as I know it is largely superfluous. You could copy the file and then remove the original to create the same effect as mv. But mv is intuitive and simple when that’s what you want to do.

The move operation is also pretty much the same thing as "rename". The syntax is simple enough for simple things.

mv oldname newname

Or relocating with the same name. This will put the oldname file into the directory oldfiles.

mv oldname /home/xed/oldfiles/

It can be good to use the -v (verbose) flag to output a report of what got moved.

mv -v oldname newname

ℱancy mv

Normally the mv command is lightning fast. After all, it’s just relabeling things really. But sometimes those ones and zeros in a file do need to actually move! If you’re changing which filesystem a file lives on, it needs to actually go and occupy new disk space. In these cases, it’s often better for large operations to use cp or even rsync.

Another quirk of mv is that it can move multiple files into a single directory. In that case the directory must be at the end of the argument list.

mv file1 file2 file3 dir4files/

Sometimes if you make a mistake and do something like mv *jpg it will complain that the last jpg file it finds is not a directory and the other files can’t move to it. A much sadder case is when it is coincidentally a directory and you accidentally muddle everything up. Been there; done that.
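If you use GNU mv, the -t option names the destination directory up front, which makes that glob trap impossible (a GNU-ism; BSD/OSX mv lacks -t):

```shell
# Destination stated explicitly first; stray globs can't become the target.
mv -v -t dir4files/ *.jpg
```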


In bad operating systems you can make "shortcuts" or "aliases" that look like files but point elsewhere. The proper name for such a thing is a "link". For most everybody most always, the link is a "symbolic link" or "symlink". To create a symlink the proper way use this syntax.

ln -s /tmp /home/xed/Downloads

What that example does is it creates a symlink called "Downloads" in my home directory. This is not a file or a directory, it is a symlink. However, it points to a directory, the universal /tmp directory so this symlink acts like a directory. What this does for me practically is that when a garish clumsy program like Chrome downloads things into the "Downloads" directory, heh heh, well, it goes to the /tmp directory — and the next time I reboot my computer, all that cruft is deleted.

My helpful way to remember the order of the arguments for ln -s is: the real thing comes first (/tmp) and the fake thing comes second (the symlink, Downloads).

ℱancy ln

Note that the -s is for "symbolic". It would seem there is another kind and there is. The default type of link that the plain ln command produces is called a "hard" link. What’s different about that is that it is not different from normal files. Your filesystem will see it as not just a file like the one it’s linked to but as the actual file itself. In other words, you will have multiple names (and complete file records) for the same exact blob of ones and zeros on the filesystem. If you change the hard link you’re changing the target too. Isn’t that what happens with symlinks though? Yes, but the difference is that if you delete the source of the hard link, the linked file will carry on as a regular file. If a symlink’s referent is deleted, well, now it just causes errors (e.g. "No such file or directory").
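A quick demonstration of that difference (scratch directory; the inode numbers printed will vary, and the stat -c syntax is GNU):

```shell
cd "$(mktemp -d)"
echo hello > original
ln original hardlink            # hard link: another name for the same inode
ln -s original symlink          # symlink: a pointer to the name "original"
stat -c '%i' original hardlink  # same inode number printed twice
rm original
cat hardlink                    # prints "hello": the data lives on
cat symlink                     # error: No such file or directory
```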

Normally you should stay away from hard links without a very good reason. And obviously you should stay away from circular symlink redirection cycles.



This stands for "Web Get" and it gets things from the web. In reality it is a tiny web browser that just acquires the web resource and leaves you to look at it any way you like. Super useful. My wget notes.

Note that if you are a Mac user, you should have a look at the man page for curl which is a very similar program that is installed by default on Macs.


The grep command is one of Unix’s most famous. Among serious computer professionals the word "grep" is used in conversation as a verb that means to extract specific information from a larger source. Its use can be very complex, but mostly it’s quite simple conceptually and practically. Does a file named "myfile" contain the phrase "searchterm"? Find out with this.

$ grep searchterm myfile

Here’s a simple useful example that I like. If you look at the Linux file /proc/cpuinfo you get a big dumping of stuff. But if you narrow what you’re interested in with grep, you get this.

$ grep processor /proc/cpuinfo
processor   : 0
processor   : 1

That would indicate to me that I’m on a 2 core machine. Let’s try a different approach on a different machine. Combining grep with other tools shows this.

$ grep processor /proc/cpuinfo | wc -l

This one is an eight core machine. Now I have a good way to check how many cores a machine has. Once you start using grep for various things, you’ll start to realize how powerful it is.
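Incidentally, grep can do the counting itself with its -c option, which saves the pipe to wc:

```shell
grep -c processor /proc/cpuinfo    # number of matching lines
```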

A neat trick you can do with grep is solve crossword puzzles. All you need besides grep is a list of all the words in your language. Since computers can check your spelling these days, this is usually already present. Using the normal Linux spelling word list here is how I can find a six letter word that starts with "c" and ends with "xed".

$ grep '^c..xed$' /usr/share/dict/words

There are tons of options to grep but there are two that I use way more than the others. The first is -i which makes the searching case insensitive. That way you’ll find "PROCESSOR" and "Processor" as well as "processor" (if they’re there). This is very useful when scanning natural language texts.

The other very useful option is -v which inVerts the sense of the search. That is, it shows you all lines that do not contain the search phrase. One example is if you’re trying to do some further processing of another command which has a descriptive header. For example, the df disk free command prints this at the top of its output.

Filesystem     1K-blocks     Used Available Use% Mounted on

That’s nice for standalone use, but in a script, I may be able to take that for granted and I want all the lines but this line that I know will always contain "Filesystem".

df | grep -v Filesystem

That does it. But since it’s always the first line maybe you could have just used tail.

df | tail -n +2

Fair enough. But what about if you want to exclude all the entries for tmpfs (I have 6)? This will cut off the header and exclude the tmpfs entries.

df | tail -n +2 | grep -v tmpfs

ℱancy grep

It turns out that there are usually many "grep" commands you can run on a typical system.

$ which grep egrep fgrep rgrep

These are mostly aliases (actually links) which automatically invoke particular options. Most of the subtlety has to do with "regular expressions". The search term that grep uses is actually a very powerful syntax called regular expressions. This ancient syntax is easy to use in its simplest cases but can get very hairy quickly. If you’re interested in learning more, you can check out my 2003 Regular Expression Tutorial. Maybe I’ll update that some day.

Regular expressions are so fundamental to grep that the name "grep" itself comes from the ed editor command g/re/p: "globally search for a regular expression and print".


The find command is not grep. It does not find things in files. What it does is find files in the filesystem tree. This may not seem useful if you only have a couple of files. "Which pocket are my keys in?" is not a difficult question. "Where in my house are my lost keys?" could be. When your filesystem becomes a haystack, find can quickly locate needles.

One simple but useful use of find is to just dump out all the files in a tree’s branch. This is different from ls because it operates recursively looking through all sub directories too. We can use this to see just how massive the haystack is.

$ find / | wc -l

On my system running this (as root) shows that my entire filesystem tree has over a half million files. If you have misplaced one of them, you can see how find can be useful.

The normal way to use it is to specify a starting directory and a search criterion, usually a name. Here is an example I have used in the past when I’m trying to test speakers.

$ find /usr -iname "*wav"

I need a sound file, any sound file! I don’t care where it comes from. I know from experience that there will be some giant package stored somewhere below the /usr node in the tree which will include some sound files. Running this command informs me that there is a file called /usr/lib/libreoffice/share/gallery/sounds/strom.wav (and 47 others), just like the (iname) pattern I specified asked for. I never would have found that by casual file manager browsing. The "i" in iname stands for case Insensitive so that file.WAV will also be found. If you don’t want that, just use -name and then the pattern.

Remember that the starting directory (whose parents and ancestors will not be searched) is specified first. Then comes the filter to narrow down what you’re looking for.

ℱancy find

Of course that simple usage is fine but it is just the tip of the iceberg. The find command does a ton of other things. In fact, if it’s possible to filter files by their location based on their metadata (timestamps, ownership, etc.) then it’s likely that the find command can do it. There is almost a full programming language of find filter syntax to make everything plausible, possible.
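A few hedged examples of those metadata filters (standard GNU findutils tests; the paths and patterns are invented):

```shell
find ~/Downloads -name '*.iso' -size +1G   # files bigger than a gigabyte
find /etc -name '*.conf' -mtime -7         # modified within the last 7 days
find /var/log -user root -name '*.log'     # owned by root
```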

My find notes have many more examples of find including some more exotic situations.

mount / umount

To "mount" a disk means to have the operating system detect it and confirm that it’s organized in a compatible way and that it’s ready for business. That’s all. The important thing for beginners is to understand that it is possible for disks to be connected and not mounted. In bad operating systems it is often because they don’t understand how good operating systems organize their filesystems. But in good operating systems, it may be strategic to not mount a drive to ensure that it is left completely untouched. Mounting can also come with options so that a drive could be mounted, but only for reading for example. This allows you to do forensics without any chance of modifying or corrupting anything.

Another important thing to know is that in environments that normal people use, the concept of "ejecting" a disk is really an unmounting process. Unix can do this explicitly with a lot of very fine control using the umount command.

Other than that, modern systems mount things automagically and regular users can ignore the topic.

ℱancy mount

Professionals however should know how to do this explicitly. The general format is something like this.

sudo mount /dev/sdc2 /mnt/mybackupdrive

The first argument is the device and the second is the "mountpoint", that is, where in the tree will this new system graft on. I think of the mount command in the same way I think of the ln command: real thing first, fake thing second.

A common thing to mount is a USB drive or SD card that is used in some device like a camera. The camera will inevitably want to use a crappy VFAT file system. One of the ways it is crappy is that it doesn’t handle unix metadata such as ownerships and permissions very well. I’ve had success using an option to the mount command.

mount -v -t vfat -o uid=11111,gid=11111 /dev/sdg /mnt/sdg

One other tip is that when you’ve unmounted a volume with umount there still may be outstanding writes that need to finish up. To make sure they are finished, issue the sync command and wait for it to finish. If you’ve done a umount and then a sync it is safe to remove the drive (assuming it’s removable!).


Files are ones and zeros on your disk, but the illusion of files is maintained by the filesystem, which knows things about each file. This can be very useful to query. The stat command prints out all of a file’s metadata that it can find. Here is an example.

$ stat /bin/true
  File: /bin/true
  Size: 31464       Blocks: 64         IO Block: 4096   regular file
Device: 811h/2065d  Inode: 5767266     Links: 1
Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2019-01-10 09:23:01.105907652 -0500
Modify: 2017-02-22 07:23:45.000000000 -0500
Change: 2019-01-08 13:25:53.607888592 -0500
 Birth: -

Note this is especially good for finding out more about timestamps.


One day on the path to being a serious computer user, you will get a "No space left on device" error. Thereafter you will have a heightened sense of vigilance for that problem being possible. How do you check to see how much of your Disk is Free to use? The df command. By itself, it shows you all the filesystems it knows about and how full they are. If you specify a mount point or a device (or more than one) it shows you only that. Here I’m checking the space used (sort of the opposite of "free" isn’t it?) on my main top level filesystem (/).

$ df /
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sdb1      117716872 17804248  93889868  16% /

I don’t know about you but I can’t understand that because it’s written in 1k "blocks" which is hard for me to think about. Adding the -h option, for human-readable, cleans things up nicely. For humans anyway.

$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       113G   17G   90G  16% /

So I have a 113 gigabyte drive that I’m using 17 gigabytes of. It’s really a 128GB drive but remember that the filesystem sets aside some of that to organize the metadata for the half million or so files on a typical system.

ℱancy df

Sometimes you need to see filesystems that are hidden. Use the -a option to really see All of them. Sometimes you use symlinks to add some more space to your setup and it’s easy to get confused about where the data really is being written. You can give df a path to a file or directory and it will show just the location of where it truly resides.


The df command is great for big picture knowledge about the entire disk but what if you want to know how much a particular branch of your filesystem’s tree is taking? The du command can add that up to your specification. By itself, I find du kind of useless. It will show you the size (in hard to understand blocks) of each directory in the current directory and below. That can get cumbersome. A better way is to limit it with the -s summary option and to use the human readable option, -h. Here’s what I generally do.

$ du -hs /home/xed/.mozilla
122M    /home/xed/.mozilla

Here we can see that my stupid clumsy browser’s working directory is packed with 122MB of garbage (mostly cached web stuff I suppose).

I can also count up all the subdirectories this contains.

$ du /home/xed/.mozilla | wc -l

That’s 623 (including the top level one) which means that this directory is filled with a baroque maze that’s less of a tree and more like a bramble patch. When planning backups, this kind of targeted analysis can be handy.

ℱancy du

What if you want to see which directory trees are the big ones? I use the --max-depth option to only descend one level. Here are the top 4 (tail -n4) biggest directory trees in my home directory.

$ du --max-depth=1 /home/xed | sort -n | head -n-1 | tail -n4
47780   /home/xed/.config
125648  /home/xed/.mozilla
691912  /home/xed/Downloads
1905432 /home/xed/.cache

Note that they’re all related to browsers!

This technique can also be useful if you have a bunch of users and want to see who is hogging all the space.


Shows memory stats. As in "free memory". Use the -h human readable option so you don’t have to think as much about what it means. Memory management on a Linux system is complex and baroque (though freakishly effective) so don’t feel bad if it’s not all crystal clear. I don’t exactly know everything it’s trying to tell me. But it’s a good quick check of memory.

On Linux, you can also do cat /proc/meminfo for similar information.

Probably Linux only. Not on FreeBSD or OSX.


Shows block devices. Wonder what drives are on your system? Use this. Wonder what device name that USB flash drive was given? Run lsblk before inserting it and then run it again after and see which device just showed up.

Not on FreeBSD. Probably Linux only.

ℱancy lsblk

I use this so often that I have this very helpful alias defined.
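The alias itself isn’t reproduced in these notes, but one achieving the described effect might look like this (the column list is my guess at the intent; see lsblk --help for all available columns):

```shell
# Hypothetical reconstruction: replace the default columns (which include
# the major:minor device numbers) with filesystem type and UUID.
alias lsblk='lsblk -o NAME,SIZE,FSTYPE,LABEL,UUID,MOUNTPOINT'
```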


This eliminates some of the lsblk cruft like major and minor device numbers (that I have yet to ever care about) and replaces them with some interesting things like file system type and the UUID code which can uniquely and positively identify a device. Not 100% sure if you’re formatting that USB flash drive you just inserted or your main hard drive? Double check the UUIDs and you can’t easily choose the wrong one.


Need to see a list of your USB devices? Good for finding out things like your USB hub is swallowing some of your peripherals.

Not on FreeBSD. Probably Linux only.

ℱancy lsusb

The main thing I do with this command is to see if the USB system is identifying a new USB device that I’ve just plugged in. Here is a smooth workflow for isolating just that.

$ lsusb | tee /tmp/usblist
$ diff /tmp/usblist <(lsusb)

Now you’ve conclusively isolated it and can be sure it’s present without hunting through the entire list. Also, don’t forget to check dmesg.


Need to see a list of your internal PCI peripherals? Even if it’s physically not on a card per se, this will show you what the PCI subsystem knows about. It’s most handy for figuring out which graphics adaptor chipset you have so you can target obtaining the correct driver.

It can also be useful for figuring out what other chipsets you have on various hardware (NIC, sound, etc).

Not on FreeBSD. Probably Linux only.


Something bad happen at boot? Getting some strange hardware related error? You can check the kernel’s "ring buffer" where it dumps internal messages that it thinks you may want to know about. To do this use the dmesg command. On some systems, this is considered privileged information and you’ll have to use sudo.

sudo dmesg

Or if you want to know if some Linux thing is active you can grep for it. Here are some examples.

sudo dmesg | grep -i nvidia
sudo dmesg | grep -i ipv6

This checks to see if the Nvidia driver started ok. And the other checks to see if IPv6 is active.

What the kernel tells you is a bit arcane but it’s still good to know how to check it and do some web searching for any problems that you find reported that concern you.

The kernel has other ways of providing you information. For example check out this command.

cat /proc/cpuinfo /proc/meminfo

The kernel will quickly pretend that there are two files (cpuinfo and meminfo) that contain a bunch of stats that the kernel knows. The cat command will dump them out for you. Very handy. Try it.

ℱancy dmesg

The timestamps dmesg produces are, by default, seconds since the computer was powered up. I find that to be pretty useless. The -e flag will give you sensible timestamps that let you know if some logged event is related to the problem you just had a minute ago.

Sometimes if you’re doing some robotics kind of thing with weird hardware and you want to see it get detected or whatever, you may need to monitor the kernel’s ring buffer in real time. Fortunately it’s simple to do. Check out these options.

sudo dmesg -wH


This is supposed to tell you about the platform you’re running on. This is often used in scripts so the script can know what kind of system you’re on. For less boring use try the -a option.

$ uname
$ uname -a
Linux ctrl-sb 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27) x86_64 GNU/Linux

I tend never to use this. Instead, I usually just do this.

$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION="9 (stretch)"

In the old days, I did this and it still mostly works.

$ cat /etc/issue
Debian GNU/Linux 9 \n \l


Start by installing this somehow if it’s not there. On Debian/Ubuntu type systems do this.

$ sudo apt-get install lm-sensors

Then have it figure out what sensors it can read (basically it’s autoconfiguring).

$ sudo sensors-detect

I just hit enter to go with the defaults of every question. Because I’m lazy.

Then you can put it to use and see what your computer knows about its sensors, usually temperature data.

$ sensors
Adapter: ISA adapter
Physical id 0:  +28.0°C  (high = +84.0°C, crit = +100.0°C)
Core 0:         +27.0°C  (high = +84.0°C, crit = +100.0°C)
Core 1:         +28.0°C  (high = +84.0°C, crit = +100.0°C)
Core 2:         +26.0°C  (high = +84.0°C, crit = +100.0°C)
Core 3:         +26.0°C  (high = +84.0°C, crit = +100.0°C)

That’s pretty handy to know, especially if you are having intermittent shutdowns on hot summer days.

Not on FreeBSD or OSX. Probably Linux only.



dmidecode

Produces a massive dump of everything the system knows about your hardware. You need to use sudo and you probably need to pipe it to something like less or grep for it to be useful at all. Still, good to know about when you need to find things out about your system. I think it is a good choice for archiving a hardware profile of machines you manage or care about.

By the way, DMI stands for Desktop Management Interface.

I found dmidecode on FreeBSD but not OSX.


tr

The tr command can be simple. It stands for "translate" but I think of it as "transpose" too — something like that. As an example, if you need all n’s converted to b’s and all a’s converted to e’s, this will do it.

$ cal | head -n1 | tr na be
Jebuery 2019

ℱancy tr

Things can get more complicated with ranges of characters. Here is a technique to get ROT13 output.

$ echo "This is mildly obfuscated." | tr 'A-Za-z' 'N-ZA-Mn-za-m'
Guvf vf zvyqyl boshfpngrq.
$ echo "Guvf vf zvyqyl boshfpngrq." | tr 'A-Za-z' 'N-ZA-Mn-za-m'
This is mildly obfuscated.

Here I explain a cool application of tr that I came up with.



cut

The cut command can be used to extract specific columns of data. You pretty much always need two options for this command. First you need to specify a delimiter, that is, the character which separates your fields. For example, with comma separated values, you’d say -d,. The other option is -f which specifies the field number you want. Here I’m taking the 3rd field when separated by commas.

$ echo "one fish,two fish,red fish,blue fish" | cut -d, -f3
red fish

If I separate by spaces, I get a different result.

$ echo "one fish,two fish,red fish,blue fish" | cut -d' ' -f3
fish,red


awk

It turns out that a lot of people comfortably get away with not knowing about the cut command because they use awk. If you only know one thing about awk it should be that it is very good at easily extracting fields (i.e. columns) from lines of text. One nice thing about awk is that its default delimiter is whitespace, so you often can just do something like this.

$ echo "one fish,two fish,red fish,blue fish" | awk '{print $3}'
fish,red

And if you need a different delimiter, you can do this.

$ echo "one fish,two fish,red fish,blue fish" | awk -F, '{print $3}'
red fish

Or this.

$ echo "one fish,two fish,red fish,blue fish" | awk 'BEGIN{FS=","}{print $3}'
red fish

That’s really all normal people need to know about awk.

ℱancy awk

But that’s just the tip of the iceberg! The reason that awk is generally better than cut is that it is way more powerful than the simple cut (which is fine if you’re going for minimal).

Awk is one of Brian Kernighan’s favorite languages. This is surprising since he is the "K" in the original K&R C. But less surprising because he is also the "K" in awK which he helped write. It is in fact a complete and powerful programming language. I have written some amazingly powerful and effective programs in Awk and I encourage professionals to get familiar with its power and potential.

Here is an example of a Bash program I wrote that creates specialty Awk programs and then runs them to solve problems in a custom yet scalable way.

Here is an example of how I used Awk to create pie charts on the fly from the Unix command line.

Definitely check out my awk notes for other interesting applications and ideas.

cat / tac / rev

Short for "conCATenate". Although this is really to join multiple files together it is commonly used to just dump the contents of a file to standard output where it can be picked up by pipes and sent as input to other programs.

tac is a little known but fun and somewhat useful command that returns the lines of input starting with the last and ending with the first. I found tac on FreeBSD but not OSX.

rev is another little known but fun and less useful command that returns the lines of input reversed (right to left becomes left to right). Let me know if you come up with a brilliant application of this.

ℱancy cat

Don’t use cat! Use file arguments and redirections instead where possible. If you’re not concatenating, you probably don’t really need cat. That said, normal people shouldn’t feel bad for using it as a do-all convenience function. It works. It’s just that if you want to get to the next level of efficiency, it’s good to economize your scripts by getting rid of cats when possible.

One helpful simple use of cat is when a program tries too hard to format data. For example, the ls command tries to figure out how wide your terminal is and pack as many columns of output as possible. This normally makes sense, but if for some reason you want the display output in one continuous column, just pipe it to cat.


split

Most people don’t know about this command because it is rarely needed. But when it is, it’s nice to solve your problem in a way that you can be sure is direct and efficient. It is one of the many programs written by Richard Stallman personally.

Basically, like it says in the name, it splits data. Into multiple files.

Here’s an example. If you have a giant list of a million ligands and you have 100 computers that can check to see which ligands bind to a certain protein, you could use the split command to produce 100 files each containing 10000 ligands.

The split command is the natural complement of the cat command. How would you reassemble files produced with split? Simply concatenate them with cat.
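That round trip can be sketched like this (the file and chunk names here are made up for illustration).

```shell
# Split a big list into 10000-line chunks (chunk_aa, chunk_ab, ...),
# then reassemble with cat and verify the round trip was lossless.
seq 1000000 > ligands.txt
split -l 10000 ligands.txt chunk_
cat chunk_* > reassembled.txt
cmp ligands.txt reassembled.txt && echo "lossless"
```

Note that the shell expands chunk_* in sorted order, which is exactly the order split created them in, so cat stitches them back together correctly.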


diff

The diff command simply finds the differences in two files. Well, no, not simply. It actually is a very hard problem to even formulate an efficient way for you to be apprised of what is different. diff solves that hard problem and other subtleties related to the task. If you think you have a better way to express file differences than diff, you’re probably wrong. Let’s look at an example.

Here I list block devices with the lsblk command and save the output to a file. I do this once before inserting a USB flash drive and once after.

$ lsblk > /tmp/before
$    # ... Now I insert a USB drive.
$ lsblk > /tmp/after
$ diff /tmp/before /tmp/after
6a7,9
> sdc      8:32   1  29.9G  0 disk
> ├─sdc1   8:33   1     1G  0 part /media/xed/ef6b577f-d3c3-4075-8da8-333d031b4515
> └─sdc2   8:34   1  28.9G  0 part

What diff shows me is only the part of the two output files that is different. Its format says that around line 7 you need to add the following lines (which start with ">"). On "diffs" (as they are called), deleted lines are indicated with the "<" symbol.

Note that the order of how the files are specified is important. The "diff" reflects what must be done to the first specified file to achieve the state found in the second.

ℱancy diff

That’s nice, you’re thinking, but maybe you don’t see yourself with a huge diff agenda. It turns out that this program is a cornerstone of human civilization. This is because pretty much all version control systems like RCS, CVS, Mercurial, and Git (and therefore all software) use diffs to keep track of what changed.

There is another related Unix command called patch (written by Perl creator Larry Wall) which takes something like /tmp/before plus a diff and produces a /tmp/after. If you’re wanting to tell Linus Torvalds about some brilliant change you have in mind for the Linux kernel, the normal way to do this is to post a diff (in email is fine) which will effect your changes if it is "patched" into the code. This is where the word "patch" in software contexts comes from.

Here is a hardcore use of diffs where I created a script to apply numerous custom patches to fix a terrible but important dataset. The point of this example is that I am taking for granted that there is no better system to explicitly record and apply what needs to be changed than diff and patch.


md5sum

I have a lot of respect for diff of course, but in my experience I have more occasions where I need to know if something changed rather than what about it changed. Make no mistake, diff can do that job, but there is a simpler way.

The MD5 message digest algorithm creates a "hash" of input data. This means that it makes a short (128 bits) numeric summary of input.

Think of it like some kind of inscrutable rhyming slang. Why would a "person from the United States of America" be called a "septic"? Well, "septic" from "septic tank" rhymes with "Yank" which is short for Yankee. You input "American" to a Cockney and you get "septic". WTF — go figure. Without a complex breakdown of the algorithm, you just have to accept that it is what it is. Hashes are similar.

If I feed md5sum that phrase, its rhyming slang produces this.

$ echo "person from the United States of America" | md5sum
f3b811b934ee28ba9e55b29c6658c5b7  -

That 32 character nickname is no more or less understandable than "septic". What it is, however, is unique. If I send it anything else, that "nickname" will be reliably different. And not just a little different but completely different in a completely random looking way. Just like rhyming slang.

Where is this useful? Well, everywhere for starters! This kind of thing is used heavily to make cryptographic keys. Something like this (and formerly this exact thing) was used to record your password in Unix /etc/passwd files. That allowed the system to check if it was really you without actually recording the secret word. It would just check the secret word you enter at log in by running it through md5sum and seeing if the hash on record matches.

Here is a simpler example that I run into a lot. Let’s say you have a big music collection or a big photo collection. With multiple backups and offload operations, it’s easy to get some files that are duplicates. (If this hasn’t happened to you, you have not managed enough files.) The question md5sum is ideal to answer is, "Are these files exactly the same?" Just run something like this.

$ md5sum /etc/resolv.conf /var/run/NetworkManager/resolv.conf
069bf43916faa9ee49158e58ac36704b  /etc/resolv.conf
069bf43916faa9ee49158e58ac36704b  /var/run/NetworkManager/resolv.conf

Here the MD5 hash for these two files is identical — these files contain the same exact 1s and 0s in the same exact order. (I cheated here a bit since the first is a symlink linking to the second.) If these are two distinct files, one of them is redundant.

On FreeBSD and OSX there is an equivalent command called simply md5.

ℱancy md5sum

Ok, so how would one find all the duplicated files on a filesystem? The md5sum command will surely be at the heart of the solution.

I wrote this little one liner script which takes a starting top level path as input and searches the entire sub tree for files which are actually the same. It saves a bit of time by only looking at the beginning of the files instead of the whole thing. Note that apparently 1k of an mp3 is not enough to distinguish it from others reliably enough.

# Hash the first 100kB of every file under "$1" and report names that match.
find "$1" -type f \
    | while read -r X; do head -c 100000 "$X" | echo -n `md5sum | cut -b-32`; echo " $X"; done \
    | sort -k1 \
    | awk 's==$1{print p;p=$0;print p}{s=$1;p=$0}' | uniq

Run like this:

./undup /media/WDUSB500TB/musicvault > musicdups

Just to sprinkle a little confusion and philosophical doubt into the topic, it turns out that it is possible that md5 for two different inputs will cause the same hash to be output. This is called a "collision" and it is very rare. Very. Rare. It is rare in the same way that choosing two random drops of water on earth would find them touching each other. Still for some life or death applications (mainly cryptography), this is not rare enough. There are fancier (n.b. harder to compute!) hash algorithms where the collision potential is something you could comfortably bet your life on. Here are nerds in late 2018 debating whether MD5 is sufficient for anything. To figure out which songs in your collection are dups, I say it’s more than fine.


sort

One huge disconnect between the real world and computer science education seems to be the emphasis on sorting algorithms. You know who writes production code containing sorting algorithms? Nobody. Because it has all been exhaustively implemented for all practical purposes. In the Unix world that done deal is the sensibly named sort command. Need something sorted? It’s an off-the-shelf solution that’s probably better than what a typical CS education would produce.

Don’t worry too much about how this works, but the following code can produce helpful examples.

for N in {1..5}; do echo $(($RANDOM%10));done

That basically outputs 5 random numbers between (and including) 0 and 9. If that output is sent (with a pipe) to the sort command you get something like this.

$ for N in {1..5}; do echo $(($RANDOM%10));done | sort

Those random numbers are now sorted. However for humans there lurks some unintuitive behavior. Let’s try it with numbers from 0 to 99.

for N in {1..5}; do echo $(($RANDOM%100));done | sort

If the output includes, say, 13, 27, and 5, the 5 will be listed last. That does not look sorted. But in fact, from a text standpoint, it is. If you want the sort to be done from a numerical perspective, add the -n flag.

$ for N in {1..5}; do echo $(($RANDOM%100));done | sort -n

Now the single digit number does come first even though "5" is bigger than the "1" of the "13" and the "2" of the "27".

Need to reverse the sort? You could pipe it to tac or simply use the -r option. This produces the alphabet in reverse.

for N in {a..z}; do echo $N;done | sort -r

Need to "sort" things into a random order? The -R option can do that. This produces the alphabet in a random order. Run it multiple times.

for N in {a..z}; do echo $N;done | sort -R

ℱancy sort

Although that last example works, if you want to sort things randomly, you need to be careful. The sort command’s -R actually sorts by a random hash of the keys, which means duplicate keys still end up grouped together. If there are two entries that are the same, they will still be stuck together after -R and that is not really quite random. You can see this by counting unique values with something like this.

$ for N in {1..1000}; do echo $(( $RANDOM%20 )) ; done | sort -R | uniq -c | wc -l
20

How could there only be 20 unique (see uniq) sets of numbers if they were randomly distributed? They are not. If you want truly random distribution, try the shuf command which does the right thing.

$ for N in {1..1000}; do echo $(( $RANDOM%20 )) ; done | shuf | uniq -c | wc -l

Another sort tip shows that it’s good to read your man pages. I just learned that sort now has a mode that can sort by human readable sizes (-h). This will come in handy when sorting the output of the du command and many other applications involving file sizes.
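Here’s a tiny self-contained illustration of what -h buys you: mixed K/M/G values sort by true magnitude instead of alphabetically.

```shell
# sort -h understands the size suffixes that du -h and ls -lh produce.
printf '512K\n1.5G\n16M\n' | sort -h
# 512K
# 16M
# 1.5G
```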


uniq

The uniq command seems very weird at first. Why would anyone need a special command that eliminates duplicates? Especially since the sort command has a -u option that does this (kind of).

The typical usage is to find out "how many different things are there"? For example, you may have a log file and wonder how many different addresses made a connection to your web server (i.e. ignoring the many hits where the same customers are merely busy interacting with it).

Here’s a typical example. Which processes are logging things in syslog (which is just some Linux log thing)?

$ sudo cut -d' ' -f5 /var/log/syslog | sed 's/\[.*$//' | sort | uniq -c
     14 anacron
      6 CRON
      2 dbus-daemon
     39 kernel:
      1 liblogging-stdlog:
      4 mtp-probe:
     20 systemd
      2 udisksd

Here I cut out the service name and clipped off the process number. The real action starts at sort. That sort organizes the naturally occurring interleaved list. By then sending it to uniq I eliminate the duplicates. The -c option causes uniq to count up how many duplicates there are.

Of course when you look at output like that you might think to sort again and this is very common! Here I’m finding only the top process that generates log messages.

$ sudo cut -d' ' -f5 /var/log/syslog | sed 's/\[.*$//' | sort | uniq -c | sort -n | tail -n1
         39 kernel:

As your Unix proficiency increases, piping results to something like | sort | uniq -c | sort -n | tail -n1 becomes quite ordinary.

Since pipes work by running processes in parallel, this kind of workflow is also extremely high performance as a bonus.

ℱancy uniq

Still convinced that the sort command’s -u option is sufficient for making things unique? The problem with the unique option of sort is that it must actually sort the data first. Sometimes this is not what you want. Compare the following.

$ for N in {1..1000}; do echo $(( $RANDOM%20 )) ; done | shuf | uniq | wc -l
958
$ for N in {1..1000}; do echo $(( $RANDOM%20 )) ; done | shuf | sort -u | wc -l
20

You can now see that there naturally occurred 42 cases where there were duplicates that uniq had to get rid of while the sort -u got rid of 980 cases of duplicates giving the answer to quite a different question.

The reason to break uniq out into its own stand alone program is that it becomes more modular. The uniq command is pretty powerful on its own too. It can even compare lines while ignoring the first few whitespace-separated fields (-f).
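A quick sketch of that -f behavior with made-up data: the first field is skipped during comparison, so lines differing only there count as duplicates.

```shell
# uniq -f1 ignores the first field when comparing, so the first two
# lines are treated as duplicates and only the first survives.
printf 'alice same\nbob same\ncarol diff\n' | uniq -f1
# alice same
# carol diff
```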


wc

The wc command stands for "word count" and is one of the most useful command line Unix tools. It is especially useful as a building block in a complex pipeline allowing you to distill a lot of data into something you can deal with. For example instead of seeing pages of files go zooming by, I can do this.

$ ls *txt | wc -l
152

And find out that I have 152 help files in my directory. The -l is to count, not words, but lines. Specifically I’m counting the lines returned by the ls program. That will list all of the files I’m interested in.

To see how many words are in all of my help files I simply do this.

$ cat *txt | wc -w

Or if you do it like this, you get a breakdown of every file’s word count and a total. (I’ll just limit it to ones starting with "o" to illustrate.)

$ wc -w o*txt
10002 opencv.txt
3846 opengl.txt
13848 total

Without the -w option it shows you lines, words, and bytes.

$ wc o*txt
  1671  10002  71753 opencv.txt
   731   3846  28402 opengl.txt
  2402  13848 100155 total

If you’re a professional writer commissioned for a certain number of words the -w will be very useful to you. However, for normal Unix command line usefulness, wc -l is extremely useful. For example, if I wondered how many times I used the word Linux in my notes, the answer is immediately available.

$ grep -i Linux *txt | wc -l

ℱancy wc

In that last example, if I had a line with "linux" in it twice, it would only be counted once. To really do a proper job, I would want to break up every word into its own line and then find the lines containing the search target and then count them. Here is how I can do that.

$ cat *txt | tr ' ' '\n' | grep -i Linux | wc -l

Apparently I have (at most) 23 lines with multiple "Linux" mentions. But as you can see, no matter what you must do to get the data to be correct, when it comes time to count it all up, wc is your friend.

which / whereis / type / file / locate

Before you can use man to read about a command, you need to know if that command is even present on your system. There are many ways to do this. I prefer the which command for finding where executables really live (if they’re even installed). Let’s say I wanted to find out where the mysql executable is on my system. Here’s a comparison of techniques.

$ which mysql
/usr/local/bin/mysql
$ whereis mysql
mysql: /usr/local/bin/mysql /usr/local/man/man1/mysql.1
$ type mysql
mysql is /usr/local/bin/mysql
$ locate mysql | wc -l
12550

A lot of people go right for locate, but as you can see, it produces a list of 12550 lines of stuff I’m not interested in reading. I also find whereis to be too verbose unless you’re looking for man page paths for some reason. I tend to go with which when I want to find a command.

The type command is not actually a separate program but a shell builtin. Its great feature is that it can recognize shell builtins for what they are.

$ type type
type is a shell builtin

The file command is good for figuring out what exactly the executable is once you’ve found its location. (A shell script? 32-bit or 64-bit? Designed for a bad OS?)

$ file /usr/local/bin/mysql
/usr/local/bin/mysql: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/, for FreeBSD 11.2, FreeBSD-style, with debug_info, not stripped

In fact, checking the /sbin/init program is a good way to figure out if your underlying system is (for some god-awful reason) 32 bit.

$ file /sbin/init
/sbin/init: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD),
statically linked, for FreeBSD 11.2, FreeBSD-style, stripped

This one is thankfully not.

ℱancy file

If file seems great but you’d like to know absurd levels of detail about an executable start with ldd. This shows all the shared library dependencies and can be a life saver when trying to resolve dependencies. One very common thing I do is ldd ickyexecutable | grep Found — this will ironically catch all the items on the list that are "Not Found". (You can search for "Not Found" if you feel like including the quoting.) With this list of missing libraries, you now have a great place to start tracking down what you need to get the thing to run.

If you’re hungry to learn yet more about executables, check out the man page for the readelf command. Also the nm command specializes in extracting an executable’s symbols but readelf can too.

export / alias / set

These are shell commands but they can tell you a lot of useful things. If you run it with nothing else, export will give you a list of exported variables in your environment. That is useful to review sometimes. The alias command by itself shows all the defined aliases. And set by itself shows everything the shell knows about that is, well, set. This includes shell functions, aliases, variables, and maybe more.

These are super useful commands to keep in mind when troubleshooting or perfecting your shell environment.

An example is if I’m interested in knowing how my shell history settings are configured. I could do this.

$ set | grep HIST

Those are good settings, BTW.

xargs / parallel


unzip

Normal people are used to "compression" taking the form of .zip files. Unix can deal with this. To see what monsters may be lurking in a zip file check with something like this.

unzip -l somefile.zip
unzip -l normal_java_stupidity.jar

(Yes, Java jar files are just zip files in reality.)

To actually do the extraction do the same thing but without the -l option.

That all seems fine, but I actually do not like zip files. To find out why I think you should not ever create them see this post I wrote called, Why Your Zip Files Irk Me.


gzip

If you have a file called elephants.gz — how can you read that? It is compressed with the gzip program. You can use your choice of these equivalent commands.

$ gunzip elephants.gz
$ gzip -d elephants.gz

And you will be left with a (bigger, natch) file called simply elephants. You can now make ordinary changes to that file. Normal people prefer the gunzip way but I like to stick to the gzip way since that’s really the program that is run by gunzip anyway.

To compress it back down into its smaller form, simply use the following obvious usage.

$ gzip elephants

Which will leave you with an elephants.gz file. Easy.

ℱancy gzip

Let’s say elephants is really big and I don’t have the disk space to actually unpack it. But I want to see if that data contains something of interest. I can do something like this.

$ gzip -cd elephants.gz | grep Babar

This will begin unpacking the file into its readable form and send it to standard output. This is fed into the grep program which looks for a line containing the word "Babar" (and reports it only if it finds one). The entire unpacked data set is never stored on your computer (grep just discards what you weren’t interested in). This is actually the simplest form of this trick. You can use streams of compressed data like this to send big things (e.g. streaming audio) over network connections and other such magic.


bzip2

It’s just like gzip. There’s even a comically named bunzip2 equivalent to bzip2 -d. Files compressed this way are usually saved as file.bz2.

Basically bzip2 is more hard core than gzip. It will work harder to compress your stuff more intensely saving you disk space. The catch is that it will work harder meaning it will use up more CPU resources. You decide what you’d like to economize.
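You can see the trade-off for yourself with a throwaway file. (The -k "keep" option used here needs a reasonably recent gzip, 1.6 or later; bzip2 has had it forever.)

```shell
# Compress the same input both ways and compare the resulting sizes.
# -k keeps the original file around so all three can be listed.
seq 100000 > sample.txt
gzip  -k sample.txt        # makes sample.txt.gz
bzip2 -k sample.txt        # makes sample.txt.bz2
ls -l sample.txt sample.txt.gz sample.txt.bz2
```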


tar

The tar command stands for "tape archive" and that sounds boring and irrelevant for most modern homo sapiens. However, back in the ancient times the programmers who came up with sensible methods to put entire filesystems on tapes did a pretty solid job of it. So much so that it is still very useful and very much used today.

The most common interaction with tar files is needing to deal with something you download from an open source project like somegreatsoftware.tgz.

The gz means it is compressed and step one is to decompress it. Using gunzip on a .tgz file as described above will leave you with somegreatsoftware.tar. From here you can see the contents of the archive with this.

tar -tvf somegreatsoftware.tar

The -t lists the table of contents of the archive. The -v is for verbose, i.e. show anything it can show. And the -f is getting the command ready for the file to look at.

To actually extract the files from the archive, just change the -t to -x for eXtract. I usually like to run the -t version first to see if the archive creator included a polite containing top directory for the contents. Some (bad) archive creators just cause tar extractions to dump hundreds of files into your current working directory which can be quite tedious to clean up.
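Modern tar can also gunzip on the fly with -z, skipping the separate decompression step entirely. Here’s a sketch that builds a throwaway archive just to demonstrate the round trip (the names are made up).

```shell
# Create (c) a gzipped (z) archive, then list (t) and extract (x) it
# without ever running gunzip separately.
mkdir -p proj && echo "hello" > proj/readme
tar -czf somegreatsoftware.tgz proj
rm -r proj
tar -tzvf somegreatsoftware.tgz    # peek at the table of contents
tar -xzf somegreatsoftware.tgz     # extract, gunzipping on the fly
cat proj/readme
```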

head / tail

This pair of commands can be quite intuitive and also extremely useful in unintuitive ways. The head command just shows the beginning of the file (or input), by default the first 10 lines. The tail command shows the last 10 lines. This is extremely useful for seeing what’s in a file. The tail command is especially valuable for seeing what the latest entries in a log file are (you don’t care about stuff at the beginning that may have happened months ago).

To see some amount other than 10 lines you can use an option like this.

$ history | tail -n3

ℱancy head / tail

In ancient days these commands could take an option styled like this.

$ head -2 /proc/meminfo
MemTotal:       16319844 kB
MemFree:        15155016 kB

This shows the first two lines of the /proc/meminfo (memory information pseudo-) file that Linux creates. But this style is not recommended. It is best to use the full -n2 syntax.

There are (at least) two reasons for this. For one, it is explicit about exactly what units you want. For example, you could do this.

$ head -c3 /proc/meminfo

This stops after the first 3 characters of the file, a very useful feature.

The other reason is that head and tail have a clever mode where you can explicitly use positive and negative numbers for different effects.

$ ABC=$(echo {a..z}|tr -d ' ') # Don't worry about how I did this.
$ echo $ABC
$ echo $ABC | head -c5    # Normal mode - first 5 items
$ echo $ABC | head -c-5   # All except items _after_ position 5 from end *
$ echo $ABC | head -c+5   # Plus is like normal head mode
$ echo $ABC | tail -c5    # Normal mode - 4 items _after_ position 5 from end *
$ echo $ABC | tail -c+5   # All _except_ 4 items _before_ number's position *
$ echo $ABC | tail -c-5   # Minus is like normal tail mode *

I used the character mode of head to make these examples compact; this mostly works the same for lines too with -n but it’s good to check. Note the ones with asterisks are not producing or eliminating the number of items listed, but seem to have an off-by-one issue. I’m pretty sure this is caused by new line characters getting counted.

$ seq 10 | tail -n3
8
9
10
$ echo "123456789" | tail -c3
89

The second example yields only two digits because the trailing newline that echo adds counts as the third character. So pay close attention to that.

Yes, the sense of what plus and minus do can be confusing, but it’s enough to know these tricks are possible. Simply look up the detail with man when you have a need and/or perform a quick little experiment to verify it works how you want. (Or reference this very explanation which is what I may start doing!)

Here is what the man page says about this important parameter for head:

print the first NUM bytes ...; with ... '-', print all but the last NUM bytes...
print the first NUM lines ...; with ... '-', print all but the last NUM lines...

And for tail:

output the last NUM bytes; or use -c +NUM to output starting with byte NUM of each file
output the last NUM lines, instead of the last 10; or use -n +NUM to output starting with line NUM

One more extremely powerful feature of tail specifically is that it can actively watch a file and show you new additions as they arrive and are appended. This uses the -f option (for "follow"). This is extremely useful for watching log files collect messages in real time.
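You can get a feel for -f without touching a real log by faking one; a background loop plays the part of a chatty daemon and timeout (a made-up choice for this demo, not part of tail) ends the show after a couple of seconds.

```shell
# Simulate a growing log file: a background loop appends lines while
# tail -f streams each new line as it arrives.
: > demo.log
( for i in 1 2 3; do echo "event $i" >> demo.log; sleep 0.2; done ) &
timeout 2 tail -f demo.log
wait
```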


xxd

I love xxd! This tool is not just really good at what it does and useful for serious computer nerds, but it has a kind of refreshing purity to me. Basically its function is to give a "hex dump" of the input. Beginners need not worry about that. In fact beginners do not need to use this command ever.

But what I love most about this command might appeal to beginners and experts alike. I love how this command allows you to see the actual ones and zeros that make up your data. Everyone has heard that computers work with ones and zeros, but who has really seen direct evidence of this? If you have a Unix system and xxd you can! Check it out.

echo -n "xed.ch" | xxd -b
00000000: 01111000 01100101 01100100 00101110 01100011 01101000  xed.ch

This shows the binary string for my website’s domain. These are the exact ones and zeros that encode "xed.ch". That’s pretty cool! Here’s more such low level obsessing.

ℱancy xxd

Beyond that brush with obsessive detail, xxd is tremendously useful for picking apart files that are not in ASCII text. Its usual output is byte (not bit as above) oriented and it tends to be in hex. It can however do a lot of very clever things. Here I’m pulling data right off the first 512 positions of the disk and checking if there’s a GRUB installation there. (BTW - be inspired by, but don’t try this example unless you know what you’re doing.)

dd if=/dev/sda count=1 bs=512 | xxd | grep -B1 RUB

I could have grepped for the weird hex values that make up GRUB but it was simply easier to let xxd do that math.


dd

If you’re a beginner, you probably should just know that dd is damn dangerous. Some people call it "disk destroyer".

ℱancy dd

As with using xxd, there may come a time when you want to cut through all nonsense and handle each one and each zero yourself explicitly. No messing around! That is what dd can do. The man page doesn’t really say what "dd" stands for but I think of it as "direct data" (transfer).

The way it works is you give it a source and a target and it takes the ones and zeros from the source and puts them on the target. Simple, right?

Here’s an example you should not try.

dd if=/usr/lib/extlinux/mbr.bin of=/dev/sdd

This will take the master boot record bits located in the binary file specified by if= (input file) and copy them to the of= (output file) specified as the disk device "/dev/sdd". The reason not to do stuff like this willy-nilly is that this will hose some possibly important stuff on /dev/sdd. So make sure you’re very sure what you’re doing with this.

Some common options can look like this.

dd if=wholedisk.img of=justtheboot.img bs=512 skip=2048 count=1024000

This pulls just a portion of the original "wholedisk.img" image when creating the new file, "justtheboot.img".

The bs specifier is for block size. Think of this like a bucket that the bits are transferred in. Besides the minimum raw transfer time it takes a little while to load each bucket. If you specify a small bucket, there will be many fillings of it. However, if you specify a huge bucket and you change your mind while the command is running, you may have to wait for the current huge bucket to finish processing before the program will check in and see if you want to interrupt and exit politely. The default is 512 (bytes) which is probably a bit small for modern use. I usually find that 4M is a good value for most things and very little benefit comes from using a different value.
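One gotcha: skip and count are measured in units of bs, not in bytes. You can convince yourself of this safely on an ordinary file (the filename is just for the demo):

```shell
# Three 4-byte "blocks" worth of data.
printf 'AAAABBBBCCCC' > blocks.txt

# With bs=4, skip=1 means "skip one 4-byte block" and count=1 means
# "copy one 4-byte block" - so this extracts the middle block.
dd if=blocks.txt bs=4 skip=1 count=1 2>/dev/null
# BBBB
```

So in the wholedisk.img example above, skip=2048 with bs=512 starts 2048 blocks (1 MiB) into the image, not 2048 bytes.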

My favorite option is status=progress which will give you a running assessment of how much has been transferred so far.

sudo dd if=ubuntu-16.04.3.iso of=/dev/sdg bs=4M status=progress oflag=sync

Here I’m burning the ones and zeros of an OS iso image onto a flash drive and the status allows me to keep an eye on how it’s doing. The oflag=sync output flag just ensures that a physical sync is done for the written blocks to make sure that the device is truly updated and not just presenting illusions from cached memory still queued for the device. Don’t forget the block size bs=4M (or some largish number) so that the transfer doesn’t take 20 times longer while it processes (default is 512B so nearly 4 orders of magnitude!) more blocks.
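After a burn like that, it’s worth verifying that what landed on the target really matches the source. Here’s the idea sketched with ordinary files standing in for the ISO and the flash drive (the filenames are made up; with a real drive you’d cmp against the device node):

```shell
# A small random file stands in for the ISO image.
dd if=/dev/urandom of=image.img bs=1K count=64 2>/dev/null

# Write it out the same way you'd write to a device; oflag=sync
# forces the writes to be physically synced as they happen.
dd if=image.img of=target.img bs=4M oflag=sync 2>/dev/null

# cmp is silent and exits 0 if the two are byte-for-byte identical.
cmp image.img target.img && echo "verified: target matches image"
```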

Also if you’re trying to save data on a disk that’s going bad and it’s giving you IO errors with dd, you can investigate ddrescue which can valiantly overlook IO errors and press on with the job.

sudo / su


ping

Troubleshooting a network? You could do worse than starting with ping. I like to send 3 (ECHO_REQUEST) packets in case the first one dies in a fluke traffic accident (but the connection is really up). Any more than 3 and it just takes too long.

ping -c3

One thing to check right away with network troubleshooting is if it’s really the network connection to your target or simply the name lookup.

ping -c3

This will see if you can reach the domain (yes, that is their domain).

Or try Google’s name server which is almost as easy to remember.

ping -c3 8.8.8.8

If those don’t come back successfully, you’ve got a real network problem!

ℱancy ping

Also in the ping family is traceroute and that classic program’s modern fancy version mtr. I like those tools; however, I find them to be broken now that I have a blazing fast fiber optic connection. It’s like the "Time To Live" setting can’t get low enough to find out anything meaningful.

host / nslookup

Often you can reach the internet but you can’t use names, only IP numbers. If you need to troubleshoot that process or just find out how names and IP numbers relate, use host and nslookup.

$ host -t A has address
$ host is an alias for has address has IPv6 address 2607:f8b0:4006:819::200e

Note how you can find the IP address of a name and other useful information (like alias targets and IPv6 numbers).

The nslookup command (normally) goes the other way, finding names from numbers.

$ nslookup
Non-authoritative answer:   name =
Authoritative answers can be found from:

Who just tried to log into your SSH server 800,000 times? Find out by looking up their connection IP address with nslookup. You can also use web based services like for this and find out a rough guess where that host is physically located in the world.

iptraf / iftop

Do you have some kind of server which provides data to many clients at once and one of them is doing something uncool? The iptraf program can let you see the flows to each connected host.

Another possible tool that might serve this purpose is iftop. Or nethogs.

ℱancy iptraf

If you’re logged into your server with SSH, set the update interval to 1 sec so you do not overwhelm the network traffic with your own iptraf feedback loop.

It seems iptraf is a Linux only program. The FreeBSD community describes it as having "Too many linuxisms" to port. How about nethogs? That’s a nice program too. Also iftop. Maybe dstat or slurm.

ip addr / ifconfig

What IP number (or numbers!) are you using right now? Find out with ip addr (that’s the ip command with the addr option) or the classic ifconfig. The latter provides pretty much the same information but is regarded as the old way to control (configure) network devices (interfaces).

ip is not on FreeBSD, but ifconfig is.

Also on modern Linux distributions using Network Manager you can get some excellent information with nmcli dev show. I also used to think that nmtui (NetworkManager Text User Interface) was limited to the Red Hat ecosystem, but I recently found it on Ubuntu on an Arduino so that can definitely help tame the mess that NetworkManager can make of connections, especially wifi.

ss / netstat

What network connections are currently active on your computer? Find out with ss. Its older brother, netstat, is not used as much these days, but it is a well-known classic that does the same thing.

ss is probably Linux only while netstat is also on FreeBSD and OSX.

screen / tmux / nohup

feh / display

Skilled command line practitioners do not avoid computer graphics per se — they avoid superfluous, wasteful, and misleading graphical interfaces. Sometimes, however, the job at hand is all about graphics. If you have a file that encodes an image, you may reasonably want to see what that image looks like. To illustrate the rule by its exception, the only time I feel like the normal GUI desktop claptrap is even slightly valuable is when I need to organize very disorganized photos. But if you’ve been organized and you know that things are where they should be, the command line approach performs efficiently again.

To see an image file as a human viewable image, I like the viewer program feh. Obviously you need some support for graphics (so no SSH terminals without X tunnelling or bare consoles) but most people have that these days.

The advantage of feh is that it is lightning fast and skips a lot of superfluous nonsense. If you just need to see an image (including maybe scaling it to fit your screen) feh will not be beat.

Just for completeness I’ll mention display, a viewer that comes with the ImageMagick command line graphics tools. A lot of serious Unix people use display but I find it much slower than feh. However, it is usually present on systems where feh might not be installed and is a good backup to know about. The ImageMagick tools in general are extremely powerful and any intelligent person who does anything with images whatsoever should know about them.

xpdf

I use xpdf so often that I have it aliased as simply o. It opens PDFs and is very efficient and fast doing so. People used to other PDF readers may carp about functionality deficiencies or even the old school Xlib interface. But xpdf is your true friend.

ℱancy xpdf

If you’ve been reading this whole thing here’s a nice example of how this Unix stuff is commonly used.

$ URL=
$ wget -qO- $URL \
| tr ' ,' '\n\n' | sed -e 's/CVE-/\nCVE-/g' | sed -e 's/[^0-9]*$//g' \
| sort | uniq | grep CVE | wc -l

I’ve broken this single command line into three physical lines (with backslash). The first uses wget to get the URL variable. I set that variable to be Adobe’s security page for Acrobat. The next line converts that HTML mess into a long list of words that must end in numbers. The third line sorts all of these, eliminates duplicates, throws away everything but the key phrase I’m interested in — "CVE", and then counts the results. Reviewing it now, I can spot some places I could optimize this process, but I’ll leave it alone as an illustration of the kinds of rough jobs that can be done with Unix extemporaneously, on-the-fly. I quickly built that command line up piece by piece until running it gave me the final answer I was looking for. Very powerful.

So what is it telling us? It is telling us that on Adobe’s own Acrobat security page, they are mentioning no less than 104 unique registered CVE (Common Vulnerabilities and Exposures) problems related to Adobe Acrobat. Not impressed? If you take off the wc -l and look at these, you’ll see they all start with "2018". I’m no longer a professional security researcher, but that makes me think that we’re talking about 104 registered vulnerabilities in 2018 alone. I have seen Defcon presentations talking about what a delicious attack surface Acrobat/Reader is. It contains an absurd level of functionality — Wikipedia says "PDF files may contain a variety of content besides flat text and graphics including logical structuring elements, interactive elements such as annotations and form-fields, layers, rich media (including video content) and three dimensional objects using U3D or PRC, and various other data formats." PDFs can contain not just one programming language, the obvious PostScript, but also JavaScript code! And who knows what else! What could possibly go wrong?

Let all that sink in and then ask yourself if Acrobat is part of a secure computing environment. My answer is "no". xpdf, with its "limited" functionality, saves the day!
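The tokenize/sort/uniq/count core of that pipeline is a reusable pattern, by the way. Here’s a tiny self-contained version of it run on a made-up sample string instead of a live web page (GNU sed interprets \n in the replacement; BSD sed does not):

```shell
# Made-up sample standing in for the downloaded HTML.
sample='Fixes CVE-2018-4990, CVE-2018-4993 and CVE-2018-4990 (again).'

printf '%s\n' "$sample" \
| tr ' ,' '\n\n' | sed -e 's/CVE-/\nCVE-/g' | sed -e 's/[^0-9]*$//g' \
| sort | uniq | grep CVE | wc -l
# 2
```

Two unique CVEs even though one is mentioned twice — which is exactly the deduplication that sort | uniq buys you.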

identify / import / convert



at

A lot of people know about Cron but not as many know about at. Cron is for recurring jobs, while at is for jobs that you want to run once at some time in the future. A lot of times at isn’t installed by default, which is a shame, but it’s always easy to get.

If you’re constantly setting alarms or timers, at is perhaps a better choice.

You can simulate at easily enough with something like this.

sleep $((7 * 24 * 60 * 60)) && aplay loudnoise.wav

That will play a loud noise in one week. Still, if you do a lot of this, consider at.

sleep

The sleep command is a lot more useful than a command that does nothing would seem to be. For example, if you want to log something every minute (not 800 times per second) simply add a sleep 60 to the loop that does the logging. Easy.
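For example, here is a minimal sketch of such a polling loop, shortened to one-second ticks so you don’t have to wait around (a real logger would use sleep 60 and probably append to a file):

```shell
# Log a timestamp three times, pausing one second between samples.
for i in 1 2 3; do
  date '+%H:%M:%S'
  sleep 1
done
```

Three lines of output, one second apart — the whole trick is just that sleep makes the loop idle politely instead of spinning.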