Master the Linux command line


In depth: The Linux platform is becoming a stronger desktop solution day by day, and part of the reason for this is the commitment by distribution authors to provide an exclusively graphical user interface, from installation to upgrade. We shouldn't forget, though, that the command line interface still exists. It may not be as pretty as a GUI but this alternative interface has flexibility, and there are many cases where it can save you a lot of time.

If you've already worked your way through our Command-line Phrasebook and had a go at our Exploring filters and pipes tutorial, then you're more than ready to move on to greater things. Read on!

The great thing about the command line is that you don't have to be a guru to use it - even relative newcomers can try it out, and will find it useful and even fun to write commands this way. And why take the trouble? Well, unlike in a GUI, where you can only click the buttons that somebody else thought were needed, you can create your own commands and tailor your system.

Have you ever wanted to issue multiple commands with as little clicking as possible? If so, welcome to the shell prompt. Or consider the boot process, which involves a chain of text commands executed in a well-defined order and can be greatly customised if you know how.

In fact, there's nothing to stop you from getting the best of both worlds. Once a script works on the command line, it can be easily bound to a GUI's desktop icon or menu entry. Then in the event of an emergency you can call on your command line knowledge to stay in operation. What about disaster recovery with a GUI - where do you point and click if you need to reinstall the X server?

In this tutorial you'll learn about some of the most common commands available on the command line, and, most importantly, how to combine them using shell scripts you write yourself -- it's really not as hard as you might think!

Part 1: Inside the shell

All command-line typing happens inside what's known in Unixland as a shell. This is both a programming environment and a command interpreter, providing an interface to the operating system. Every shell has control flow structures, manipulates variables and can be modified to suit the environment in which programs run - but often they do this in slightly different ways. There are hundreds of shells available for Linux, but you might only have a handful of them installed. To find which ones are installed in your system, type the following command:

$ cat /etc/shells

Bash (Bourne-Again Shell, an adaptation of the original Bourne shell written by Stephen Bourne at AT&T) is the most common choice on Linux systems. Another popular one is csh (C Shell), with a syntax more similar to the C programming language. tcsh is an enhanced, backward-compatible version of csh. The reason why there is more than one solution for (apparently) the same problem is the usual one: different shells may have different licences, and each one is optimised for slightly different uses. You can find in-depth discussions of these and other shell programs online, many of them written before Bash claimed dominance.

Programs written in a shell language, or any other interpreted one, are normally called scripts, and are really just plain text files containing sequences of commands. Scripts are loaded and executed by the interpreter line by line, just as if you were typing the same sequence of instructions at the prompt. More explicitly, there isn't anything that you can place in a shell program that you can't type at the shell prompt, and vice versa.

Are scripts better or worse than normal, binary-compiled programs? No, just different. Binary programs are faster but take much more time to develop and test. Scripts are much quicker to write but normally run much slower. What's important, and should be evaluated case by case, is that the total time spent writing and running the program is minimised.

In practice, shell scripts are usually the best match for the custom programming skills and needs of home and small-office Linux users.

Programs and keywords

Software programs are specific binary files physically stored on the hard disk. As a command line-based programming language, every shell can launch them directly. Shells also have, however, a set of built-in keywords, or commands, not corresponding to any actual program. This can cause confusion the first time you study a shell script, so keep this distinction in mind. The executable programs visible by the shell are those stored in the directories contained in the PATH environment variable. On my machine the PATH value is:

[marco@polaris marco]$ echo $PATH

The echo string is a built-in keyword: the shell will perform the corresponding action, which passes on to the terminal the content of the PATH variable, all by itself.

To save the user's time, many shells implement command completion. To see it in action, just type at a Linux prompt the string "finge" and then hit the Tab key. The shell will scan all the executable programs in the $PATH, discover that only the finger executable matches the string you entered, and complete it on the command line. The same happens when you enter a partial directory or file name.

Ports and pipes

Now that we know why we're in a shell, the next thing to understand is how data flows in, out and through it. Each program running at the prompt can be thought of as a black box with three default ports or streams: standard input (STDIN), standard output (STDOUT) and standard error (STDERR).

STDIN is where all the input comes from: this is, for example, where the kernel forwards the keys you press on your keyboard. STDOUT is where the program sends all the bytes it produces: reports, calculations and so on. STDERR is, as if you didn't know, the emergency line reserved for error messages. What makes this architecture extremely powerful is the fact that the ports of different programs can be easily connected to each other or to files on your hard disk, creating on the fly a single virtual program with impressive capabilities.
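To make this concrete, here is a minimal sketch of redirecting each stream independently (the file names are arbitrary):

```shell
# Redirect each default stream to its own file.
echo "all is well" > out.txt          # STDOUT (stream 1) goes to out.txt
ls /no/such/dir 2> err.txt || true    # STDERR (stream 2) goes to err.txt
wc -l < out.txt                       # STDIN of wc comes from out.txt
```

The `>` operator reroutes STDOUT, `2>` reroutes STDERR, and `<` feeds a file into STDIN.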

Let's demonstrate this by pretending our hard disk is full. To make room, we'll find the 50 biggest files in the home directory, listing them in the terminal window and saving the list in a local file. By examining the list we would then be able to decide which files can be removed. The command for doing this would look something like:

find . -type f -exec ls -s {} \; | sort -n -r | head -50 | cat -n |
tee /tmp/bigfiles.list

Quite a handful, isn't it? Don't worry, it's nothing to be scared of - it makes perfect sense when you understand it!

The first thing to understand is the role of the '|' character, known in this context as the 'pipe' operator. What it does is connect - just like a pipe - the STDOUT port of the previous program straight to the STDIN input of the next one. To understand what's going on, we'll have to introduce some utilities - choosing ones that will be extremely handy for a Linux user. To follow the action, test the commands above on your machine, adding one section at a time. Start by typing everything up to (but not including) the first pipe sign. Hit enter and look at the result. Then retype (or recall with the up arrow key), add everything up to the second pipe sign, hit enter and note what else happened. Repeat till the end of the line.

The find program finds all the files matching certain criteria and, if requested, exec(utes) on each of them - represented by the curly braces - the action between the exec string and the semicolon. The full stop after the find command represents the current directory, but could be substituted by any folder on your drive, or a combination of them. '-type f' means: consider only objects of type file. ls is short for list: the -s option tells it to return the file name, preceded by its size in blocks. Consequently, this first piece of the command will produce an unordered list of file sizes and names, one per line, no matter how deeply they are nested in the directory tree.

The sort command will rearrange the files in numerical ("-n"), reverse ("-r") order. "head -50" will pass on only the first 50 lines of its input stream, discarding all the others, and "cat -n" will add a line number to everything it receives. In the midst of all this piping frenzy, tee will create a branch: its standard input will be both printed to the terminal (since it's the last command) and saved to the file /tmp/bigfiles.list.

If you'd like to learn more about piping the output from one program into another, make sure you check out our tutorial, "Exploring filters and pipes"!

Quote me happy

You'll recognise variables in shell scripts by the dollar sign prepended to their names. These can be read, or given values, in very creative ways thanks to all those funny quote keys scattered around your keyboard. The different quoting styles are very important in the shell, because each of them is interpreted differently. Single quotes ('these ones') are taken literally: their whole content is used just as it is, as a static chunk of text. Double quotes ("here they are") are used to perform so-called substitutions: before using their content, the shell will scan it for variables, recognisable by the dollar sign, and substitute their current value into the quoted string.

The last type of quotes (`inverse ones`) are the most powerful variety - and thus should be used with caution. Their content is considered a command to be executed, and the result of that command is then used in place of the original string - try running "ls $HOME" in the three types of quote!
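Here is that experiment written out, with the result of each style captured in a variable so you can compare them side by side:

```shell
# The same text in the three quoting styles:
LITERAL='$HOME'            # single quotes: the text is kept as-is
EXPANDED="$HOME"           # double quotes: the variable is substituted
EXECUTED=`echo $HOME`      # inverse quotes: the command runs, its output is used
echo "$LITERAL / $EXPANDED / $EXECUTED"
```

The first variable contains the literal five characters $HOME; the other two contain your actual home directory.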

Part 2: Processing text

Much of this part of our tutorial concerns metacharacters, the text symbols sprinkled (at random, it often seems) throughout command line instructions. If you can find out what they do and learn how to use them, you'll be able to create powerful programs for finding, inserting and scrubbing out text.

Our first example will help you explain to a program how to recognise a certain piece of text and what to do with that text afterwards. The standard description of the structure of a string of text is called a regular expression - or regex. These are dark, mysterious beasts, but easy to use once you've tamed them. In regular expressions, the characteristics of complex text patterns are defined by a vast array of metacharacters:


linux
^linux
linux$
^linux.*format$

Weird, huh? But don't be afraid - come closer. The first regex here simply means that we're looking for any line containing the string 'linux' (regardless of its case, or if it's part of a longer word). The second and third are a bit more specific: they'll match only lines beginning (^) or ending ($) with that string. The last regex describes all lines that start with the 'linux' string, end with 'format' and have any (*) number of any character (.) in between. In other words it will match with:

linux Format
Linux users love Linux Format

Regular expressions are also used to substitute some text patterns for others:

s/linux/Linux/g
s/Christmas \d\d\d\d/Christmas 2005/

Here the first regex capitalises all occurrences of linux, and the second one replaces all the dates of Christmas past with that of the next one: '\d' is another metacharacter, meaning 'any digit', so four of them will match any year expressed in that form.
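Jumping slightly ahead to the sed interpreter (covered below), the same substitutions can be tried at the prompt; note that classic sed doesn't understand '\d', so 'any digit' is spelled [0-9] instead (the sample sentences are invented):

```shell
# Capitalise every occurrence of linux:
echo "i read linux format" | sed 's/linux/Linux/g'
# Replace any old Christmas year with the next one:
echo "Christmas 1999" | sed 's/Christmas [0-9][0-9][0-9][0-9]/Christmas 2005/'
```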

The role of the interpreter

In practice, regular expressions are fed as arguments to applications, or interpreters, that can put them into practice. The location of the interpreter inside the file system is written right after the shebang. Not sure what the shebang is? Simple: it's Unix lingo for the two characters at the very beginning of every script - the charming '#!' couplet. They mark the rest of the file as a script - in other words, a series of executable commands meant for an interpreter.

Therefore, the first line #! /bin/bash declares that you want the program bash in the bin directory to execute your commands (note the space after '!'). If the file mentioned after the shebang doesn't exist, or is not an interpreter, the system will simply return a 'Command Not Found' error and quit. Some Unix variants place a tight limit on the length of the shebang line, truncating everything after 32 characters or so.

What this means in practice is that you may get the 'Command Not Found' error even if you've entered a valid interpreter file - what's happened is that it's just too far from the shebang for the system to recognise ('Not Found' is not the same as 'Not There'). Two interpreters you're likely to use for your regexes are AWK and SED. They have been around since the very beginning of Unix and although there are several other interpreters (chief among them Perl) that can do much more, the original two are faster and, for this reason, still widely used in boot-time scripts.

Using SED and AWK

SED works on streams of text (the name SED is just a contraction of "stream editor"). It loads one line at a time, edits it according to the commands it has received, and prints it to standard output.

cat somefile | sed '/^0/d'

The command above will delete all lines beginning with 0. AWK gets its, er, awkward name from the surnames of its creators: Aho, Weinberger and Kernighan. It is a bit more powerful than SED, but works in the same way - one input record at a time. By default, each line is a separate record, referred to as $0. Records are made of (typically) space-separated fields, accessible as $1, $2 and so on.
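A quick sketch of fields in action (the sample data file here is invented for illustration):

```shell
# Build a tiny sample file, one record per line, fields separated by spaces.
printf 'alice 555-1234 fax\nbob 555-9876 voice\n' > contacts.txt
# Print the first two fields of every record containing 'fax':
awk '/fax/ { print $1, $2 }' contacts.txt
```

Only the first record matches, so this prints "alice 555-1234".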

awk '/fax/ { print }' bin/*

Here we found and printed all lines containing the 'fax' string in all the files of the bin directory. So far our examples have concerned individual phrases of text, be it finding them, formatting them or deleting them. But there are ways to use the shell to locate whole sections of text. What do you do when you find a classified ad in a newspaper page that you want to keep in your wallet? You cut it out with scissors and discard everything else. You can program the command line to do exactly the same thing with text streams.

When you need to find and extract only the relevant rows and columns of characters, it can be very convenient to visualise the terminal window (or a whole text stream) in the same way - as if it were a sheet or roll of paper.

Extracting blocks of text

The four most useful utilities for this task are the programs tail, head, cut and grep. The first two return the first or last few lines of a text stream. This is how you would get the 16th to 20th line of somefile.txt:

head -20 somefile.txt | tail -5

The cut command does the same thing, but vertically:

cut -c20-23 somefile.txt
ls -lrt | cut -c44-

The first example returns only columns 20 to 23 of somefile.txt. The second takes a detailed file listing and strips everything but the modification date and file name. Last but not least is the grep family. These are, on Linux, three separate commands (grep, egrep, fgrep) that can extract from files all the lines matching a given regex. Each grep variant has several options and understands a limited set of regular expression constructs. In all cases, regex matches cannot span multiple lines. Here are some classic uses of grep:

grep Linux *.txt
grep -i -v Windows *.txt
egrep 'Euro|Sterling' invoice*.txt

Executing these commands would first of all return all the lines containing the Linux string in all files with a .txt extension. The second would give you all the lines from the same files that do NOT contain (-v) the word Windows, regardless of its case (-i). Finally, the last example shows all the lines containing either Euro or Sterling from all invoice files.

The 'here documents' tool

Still working with long blocks of text, we move to here documents. They exploit a great feature of working within the shell, namely that you don't have to put templates in external files. With here documents, you can place a block of text, possibly containing some variables, straight into a script, and use it either as the standard input of a command or for a variable assignment.

Here documents use a dedicated operator, <<, to define the block of text. The syntax is very simple:

cat << END_OF_EMBEDDED_TEXT
Dear $CUSTOMER,
your account is past due.
Please send $INVOICE to Linux Format today
END_OF_EMBEDDED_TEXT


As you can see, the string right after the << operator (END_OF_EMBEDDED_TEXT) is the same that marks the end of the here document. Now imagine that the code above is in a loop, going over the contents of a text database. The code would create a series of payment requests with the actual names and outstanding payments of every customer. Printing or emailing them would be easy. Another good use of here documents is to create temporary files or to feed sequences of instructions to interactive programs like FTP.
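Here is a self-contained sketch of a here document with variable substitution, captured into a variable (the customer name, amount and delimiter are all made up for the example):

```shell
CUSTOMER="Mr Smith"
INVOICE="25 euros"
# Everything between the two markers becomes the standard input of cat:
LETTER=$(cat << END_OF_TEXT
Dear $CUSTOMER,
please send $INVOICE to Linux Format today.
END_OF_TEXT
)
echo "$LETTER"
```

Change the two variables and the same template produces a different letter.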

How to find broken bookmarks

The last part of this tutorial is a handy script. We bet you have hundreds - if not thousands - of links in your web bookmark files. Chances are, a good percentage of those links are broken: web pages move and disappear all the time. You can immediately find out which links are dead with the script below. It was made for Mozilla bookmarks, but modifying it for other browser formats if you need to should be pretty straightforward. To fully understand the script, refer to earlier parts of this tutorial to remind yourself what the various metacharacters do.

#! /bin/bash
\rm url_list
\rm url_control_tmp
touch url_control_tmp
grep '<A HREF="' $1 | cut '-d"' -f2 > url_list
for URL in `cat url_list`
do
    echo -n "$URL " >> url_control_tmp
    curl --head $URL 2>/dev/null | grep 'Not Found' >> url_control_tmp
done
awk '{print $1}' url_control_tmp | sort | cat -n

The first three commands simply remove (rm) any temporary file created by previous runs and then create (touch) a new one, for reasons that will become clear later. Then the fun starts. The bookmark file is passed to the script as first argument, so its name is contained in the $1 variable. In the Mozilla bookmark file the lines that contain links start with the DT><A HREF=" string. The script extracts them with grep and then, using the double quote character as separator (cut '-d"'), discards everything but the second field (-f2); that is, the actual URL.

In this way all the links and nothing else end up, one per line, in the url_list file. The for line iterates over every line of the url_list file, provided courtesy of the cat command. Inside the for loop, the echo instruction simply appends to another file, without newline (-n), the current URL. For the append operation to work, the file must already exist. That's why it was created (or touched) at the beginning. Remember now? Curl is a nice web browsing utility that works from the command line to automatically retrieve all kinds of documents from the internet. In this example it is launched once for every URL, but it only downloads the page HTTP headers (--head). The headers contain bits of data associated with each document, like this:

HTTP/1.1 200 OK
Date: Fri, 04 Feb 2005 23:09:54 GMT
Server: Apache/1.3.27 (Unix) (Red Hat/Linux)
Content-Type: text/html

The relevant line is the first one: 200 OK means that the page is available. A non-existent page would have returned something like 404 Not Found. When curl is launched its error messages are ignored: STDERR has the I/O stream number 2 (0 is input, 1 is output), so 2>/dev/null means that this stream must be sent to the fake device (/dev/null) provided by Unix for cases just like this. The grep part of the command saves only the lines containing the 'Not Found' HTTP return code to the url_control_tmp file. The instruction starting with awk prints only the URL value (first field, $1) to its standard output. The resulting list is then sorted and printed with a serial number (cat -n).

This neat script shows that learning shell commands can enhance your browsing pleasure as well as help your coding, and it's a nice note to end this section of the tutorial.

Part 3: First steps with shell scripting

Our divine mission to open up the command line continues. We start this section with arrays of variables - making accessible something that was previously reserved for IT students. We will then move on to techniques that will take a bit of the frustration out of using the CLI: getting the script to ask for instructions and teaching the console a bit of maths. Finally, lest people think that a text interface can only handle text, we'll play a bit with perhaps the most mainstream home-computer activity imaginable: handling digital photos. As usual, a final script will provide a real-world application - a thumbnail gallery, no less - of the concepts explained.

Writing arrays...

Many variables have what we call a scalar nature: in normal language, this simply indicates that they are one single piece of content, not fragmented or structured into separate parts. Things like $YOUR_HEIGHT or $LINUS_LAST_NAME belong to this category. Every programming language can handle this kind of variable, and most languages also support more complicated data structures and have customised operators to manage them efficiently.

In comparison, the various Unix shells are pretty limited, because they were never intended for high-level data manipulation. However, Bash, the reference Linux shell we've used so far in this series, is quite capable of handling basic arrays, which are unidimensional containers of scalars. Unidimensional means that an array can be thought of as one ordered stack of separate values - rather than as a table, where data is arranged in rows and columns. You can access any element in an array just by giving its numeric distance from the beginning of the array itself. So the first element of A_4_PIECES_ARRAY will be ${A_4_PIECES_ARRAY[0]} and the last ${A_4_PIECES_ARRAY[3]}.

Shell arrays do not need to have all their elements explicitly defined - the first line of your script can be something like this:

FRIENDS[3]='Peter'

This creates the FRIENDS array, with its first three elements (indexes 0, 1 and 2) empty.

However, it's good practice (especially in complex scripts) to explicitly declare an array as follows:

declare -a FRIENDS

Arrays can be filled in more efficiently than by assigning values one at a time, thus:

FRIENDS=( Martin Karl 'Jean Luc' Peter )
FRIENDS=( `cat somefile.txt | tr '\n' ' '` )

The first instruction just assigns the elements in the desired order. Note the single quotes keeping 'Jean Luc' as a single value despite the space between Jean and Luc. The second instruction uses command substitution to fetch the whole content of a text file (cat), put it all on one line by changing newlines (\n) to white spaces with tr, and use the result as a word list for the array.

When it's finally time to use an element, this is the right syntax:

linux->echo ${FRIENDS[2]}
Billy Jean

Don't forget the curly braces or Weird Things will happen:

linux->echo $FRIENDS[2]

Here, Bash doesn't look at FRIENDS as an array any more - it just expands the variable to its first element, treated as a scalar, and adds to it the other characters we typed.
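Here's the difference in a nutshell (the array contents are just an example):

```shell
FRIENDS=( Martin Karl 'Jean Luc' Peter )
WITH_BRACES=${FRIENDS[2]}     # the third element: Jean Luc
WITHOUT_BRACES=$FRIENDS[2]    # first element plus the literal text '[2]'
echo "$WITH_BRACES vs $WITHOUT_BRACES"
```

Without the braces Bash sees $FRIENDS (which expands to Martin) followed by the ordinary characters [ 2 ].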

...reading input...

We already know that a script can be given different command-line parameters every time it's launched. These are internally accessible through the built-in variables $1, $2 and so on, but there's a major limitation to this approach: you have to give the script all the answers it might need before you launch it. What would really make an enquiry like this useful is if the script could do something, report and (depending on what happened) ask the user for further instructions in real time - without forcing us to start all over.

Don't worry. It is possible to ask for, and load, user input while a script is running. This is accomplished with the read command, which reads the value of one or more variables from STDIN (where you type from).

echo -n "Who is your best friend? "
read BEST_FRIEND
echo "Your Best Friend is $BEST_FRIEND"

If you don't give read the name of a target variable it will save what you entered into the built-in variable $REPLY. To read in more than one variable with only one call you would list all their names in a row:

read FIRST_FRIEND SECOND_FRIEND THIRD_FRIEND

Or, if you wanted all the values to go into an array, all nicely indexed, you would say so with the -a switch:

read -a FRIENDS_LIST

This would store all the names you enter in ${FRIENDS_LIST[0]}, ${FRIENDS_LIST[1]} and so on. Last but not least, read can get its input automatically, from an assigned file: we will see how to do this in the final example.
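Here's a minimal sketch of read taking its input from a file rather than the keyboard (the file name and contents are invented):

```shell
echo "Tux Antarctica" > record.txt
# Redirecting STDIN makes read consume the file instead of waiting for typing:
read NAME PLACE < record.txt
echo "$NAME lives in $PLACE"
```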

...and arithmetic

Occasionally, shell scripts can also do some maths work. They won't perform as well as languages designed specifically for such tasks, but they're more than acceptable for everyday light use. There are three ways to perform arithmetic operations in the shell. The first one is to launch the expr command inside inverted quotes:

Q=1;Q=`expr 4 + $Q \* 3`;echo $Q

Yes, this does look like one of our tame monkeys randomly hitting the keyboard. But the result, instead of a Shakespeare sonnet, is 7. Why? Well, first the Q variable is initialised. Then we ask expr to multiply it by 3 and add 4 to the result. The inverted quotes make the result, ie 7, go back into the Q variable. Note the backslash before the multiplication operator *: without it the shell would have seen a metacharacter meaning (in that context) 'the names of all the files in the current directory'. Not what you would use for calculations, right? A more popular construct for calculations is with double parentheses:

Q=1; (( Q = 4 + Q * 3 )); echo $Q

This form lets you use spaces, making it all a bit more readable, and also supports a C-like format:

((a += 1))

Another admissible syntax is the let operator:

let Q=Q+10
let "Q += 10"
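All three styles can be seen side by side; follow Q through each step:

```shell
Q=1
Q=`expr 4 + $Q \* 3`   # expr inside inverted quotes: Q is now 7
(( Q = Q + 3 ))        # double parentheses: Q is now 10
let "Q += 10"          # the let operator: Q is now 20
echo $Q
```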

An art lesson

The ImageMagick toolbox (www.imagemagick.org) is a collection of little programs for manipulating images in many, many ways, either with a GUI or straight from the command line. ImageMagick is included in just about any desktop-oriented GNU/Linux distribution, but many users are unaware of its existence or only know its graphical front-end, the display program. The true value of this package, however, lies in its command-line pieces, because they can be used to perform repetitive operations on many images at the greatest possible speed.

Here are some basic examples of ways in which you can use ImageMagick's most popular components (check the web page, there are many more):

convert -geometry 200x200 some_big_picture.jpg small_
convert -fill white -font helvetica -pointsize 100 -draw "text
100, 100 \"Wonderful flowers\"" original.jpg final.jpg

What's happening here? The first command is just rescaling some big images to 200x200 pixels. The second one is adding the caption to the flowers picture: it specifies the caption text and position as well as the font family and size. Convert can also be used to create simple animations: check the website or the man page for details. If, instead of text, you wanted to add a graphic logo to each image, the ImageMagick tool of choice would be Combine. Used as in this example, it would add your logo in the bottom-left-hand corner:

combine -gravity SouthWest -compose Over some_picture.png
my_logo.gif picture_with_logo.png

Of course, there's nothing to stop you doing both of these things - that is, cascading the commands above so that you have both a logo and a caption in the final image. Keep in mind one thing, however: JPEG and other common image formats discard some data in the compression process, meaning that when you save an image as a JPEG file it loses some quality. Consequently, a JPEG-modified version of a JPEG-modified version of a JPEG original might not be worth much at the end of the day.

Other, non-degrading formats should be used for repeated editing.

OK, let's get to work, putting everything we've learned so far together. Imagine that you have, every now and then, a pile of images in a folder. Say you want to put smaller versions of all the pictures online, each with its own overwritten comment. Start by writing a caption for each image and saving them in a separate text file, in this format:

picture_1.jpg Junior learning to swim
picture_2.jpg Daddy tasting the grilled lobster

Preparing the file above would be the only time-consuming part - the script using it would only be a few lines long.

while [ 1 ]
do
    read IMAGE_NAME CAPTION || break
    convert -sample 25%x25% -draw "text 10, 0 \"$CAPTION\"" \
        $IMAGE_NAME ${IMAGE_NAME}_thumbnail.jpg
    SIZE_OF_THIS_IMAGE=`ls -s ${IMAGE_NAME}_thumbnail.jpg | awk '{print $1}'`
    TOTAL_SPACE_NEEDED=$(( TOTAL_SPACE_NEEDED + SIZE_OF_THIS_IMAGE ))
done < $1
echo "The total space needed on the website will be $TOTAL_SPACE_NEEDED"

The while/do/done magic makes the script loop over the whole file provided as first argument ($1), one line at a time (we'll discuss shell control structures and loops in detail later). At each iteration, the read instruction loads the first string into IMAGE_NAME. Everything else after the first space is dumped (because there are no other variables listed) into CAPTION, just as we'd like. The script generates a thumbnail image with the same name as the original but with a _thumbnail extension.

The picture's height and width are reduced to one quarter of the original value (-sample 25%x25%). Next, the size in bytes of the resulting thumbnail is stored in SIZE_OF_THIS_IMAGE and added to TOTAL_SPACE_NEEDED. When there are no more lines to read, the loop is interrupted (note the break keyword) and we end this tutorial section by being told how many more megabytes of web storage we will have to buy...

Part 4: Loops, conditions, functions and more

Parts one to three of this tutorial looked at several basic commands that you can use straight away in simple scripts. In this final part, we'll be structuring and managing code for more complex tasks - perhaps work that's co-developed with other programmers, or that depends on the result of other programs or scripts. In such collaborative projects, how does a script know what's happening, or if something is true? The answer is that it looks at numerical clues left by the previous scripts or commands.

Every command-line program leaves an exit or return status behind after it runs. Exit status 0 means success, and errors are signalled with an integer code between 1 and 255; 126, for example, means that there was a problem with file permissions. When the last instruction in a script is an exit command with no arguments, the whole script returns the exit status of the last command executed before it. The special variable $? always contains the exit status of the last command executed in a script or at the prompt. To verify this, just type the following two commands (the parentheses run exit in a subshell, so your terminal stays open):

(exit 50)
echo $?

Checks on external objects or events can be performed with the test built-in, or by using the square bracket operators. The latter is the more common option; here are some examples of it:

[ -f mailrc ]
[ -d MailDir ]
[ "$COUNTER" -eq 1 ]
[ "$NAME" = "Carl" ]

As you can see, these tests can be performed on files, numbers or strings. In this example, the first two return true if mailrc is an actual file and MailDir is a directory. The third will be false unless $COUNTER is equal to 1, and the last will fail if $NAME contains a value other than Carl. (Find details of the syntax in the Bash man page.) Remember that the (( ... )) and let constructs also return a 0 exit status if the calculations they perform yield a non-zero value. As confusing as this is, at least it's consistent.
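These exit statuses are typically consumed by Bash's if statement; a minimal sketch (the file name is just an example):

```shell
touch mailrc      # make sure the file exists for this demonstration
if [ -f mailrc ]; then
    MSG="mailrc exists"
else
    MSG="no mailrc here"
fi
echo "$MSG"
```

if runs its branch according to whether the test between the brackets returned exit status 0.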

Using for loops

Once tests and exit codes have let you know what's going on with your script, you can start to do something about it. A very simple script follows one straight flow: do this, then that, then this other thing... More complex scripts must repeat some steps several times, or make choices on their own, while running. Let's look at which Bash constructs can give a script this kind of autonomy. A for loop simply does something to or with each element of a fixed list:

for XYZ in list_of_arguments
do
  something using the current value of XYZ
done

You can also use for loops interactively, as long as you add semicolons: one after the list (before do) and one after the loop body (before done). Try typing this at the prompt:

for XYZ in "Hello" "Johnny Dear"; do echo $XYZ; done

You can work through a two-dimensional array with an iterative loop - going over each entry in the array once for the first dimension, and once for the second - but a more readable alternative might be the set command, which assigns each sub-string of a list to a positional variable:

for member in "John London" "Ann Liverpool" "Bill Glasgow"
do
  set -- $member
  echo "$1 lives in $2"
done

This is really useful when parsing plain text databases where each line is a separate record. In general, however, the power of this and other loops is only unleashed when the argument list is generated on the fly, from the output of some other command.

for file in $( find / -type l -name '*.html' )

The first instruction above is all you need to find and process, in real time, all the files in your system that have a .html extension but are just links (-type l) to other pages.

While and until commands

Sometimes you'll want to do something to each element of a more or less fixed set. Other times you need to carry out an action an unknown number of times, until something else happens. In these cases you need the while and until commands. while tests for a condition at the top of a loop, and keeps looping as long as that condition is true. until has the same syntax but does the opposite - that is, it keeps looping as long as its condition remains false. These two loops do the same thing:

while [ condition is false ]


until [ condition is true ]
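As a runnable sketch of the same idea, here is a counter printed three times, first with while and then with the equivalent until loop:

```shell
#!/bin/sh
i=1
while [ "$i" -le 3 ]    # keep looping while the test succeeds
do
    echo "pass $i"
    i=$((i + 1))
done

j=1
until [ "$j" -gt 3 ]    # keep looping until the test succeeds
do
    echo "pass $j"
    j=$((j + 1))
done
```

Both loops print pass 1 through pass 3; only the sense of the test changes.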

You can terminate these loops early if you need to, with break or continue statements. break does just what you would expect of it: it exits the loop that it's in. continue only stops the current iteration: it skips the remaining commands in the body and moves straight on to the next pass. Loops built with these constructs can also be nested. Of course, a break will have different effects depending on where it sits in the loop hierarchy. To understand this, try uncommenting the break instructions in this script, one at a time:

#! /bin/bash
A=0
while [ "$A" -lt "5" ]
do
  echo "A: $A"
  if [ "$A" == "2" ]
  then
    echo "      Hello from outer loop"
    # break
  fi
  B=0
  while [ "$B" -lt "4" ]
  do
    if [ "$A" == "1" ]
    then
      echo "      Hello from inner loop"
      # break
    fi
    echo "      B : $B"
    let B=$B+1
  done
  let A=$A+1
done

What if

Sometimes, a program doesn't have to iterate over the same route again and again, but must choose a route to go down once. An if/then instruction just decides which of two paths to follow in a flow diagram, according to the exit status of a test or a command. Syntax-wise, the if test and if [ condition is true ] forms are equivalent. if/then blocks can be nested if needed. In this simple example, elif is simply an abbreviation for 'else if':

if [ something_is_true ]
then
  # do this
elif [ something_else_is_true ]
then
  # do that
else
  # just do whatever is written here
fi
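As a concrete sketch, the same skeleton can work out what kind of object a name points to (/tmp is just a handy example that exists on virtually every Linux system):

```shell
#!/bin/sh
NAME=/tmp
if [ -f "$NAME" ]
then
    echo "$NAME is a regular file"
elif [ -d "$NAME" ]
then
    echo "$NAME is a directory"
else
    echo "$NAME does not exist"
fi
```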

An if/then block is OK if there are only two choices, but what if there are more? Sure, you could use several such blocks nested or cascaded in some way, but things would get ugly quite fast. Luckily, we have the case instruction. This is just the equivalent of the switch keyword used in C programming. Don't worry if you've never used C before: this basic example of an interactive menu shows all the syntax you need to know.

while [ "$os" == "" ]
do
  echo "Choose an operating system"
  echo "[L]inux"
  echo "[W]indows"
  read os
  case "$os" in
    "L" | "l" )
      echo "Excellent Choice"
      ;;
    "W" | "w" )
      echo "Yuck! Are you sure?"
      ;;
    * )
      echo "Come on, make your choice!"
      os=""    # clear the choice so the menu repeats
      ;;
  esac
done

Normally, case works on one test variable - $os in this script. The various branches of code are separated by double semicolons. Each one begins with a list of all the possible values of the test variable that will trigger the execution of the following commands. Equivalent values are separated by the pipe character (|). The various alternatives are evaluated from top to bottom, stopping at the first one that matches. In the script above, if you type 'L' or 'l', the execution will end by printing 'Excellent Choice' to the screen.

Should you enter 'W' or 'w' (why?), the first branch will be ignored and you'll get the reaction you deserve. The final option, *), simply does what is necessary if $os has any other value not explicitly mentioned in the previous cases. Remember to always add such a final default branch - if nothing else, to send back an error message.
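A case block also works fine with one branch per line; here's a compact sketch (the variable and values are just examples) showing the double semicolons and the default branch at their tersest:

```shell
animal=cat
case "$animal" in
    cat|dog) echo "a pet" ;;
    *)       echo "something else" ;;
esac
```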

Running shell functions

The first time you try out these techniques, chances are that you'll spot a lot of small (and some not so small) chunks of repetitive code, in which only a few initial parameters change. How can you structure your script to avoid repetition and make the general flow of the source code easier to see? The first step is to create some shell functions. These are simply blocks of code that implement some specified task, and they must be declared before they can be called. The most portable way to do so is this:

Name_Of_Your_Function () {
  # commands go here
}

To run a shell function, you invoke its name and provide any required arguments in the right order. As with a stand-alone script, these arguments will be available within the function in the variables $1, $2 and so on. Functions also have their own local I/O streams, which can be redirected, or can be fed with HERE documents:

A_Shell_Function <$file
A_Shell_Function Christmas Easter
Another_Shell_Function <<Function_data
some input lines
Function_data

The first instruction sends A_Shell_Function the content of $file; the second runs it with Christmas as the first argument and Easter as the second; the third feeds Another_Shell_Function everything up to the Function_data marker as its standard input. Variables can be declared local to a shell function, using the local command. This also makes recursion possible - if you really want it in something as computationally inefficient as a shell script. Last but not least, even shell functions return an exit status. This will relate to the last command executed, or to the argument given to the final return command. Again, the calling script will find the exit status in $?.
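Putting these pieces together, here's a sketch of a function of our own invention, count_lines, that keeps its working variable local, returns an explicit status, and lets the caller check $?:

```shell
#!/bin/sh
count_lines () {
    local n                          # n exists only inside the function
    n=$(wc -l < "$1") || return 1    # give up with status 1 if the file is unreadable
    echo "$n"
    return 0
}

printf 'one\ntwo\nthree\n' > /tmp/count_demo.$$
count_lines /tmp/count_demo.$$       # prints 3
echo "exit status: $?"               # prints 0
rm -f /tmp/count_demo.$$
```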

And that brings us to the end of this tutorial on the shell and shell scripting - hopefully you've learned a lot, and feel much more comfortable with the idea of writing your own little scripts to automate common tasks. Have fun!

First published in Linux Format
