Unix Shell Scripting Book
Table of Contents
1 INTRODUCTION
1.1 OVERVIEW
1.2 WHY UNIX?
1.3 FEATURES OF UNIX
1.4 HOW UNIX IS ORGANIZED
2 BASIC UNIX COMMANDS
2.1 HOW TO LOGIN
2.2 FIND INFORMATION ABOUT YOUR SYSTEM
1 Introduction
1.1 Overview
Unix is the most widely used computer operating system (OS) in the world. Unix has been
ported to run on a wide range of computers, from handheld personal digital assistants (PDAs)
to inexpensive home computing systems to some of the world's largest supercomputers. Unix
is a multiuser, multitasking operating system, which enables many people to run many
programs on a single computer at the same time. After more than three decades of use, Unix is
still regarded as one of the most powerful, versatile, flexible and (perhaps most importantly)
reliable operating systems in the world of computing.
The UNIX operating system was designed to let a number of programmers access the computer
at the same time and share its resources.
The operating system controls all of the commands from all of the keyboards and all of the data
being generated, and permits each user to believe he or she is the only person working
on the computer.
This real-time sharing of resources makes UNIX one of the most powerful operating systems
ever.
Although UNIX was developed by programmers for programmers, it provides an environment
so powerful and flexible that it is found in businesses, sciences, academia, and industry.
Many telecommunications switches and transmission systems also are controlled by
administration and maintenance systems based on UNIX.
While initially designed for medium-sized minicomputers, the operating system was soon
moved to larger, more powerful mainframe computers.
As personal computers grew in popularity, versions of UNIX found their way into these
boxes, and a number of companies produce UNIX-based machines for the scientific and
programming communities.
The Unix Operating System has a number of features that account for its flexibility, stability,
power, robustness and success. Some of these features include:
• Unix hides the details of the low-level machine architecture from the user, making
application programs easier to port to other hardware.
• Unix provides a simple, but powerful command line User Interface (UI).
• The user interface provides primitive commands that can be combined to make larger
and more complex programs from smaller programs.
• Unix implementations provide a hierarchical file system, which allows for effective and
efficient implementation while providing a solid, logical file representation for the user.
• Unix provides a consistent format for files, i.e. the byte stream, which aids in the
implementation of application programs. This also provides a consistent interface for
peripheral devices.
1.1.1 The kernel
The kernel decides who will use a resource, for how long and when. It runs your programs
(or sets them up to execute binary files).
The kernel acts as an intermediary between the computer hardware and the various
programs, applications and shells.
The kernel controls the hardware and turns parts of the system on and off at the
programmer's command. If we ask the computer to list (ls) all the files in a directory, the
kernel tells the computer to read all the files in that directory from the disk and display
them on our screen.
1.1.2 ssh
SSH allows users of Unix workstations to secure their terminal and file transfer connections.
This section shows straightforward ways to make these secure connections. SSH provides the
functional equivalent of the 'rlogin' utility, but in a secure fashion. SSH is freely available for
Unix-based systems, and should be installed with an accompanying man page. ssh connects and
logs into the specified hostname (with an optional user name).
The user must prove his/her identity to the remote machine using one of several
methods, depending on the protocol version used.
The general syntax of ssh is:
$ ssh [-l login_name] hostname
Command options:
• -l login_name   Specifies the user to log in as on the remote machine:
$ ssh -l mahesh 172.24.0.252
OR
$ ssh mahesh@172.24.0.252
1.2.2 who
The who command displays a list of users currently logged in to the local system in detailed
format.
It displays each user's:
• login name,
• login device (TTY port),
• login date and time.
The command reads the binary file /var/adm/utmpx to obtain this information, including
where the users logged in from. If a user logged in remotely, the who
command displays the remote host name or Internet Protocol (IP) address in the last column of
the output.
It is often useful to know the user IDs of logged-in users so you can mail them
messages; who displays this informative listing of users.
[root@sql ~]# who
stuser1 pts/0 2011-12-12 09:58 (172.24.1.180)
htuser7 pts/1 2011-12-12 10:57 (172.24.0.122)
stuser1 pts/2 2011-12-12 09:56 (172.24.1.180)
apuser1 pts/3 2011-12-12 10:53 (172.24.8.40)
kjuser3 pts/4 2011-12-12 11:21 (172.24.0.130)
oracle pts/5 2011-12-12 10:45 (172.24.8.40)
htuser6 pts/6 2011-12-12 11:09 (172.24.0.129)
htuser10 pts/7 2011-12-12 11:02 (172.24.0.241)
Here:
• The 1st column shows the username of each user logged on to the server.
• The 2nd column shows the device name of that user's terminal. These are the filenames
associated with the terminals (htuser7's terminal is pts/1).
• The 3rd and 4th columns show the date and time of logging in.
• The last column shows the machine name/IP from which the user logged in.
who has more options which can be used; see its man page.
1.2.3 w
Show who is logged in and what they are doing.
UNIX maintains an account of all users who are logged on to the system; in addition, w shows
what each user is doing on his machine. It displays information about the users currently on
the machine and their processes.
The header shows, in this order: the current time, how long the system has been running, how
many users are currently logged on, and the system load averages for the past 1, 5, and 15
minutes.
[mahesh@station60 ~]$ w
18:35:12 up 19:11, 7 users, load average: 0.01, 0.03, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
htuser1 pts/1 station111.examp 16:25 2.00s 2.22s 0.53s sqlplus
stuser5 pts/3 station121.examp 18:30 9.00s 2.55s 0.77s vim
The following entries are displayed for each user: login name, the tty name, the remote host,
login time, idle time, JCPU, PCPU, and the command line of their current process.
The JCPU time is the time used by all processes attached to the tty.
The PCPU time is the time used by the current process, named in the "WHAT" field.
1.2.4 uname
Know your machine's characteristics.
The uname command displays certain features of the operating system running on your machine.
By default it simply displays the name of the operating system.
Syntax
$ uname [-a] [-i] [-n] [-p] [-r] [-v]
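As a sketch, a few typical invocations of the options listed above (the exact output depends on your machine):

```shell
# Typical uname runs; the outputs are illustrative, not fixed
uname        # operating system name, e.g. Linux
uname -r     # kernel release
uname -n     # network node (host) name
uname -a     # all available information on one line
```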
1.2.5 uptime
Tell how long the system has been running.
uptime gives a one-line display of the following information: the current time, how long the
system has been running, how many users are currently logged on, and the system load
averages for the past 1, 5, and 15 minutes.
[mahesh@station60 ~]$ uptime
18:56:15 up 19:32, 8 users, load average: 1.60, 1.11, 0.63
[mahesh@station60 ~]$
1.2.6 users
It prints only usernames of current users who are logged in to the current
host(server).
[root@sql ~]# users
apuser1 apuser1 apuser2 apuser3 apuser4 gbuser12 htuser13
htuser6 htuser7 kjuser3 kjuser4 nagnath nagnath oguser10 oracle
oracle rkuser10 rkuser18 rkuser2 rkuser32 rkuser9 root ssuser1
stuser1 stuser1
[root@sql ~]#
1.2.7 date
The date command can be used to display or set the date. A user with superuser privileges
can set the date by supplying a numeric string to the command (see the -s option below).
Fortunately there are options to manipulate the output format. The format option is preceded
by a +, followed by any number of field descriptors; each descriptor is a % followed by a
character indicating which field is desired. The allowed field
descriptors are:
%m month of year (01-12)
%n insert a newline
%d day of month (01-31)
%y last two digits of year (00-99)
%D date as mm/dd/yy
%H hour (00-23)
%M minute (00-59)
%S second (00-59)
%T time as HH:MM:SS
%j day of year (001-366)
%w day of week (0-6), Sunday is 0
%a abbreviated weekday (Sun-Sat)
%h abbreviated month (Jan-Dec)
%r 12-hour time w/ AM/PM (e.g., "03:59:42 PM")
Examples
$ date
Mon Jan 6 16:07:23 PST 1997
-s datestr   Sets the time and date to the value specified in datestr. The datestr may
contain month names, timezones, 'am', 'pm', etc.
Examples
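As a sketch, the field descriptors above can be combined into a single format string after a leading + (the values shown will differ when you run the commands):

```shell
# Combining date field descriptors (outputs are illustrative)
date '+%d/%m/%y'          # day/month/year, e.g. 06/01/97
date '+%H:%M:%S'          # current time as HH:MM:SS (same as %T)
date '+Today is %a, day %j of the year'
```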
1.2.8 cal
Print a 12-month calendar (beginning with January) for the given year, or a one-month calendar
of the given month and year. month ranges from 1 to 12; year ranges from 1 to 9999. With no
arguments, print a calendar for the current month. (Do not confuse cal with the separate
calendar reminder utility, which reads a file named calendar in your home directory.)
Syntax
$ cal [options] [[month] year]
-j Display Julian dates (days numbered 1 to 366,
starting from January 1).
-m Display Monday as the first day of the week.
-y Display the entire year.
-V Display version information.
month Specifies the month for which you want the calendar displayed. Must be the numeric
representation of the month. For example: January is 1 and December is 12.
1.2.9 ifconfig
If a user wants to check the IP address of his machine, he can use the ifconfig command.
ifconfig is used to configure the kernel-resident network interfaces. It is used at boot time
to set up interfaces as necessary. After that, it is usually only needed when debugging or
when system tuning is needed.
[root@station79 ~]# ifconfig
1.2.10 hostname
hostname command simply displays the fully qualified name of computer.
[mahesh@station60 ~]$ hostname
station60.example.com
[mahesh@station60 ~]$
1.2.11 free
Display amount of free and used memory in the system.
free displays the total amount of free and used physical and swap memory in the system, as
well as the buffers used by the kernel. The shared memory column should be ignored; it is
obsolete.
By default the figures are displayed in kilobytes. To display them in another unit (GB, MB
or KB), use the following options:
• $ free -k
Show the output in kilobytes (the default).
• $ free -g
Show the output in gigabytes.
• $ free -m
Show the output in megabytes.
1.2.12 df -h
The df command displays information about total space and available space on a file system.
[mahesh@station60 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 494M 26M 444M 6% /boot
/dev/sda2 30G 15G 14G 52% /
/dev/sda7 2.0G 1.3G 624M 67% /home
/dev/sda5 6.8G 1.9G 4.6G 30% /var
/dev/sda3 7.7G 3.8G 3.6G 51% /usr
2 Unix Filesystem
The Unix file system is a methodology for logically organizing and storing large quantities of
data such that the system is easy to manage. A file can be informally defined as a collection of
(typically related) data, which can be logically viewed as a stream of bytes (i.e. characters). A
file is the smallest unit of storage in the Unix file system.
By contrast, a file system consists of files, relationships to other files, as well as the attributes of
each file. File attributes are information relating to the file, but do not include the data
contained within the file. File attributes for a generic operating system might include (but are
not limited to) the file's name, size, owner, group, access permissions, and timestamps of
creation and last modification.
Additionally, file systems provide tools which allow the manipulation of files, provide a
logical organization as well as provide services which map the logical organization of files to
physical devices.
From the beginners’ perspective, the Unix file system is essentially composed of files and
directories. Directories are special files that may contain other files.
The Unix file system has a hierarchical (or tree-like) structure with its highest level directory
called root (denoted by /, pronounced slash). Immediately below the root level directory are
several subdirectories, most of which contain system files.
Below this can exist system files, application files, and/or user data files. Similar to the concept
of the process parent-child relationship, all files on a Unix system are related to one another.
That is, files also have a parent-child existence. Thus, all files (except one) share a common
parental link, the top-most file (i.e. /) being the exception.
Below is a diagram (slice) of a "typical" Unix file system. As you can see, the top-most directory
is / (slash), with the directories directly beneath being system directories. Note that as Unix
implementations and vendors vary, so will this file system hierarchy. However, the organization
of most file systems is similar.
Tasks
• Making files available to users.
• Managing and monitoring the system's disk resources.
• Protecting against file corruption, hardware failures and user errors through backups.
• Securing the filesystems: determining which users need, and have, access to which files.
• Adding more disks, tape drives, etc. when needed.
When a Unix operating system is installed, some directories (depending upon the particular
Unix being installed) are created under / (root), such as /usr, /bin, /etc, /tmp, /home and /var.
• /etc - contains all system configuration files and the files which maintain information
about users and groups.
• /bin - contains binary executable files (commands that can be used by normal users as well).
• /usr - contains manual pages and executable commands; on some systems it is also the
default location for users' home directories.
• /tmp - temporary files created by the system or users; removed when the server reboots.
• /dev - contains all device files, i.e. logical file names mapped to physical devices.
• /home - default directory allocated for the home directories of normal users when the
administrator does not specify another directory.
• /lib - contains library files.
• /mnt - contains mount points for mounted devices.
• /proc - contains files related to system processes.
• /root - the root user's home directory (note this is different from /).
• /var - contains system log files and message files.
• /sbin - contains system-administration executables (commands for which normal
users generally do not have privileges).
Example: 1
[mahesh1@station60 ~]$ ls
case.sh for.sh hello.sh if.sh
[mahesh1@station60 ~]$
The ls command without any options displays the files and directories in the current directory.
Column 1 - Tells us the type of file, what privileges it has, and to whom these privileges are
granted. There are three types of privileges. Read and write privileges are easy to understand.
The exec privilege is a little more subtle. We can make a file "executable" by giving it exec
privileges. This means that commands in the file will be executed when we type the file name
at the UNIX prompt. For a directory (which, to UNIX, is a file like any other file), it means the
directory can be "scanned" to see what files and subdirectories are in it. Privileges are granted
to three levels of users:
1) The owner of the file. The owner is usually, but not always, the userid that created the file.
2) The group to which the owner belongs. At GSU, the group is usually, but not always,
designated as the first three letters of the userid of the owner.
3) Everybody else who has an account on the UNIX machine where the file resides.
Column 2 - Number of links
Column 3 - Owner of the file. Normally the owner of the file is the user account that originally
created it.
Column 4 - Group under which the file belongs. This is by default the group to which the
account belongs or first three letters of the userid. The group can be changed by the chgrp
command.
Column 5 - Size of file (bytes).
Column 6 - Date of last update
Column 7 - Name of file
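The columns described above come from a long listing (ls -l). A minimal sketch, generated in a scratch directory (the owner, group, size and date will differ on your machine):

```shell
# Create a file in a scratch directory and view its long listing
cd "$(mktemp -d)"
echo 'echo hi' > hello.sh
ls -l hello.sh
# Typical output (fields: mode, links, owner, group, size, date, name):
# -rw-rw-r-- 1 mahesh1 mahesh1 8 Sep 20 20:06 hello.sh
```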
Example: 3
$ ls -ld /usr
[mahesh1@station60 ~]$ ls -ld /usr
drwxr-xr-x 16 root root 4096 Sep 20 20:06 /usr
[mahesh1@station60 ~]$
Rather than list the files contained in the /usr directory, this command lists information about
the /usr directory itself (without generating a listing of the contents of /usr). This is very useful
when we want to check the permissions of the directory but not the content of the directory.
-a Shows us all files, even files that are hidden (these files begin with a dot.)
Example: 4
[mahesh1@station60 ~]$ ls -a
.bash_logout .bashrc .emacs hello.sh .kde .bash_profile
case.sh for.sh if.sh .mozilla
[mahesh1@station60 ~]$
3.1.2 cat
The cat command is used to read a file and also to create one.
Examples:
a)creating a file
[mahesh1@station60 ~]$ cat >hello.txt
hi
welcome to unix
(type the text, then press Ctrl-D to finish)
[mahesh1@station60 ~]$
b)reading a file
[mahesh1@station60 ~]$ cat hello.txt
hi
welcome to unix
[mahesh1@station60 ~]$
3.1.3 more
The more command is helpful when dealing with a small xterm window, or if you want
to easily read a file without using an editor. more is a filter for paging through text one
screenful at a time.
Running more on a file automatically clears the screen and displays the start of the file:
1 #!/bin/bash
2 beep 659 120 # Treble E
3 beep 0 120
4 beep 622 120 # Treble D#
5 beep 0 120
--More--(5%) <---- This line shows how far you have reached
in the file, relative to the entire file size.
If you hit space, then more will move down the file the height of the terminal window you have
open, to display new information to you.
For example, more data.txt displays the data.txt file, using the more pager to view the file at
your own pace.
Given more than one file (for example, more file1 file2), more displays the first file, followed
by the second file, informing you of the file change.
more +/Erik file displays the file from the first line that contains "Erik". This is quite useful if
you are looking at patch files and know the filename you are looking for.
3.1.4 cp
1. The cp command copies a file or group of files. It creates an exact image of the file on disk
with a different name. The syntax requires at least two filenames to be specified on the
command line. When both are ordinary files, the first is copied to the second:
[mahesh1@station60 ~]$ cp file1 file2
If the destination file (file2) does not exist, it will first be created before copying takes place.
If it does exist, it will simply be overwritten without any warning from the system.
2. cp is often used with the shorthand notation . (dot) to signify the current directory as the
destination. For instance, to copy the file userlist.txt from /home/mahesh to your current
directory, use the following command:
$ cp /home/mahesh/userlist.txt .
3. cp can also be used to copy more than one file with a single invocation of the command. In
that case the last filename must be a directory:
$ cp file1 file2 file3 backup
Here the files file1, file2, file3 and likewise will all be copied into the backup directory.
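The forms above can be tried end to end in a scratch directory (the filenames are the illustrative ones used in the text):

```shell
cd "$(mktemp -d)"
mkdir backup
echo "some data" > file1
cp file1 file2          # file2 is created as an exact copy of file1
cp file1 file2 backup   # last argument is a directory: both files land in backup/
ls backup               # lists file1 and file2
```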
3.1.5 mv
The mv command has two distinct functions: renaming files and moving them to another
directory.
mv doesn't create a copy of the file; it renames it. No additional disk space is consumed
during renaming. To rename the file hello.sh to welcome.sh, use the following command:
[mahesh1@station60 ~]$ mv hello.sh welcome.sh
mv simply replaces the filename in the existing directory entry with the new name.
mv can also move a group of files to a directory. The following command moves files to a
backup directory:
$ mv file1 file2 file3 backup
3.1.6 rm
The rm command deletes one or more files. It normally operates silently and should be used
with caution.
A command such as rm file* is dangerous to use: it will remove all the files whose names
start with file and end with any other characters.
The following command is more drastic still; it will delete all files in the current directory:
[mahesh1@station60 ~]$ rm *
-i option:
With the -i (interactive) option, rm prompts for confirmation before removing each file.
3.1.7 vi
vi has two modes. In insert mode, the text you type is inserted at the position of
the blinking cursor. When in command mode, keystrokes perform special functions rather than
actually inserting text into the document. (This makes up for the lack of mouse, menus, etc.!) You
must know which keystroke will switch you from one mode to the other:
• To switch to insert mode: press i (or a, or o)
• To switch to command mode: press Esc
Getting out: When you want to get out of the editor, switch to command mode (press Esc) if
necessary, and then
• type :wq Rtn to save the edited file and quit, or
• type :q! Rtn to quit the editor without saving changes, or
• type ZZ to save and quit (a shortcut for :wq Rtn), or
• type :w filename to save the edited file to new file "filename"
Moving Around: When in command mode you can use the arrow keys to move the cursor up,
down, left, right. In addition, these keystrokes will move the cursor:
h left one character
l right one character
k up one line
j down one line
b back one word
w forward one word
{ up one paragraph
} down one paragraph
$ to end of the line
^B back one page
^F forward one page
17G to line #17
G to the last line
Inserting Text: From command mode, these keystrokes switch you into insert mode, with new
text being inserted:
• i insert before the cursor
• a append after the cursor
• o open a new line below the cursor
Cutting, Copying, Pasting: From command mode, use these keystroke (or keystroke-
combination) commands for the described cut/copy/paste function:
• x delete (cut) character under the cursor
• 24x delete (cut) 24 characters
• dd delete (cut) current line
• 4dd delete (cut) four lines
• D delete to the end of the line from the cursor
• dw delete to the end of the current word
• yy copy (without cutting) current line
• 5yy copy (without cutting) 5 lines
• p paste after current cursor position/line
• P paste before current cursor position/line
Searching for Text: Instead of using the "Moving Around" commands, above, you can go
directly forward or backward to specified text using "/" and "?". Examples:
• /wavelet Rtn jump forward to the next occurrence of the string "wavelet"
• ?wavelet Rtn jump backward to the previous occurrence of the string "wavelet"
• n repeat the last search given by "/" or "?"
Replacing Text: This amounts to combining two steps; deleting, then inserting text.
• r replace 1 character (under the cursor) with another character
• 8r replace each of the next 8 characters with a given character
• R overwrite; replace text with typed input, ended with Esc
• C replace from cursor to end of line, with typed input (ended with Esc)
• S replace entire line with typed input (ended with Esc)
• 4S replace 4 lines with typed input (ended with Esc)
• cw replace (remainder of) word with typed input (ended with Esc)
Miscellany: The commands on these two pages are just the start. Many more powerful
commands exist in VI. More complete descriptions of all the possible commands are available on
the web; search for "vi tutorial" or "vim tutorial". Useful commands include
u undo the last change to the file (and type "u" again to re-do the change)
U undo all changes to the current line
^G show the current filename and status and line number
:set nu Rtn show all line numbers (":set nonu" gets rid of the numbers)
^L clear and redraw the screen
:%s/Joe/Bob/g Rtn change every "Joe" to "Bob" throughout the document
J join this line to the next line
5J join 5 lines
xp exchange two characters (actually the two commands x=delete and p=paste)
:w Rtn write (save) the current text, but don’t quit VI
:12,17w filename Rtn write lines #12-17 of the current text to a (new) text file
[root@shekhar ~]# ps -f
UID PID PPID C STIME TTY TIME CMD
root 5794 5791 0 10:44 pts/0 00:00:00 -bash
root 5814 5794 0 10:44 pts/0 00:00:00 ps -f
As you can see from the above output, a process for the "bash" shell is running. In this
case, the PID (a unique number assigned to the process) for the shell process is 5794.
The second thing that happens when you log in is that a special file named
.bash_profile automatically gets executed. Every user has this file sitting in his/her
home directory (the directory assigned to each user as his/her home). If you write any
UNIX command in this .bash_profile file, it will get executed every time you log in.
Commands like alias are usually placed in the .bash_profile file. A sample
.bash_profile is shown below.
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
PATH=$PATH:$HOME/bin
export PATH
alias rm='rm -i'
clear
echo "-----------------------------------------"
echo "Welcome $USER"
echo "-----------------------------------------"
-----------------------------------------
Welcome root
-----------------------------------------
[root@shekhar ~]# pwd
/root This is your HOME directory
[root@shekhar ~]#
A shell is a user program, or an environment provided for user interaction. The shell is a
command language interpreter that executes commands read from the standard input device
(keyboard) or from a file.
The shell is not part of the system kernel, but uses the system kernel to execute programs,
create files, etc.
Shells typically installed on a Linux system (as listed in /etc/shells) include:
/bin/sh
/bin/bash
/sbin/nologin
/bin/tcsh
/bin/csh
/bin/ksh
4. The problem is that you do not have execute permission on the file,
so let us give it execute permission:
$ chmod u+x hello.sh
5. Once the problem above is fixed, find the path of your script. Suppose in this case the full
path of hello.sh is /home/mahesh/hello.sh; now execute the script as follows:
$ /home/mahesh/hello.sh [Enter]
Hello World!
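The hello.sh used in these steps is not shown above; a minimal version consistent with the output would be:

```shell
#!/bin/bash
# hello.sh - a minimal sketch of the script executed above
echo "Hello World!"
```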
#!/bin/bash   This first line indicates which interpreter to use when running this script.
The "shebang" is a special comment. Since it is a comment, it will not be executed when the
script is run. Instead, before the script is run, the shell calling the script checks for the #!
pattern. If found, it invokes the script using that interpreter. If no #! is found, most shells will
use the current shell to run the script.
Since shells are installed in different locations on different systems, you may have to alter
the #! line.
For example, the bash shell may be in /bin/bash, /usr/bin/bash or /usr/local/bin/bash.
Setting the shell explicitly like this ensures that the script will be run with the same interpreter
regardless of who executes it (or what their default shell may be).
6 Shell Variables
A variable is a placeholder for storing data. The value of a variable can be changed during
program execution.
The value assigned could be a number, text, a filename, a device, or any other type of data. A
variable is nothing more than a pointer to the actual data. The shell enables you to create,
assign, and delete variables.
Shell variables can be used at the command line and/or within shell programs.
While similar to variables in other programming languages, shell variables have some
distinguishing characteristics, covered by the rules below.
6.1.2 Rules for Naming Variables (Both UDV and System Variables)
(1) A variable name must begin with a letter or an underscore (_), followed by zero or more
alphanumeric or underscore characters. For example, valid shell variable names include:
HOME
SYSTEM_VERSION
Vechno
(2) Don't put spaces on either side of the equal sign when assigning a value to a variable. For
example, the following variable declaration works without error:
$ no=10
But there will be a problem with any of the following variable declarations:
$ no =10
$ no= 10
$ no = 10
(3) Variable names are case-sensitive, just like filenames in Linux. For example:
$ no=10
$ No=11
$ NO=20
$ nO=2
All of the above are different variables, so to print the value 20 we have to use $ echo $NO,
and not $no, $No, or $nO.
(4) You can define a NULL variable as follows (a NULL variable is a variable which has no value
at the time of definition). For example:
$ vech=
$ vech=""
If we now run $ echo $vech, nothing will be shown, because the variable has no value, i.e. it
is a NULL variable.
Similarly, if we try to access a variable that was never declared, a blank line will appear,
meaning the value of that variable is null or not assigned:
[mahesh@station60 ~]$ echo $b

[mahesh@station60 ~]$
In the above example, note that b has no value assigned, hence it returns null.
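A short sketch tying these cases together:

```shell
vech=""            # NULL variable: defined, but empty
echo "$vech"       # prints a blank line
vech=Bus
echo "$vech"       # now prints: Bus
echo "$undeclared" # never assigned: also prints a blank line
```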
Note the spaces on either side of the operator; these are mandatory and a frequent source of
errors. Some of the possible operators include:
addition +
subtraction -
multiplication * # must be written with \ before the *
(See the example below)
division /
modulus %
$ I=10
$ expr $I + 2 [Enter] # same using a variable
12
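The multiplication example referred to above: without the backslash, the shell would expand * to the filenames in the current directory, so it must be escaped:

```shell
I=10
expr $I \* 2   # the backslash keeps the shell from expanding *; prints 20
```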
There is an alternative way to perform arithmetic calculations, available in some of the newer
shells (e.g. bash, ksh93). This newer method (sometimes referred to as let) uses the following
syntax: $((expression)) .
For example:
$ X=10 [Enter]
$ echo $((X + 2)) [Enter] # note no $ on X
12
$ echo $PATH
The above command displays the search path of your system; observe that /bin is present in
the current path.
Now type the following command:
[mahesh1@station60 ~]$ which ls
/bin/ls
[mahesh1@station60 ~]$
The which command tells us the path of the ls command, i.e. the ls command's executable
code is stored in the /bin directory, and since /bin is set in the PATH variable we can execute
the ls command successfully.
[mahesh1@station60 ~]$ ls
a.txt case.sh for.sh hello.sh hello.txt if.sh
[mahesh1@station60 ~]$
If /bin is removed from the PATH variable, we can no longer execute the ls command:
[mahesh1@station60 ~]$ ls
-bash: ls: command not found
Now set your PATH variable back as it was, and you will again be able to execute the ls
command.
By adding the directory containing your shell script to PATH, you can execute the script
without giving its full path, just like any executable command.
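As a sketch (the scratch directory here stands in for something like /home/mahesh/bin), placing a script in a directory that is on PATH lets you run it by name:

```shell
# Put a script in a directory, add that directory to PATH, then run it by name
dir=$(mktemp -d)
printf '#!/bin/bash\necho "Hello World!"\n' > "$dir/hello.sh"
chmod u+x "$dir/hello.sh"
PATH=$PATH:$dir
export PATH
hello.sh          # found via PATH: prints Hello World!
```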
When a command is enclosed in backquotes, the output of the command is substituted at the
location of the leftmost backquote. Note that these are not the same character as the single
quote mark. For example, if we wanted to output:
My current directory is: /home/mthomas
we could type:
$ echo "My current directory is: `pwd`" [Enter]
Note that the output of pwd is substituted exactly at the location of the backquoted command.
We can also perform assignment using the backquote characters, for example:
$ CUR_DIR=`pwd` [Enter] # note no spaces around the =
or with respect to the expr command:
$ I=10 [Enter]
$ I=`expr $I + 1` [Enter]
$ echo $I [Enter]
11
An alternative notation for command substitution (present in more modern shells) is the
$(command) syntax. (Technically, a single backquote character is called a grave accent, but
they are sometimes informally referred to as backticks, or back tick marks.) This enables one
to do the following:
$ CUR_DIR=$(pwd) [Enter]
$ echo "My current directory is: $CUR_DIR" [Enter]
My current directory is: /home/mthomas
Note that with single quotes in this example, the value stored in the $CUR_DIR variable would
not be displayed. It is the practice of the author to always enclose text and variables within
double quotes, and escape any special characters using the backslash.
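A small sketch of the difference between double and single quotes with command substitution:

```shell
CUR_DIR=$(pwd)
echo "My current directory is: $CUR_DIR"   # double quotes: the variable expands
echo 'My current directory is: $CUR_DIR'   # single quotes: printed literally
```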
Note that if any pair of quotes is unmatched (missing either of the quotes), a situation may
arise where a command results in a single greater-than (>) character being displayed, as follows:
$ echo "My current directory is: $CUR_DIR [Enter]
The > character in this instance is the shell environment variable named PS2 (prompt string 2).
Do not confuse this with an output redirection character (see next section). When this occurs,
the shell is trying to parse the command entered and is missing one or more characters it needs
to complete its parsing. If you understand what is missing, you may be able to recover at the >
prompt by typing the characters missing. Otherwise, you may want to punt with a [Ctrl-c]
sequence.
For example, to archive a subdirectory named MyProject:
tar cvf MyProject.20090816.tar MyProject
where MyProject.20090816.tar is the name of the archive (file) you are creating, and MyProject
is the name of your subdirectory. It's common to name an uncompressed archive with the .tar
file extension.
In that command, I used three options to create the tar archive:
• The letter c means "create archive".
• The letter v means "verbose", which tells tar to print all the filenames as they are added
to the archive.
• The letter f tells tar that the name of the archive appears next (right after these
options).
The v flag is completely optional, but I usually use it so I can see the progress of the command.
The general syntax of the tar command when creating an archive looks like this:
tar [flags] archive-file-name files-to-archive
To create a compressed archive in one step, the command looks like this:
tar czvf MyProject.20090816.tgz MyProject
As you can see, I added the 'z' flag there (which means "compress this archive with gzip"), and I
changed the extension of the archive to .tgz, which is the common file extension for files that
have been tar'd and gzip'd in one step.
In the tar examples that follow, the '.' at the end of the command is how you refer to the current directory.
tar command example - creating an archive in a different directory
You may also want to create a new tar archive like that previous example in a different
directory, like this:
tar -czvf /tmp/mydirectory.tar.gz .
As you can see, you just add a path before the name of your tar archive to specify what
directory the archive should be created in.
To list the files in an archive without extracting them, use the t flag:
tar -tvf my-archive.tar
This lists all the files in the archive, but does not extract them.
To list all the files in a compressed archive, add the z flag like before:
tar -tzvf my-archive.tgz
That same command can also work on a file that was tar'd and gzip'd in two separate steps (as
indicated by the .tar.gz file extension):
tar -tzvf my-archive.tar.gz
I almost always list the contents of an unknown archive before I extract the contents. I think
this is always good practice, especially when you're logged in as the root user.
For compressed archives the tar extract command looks like this:
tar -xzvf my-archive.tgz
or this:
tar -xzvf my-archive.tar.gz
Additional information
Keep the following in mind when using the tar command:
• The order of the options sometimes matters. Some versions of tar require that the f
option be immediately followed by a space and the name of the tar file being created or
extracted.
• Some versions require a single dash before the option string (e.g., -cvf ).
$ vi read.sh
#
#Script to read your name from key-board
#
echo "Your first name please:"
read fname
echo "Hello $fname, let's be friends!"
Run it as follows:
$ chmod +x read.sh
$ ./read.sh
Your first name please:
vivek
Hello vivek, let's be friends!
Thus, if we wish to see the value stored in the first positional parameter, we could do the
following from within the my_script program (note that this only works from within the
my_script program):
echo $1
arg1
Positional parameters provide the programmer with a powerful way to "pass data into" a shell
program while allowing the data to vary. If we had a shell program named hello that contained
the following statement:
echo Hello Fred! How are you today?
this would not be very interesting to run, unless perhaps your name was Fred. However if the
program was modified like this:
echo "Hello $1! How are you today?"
This would allow us to pass single data values "into" the program via the positional
parameter $1. We could then run the program as follows, using varying values to pass into the
positional variable $1.
$ hello Fred [Enter]
Hello Fred! How are you today?
It should be obvious that this would be a much more useful program. Keep in mind that many
behaviors of standard variables are also behaviors of positional variables. For example, if you
did not assign a value to the first positional variable, you would not get an error, rather
behavior as follows:
$ hello [Enter]
Hello ! How are you today?
Similarly, if there are more command line arguments than positional variables referenced, the
extra arguments are simply ignored, for example:
$ hello Fred Barney [Enter]
Hello Fred! How are you today?
There are special parameters that allow accessing all of the command-line arguments at once.
$* and $@ both act the same unless they are enclosed in double quotes, "".
Both parameters specify all of the command-line arguments, but the "$*" special parameter
takes the entire list as one argument with spaces between the values, while the "$@" special
parameter takes the entire list and separates it into individual arguments.
We can write the shell script shown below to process an unknown number of command-line
arguments with either the $* or $@ special parameters:
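The script the text refers to is not reproduced here; the following is a minimal sketch (the function names demo and count_args are my own, not from the original text) that makes the difference visible:

```shell
#!/bin/sh
# Sketch: count how many arguments a function receives.
count_args() {
    echo $#
}
# Pass the script's arguments along both ways and compare.
demo() {
    echo "Using \"\$*\": $(count_args "$*")"   # one joined word
    echo "Using \"\$@\": $(count_args "$@")"   # one word per argument
}
demo one two "three four"
```

Running it shows that "$*" produced a single argument while "$@" preserved all three.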
The following table shows a number of special variables that you can use in your shell scripts:
Variable Description
$0  The filename of the current script.
$n  These variables correspond to the arguments with which a script was invoked. Here n is a
    positive decimal number corresponding to the position of an argument (the first argument
    is $1, the second argument is $2, and so on).
$#  The number of arguments supplied to a script.
$*  All the arguments, double quoted as a whole. If a script receives two arguments, $* is
    equivalent to $1 $2.
$@  All the arguments, individually double quoted. If a script receives two arguments, $@ is
    equivalent to $1 $2.
$?  The exit status of the last command executed.
$$  The process number of the current shell. For shell scripts, this is the process ID under
    which they are executing.
$!  The process number of the last background command.
Consider the rm command, which is used to remove a file. But which file should it remove, and
how do you tell this to the rm command (rm does not ask you for the name of the file you would
like to remove)? So what we do is write the command as follows:
$ rm filename
Here rm is the command and filename is the file you would like to remove. This way you tell the
rm command which file to remove, so we are doing one-way communication with our command by
specifying a filename. You can likewise pass command line arguments to your script to make it
more user friendly. But how do we access command line arguments in our script?
$ ls -a /*
This command has 2 command line arguments: -a is one and /* is another. For a shell script run as
$ myshell foo bar
$# (a built-in shell variable) will be 2, since foo and bar are the only two arguments. Please
note that nine such arguments can be accessed directly, as $1..$9; you can also refer to all of
them by using $* (which expands to `$1 $2 ... $9`). Note that $1..$9, i.e. the command line
arguments to a shell script, are known as "positional parameters".
The following script is used to print command line arguments and will show you how to access them:
$ vi demo
#!/bin/bash
# Script that demos, command line args
echo "Total number of command line arguments is $#"
echo "$0 is script name"
echo "$1 is first argument"
echo "$2 is second argument"
echo "All of them are :- $* or $@"
9 Exit Status:
The $? variable represents the exit status of the previous command.
Exit status is a numerical value returned by every command upon its completion. As a rule,
most commands return an exit status of 0 if they were successful, and a value other than 0
(most of the time 1) if they were unsuccessful.
Some commands return additional exit statuses for particular reasons. For example, some
commands differentiate between kinds of errors and will return various exit values depending
on the specific type of failure.
Following is the example of successful command:
[mahesh1@station60 ~]$ ls
a.txt case.sh for.sh hello.sh hello.txt if.sh
[mahesh1@station60 ~]$ echo $?
0
[mahesh1@station60 ~]$
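For contrast, a failing command can be sketched as follows (the directory name is made up and assumed not to exist):

```shell
# Successful command: exit status 0.
true
echo "true returned $?"
# Failing command: non-zero exit status.
ls /nonexistent_directory_12345 2>/dev/null
echo "ls returned $?"
```

Checking $? immediately after each command is important, since every command replaces the previous exit status.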
10.1 head
The head command is used to display the starting portion of a file. By default head displays
the top 10 lines of a file:
[root@station60 ~]# head /etc/passwd
To display a different number of lines, pass the count as an option:
[root@station60 ~]# head -5 /etc/passwd
Note: Instead of 5 we can give any number, regardless of whether it is less than or greater than 10.
10.2 Tail
The tail command works exactly opposite of head. It displays the ending portion of a file. By
default it also displays 10 lines from the file; we can override that behaviour as follows:
[root@station60 ~]# tail -3 /etc/passwd
raj:x:7276:7276::/home/raj:/bin/bash
ram:x:7277:7277::/home/ram:/bin/bash
suhas:x:7278:7278::/home/suhas:/bin/bash
[root@station60 ~]#
Suppose I want to retrieve a line at a particular position in the file; then a combination of
the head and tail commands can be used as follows (head -n N filename | tail -1 prints line N):
mahesh:x:523:501::/home/mahesh:/bin/bash
[root@station60 ~]#
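The head | tail combination can be sketched with a throwaway file (the file name lines.tmp is made up):

```shell
# Print line 3 of a small file by keeping the first 3 lines,
# then keeping only the last line of that.
printf 'one\ntwo\nthree\nfour\n' > lines.tmp
head -n 3 lines.tmp | tail -n 1
rm -f lines.tmp
```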
10.3 wc
In Unix, to get the line, word, or character count of a document, use the wc command at the
Unix shell prompt:
wc filename
Replace filename with the file or files for which you want information. For each file, wc will
output three numbers. The first is the line count, the second is the word count, and the third
is the character count.
To narrow the focus of your query, you may use one or more of the following wc options:
-c bytes
-l lines
-m characters
-w words
Note: In some versions of wc, the -m option will not be available or -c will report characters.
However, in most cases, the values for -c and -m are equal.
Syntax:
To count the characters in a file. Here it counts the number of characters in the file abc.txt:
$ wc -c abc.txt
For example, to find out how many bytes are in the .login file, we could enter:
$ wc -c .login
We may also pipe standard output into wc to determine the size of a stream. For example, to
find out how many files are in a directory, enter:
/bin/ls -l | wc -l
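A self-contained variant of that pipeline, using printf so the input is fixed:

```shell
# Three lines go into the pipe, so wc -l reports 3.
printf 'alpha\nbeta\ngamma\n' | wc -l
```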
10.4 Sort
sort is a standard Unix command line program that prints the lines of its input, or the
concatenation of all files listed in its argument list, in sorted order. The -r flag will
reverse the sort order.
1) By default the sort command sorts in ascending order.
Examples:
$ cat phonebook
Smith,Brett 5554321
Doe,John 5551234
Doe,Jane 5553214
Avery,Cory 5554321
Fogarty,Suzie 5552314
$ sort phonebook
Avery,Cory 5554321
Doe,Jane 5553214
Doe,John 5551234
Fogarty,Suzie 5552314
Smith,Brett 5554321
$ du /bin/* | sort -n
4 /bin/domainname
4 /bin/echo
4 /bin/hostname
4 /bin/pwd
...
24 /bin/ls
30 /bin/ps
44 /bin/ed
54 /bin/rmail
80 /bin/pax
102 /bin/sh
304 /bin/csh
2) If the first column of the file does not contain numerical data then it will not sort
according to numbers; we have to provide the position of the column by using the -k option:
$ cat student.txt
harsh 10
mahesh 5
uday 55
$ sort -k2 -n student.txt
mahesh 5
harsh 10
uday 55
3) -k will work when the column separator is a space. If the delimiter is other than a space
then use the -t option:
$ cat student.txt
harsh:10
mahesh:5
uday:55
$ sort -t: -k2 -n student.txt
mahesh:5
harsh:10
uday:55
10.5 cut
cut is a Unix command which is typically used to extract a certain range of characters from a
line, usually from a file.
Syntax
cut [-c list] [-f list] [-d delim] [file]
Flags which may be used include:
-c Specifies a character list. A comma separated list of character positions; the - indicator
may be supplied as shorthand to allow inclusion of ranges of characters.
-f Specifies a field list, separated by a delimiter. A comma separated or blank separated list
of integer denoted fields, incrementally ordered. The - indicator may be supplied as shorthand
to allow inclusion of ranges of fields.
-d Delimiter. The character immediately following the -d option is the field delimiter for use
in conjunction with the -f option; the default delimiter is tab. Space and other characters
with special meanings within the context of the shell in use must be quoted or escaped as
necessary.
$cat company.data
406378:Sales:Itorre:Jan
031762:Marketing:Nasium:Jim
636496:Research:Ancholie:Mel
396082:Sales:Jucacion:Ed
1) If you want to print just columns 4 and 8 of each line (the fourth digit of the serial
number and the first letter of the department), use the -c4,8 flag, as in this command:
$cut -c4,8 company.data
3S
7M
4R
0S
2) Since this file obviously has fields delimited by colons, we can pick out just the last
names by specifying the -d and -f3 flags, like this:
$cut -d”:” -f3 company.data
Itorre
Nasium
Ancholie
Jucacion
10.6 paste
paste is a Unix utility tool which is used to join files horizontally (parallel merging), e.g.
to join two similar-length files which are comma delimited. It is effectively the horizontal
equivalent of the cat utility, which operates on the vertical plane of two (or more) files,
i.e. by appending one file to another in order.
Example
To paste several columns of data together, enter:
$ paste file1 file2 file3
where file1, file2, and file3 are the files whose corresponding lines will be joined.
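A minimal runnable sketch of paste in action (the file names are made up):

```shell
# Build two small column files, then merge them line by line.
printf 'Smith\nDoe\n'       > names.tmp
printf '5554321\n5551234\n' > phones.tmp
paste names.tmp phones.tmp   # each output line joins one name and one number with a tab
rm -f names.tmp phones.tmp
```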
10.7 grep
"grep", one of the most frequently used text processing tools, stands for "Global Regular
Expression Print".
The grep command searches the given file for lines containing a match to the given strings or
words. By default, grep prints the matching lines. Use grep to search for lines of text that
match one or many regular expressions; it outputs only the matching lines.
1) If you want the count of a particular word in a log file, you can use the grep -c option to
count the word. The command below will print how many times the word "Error" has appeared in
logfile.txt:
$ grep -c "Error" logfile.txt
2) Sometimes we are not just interested in the matching line but also in the lines around the
matching lines, which is particularly useful to see what happens before any Error or Exception.
The grep --context option allows us to print lines around the matching pattern. The example
below will print 6 lines around the matching line of the word "successful" in logfile.txt:
$ grep --context=6 "successful" logfile.txt
4) Use grep -w if you want to match whole words instead of just a pattern:
$ grep -w ERROR logfile.txt
The above grep command searches only for instances of 'ERROR' that are entire words; it does
not match words that merely contain 'ERROR'.
5) The command below will list the names of all Java files in the current directory whose
contents mention 'main':
$ grep -l 'main' *.java
6) If you want to see the line numbers of matching lines you can use the "grep -n" option; the
command below will show on which lines the word Error has appeared:
grep -n ERROR logfile
7) The grep command can show the matching pattern in color, which is quite useful to highlight
the matching section. To see the matching pattern in color, use the command below:
$ grep --color ERROR logfile
This chapter describes more about the powerful UNIX mechanism of redirecting input, output
and errors.
By default, input flows as a stream of bytes from standard input along a channel, is then
manipulated (or generated) by the command, and the command output is then directed to the
standard output.
The ls command can then be described as follows: there is really no input (other than the
command itself) and the ls command produces output which flows to the destination of stdout
(the terminal screen).
2) You may want to issue another command on the output of one command.
This is known as redirecting output. Redirection is done using either the “>” (greater-than
symbol), or using the “|” (pipe) operator.
The simplest case to demonstrate this is basic output redirection. The general syntax looks as
follows:
command > output_file
Spaces around the redirection operator are not mandatory, but do add readability to the
command. Thus in our ls example from above, we can observe the following use of output
redirection:
$ ls > my_files [Enter]
Notice there is no output appearing after the command, only the return of the prompt. Why is
this, you ask? It is because all output from this command was redirected to the file my_files;
no data goes to the terminal screen, but to the file instead.
Examining the file as follows results in the contents of my_files being displayed:
$ cat my_files [Enter]
In this example,
• if the file my_files does not exist, the redirection operator causes its creation, and
• if it does exist, the contents are overwritten.
$ echo "Hello World!" > my_files [Enter]
Notice here that the previous contents of the my_files file are gone, replaced with the
string "Hello World!" This might not be the most desirable behavior, so the shell provides us
with the capability to append output to files using the >> operator:
$ echo "Hello World!" >> my_files [Enter]
• if the file does not exist, >> will cause its creation, and
• if it does exist, the output is appended to the end of the file.
You use input redirection using the ‘<’ less-than symbol and it is usually used with a program
which accepts user input from the keyboard.
Examples:
1. A legendary use of input redirection that I have come across is mailing the contents of a
text file to another user:
$ mail mike < my_text_file.txt [Enter]
If the user mike exists on the system, you don't need to type the full address. If you want to
reach somebody on the Internet, enter the fully qualified address as an argument to mail.
2. Looking in more detail at this, we will use the wc (word count) command. The wc command
counts the number of bytes, words and lines in a file. Thus if we do the following using the
file created above, we see:
$ wc my_files [Enter]
6 7 39 my_files
where the output indicates 6 lines, 7 words and 39 bytes, followed by the name of the file wc
opened.
$ wc < my_text_file.txt > output_file.txt [Enter]
What happens above is the contents of the file my_text_file.txt are passed to the command
wc, whose output is in turn redirected to the file output_file.txt.
There are three types of I/O, which each have their own identifier, called a file descriptor:
• standard input :0
• standard output :1
• standard error :2
• If the file descriptor number is omitted, and the first character of the redirection
operator is <, the redirection refers to the standard input (file descriptor 0).
• If the first character of the redirection operator is >, the redirection refers to the
standard output (file descriptor 1).
• For redirecting standard error you cannot omit the descriptor, i.e. you have to write the
error redirection descriptor as follows:
command 2> error_file
Note here that only the standard output appears once the standard error has been redirected
to a file. If both stdout and stderr are redirected to files, no output is sent to the
terminal; the contents of each output file are what was previously displayed on the screen.
Note there are numerous ways to combine input, output and error redirection.
Another relevant topic that merits discussion here is the special file named /dev/null
(sometimes referred to as the "bit bucket").
This virtual device discards all data written to it, and returns an End of File (EOF) to any process
that reads from it. I informally describe this file as a "garbage can/recycle bin" like thing, except
there's no bottom to it. This implies that it can never fill up, and nothing sent to it can ever be
retrieved. This file is used in place of an output redirection file specification, when the
redirected stream is not desired. For example, if you never care about viewing the standard
output, only the standard error channel, you can do the following:
$ command > /dev/null
One final miscellaneous item is the technique of combining the two output streams into a single
file. This is typically done with the 2>&1 notation, as follows:
$ command > output_file 2>&1
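A runnable sketch of combining the streams (the file name both.tmp is made up):

```shell
# One line goes to stdout, one to stderr; 2>&1 sends both into the file.
{ echo "to stdout"; echo "to stderr" >&2; } > both.tmp 2>&1
wc -l < both.tmp   # both lines landed in the file
rm -f both.tmp
```

Note the order matters: the stdout redirection must come before 2>&1, since 2>&1 duplicates wherever descriptor 1 points at that moment.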
<< redirects the standard input of the command to read from what is called a "here document".
Here documents are convenient ways of placing several lines of text within the script itself,
and using them as input to the command. The << characters are followed by a single word that is
used to indicate the end-of-file word for the here document. Any word can be used, but there is
a common convention of using EOF (unless we need to include that word within the here document).
Example:1 The following example creates a file userlist.txt without waiting for user input,
after running the file auto_file.sh:
$ vim auto_file.sh
$ cat > userlist.txt << EOF
bravo
delta
alpha
charlie
EOF
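A here document can feed any command that reads standard input, not only cat; for example, sorting an inline list:

```shell
# The lines between << EOF and EOF become sort's standard input.
sort << EOF
delta
alpha
charlie
EOF
```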
Example 3: Log in to an Oracle database, create a report using a select query, spool that
report into a file, and mail that file to a manager:
$ vim report.sh
sqlplus username/password <<EOF
spool report.txt
select e.employee_id,e.last_name,d.department_name,e.salary
FROM employees e join departments d
ON e.department_id=d.department_id;
spool off
EOF
mail -s "Salary Report" manager@focustraining.in < report.txt
Unlike other forms of interprocess communication (IPC), a pipe is one-way communication only.
Basically, a pipe passes data, such as the output of one process, to another process which
accepts it as input. The system temporarily holds the piped information until it is read by
the receiving process.
The UNIX domain sockets (UNIX Pipes) are typically used when communicating between two
processes running in the same UNIX machine. UNIX Pipes usually have a very good throughput.
We can look at an example of pipes using the who and the wc commands. Recall that the who
command will list each user logged into a machine, one per line as follows:
$ who [Enter]
mthomas pts/2 Oct 1 13:07
fflintstone pts/12 Oct 1 12:07
wflintstone pts/4 Oct 1 13:37
brubble pts/6 Oct 1 13:03
Also recall that the wc command counts characters, words and lines. Thus if we connect the
standard output from the who command to the standard input of the wc (using the -l (ell)
option), we can count the number of users on the system:
$ who | wc -l [Enter]
4
In the first part of this example, each of the four lines from the who command will be "piped"
into the wc command, where the -l (ell) option will enable the wc command to count the
number of lines.
While this example only uses two commands connected through a single pipe operator, many
commands can be connected via multiple pipe operators
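As a sketch of a longer pipeline (input supplied with printf so it is self-contained), counting repeated words and ranking them:

```shell
# sort groups identical lines, uniq -c counts each group,
# and sort -rn ranks the counts highest first.
printf 'red\nblue\nred\ngreen\nred\n' | sort | uniq -c | sort -rn
```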
12 Control Statements
While writing a shell script, there may be a situation when you need to take one path out of
two given paths. So you need to make use of conditional statements that allow your program to
make correct decisions and perform the right actions.
Unix Shell supports conditional statements which are used to perform different actions based
on different conditions. Here we will explain following two decision making statements:
• The if...else statements
• The case...esac statement
12.1 Operators
12.1.1 For mathematics or numerical comparison, use the following operators in shell scripts:

Operator   Meaning                      Normal arithmetic   In shell script
-eq        is equal to                  5 == 6              if [ 5 -eq 6 ]
-ne        is not equal to              5 != 6              if [ 5 -ne 6 ]
-lt        is less than                 5 < 6               if [ 5 -lt 6 ]
-le        is less than or equal to     5 <= 6              if [ 5 -le 6 ]
-gt        is greater than              5 > 6               if [ 5 -gt 6 ]
-ge        is greater than or equal to  5 >= 6              if [ 5 -ge 6 ]
The if...fi statement is the fundamental control statement that allows Shell to make decisions
and execute statements conditionally.
Syntax:
if [ expression ]
then
Statement(s) to be executed if expression is true
fi
Here the Shell expression is evaluated. If the resulting value is true, the given statement(s)
are executed; if the expression is false then no statement is executed. Most of the time you
will use comparison operators while making decisions.
Pay attention to the spaces between the brackets and the expression. This space is mandatory;
otherwise you will get a syntax error.
If the expression is a shell command then it is considered true if it returns 0 after its
execution. If it is a boolean expression then it is true if it evaluates to true.
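A short sketch of using a command's exit status directly as the if condition (the pattern and text here are made up):

```shell
#!/bin/sh
# grep -q returns 0 when the pattern matches, so the then-branch runs.
if printf 'Hello World\n' | grep -q "World"
then
    echo "pattern found"
fi
```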
Example:1
#!/bin/sh
a=10
b=20
if [ $a -eq $b ]
then
echo "a is equal to b"
fi
if [ $a -ne $b ]
then
echo "a is not equal to b"
fi
This will produce following result:
a is not equal to b
Syntax:
if [ expression ]
then
Statement(s) to be executed if expression is true
else
Statement(s) to be executed if expression is not true
fi
Here the Shell expression is evaluated. If the resulting value is true, the first set of
statement(s) is executed; if the expression is false then the statement(s) after else are
executed.
Example:1
If we take above example then it can be written in better way using if...else statement as
follows:
#!/bin/sh
a=10
b=20
if [ $a -eq $b ]
then
echo "a is equal to b"
else
echo "a is not equal to b"
fi
#!/bin/bash
echo "Enter the filename"
read file1
if [ ! -s "$file1" ]
then
echo "$file1 is empty or does not exist."
ls -l > "$file1"
else
echo "File $file1 already exists."
fi
#!/bin/bash
#script to check whether directory exists or not.
#dir=$(pwd)
a=$1
if [ -d "$a" ]; then
echo " directory $a exists "
else
echo " directory $a does not exist "
fi
Example:3
$ vi positive.sh
#!/bin/bash
# Script to see whether argument is positive or negative
if [ $# -eq 0 ]
then
echo "$0 : You must give/supply one integer"
exit 1
fi
if [ $1 -gt 0 ]
then
echo "$1 number is positive"
else
echo "$1 number is negative"
fi
Example:4 The example below accepts two strings from the user and checks whether they are
equal or not:
#!/bin/bash
echo "Enter first string"
read str1
echo "Enter second string"
read str2
if [ "$str1" = "$str2" ]
then
echo "Strings are equal"
else
echo "Strings are not equal"
fi
The script below checks whether a given user exists on the system:
#!/bin/bash
echo "Enter user name"
read usr
grep -w "^$usr" /etc/passwd &>/dev/null
if [ $? -eq 0 ]
then
echo "$usr exists"
else
echo "$usr does not exist"
fi
$vim db_status.sh
#!/bin/bash
export ORACLE_SID=$1
sqlplus / as sysdba<<EOF &>/dev/null
spool status.txt
select sysdate from dual;
spool off
EOF
status=`grep -c 'ORA' status.txt`
if [ $status -gt 0 ]
then
echo "$1 is down..."
else
echo "$1 is running..."
fi
$vim db_listener_status.sh
#!/bin/bash
lsnrctl status &>/dev/null
if [ $? -eq 0 ]
then
echo Listener is running…
else
echo Listener is down...
fi
if [ expression 1 ]
then
Statement(s) to be executed if expression 1 is true
elif [ expression 2 ]
then
Statement(s) to be executed if expression 2 is true
elif [ expression 3 ]
then
Statement(s) to be executed if expression 3 is true
else
Statement(s) to be executed if no expression is true
fi
There is nothing special about this code. It is just a series of if statements, where each if is part
of the else clause of the previous statement. Here statement(s) are executed based on the true
condition, if none of the condition is true then else block is executed.
Example:1
#!/bin/sh
a=10
b=20
if [ $a -eq $b ]
then
echo "a is equal to b"
elif [ $a -gt $b ]
then
echo "a is greater than b"
elif [ $a -lt $b ]
then
echo "a is less than b"
else
echo "None of the condition met"
fi
Example 2:
$ vi elf.sh
#!/bin/sh
# Script to test if..elif...else
if [ $1 -gt 0 ]; then
echo "$1 is positive"
elif [ $1 -lt 0 ]
then
echo "$1 is negative"
elif [ $1 -eq 0 ]
then
echo "$1 is zero"
else
echo "Oops! $1 is not a number, give a number"
fi
Syntax:
while command
do
command1
command2
...
done
As with if statements, a semicolon (;) can be used to include the do keyword on the same line
as the while condition-command statement.
The example below loops over two statements as long as the variable i is less than or equal to
ten. Store the following in a file named while1.sh and execute it
Example:1
#!/bin/bash
#Illustrates implementing a counter with a while loop
#Notice how we increment the counter with expr in backquotes
i=1
while [ $i -le 10 ]
do
echo "i is $i"
i=`expr $i + 1`
done
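In POSIX shells the same counter can be written with arithmetic expansion $(( )) instead of spawning expr; a sketch:

```shell
#!/bin/sh
# Same loop as above, but the increment uses $(( )) rather than expr.
i=1
while [ "$i" -le 10 ]
do
    echo "i is $i"
    i=$((i + 1))
done
```

Because $(( )) is handled by the shell itself, no external process is started on each iteration, which makes the loop faster.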
Example:2 Lock the user accounts whose uid is in the range
of 500 to 520
#!/bin/bash
while read line
do
uname=`echo $line|cut -d":" -f1`
id=`echo $line|cut -d":" -f3`
if [ $id -ge 500 -a $id -le 520 ]
then
usermod -L $uname &>/dev/null
echo "User $uname Locked...."
fi
done</etc/passwd
Syntax:
for var in word1 word2 ... wordN
do
Statement(s) to be executed for every word.
done
Here var is the name of a variable and word1 to wordN are sequences of characters separated
by spaces (words). Each time the for loop executes, the value of the variable var is set to the
next word in the list of words, word1 to wordN.
Example:
Here is a simple example that uses for loop to span through the given list of numbers:
Example:1
#!/bin/sh
for var in 0 1 2 3 4 5 6 7 8 9
do
echo $var
done
This will produce following result:
0
1
2
3
4
5
6
7
8
9
Example:2
#!/bin/bash
a=$(seq 1 1 5)
for i in $a
do
echo "Value of i = $i"
done
The following is an example to display all the files starting with .bash that are available in
your home directory. I'm executing this script as root:
Example:
#!/bin/sh
for FILE in $HOME/.bash*
do
echo $FILE
done
This will produce following result:
/root/.bash_history
/root/.bash_logout
/root/.bash_profile
/root/.bashrc
Example:3
#!/bin/bash
echo "You want to ping $1 network"
for i in $(seq 1 1 10)
do
ping -c1 $1.$i > /dev/null 2>&1
if [ $? -eq 0 ]; then
echo "Node $1.$i is up"
else
echo "Node $1.$i is down"
fi
done
Example:4
The script below prints the base name of each file passed on the command line:
#!/bin/bash
for i in "$@"
do
bname=$(basename "${i}")
echo "$bname"
done
Syntax:
until command
do
Statement(s) to be executed until command is true
done
Here the Shell command is evaluated. If the resulting value is false, the given statement(s)
are executed. If the command is true then no statement is executed and the program jumps to
the next line after the done statement.
Example:
Here is a simple example that uses the until loop to display the numbers zero to nine:
#!/bin/sh
a=0
until [ ! $a -lt 10 ]
do
echo $a
a=`expr $a + 1`
done
In this section you will learn about the following two statements used to control shell loops:
1. The break statement
2. The continue statement
Example:
Here is a simple example that creates an infinite loop:
#!/bin/sh
a=20
while [ $a -gt 10 ]
do
echo $a
done
This loop would continue forever because a is always greater than 10 and it would never
become less than 10. So this is a true example of an infinite loop.
Syntax:
The following break statement would be used to come out of a loop:
break
The break command can also be used to exit from a nested loop using this format:
break n
Here n specifies the nth enclosing loop to exit from.
Example:
Here is a simple example which shows that loop would terminate as soon as a becomes 5:
#!/bin/sh
a=0
while [ $a -lt 10 ]
do
echo $a
if [ $a -eq 5 ]
then
break
fi
a=`expr $a + 1`
done
This will produce following result:
0
1
2
3
4
5
The break statement will cause the shell to stop executing the current loop and continue
on after its end.
#!/bin/sh
files=`ls`
count=0
for i in $files
do
count=`expr $count + 1`
if [ $count -gt 100 ]
then
echo "There are more than 100 files in the current
directory"
break
fi
done
#!/bin/bash
while [ 1 = 1 ]
do
if [ -f "$1" ]; then
break
else
sleep 1
fi
done
Syntax:
continue
Like with the break statement, an integer argument can be given to the continue command to
skip commands from nested loops:
continue n
Here n specifies the nth enclosing loop to continue from.
Example:
The following loop makes use of the continue statement, which skips the remaining statements
in the loop body and starts the next iteration:
#!/bin/sh
NUMS="1 2 3 4 5 6 7"
for NUM in $NUMS
do
Q=`expr $NUM % 2`
if [ $Q -eq 0 ]
then
echo "Number is an even number!!"
continue
fi
echo "Found odd number"
done
By using the < $FILENAME notation after the done loop terminator, we feed the while loop from
the bottom, which greatly increases the input throughput to the loop. When we time each
technique, this method stands out at the top of the list.
14 Wildcards in UNIX
The two basic wildcard characters are ? and *. The wildcard ? matches any one character. The
wildcard * matches any grouping of zero or more characters. Some examples may help to
clarify this. (Remember that Unix is case-sensitive.)
Assume that your directory contains the following files:
Chap1 bite bin
bit Chap6 it
test.new abc big
Lit site test.old
snit bin.old
The ? wildcard
Example,
$ ls ?i?
Lit big bin bit
This finds any files with "i" in the middle, one character before and one character after.
The * wildcard
The * wildcard is more general. It matches zero or any number of characters, except that it will
not match a period
that is the first character of a name.
$ ls *t
Lit bit it snit
Using this wildcard finds all the files with "t" as the last character of the name (although
it would not have found a file called .bit).
We could use this wildcard to remove all files in the directory whose names begin with "test".
The command to do this is
$rm test*
Be careful when using the * wildcard, especially with the rm command. If we had mistyped this
command by adding a space between test and *, Unix would look first for a file called test,
remove it if found, and then proceed to remove all the files in the directory!
The ? wildcard matches any one character. To restrict the matching to a particular character
or range of characters, use square brackets [ ] to include a list. For example, to list files
ending in "ite", and beginning with only "a", "b", "c", or "d", we would use the command:
$ ls [abcd]ite
This would list the file bite, but not the file site. Note that the sequence [ ] matches only
one character. If we had a file called delite, the above command would not have matched it.
We can also specify a range of characters using [ ]. For instance, [1-3] will match the digits
1, 2 and 3, while [A-Z] matches all capital letters.
$ ls [A-Z]it
will find any file ending in "it" and beginning with a capital letter (in this case, the file
Lit).
Wildcards can also be combined with [ ] sequences. To list any file beginning with a capital
letter, we would use:
$ ls [A-Z]*
Chap1 Chap6 Lit
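The same patterns work anywhere the shell does filename-style matching, for example in a case statement; a sketch using a few of the names above:

```shell
#!/bin/sh
# Classify a few of the names with ? and * patterns.
for name in bit bite Lit test.new
do
    case $name in
        ?it) echo "$name matches ?it" ;;
        *t)  echo "$name ends in t" ;;
        *)   echo "$name matches neither pattern" ;;
    esac
done
```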
15 Functions
Functions enable you to break down the overall functionality of a script into smaller, logical
subsections, which can then be called upon to perform their individual task when it is needed.
Using functions to perform repetitive tasks is an excellent way to create code reuse. Code reuse
is an important part of modern object-oriented programming principles.
Shell functions are similar to subroutines, procedures, and functions in other programming
languages.
A function is declared as follows:
function_name () {
list of commands
}
The name of your function is function_name, and that's what you will use to call it from
elsewhere in your scripts. The function name must be followed by parentheses, which are
followed by a list of commands enclosed within braces.
Example:
Following is a simple example of using a function:
#!/bin/sh
# Define your function here
Hello () {
echo "Hello World"
}
# Invoke your function
Hello
When you execute the above script, it produces the following result:
[amrood]$ ./test.sh
Hello World
[amrood]$
Example:
The following function returns the value 10:
#!/bin/sh
# Define your function here
Hello () {
echo "Hello World $1 $2"
return 10
}
# Invoke your function
Hello Zara Ali
# Capture value returned by last command
ret=$?
echo "Return value is $ret"
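In the example above, $1 and $2 inside the function receive the arguments Zara and Ali, and $? captures the return status of the last command. The sketch below repeats the example and additionally shows the captured status driving an if test:

```shell
#!/bin/sh
# Same function as in the example above.
Hello () {
    echo "Hello World $1 $2"
    return 10
}
# Invoke the function; arguments become $1 and $2 inside it.
Hello Zara Ali                  # prints: Hello World Zara Ali
ret=$?                          # $? holds the function's return value
echo "Return value is $ret"     # prints: Return value is 10
# The captured status can drive control flow:
if [ "$ret" -eq 10 ]; then
    echo "function signalled status 10"
fi
```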
You can also read a script's function definitions into your current shell by sourcing it with the dot command:
[amrood]$ . test.sh
This has the effect of causing any functions defined inside test.sh to be read in and defined to the current shell, so that you can call them directly from the prompt:
[amrood]$ Hello
Hello World
[amrood]$
To remove the definition of a function from the shell, you use the unset command with the -f option. This is the same command you use to remove the definition of a variable from the shell.
[amrood]$ unset -f function_name
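A minimal sketch of defining, calling, and then removing a function (the name greet is hypothetical, chosen only for illustration):

```shell
#!/bin/sh
# Define a throwaway function.
greet () { echo "hi"; }
greet                    # works: prints hi
# Remove the function definition from the shell.
unset -f greet
# A subsequent call to greet would now fail with "greet: not found".
```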
16 Arrays
A shell variable is capable of holding a single value. These variables are called scalar variables.
Shell supports a different type of variable called an array variable that can hold multiple values
at the same time. Arrays provide a method of grouping a set of variables. Instead of creating a
new name for each variable that is required, you can use a single array variable that stores all
the other variables.
All the naming rules discussed for shell variables apply when naming arrays.
The simplest method of creating an array variable is to assign a value to one of its indices. This is expressed as follows:
array_name[index]=value
Here array_name is the name of the array, index is the index of the item in the array that you
want to set, and value is the value you want to set for that item.
As an example, the following commands:
NAME[0]="Zara"
NAME[1]="Qadir"
NAME[2]="Mahnaz"
NAME[3]="Ayan"
NAME[4]="Daisy"
If you are using the ksh shell, here is the syntax of array initialization:
set -A array_name value1 ... valuen
If you are using the bash shell, here is the syntax of array initialization:
array_name=(value1 ... valuen)
Here array_name is the name of the array, and value1 ... valuen are the values that initialize it.
Following is the simplest example:
#!/bin/sh
NAME[0]="Zara"
NAME[1]="Qadir"
NAME[2]="Mahnaz"
NAME[3]="Ayan"
NAME[4]="Daisy"
echo "First Index: ${NAME[0]}"
echo "Second Index: ${NAME[1]}"
You can access all the items in an array in one of the following ways:
${array_name[*]}
${array_name[@]}
Here array_name is the name of the array you are interested in. Following is the simplest example:
#!/bin/sh
NAME[0]="Zara"
NAME[1]="Qadir"
NAME[2]="Mahnaz"
NAME[3]="Ayan"
NAME[4]="Daisy"
echo "First Method: ${NAME[*]}"
echo "Second Method: ${NAME[@]}"
This would produce the following result:
[amrood]$./test.sh
First Method: Zara Qadir Mahnaz Ayan Daisy
Second Method: Zara Qadir Mahnaz Ayan Daisy
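Since "${array_name[@]}" expands each element as a separate word, it also makes looping over an array straightforward. A sketch in bash, reusing the NAME array from the example above (${#NAME[@]} gives the element count):

```shell
#!/bin/bash
NAME[0]="Zara"
NAME[1]="Qadir"
NAME[2]="Mahnaz"
NAME[3]="Ayan"
NAME[4]="Daisy"
# Visit every element of the array in turn.
for n in "${NAME[@]}"; do
    echo "Name: $n"
done
echo "Total: ${#NAME[@]}"    # number of elements: 5
```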
17 Signal Trapping
Signals are software interrupts sent to a program to indicate that an important event has
occurred. The events can vary from user requests to illegal memory access errors. Some signals,
such as the interrupt signal, indicate that a user has asked the program to do something that is
not in the usual flow of control.
The following are some of the more common signals you might encounter and want to use in your programs:

Signal Name   Number   Description
SIGHUP        1        Hang up detected on controlling terminal or death of controlling process
SIGINT        2        Issued if the user sends an interrupt signal (Ctrl + C)
SIGQUIT       3        Issued if the user sends a quit signal (Ctrl + \)
SIGFPE        8        Issued if an illegal mathematical operation is attempted
SIGKILL       9        If a process gets this signal it must quit immediately and will not perform any clean-up operations
SIGALRM       14       Alarm clock signal (used for timers)
SIGTERM       15       Software termination signal (sent by kill by default)
You can list all the signals supported by your system with the kill -l command:
[amrood]$ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL
5) SIGTRAP 6) SIGABRT 7) SIGBUS 8) SIGFPE
9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2
13) SIGPIPE 14) SIGALRM 15) SIGTERM 16) SIGSTKFLT
17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP
21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU
25) SIGXFSZ 26) SIGVTALRM 27) SIGPROF 28) SIGWINCH
29) SIGIO 30) SIGPWR 31) SIGSYS 34) SIGRTMIN
35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3 38) SIGRTMIN+4
39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8
43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12
47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10
55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7 58) SIGRTMAX-6
59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2
63) SIGRTMAX-1 64) SIGRTMAX
The actual list of signals varies between Solaris, HP-UX, and Linux.
You send signals with the kill command, whose syntax is:
$ kill -signal pid
Here signal is either the number or name of the signal to deliver and pid is the process ID that the signal should be sent to. For example:
[amrood]$ kill -1 1001
This sends the HUP or hang-up signal to the program that is running with process ID 1001. To send a kill signal to the same process, use the following command:
[amrood]$ kill -9 1001
This would kill the process running with process ID 1001.
You catch signals in a script with the trap command, whose syntax is:
$ trap command signal
Here command can be any valid Unix command, or even a user-defined function, and signal can be a list of any number of signals you want to trap.
There are the following common uses for trap in shell scripts:
1. Clean up temporary files
2. Ignore signals
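The first use can be sketched as follows; the temp-file path is only an illustration:

```shell
#!/bin/sh
# $$ is the current PID, which keeps the temp-file name unique.
TMPFILE=/tmp/trap_demo.$$
# If the script is interrupted or terminated, delete the file and exit.
trap 'rm -f "$TMPFILE"; exit 1' INT TERM
# (An empty command list, trap '' INT, would instead ignore the signal.)
echo "scratch data" > "$TMPFILE"
# ... real work with $TMPFILE would happen here ...
rm -f "$TMPFILE"          # normal-path cleanup
trap - INT TERM           # restore default signal handling
```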
Example
The following script demonstrates trapping the TERM signal when it is sent to the shell script:
vi trap.sh
#!/bin/bash
trap "echo 'trapped the signal TERM'" TERM
echo "starting infinite loop"
i=1
while [ 1 -eq 1 ]
do
echo "$i. sleeping for 1 sec..."
sleep 1
i=$(( $i + 1 ))
done
18 awk
The simplest awk program just prints every line of its input:
$ awk '{ print }' /etc/passwd
You should see the contents of your /etc/passwd file appear before your eyes. Now, for an explanation of what awk did. When we called awk, we specified /etc/passwd as our input file. When we executed awk,
• it evaluated the print command for each line in /etc/passwd, in order.
• All output is sent to stdout, and we get a result identical to catting /etc/passwd.
Now, for an explanation of the { print } code block. In awk, curly braces are used to group blocks of
code together, similar to C.
Inside our block of code, we have a single print command. In awk, when a print command appears
by itself, the full contents of the current line are printed.
Here is another awk example that does exactly the same thing:
$ awk '{ print $0 }' /etc/passwd
A pattern can also select which lines are printed. The following finds all lines in the emp.lst file that contain the word "director" and prints them to stdout, the same as grep director emp.lst:
$ awk '/director/ { print }' emp.lst
In awk, the $0 variable represents the entire current line, so print and print $0 do exactly the
same thing. If you'd like, you can create an awk program that will output data totally unrelated
to the input data. Here's an example:
$ awk '{ print "" }' /etc/passwd
Whenever you pass the "" string to the print command, it prints a blank line. If you test this script, you'll find that awk outputs one blank line for every line in your /etc/passwd file. Again, this is because awk executes your script for every line in the input file.
Multiple fields
Awk is really good at handling text that has been broken into multiple logical fields, and allows
you to effortlessly reference each individual field from inside your awk script. The following
script will print out a list of all user accounts on your system:
$ awk -F":" '{ print $1 }' /etc/passwd
Above, when we called awk, we used the -F option to specify ":" as the field separator. When awk processes the print $1 command, it will print out the first field that appears on each line in the input file. Here's another example:
$ awk -F":" '{ print $1 $3 }' /etc/passwd
As you can see, awk prints out the first and third fields of the /etc/passwd file, which happen to
be the username and uid fields respectively. Now, while the script did work, it's not perfect --
there aren't any spaces between the two output fields! If you're used to programming in bash
or python, you may have expected the print $1 $3 command to insert a space between the two
fields. However, when two strings appear next to each other in an awk program, awk
concatenates them without adding an intermediate space. The following command will insert a space between both fields:
$ awk -F":" '{ print $1 " " $3 }' /etc/passwd
When you call print this way, it'll concatenate $1, " ", and $3, creating readable output. Of course, we can also insert some text labels if needed:
$ awk -F":" '{ print "username: " $1 "\t\tuid: " $3 }' /etc/passwd
AWK Variables
You can use variables in awk and assign values to them. They do not need a $ sign in front of them like shell variables. Variables do not have a data type and are not declared. String constants are always double quoted. Numbers are initialized to 0 and strings are initialized to null (the empty string).
X = "5"
Y = 10
Z = "A"
print X + Y    prints 15
print X Y      prints 510 (adjacent values are concatenated)
print Y + Z    prints 10 (Z is converted to 0 since it does not contain numerals)
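These rules can be checked with a one-liner; a BEGIN block runs without any input file:

```shell
awk 'BEGIN {
    X = "5"; Y = 10; Z = "A"
    print X + Y    # numeric context: 15
    print X Y      # adjacency concatenates: 510
    print Y + Z    # Z holds no digits, so it converts to 0: 10
}'
```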
Putting your scripts in their own text files also allows you to take advantage of additional awk
features. For example, this multi-line script does the same thing as one of our earlier one-liners,
printing out the first field of each line in /etc/passwd:
BEGIN {
FS=":"
}
{ print $1 }
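To run a script kept in its own file, pass the file to awk with the -f option. A sketch (the file name first.awk is arbitrary):

```shell
# Save the multi-line script to a file, then run it with -f.
cat > /tmp/first.awk <<'EOF'
BEGIN {
    FS=":"
}
{ print $1 }
EOF
awk -f /tmp/first.awk /etc/passwd    # prints the first field of every line
```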
The difference between these two methods has to do with how we set the field separator. In
this script, the field separator is specified within the code itself (by setting the FS variable),
while our previous example set FS by passing the -F":" option to awk on the command line. It's
generally best to set the field separator inside the script itself, simply because it means you
have one less command line argument to remember to type. We'll cover the FS variable in more detail later in this chapter.
Normally, awk executes each block of your script's code once for each input line. However,
there are many programming situations where you may need to execute initialization code
before awk begins processing the text from the input file. For such situations, awk allows you to
define a BEGIN block. We used a BEGIN block in the previous example. Because the BEGIN block
is evaluated before awk starts processing the input file, it's an excellent place to initialize the FS
(field separator) variable, print a heading, or initialize other global variables that you'll
reference later in the program.
Awk also provides another special block, called the END block. Awk executes this block after all
lines in the input file have been processed. Typically, the END block is used to perform final
calculations or print summaries that should appear at the end of the output stream.
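As a small sketch of all three block types together, the following counts its input lines and prints a summary from the END block (the inline sample input stands in for a real passwd file):

```shell
printf 'zara:x:501\nqadir:x:502\n' | awk '
BEGIN { FS=":" }                 # runs once, before any input is read
{ count++ }                      # runs for every input line
END { print count " accounts" }  # runs once, after the last line
'
# prints: 2 accounts
```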
You can also attach a regular expression to a block; awk then executes the block only for input lines that match it. The following prints every line containing "foo":
/foo/ { print }
Of course, you can use more complicated regular expressions. Here's a script that will print only
lines that contain a floating point number:
/[0-9]+\.[0-9]*/ { print }
There are many other ways to selectively execute a block of code. We can place any kind of
boolean expression before a code block to control when a particular block is executed. Awk will
execute a code block only if the preceding boolean expression evaluates to true. The following
example script will output the third field of all lines that have a first field equal to fred. If the
first field of the current line is not equal to fred, awk will continue processing the file and will
not execute the print statement for the current line:
$1 == "fred" { print $3 }
Awk offers a full selection of comparison operators, including the usual "==", "<", ">", "<=",
">=", and "!=". In addition, awk provides the "~" and "!~" operators, which mean "matches" and
"does not match". They're used by specifying a variable on the left side of the operator, and a
regular expression on the right side. Here's an example that will print only the third field on the
line if the fifth field on the same line contains the character sequence root:
$5 ~ /root/ { print $3 }
Conditional statements
Awk also offers very nice C-like if statements. If you'd like, you could rewrite the previous script
using an if statement:
{
if ( $5 ~ /root/ ) {
print $3
}
}
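Unlike the pattern/block form, the if form also allows an else branch. A runnable sketch, with made-up sample lines fed in on stdin:

```shell
printf 'fred x y z root\njoe x y z user\n' | awk '
{
    if ( $5 ~ /root/ ) {
        print $3                        # fifth field matches: print field 3
    } else {
        print "line " NR ": no match"   # NR is the current line number
    }
}'
# prints: y
#         line 2: no match
```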
18. Do you have the permission to modify the /etc/sysctl.conf file? Who can modify it?
No, only the root user.
24. When I type the ifconfig command I get the error message "command not found". What is the problem and how can I solve it permanently?
-- If the normal user does not have permission to run the ifconfig command, it can be solved by the root user making an entry in the /etc/sudoers file.
-- Or, if permission is there but the path is not found, add the directory containing ifconfig to the path, e.g.:
PATH=$PATH:/sbin
27. How can you change the default permissions of a file to r--r--r-- when the file gets created?
umask 222
30. Find out all files in /tmp directory whose owner is shekhar
find /tmp -user shekhar
31. Find out all files older than 30 days whose extension is log and delete them - in one command
find /tmp -name "*.log" -mtime +30 -exec rm {} \;
Vi Questions
a. Go to the end of the file
G
e. How to quit the file without saving and discard changes that you have made
:q!
20 Useful Assignments
20.1 Shell Scripting Assignments for Linux Admins
1) Write a shell script to download a file from an FTP server.
Schedule it to run at a specific time.
Send a success or failure email.
Use command line arguments for passing the IP of the FTP server and the login ID & password.
3) Write a shell script that will check the status (ping) of the entire network.
Send the list of down servers as an email attachment to the respective authority.
The list of servers is kept in a file named /opt/server_list.txt
4) Write a shell script for generating File system space utilization report.
Indent the content of report nicely.
The report should be sent every morning at 8:00 AM.
6) Write a shell script to lock all users between UID 500 and 530.
7) Write a shell script to lock all users whose names are in a file called users.txt.
9) Write a shell script to identify the list of users who have executed more than 10 jobs yesterday.
Send this report in an email.
10) Write a shell script to find files larger than a specific size.
Report them to a concerned person through email.
12) Write a shell script to Reset passwords of all users listed in a file user.list
2. Extract data from a file and then ftp that file to another server.
Schedule it daily.
Send email on success or failure
3. Generate a report of Top 5 customers and their revenue for the last year
and email this report to all participants whose email addresses are present
in a file named email_list.txt
5. Export an Oracle database. The name of the DB should be sent as a command line argument. Send email on completion.