This is the fifth episode of a small series.
Description:
Logging the output of your scripts to a file is very important. There are several ways to achieve it; here I will show just one of my favorites.
BAD:
You can log badly, either from inside the script to a log file:
#!/bin/bash -l

TODAY=`date +"%Y%m%d"`
LOGDIR='/path/to/log'
OUTPUT="${LOGDIR}/output_${TODAY}.log"

# create the empty file or overwrite the existing one
> $OUTPUT

echo "Writing to the logfile" | tee -a $OUTPUT
command | tee -a $OUTPUT

echo "oops, this message and command will not be logged"
command

exit $?
or by badly redirecting the standard output of the script:
$ crontab -l
0 * * * * /path/to/script.sh > /path/to/always_the_same_log.out 2>&1
GOOD:
My favorite solution is to automatically open a pipe that receives the script's standard output and redirects it to the logfile. With this solution, I can programmatically define the logfile name inside the script (based on the script name and input parameters, for example) and forget about redirecting the output every time I run a command.
export LOGDIR=/path/to/logfiles
export DATE=`date +"%Y%m%d"`
export DATETIME=`date +"%Y%m%d_%H%M%S"`

ScriptName=`basename $0`
Job=`basename $0 .sh`"_whatever_I_want"
JobClass=`basename $0 .sh`

function Log_Open() {
    if [ $NO_JOB_LOGGING ] ; then
        einfo "Not logging to a logfile because -Z option specified." #(*)
    else
        [[ -d $LOGDIR/$JobClass ]] || mkdir -p $LOGDIR/$JobClass
        Pipe=${LOGDIR}/$JobClass/${Job}_${DATETIME}.pipe
        mkfifo -m 700 $Pipe
        LOGFILE=${LOGDIR}/$JobClass/${Job}_${DATETIME}.log
        exec 3>&1
        tee ${LOGFILE} <$Pipe >&3 &
        teepid=$!
        exec 1>$Pipe
        PIPE_OPENED=1
        enotify Logging to $LOGFILE # (*)
        [ $SUDO_USER ] && enotify "Sudo user: $SUDO_USER" #(*)
    fi
}

function Log_Close() {
    if [ ${PIPE_OPENED} ] ; then
        exec 1<&3
        sleep 0.2
        ps --pid $teepid >/dev/null
        if [ $? -eq 0 ] ; then
            # a wait $teepid would be better but some
            # commands leave file descriptors open
            sleep 1
            kill $teepid
        fi
        rm $Pipe
        unset PIPE_OPENED
    fi
}

OPTIND=1
while getopts ":Z" opt ; do
    case $opt in
        Z) NO_JOB_LOGGING="true" ;;
    esac
done

Log_Open
echo "whatever I execute here will be logged to $LOGFILE"
command
Log_Close
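Note that Log_Open only redirects the standard output: anything a command writes to standard error still goes to the terminal and is not captured in the logfile (this comes up again in the comments below). A minimal sketch of a variant that also captures stderr, assuming file descriptor 5 is free in your script:

# In Log_Open, right after "exec 1>$Pipe":
exec 5>&2      # save the original stderr on a spare descriptor
exec 2>&1      # send stderr to the pipe as well, so errors end up in the logfile

# In Log_Close, right after "exec 1<&3":
exec 2>&5      # restore the original stderr before the pipe is removed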
(*) The functions edebug, einfo, etc. have to be created following the guidelines I gave in this post: Bash tips & tricks [ep. 4]: Use logging levels
The -Z parameter can be used to intentionally avoid logging.
Again, all this stuff (function definitions and variables) should be put in a global include file.
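As a sketch of what a script would then look like (the include file name and path here are placeholders, not something from the original post), the script simply sources the include file and wraps its body between Log_Open and Log_Close:

#!/bin/bash -l
# Hypothetical global include file holding LOGDIR, DATETIME, Job, JobClass,
# the getopts loop, Log_Open/Log_Close and the e* logging functions shown above.
. /path/to/bash_include.sh

Log_Open
echo "whatever I execute here will be logged to $LOGFILE"
some_command    # placeholder for your real commands
Log_Close

Since sourcing a file without arguments leaves the positional parameters unchanged, the getopts loop in the include file still sees the calling script's arguments, so the -Z option should keep working.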
If I execute it:
# [ ludo@testsrv:/scripts [21:10:17] [not set env:"not set"] 0 ] #
# sudo -u oracle ./myscript.sh
2016-03-16 21:10:20 - Logging to /path/to/logfiles/myscript/myscript_whatever_I_want_20160316_211020.log
2016-03-16 21:10:20 - Sudo user: ludo
whatever I execute here will be logged to /path/to/logfiles/myscript/myscript_whatever_I_want_20160316_211020.log

# [ ludo@testsrv:/scripts [21:10:20] [not set env:"not set"] 0 ] #
# sudo -u oracle ./myscript.sh -Z
2016-03-16 21:15:18 - INFO ---- Not logging to a logfile because -Z option specified.
whatever I execute here will be logged to

# [ ludo@testsrv:/scripts [21:10:20] [not set env:"not set"] 0 ] #
# cat /path/to/logfiles/myscript/myscript_whatever_I_want_20160316_211020.log
2016-03-16 21:10:20 - Logging to /path/to/logfiles/myscript/myscript_whatever_I_want_20160316_211020.log
2016-03-16 21:10:20 - Sudo user: ludo
whatever I execute here will be logged to /path/to/logfiles/myscript/myscript_whatever_I_want_20160316_211020.log
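Note that in the -Z run $LOGFILE is never set, which is why the echo above ends with an empty value. A possible one-line tweak (just a sketch, not part of the original code) is to point $LOGFILE at /dev/null in the -Z branch of Log_Open, so later references to it still make sense:

# Hypothetical addition inside the -Z branch of Log_Open, right after the einfo call:
LOGFILE=/dev/null    # keep $LOGFILE usable even when logging is disabled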
Nice article, but I need to do some work because I use ksh, and I don't understand the use of "sleep 0.2", since ksh only accepts integer values.
Can you give me some help on that?
Thanks!
Hi Daniel, sleep in bash does accept fractional values. I'm sorry, but I can't help you with ksh: I used it many years ago and I can't recall the last time I used it. Also, the title of the blog post says it all 😉 It's Bash tips, not KSH tips 🙂
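For what it's worth, a portable fallback could be to attempt the fractional sleep and fall back to a whole second if the local sleep only takes integers (just a sketch, untested on ksh):

# try a fractional sleep; if this sleep implementation rejects it, wait a full second instead
sleep 0.2 2>/dev/null || sleep 1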
Nice article.
I was wondering if it is good practice to put all of this in something like a common.sh and then include it in all scripts?
Anyhow, that's what I tried to do but never managed to make it work :-/
Would it be possible to post an update on your side?
Absolutely! I have an init script that I frequently use for my scripts. I plan to add it to my GitHub very soon.
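Until then, a minimal sketch of how a script could pull in such a common.sh (the file name and path are placeholders):

# Hypothetical: source the shared functions, failing fast if the include file is missing
INCLUDE=/path/to/common.sh
[ -r "$INCLUDE" ] || { echo "Cannot read $INCLUDE" >&2 ; exit 1 ; }
. "$INCLUDE"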
Also, I am not sure if this could be an option, but we could add (2>&1) to each command, although this would be required on all commands…
ls /do/not/work 2>&1
F_check_warn $? "ls working or not"
We get something like the below in the logs…
2016-09-27 18:02:44 – Logging to /tmp/logs/test_output_20160927_180244.log
ls: cannot access /tha: No such file or directory
2016-09-27 18:02:44 – WARNING – ls working or not failed with exit code 2. The script will continue.
Hello –
That option will work as long as the command we are trying to run does not contain double quotes; something like the below is sure to break…
Original command:
mysql -uroot -h0 -e "show database"
After wrapping it in F_try:
F_try "mysql -uroot -h0 -e "show database" "
You are right. You can add "eval" so that the string is evaluated correctly by bash.
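With the eval-based version below, the inner double quotes just need to be escaped when building the string (a sketch reusing your example command):

F_try "mysql -uroot -h0 -e \"show database\""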
The good thing about the F_try approach is that you can enrich it for better debugging.
E.g.
function F_try() {
    Command=$@
    edebug "Executing the following command:"
    edebug "
${Command}
"
    exec 4>&1
    _ERRMSG=$( { eval ${Command} >&4 ; } 2>&1 )
    return $?
}
F_try "ls -ld \"/tmp\""
F_check_warn $? "ls working or not"
F_try "ls -ld \"/not/existing\""
F_check_warn $? "ls working or not"
OUTPUT:
# ./test.sh -G
2016-09-28 15:32:31 – DEBUG — -G specified: Debug mode
2016-09-28 15:32:31 – DEBUG — Executing the following command:
2016-09-28 15:32:31 – DEBUG —
ls -ld "/tmp"
drwxrwxrwt. 25 root root 4096 Sep 28 15:32 /tmp
2016-09-28 15:32:31 – SUCCESS – ls working or not succeeded with exit code 0
2016-09-28 15:32:31 – DEBUG — Executing the following command:
2016-09-28 15:32:31 – DEBUG —
ls -ld "/not/existing"
2016-09-28 15:32:31 – WARNING – STDERR: ls: cannot access /not/existing: No such file or directory
2016-09-28 15:32:31 – WARNING – ls working or not failed with exit code 2. The script will continue.
2016-09-28 15:32:31 – DEBUG — No cleanup function defined.
hi –
This is a great series of tips and tricks for writing shell scripts. I had one question with regard to this.
Using the above and also the "F_check_warn" function, how does one go about capturing the actual error output… for example, if I run the below:
Log_Open
ls /do/not/work
F_check_warn $? "ls working or not"
Log_Close
I would get output like below in my logfile
2016-09-27 13:50:59 – WARNING – ls working or not failed with exit code 2. The script will continue.
but the actual error output (like below) will be missed…so how does one capture that?
ls: cannot access /do/not/work: No such file or directory
Hi max,
The standard error is not filtered, so if you follow my examples the only way to see the error is in interactive mode…
Good point!
actually it’s not so easy…
I guess you can try to put a wrapper around the command that sets a variable with the error. Then you can display it in the F_check_warn.
Let’s try:
function F_check_warn() {
    EXITCODE=$1
    shift
    if [ $EXITCODE -eq 0 ] ; then
        eok $@ succeeded with exit code $EXITCODE
    else
        if [ "${_ERRMSG}" ] ; then
            ewarn "STDERR: ${_ERRMSG}"
        fi
        ewarn $@ failed with exit code $EXITCODE. The script will continue.
    fi
    # return the same code so other checks can follow this one inside the script
    return $EXITCODE
}
function F_try() {
    cmd=$@
    exec 4>&1
    _ERRMSG=$( { $cmd >&4 ; } 2>&1 )
    return $?
}
F_try "ls /not/existing"
F_check_warn $? "ls working or not"
THE OUTPUT IS….
# ./test.sh
2016-09-27 17:40:08 – WARNING – STDERR: ls: cannot access /not/existing: No such file or directory
2016-09-27 17:40:08 – WARNING – ls working or not failed with exit code 2. The script will continue.
Bingo! I think I will introduce this new concept 😉 thanks for the idea