Shell trick: use template file

If you want to build files from templates in a POSIX shell, you can use a nifty trick shared on StackOverflow.

$ eval "cat <<EOF >outputfile
$( cat )
EOF
" 2>/dev/null

And the template can contain shell variables or even command substitutions!

$ cat
[domain/$( hostname --domain )]

debug_level = 1
id_provider = ipa
ipa_server = _srv_, $( hostname --domain )
ipa_domain = $( hostname --domain )
ipa_hostname = $( hostname --fqdn )

This is a neat little trick, but it should not become a regular substitute for proper templating mechanisms like Jinja2 in Ansible.
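Here is a complete, self-contained run of the trick; the file names template.txt and outputfile and the TPL_NAME variable are illustrative. The quoted heredoc delimiter ('RAW') keeps the template inert while it is written, and the eval step expands both the variable and the command substitution:

```shell
# write a template whose expansions are deferred (quoted delimiter)
TPL_NAME="alice"
cat > template.txt <<'RAW'
name=${TPL_NAME}
host=$( hostname )
RAW

# expand the template: variables and command substitutions evaluate now
eval "cat <<EOF >outputfile
$( cat template.txt )
EOF
" 2>/dev/null

cat outputfile
```

The first line of outputfile will read name=alice, and the second line will carry the machine's hostname.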

Verify ansible vault password

# prepare vault password file; PWFILE must already be defined
printf 'Vault password: ' ; read -se VAULT_PASS ; printf '\n' ;
printf '%s\n' "${VAULT_PASS}" > "${PWFILE}"
# fail out if password is incorrect
! ansible-vault view --vault-password-file "${PWFILE}" "${VAULTFILE}" 1>/dev/null && exit 1

You can use shell to read in the password and save it to a file. Just remember to clean it up at the end!

I like to do this right before a shell loop that calls ansible with vaulted values multiple times, so I’m not prompted multiple times to enter the password.
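A minimal sketch of the cleanup portion, assuming a throwaway PWFILE created with mktemp; the EXIT trap removes the password file no matter how the script ends:

```shell
# create the password file with tight permissions, remove it on exit
PWFILE="$( mktemp )"
chmod 600 "${PWFILE}"
trap 'rm -f "${PWFILE}"' EXIT
printf '%s\n' "${VAULT_PASS}" > "${PWFILE}"
# ... ansible calls using --vault-password-file "${PWFILE}" go here ...
```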

Load your bash profile even when switching user

If you use a shared application account, for example ansible, and want your own profile to be loaded even after switching to that user, here is a solution for you. Add this to the destination user's ~/.bashrc.

test -n "${SSH_CLIENT}" && __bp=1
test "$( stat -c '%U' "$( tty 2>/dev/null )" 2>/dev/null )" = "bgstack15" && __bp=1
test "${__bp}" = "1" && test -f /usr/bin/bp && . /usr/bin/bp --noclear

I am using a world-readable copy of my bgstack15 profile, aka bash profile.

Convert Local to AD Users

Project CLADU

CLADU stands for Convert Local to AD User.

When you want to remove a local account and have the AD user with the exact same name take its place, use cladu.

usage: [-duV] [-gr] [--ng] [--nr] user1 [ user2 user3 ... ]
version 2018-03-09a
-d debug Show debugging info, including parsed variables.
-u usage Show this usage block.
-V version Show script version number.
-g groups Add the AD user to the local groups of the local user. Default is to skip this action.
--ng Do not perform the -g action
-r report Generate report in each user homedir.
--nr Do not perform the -r action
Environment variables:
Parameters override environment variables
CLADU_USER_REPORT any non-null value will perform the -r action.
CLADU_USER_REPORT_FILENAME=converted.txt File to save report to in each homedir
CLADU_GROUPS any non-null value will perform the -g action.
Return values:
0 Normal
1 Help or version info displayed
2 Count or type of flaglessvals is incorrect
3 Incorrect OS type
4 Unable to find dependency
5 Not run as root or sudo

Go check out the entire source to look at the flow of the script.

Delayed cleanup of temp files for shell script

If you want the temp files from your script to be left around for a few minutes, you can use a couple of shell tricks to remove the temp directory later.

If the environment variable FETCH_NO_CLEAN has any content at all, the script will not clean up at all.
The script sends a heredoc to a separate, nohupped shell instance. The commands are to wait so many seconds (default of 300 seconds), and then remove the directory.

To ensure the cleanup function is called, set a trap on the most common exit codes (0 through 20) to call the function, unset the traps, and then exit.
Then you can define a tempdir, and use it to make some temp files.

Using the TMPDIR environment variable is safe with FreeBSD's mktemp; just be aware that on FreeBSD, bash lives at /usr/local/bin/bash.
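The whole arrangement can be sketched as below; FETCH_TMPDIR and the 2-second delay are illustrative stand-ins (the script's default is 300 seconds). A detached, nohupped shell receives the heredoc, sleeps, and then removes the directory after the main script has long since exited:

```shell
# make a temp dir for scratch files
FETCH_TMPDIR="$( mktemp -d )"
touch "${FETCH_TMPDIR}/scratchfile"

# schedule delayed removal in a detached shell, unless FETCH_NO_CLEAN is set
if test -z "${FETCH_NO_CLEAN}" ; then
   nohup sh >/dev/null 2>&1 <<EOF &
sleep 2
rm -rf "${FETCH_TMPDIR}"
EOF
fi
```

Note that ${FETCH_TMPDIR} is expanded when the heredoc is read, so the background shell gets the literal path and needs no environment of its own.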

Shell functions for testing other functions

When I am updating or adding to my scripts package, I like to find out the fastest way to execute a task.

This involves using the time command and a few wrapper functions.

Here’s an example. I was updating my lsd (list directories) function, and I wanted to see how fast it is compared to a variant I was considering.

func() { pushd "${1}" 1>/dev/null 2>&1 ; find $( find . -maxdepth 1 -type d -printf '%p\n' | sed -r -e 's/^\.\/?//;' | sed -r -e '/^\s*$/d' ) -maxdepth 0 -exec ls -ldF --color=always {} + ; popd 1>/dev/null 2>&1 ;  } ;

func_wrapper() { __x=0 ; while test ${__x} -lt ${1} ; do __x=$(( __x + 1 )) ; func "${2}" "${3}" ; done ; }

So now you can run the wrapper and tell it how many times to loop:

time func_wrapper 1000 . 1>/dev/null

real    0m6.081s
user    0m2.103s
sys     0m5.157s
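A slightly more general wrapper (run_n is my name here, not from the original snippet) takes the iteration count first and then the full command, so any function can be timed without editing the wrapper:

```shell
# run_n COUNT CMD [ARGS...] -- run CMD the given number of times
run_n() {
   _n="${1}" ; shift
   _i=0
   while test "${_i}" -lt "${_n}" ; do
      _i=$(( _i + 1 ))
      "$@"
   done
}

# example: time run_n 1000 func .
```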

Convert input sets of numbers to numerical sequences


I wrote a function for shell (basically bash) that makes it possible to convert a series of numbers such as “1,5-8,15” into a completely enumerated sequence, so 1 5 6 7 8 15.

I needed this to facilitate passing parameters to another function, but with the ability to give arbitrarily-grouped sets of numbers.

You can see my gist on github.

convert_to_seq() {
  printf '%s' "${1}" | xargs -n1 -d',' | tr '-' ' ' | awk 'NF == 2 { system("/bin/seq "$1" "$2); } NF != 2 { print $1; }' | xargs
}

convert_to_seq "$1"

Try it out for yourself! If you are looking for such a function, here you go.


Input: 1,5,8-10
Output: 1 5 8 9 10

Input: 500-510,37
Output: 500 501 502 503 504 505 506 507 508 509 510 37

New user in freeipa has plain bash shell instead of reading .bashrc

So you have a new user in freeipa who can successfully log in to a freeipa client, and you know for certain you executed ipa-client-install with the --mkhomedir option. But when the new user opens a terminal, it shows the boring bash prompt 'bash-4.1$' or whatever version.

You checked the /etc/skel, and it has a valid .bashrc file, and when you dot source your own ~/.bashrc, it then loads the prompt you expect.

Here’s your issue: run getent passwd username and look at the user's login shell. It will be the default /bin/sh. Just change it in ipa to /bin/bash (for example, ipa user-mod username --shell=/bin/bash). An sss_cache -E command was not enough; you have to log out and back in for the change to take effect, probably because the terminal emulator is launched by a process that was started before the account was changed.

Linux use tee with color

Output in color on the console

The console normally can display color. The most obvious example is the standard ls command: by default on most systems, ls runs with the --color=auto flag, which displays filenames of certain types in different colors.

Some applications display colorized output like systemctl status nfsd.service.

Using tee

Tee is a nifty cli utility that duplicates its input to both standard out and a file. You can use it like so:

ls -l --color=always | tee ~/ls.out

Some tools will disable colorized output if it is going to a pipe, but ls can be forced to provide color with the --color=always flag. For applications that don’t have such an option, you can usually prefix the command with unbuffer (from the expect package).

unbuffer ansible-playbook playbook.yml | tee ansible.log

When you examine the file, however, you will observe that the color escape sequences were saved too. That’s perfectly fine if you cat the file, or use less -R, but normally you want a text file to contain just the text. Sysadmins are quite used to dealing with logs that are all the standard color of the console.

The solution: ctee!

Using some fantastic resources on the Internet which I list at the bottom of this post, I assembled a tool that will tee the input, and send color to the stdout (usually the console) and send regular text to the file. I call my creation ctee.

Code walkthrough

So this script was generated using newscript which copies the template from the same bgscripts package. The template sources framework, which is my big flagship library of functions for shell scripts.

Lines 39-54 are the parseFlag function, which parses the command line passed parameters. I implemented this before I ever learned about Python’s argparse library.
The only flag worth noting is --append. All it does is set a variable to 1.

The meat of this script starts at line 85 and goes to the end, which is shown below.

# this whole operation calculates where the stdout of this whole pipe goes
_fd1="$( readlink -f /proc/${_pid}/fd/1 )"
while test ${ttyresolved} -lt 10;
do
   ttyresolved=$(( ttyresolved + 1 ))
   echo "before ${ttyresolved}, _pid=${_pid}, _fd1=${_fd1}" > ${devtty}
   case "${_fd1}" in
      *pipe:* )
         newpid=$( find /proc -type l -name '0' 2>/dev/null | xargs ls -l 2>/dev/null | grep -F "$( basename ${_fd1} )" | grep -viE "\/${_pid}\/"; )
         newpid=$( echo "${newpid}" | head -n1 | grep -oiE "\/proc\/[0-9]*\/" | grep -o "[0-9]*" )
         _pid="${newpid}"
         _fd1="$( readlink -f /proc/${_pid}/fd/1 )"
         ;;
      *dev*|*/* )
         thisttyout="${_fd1}" ; ttyresolved=10
         ;;
   esac
done

echo "thisttyout ${thisttyout}" > ${devtty}

case "${append}" in
   1)
      tee ${thisttyout} | sed -u -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g" >> "${outfile1}"
      ;;
   *)
      tee ${thisttyout} | sed -u -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g" > "${outfile1}"
      ;;
esac

The idea behind ctee is that it uses tee, but inverts the outputs. Ctee uses the console tty as the file to output to, and then standard out continues on to a sed command that strips out the console color escape sequences which is then redirected to the file specified to ctee as a parameter.

So one of the big tricks is to derive which console you are executing this command from. The while loop on lines 90-106 of the script uses a /proc filesystem trick to find the process on the other end of a pipe. The loop travels from this process’s standard out to the standard in of another process, then takes that process’s standard out, and tracks that down until the name includes “dev” or “/”, indicating either a tty or a file (technically the dev is redundant, but it makes the line easier to understand).

The second main trick of ctee is the sed command that strips out the console color escape sequences. I found a nice regex for removing them, which you can see on lines 113 and 115 of the script. One line is for appending to a file, and the other is for overwriting.
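The stripping sed can be exercised on its own. Here a red-colored string plus a reset sequence comes out as plain text; the regex is the same one ctee uses:

```shell
# build a string with ANSI color sequences, then strip them with sed
colored="$( printf '\033[0;31mred\033[0m plain' )"
stripped="$( printf '%s' "${colored}" | sed -r 's/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' )"
printf '%s\n' "${stripped}"    # red plain
```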


You can use it like so:

unbuffer ansible-playbook playbook.yml | ctee /var/log/ansible/playbook.$( date "+%Y-%m-%d-%H%M%S" ).log

This gets your console the colorized output that helps with easy interpretation, along with a normal log file for later analysis.

Moving the application installation directory for bgscripts


My bgscripts package in version 1.1-27 and before was installed in /usr/bgscripts. The more I thought about it, and the more I read the spec for FHS 3.0 (filesystem hierarchy standard) (PDF), the more I realized it should actually be in /usr/share/bgscripts.

Moving the files in the build directory was easy.

Wrapper script

But despite my symlinks in /usr/bin, which were easy to update, I have many, many hard-coded pathname calls to my scripts everywhere. So to mitigate the problem, I wrote a fancy wrapper script for the regular scripts, including bup, send, bgscripts.bashrc, rdp, and treesize.

I needed to send myself a message with the various information about what script called the old location. I picked send because that’s what it’s for– sending emails. I slapped in a bunch of diagnostic information and wrote a call to the new location.

# Wrapper at legacy location for bgscripts. Sends alert and then continues the process.
while read flocation; do if test -x ${flocation} && test "$( ${flocation} --fcheck )" -ge 20161212; then frameworkscript="${flocation}"; break; fi; done <<EOFLOCATIONS
./framework.sh
/usr/local/bin/framework.sh
/usr/share/bgscripts/framework.sh
EOFLOCATIONS
test -z "${frameworkscript}" && echo "$0: framework not found. Aborted." 1>&2 && exit 4
. ${frameworkscript} || echo "$0: framework did not run properly. Aborted." 1>&2
pwd="$( pwd )"
psout="$( ps -ef )"
parentprocess="$( ps -ef | grep -iE -- "${thisppid}" | grep -viE -- "grep.*-iE.*--" )"
/usr/share/bgscripts/send.sh -hs "bgscripts alert ${server}: ${scriptfile}" <<EOF
----- Description -----
This alert means that script "${scriptfile}" was called, and you need to fix it because /usr/bgscripts/* has been deprecated and will be removed in a future version.
----- Debug info-----
pwd: ${pwd}
Scriptdir: ${scriptdir}
Scriptfile: ${scriptfile}
All parameters: $@
Server: ${server}
Now: ${now}
Thisos: ${thisos}
Thisflavor: ${thisflavor}
Thisppid: ${thisppid}
Parent process: ${parentprocess}
----- Process list -----
${psout}
EOF
/usr/share/bgscripts/${scriptfile} "${@}"

So if something of mine calls /usr/bgscripts/bgscripts.bashrc, this wrapper script will throw an email alert, and then redirect the script call so workflows are not interrupted!

The very first production use of the wrapper script was when I logged onto my test Ubuntu box to conduct a deep-dive search (more on that in the next heading) of scripts that call framework. I got an email right away, saying I had dot-sourced the bashrc script. Looking at the diagnostic info, I was reminded that I tended to hardcode the call to /usr/bgscripts/bgscripts.bashrc in my ~/.bashrc.

I have now updated that to point to /usr/bin/bp, so in the future it will point to wherever I want it to. However, I seriously doubt the location will change now. My package is now FHS 3.0 compliant, and I don’t want to break that.

Deep dive search

So, I wanted to conduct a deep-dive search of all shell scripts (defined by me as everything ending in .sh, because I almost always use the extension) on all systems that refer to framework.sh, which is going to be the big hangup. So I whipped up this oneliner.
sudo updatedb; locate .sh | grep -iE "\.sh$" | grep -viE "\/usr\/src|\/usr\/share|\/usr\/lib|\/home\/.*\/rpmbuild/|\/home\/work\/.*clean" | xargs grep -iE "framework\.sh"

Now, a reading of my code shows that many of my scripts do a lookup for framework.sh in various locations, including ./, ~/, /usr/local/bin/, and so on. You can observe this pattern in my wrapper script in this post.

So the search will catch those, but I will recognize those blocks of code because they are just the filename on a separate line (in a here-doc), and as long as I see the /usr/libexec/bgscripts/ final resting place of framework, I’ll be satisfied.



  1. Filesystem hierarchy standard 3.0
  2. FHS 3.0 in pdf
  3. Here document