Watch for new processes and list them only once

tl;dr

while true; do ps -ef | grep -iE "processname" | grep -viE "grep \-"; done | awk '!x[$10]++'

Discussion

top shows everything currently running and updates periodically (every 3 seconds by default). But it displays so much information about all the processes that it does not show you the details of a single entry.
ps -ef or ps auxe only shows you the processes at that instant. You can run that in a loop piped to grep, but then it shows you the same entries again and again.

Watch for new processes and list them only once

You can use this snippet to show you new entries for the process you’re looking for.

while true; do ps -ef | grep -iE "processname" | grep -viE "grep \-"; done | awk '!x[$10]++'

This runs ps in an endless loop, so it updates constantly because of the while true. The second grep command just prevents the pipeline from matching its own grep process. Piping the whole thing to the special awk statement that removes duplicate lines makes it show only unique ones. The $10 in the awk is the tenth column, which for me was the process parameter whose uniqueness I wanted to track.
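The awk dedupe-by-column trick is easiest to see on a toy stream. Here column 2 is the key, just as $10 is the key column in the ps output above (input lines are made up):

```shell
# print only the first line seen for each distinct value of column 2
printf 'p1 argA\np2 argA\np3 argB\n' | awk '!x[$2]++'
# output:
#   p1 argA
#   p3 argB
```

The array x counts how many times each key has been seen; the pattern is true (and the line prints) only when the count was still zero.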

Remove comments from file but preserve strings

Remove comments from file but preserve strings containing the comment symbol

sed '/#/!b;s/^/\n/;ta;:a;s/\n$//;t;s/\n\(\("[^"]*"\)\|\('\''[^'\'']*'\''\)\)/\1\n/;ta;s/\n\([^#]\)/\1\n/;ta;s/\n.*//' file

Explanation

/#/!b if the line does not contain a # bail out
s/^/\n/ insert a unique marker (\n)
ta;:a jump to a loop label (resets the substitute true/false flag)
s/\n$//;t if marker at the end of the line, remove and bail out
s/\n\(\("[^"]*"\)\|\('[^']*'\)\)/\1\n/;ta if the string following the marker is a quoted one, bump the marker forward past it and loop.
s/\n\([^#]\)/\1\n/;ta if the character following the marker is not a #, bump the marker forward past it and loop.
s/\n.*// otherwise the marker sits at a real comment: delete from the marker to the end of the line.
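A sample run against a made-up line shows the effect (GNU sed assumed, since the script uses \n in patterns and the \| alternation):

```shell
# the trailing comment is stripped, but the # inside the quoted string survives
echo 'echo "no # comment" # real comment' \
  | sed '/#/!b;s/^/\n/;ta;:a;s/\n$//;t;s/\n\(\("[^"]*"\)\|\('\''[^'\'']*'\''\)\)/\1\n/;ta;s/\n\([^#]\)/\1\n/;ta;s/\n.*//'
# output: echo "no # comment"
```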

References

  1. Shamelessly plagiarized from http://stackoverflow.com/a/13551154/3569534

An improvement upon while true loop

The story

So normally when I want to see output for something, I’ll run a while true loop.

while true; do zabbix_get -s rhevtester.example.com -k 'task.converter_cpu'; done

That doesn’t always stop even when I mash ^C (CTRL+C).

The solution

So I offer my improvement: instead of looping forever, test for a sentinel file. The way to stop the loop becomes obvious, and I will explain it anyway.

while test ! -f /tmp/foo; do zabbix_get -s rhevtester.example.com -k 'task.converter_cpu'; done

To stop the loop, in another shell just:

touch /tmp/foo
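The pattern can be wrapped in a short self-contained sketch. The sentinel path and the background touch are illustrative; in real use you would run the touch from another shell, as above:

```shell
#!/bin/sh
# loop until a sentinel file appears; here a background job creates it
# after 2 seconds so this demo stops on its own
sentinel="/tmp/stoploop.$$"
rm -f "$sentinel"                 # clear any stale sentinel first
( sleep 2; touch "$sentinel" ) &  # stand-in for a touch from another shell
while test ! -f "$sentinel"; do
  date                            # stand-in for the real command
  sleep 1
done
rm -f "$sentinel"
```

Clearing the sentinel before the loop matters: a leftover file from a previous run would stop the new loop instantly.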

Edit terminal title from the command line

tl;dr

export PROMPT_COMMAND='echo -ne "\033]0;NEW TEXT HERE\007"'

Edit terminal title from command line

To modify the window title directly, you just need to use this:

echo -ne "\033]0;NEW TEXT HERE\007"

But in a normal bash environment, PROMPT_COMMAND is executed before each display of the prompt, and many distros set it to something that rewrites the title, so a one-off echo gets overwritten at the next prompt. To make the change stick in your interactive shell, you need that export PROMPT_COMMAND.
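Since PROMPT_COMMAND runs before every prompt, you can also make the title dynamic. A sketch showing user, host, and current directory (all standard bash variables; \033]0;...\007 is the xterm title escape sequence):

```shell
# rewrite the terminal title to user@host:cwd before each prompt
export PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}:${PWD}\007"'
```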

References

Weblinks

  1. https://askubuntu.com/questions/22413/how-to-change-gnome-terminal-title

Quickly bounce nic over ssh without losing ssh session

The story

If you make changes to a network card's settings and need to restart the card for them to take effect, you might find this useful.
SSH is usually resilient enough to keep your session alive if you take the card down and back up quickly. Obviously you want to make sure the change you make will not prevent the card from coming back up correctly. I was making changes to my dns settings for the card, and I wanted them to take effect immediately.

The snippet

echo "ifdown eth0; sleep 2; ifup eth0;" > ~/foo.sh; chmod u+x ~/foo.sh; ~/foo.sh
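A variant of the same idea, sketched with nohup so the down/up sequence keeps running even if the ssh session does drop mid-way (the interface name is whatever yours is):

```shell
# detach the bounce from the current session; output is discarded
nohup sh -c 'ifdown eth0; sleep 2; ifup eth0' >/dev/null 2>&1 &
```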

Remove only certain duplicate lines with awk

Basic solution

http://www.unix.com/shell-programming-and-scripting/153131-remove-duplicate-lines-using-awk.html demonstrates and explains how to use awk to remove duplicate lines in a stream without having to sort them. This statement is really useful.
awk '!x[$0]++'

The fancy solution

But if you need certain duplicated lines preserved, such as the COMMIT statements in the output of iptables-save, you can use this one-liner:
iptables-save | awk '!asdf[$0]++; /COMMIT|Completed on|Generated by/;' | uniq
The second awk rule prints again any line that matches “COMMIT” or “Completed on” or “Generated by,” which appear multiple times in the iptables-save output; the trailing uniq then collapses the adjacent duplicates that this double-printing creates. I was programmatically adding rules, and one host in particular kept adding new ones despite the identical rule already existing. So I had to remove the duplicates and save the output, but keep all the duplicate “COMMIT” statements. I also wanted to keep all the comments as well.
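A toy demonstration with a fabricated iptables-save-like stream (the rules are made up) shows the behavior:

```shell
# the duplicate rule is dropped, but both COMMIT lines survive
printf '%s\n' '*filter' '-A INPUT -j ACCEPT' '-A INPUT -j ACCEPT' 'COMMIT' '*nat' 'COMMIT' \
  | awk '!x[$0]++; /COMMIT|Completed on|Generated by/;' | uniq
# output:
#   *filter
#   -A INPUT -j ACCEPT
#   COMMIT
#   *nat
#   COMMIT
```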

Ssh use password this time

Problem

If you normally use passwordless authentication for your ssh sessions, you know how helpful it is. It saves you from having to type your password in all the time. You might have ssh keys set up (ssh-keygen) or kerberos.

In either case, you found this post because you want to use ssh with a password this time. You need to force ssh to use the password, just this once, without having to make all sorts of complicated requirements.

Solution

Here’s how to force the password, just this once:

function sshp { ssh -o PreferredAuthentications=password,keyboard-interactive -o PubkeyAuthentication=no -o GSSAPIAuthentication=no "$@"; }

The above snippet comes from my latest and greatest bgscripts.bashrc in my bgscripts package.

What the function does is execute ssh with a few extra options that are straightforward: it explicitly lists the preferred authentication methods, while disabling public key auth (which is what most people use) and kerberos auth (GSSAPI).
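You can confirm the options take effect without actually connecting by using ssh -G (OpenSSH 6.8 and later), which prints the effective configuration for a host name; the host here is illustrative:

```shell
# show the effective auth settings for a host; no connection is made
ssh -G \
    -o PreferredAuthentications=password,keyboard-interactive \
    -o PubkeyAuthentication=no \
    -o GSSAPIAuthentication=no \
    host.example.com | grep -iE 'pubkeyauthentication|preferredauthentications'
```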

Shell one-liners for manipulating symlinks

Showing symlink targets

I was trying to clean up a directory filled with files and symlinks. Many of the symlinks’ targets did not exist anymore (old package versions). I wrote a few one-liners to help me examine and clean up the directory.

Here’s the first one-liner.

find . -type l -print0 | sed -e 's/\.\///g;' | xargs -0 -n1 | while read line; do canon=$( readlink -f "${line}" 2>/dev/null ); test -e "${canon}" && echo "${line} is $( /bin/ls -d --color=always "${canon}" )" || echo "INVALID ${line}"; done

In expanded form:
find . -type l -print0 | \
  sed -e 's/\.\///g;' | \
  xargs -0 -n1 | \
  while read line; do
    canon=$( readlink -f "${line}" 2>/dev/null )
    test -e "${canon}" && echo "${line} is $( /bin/ls -d --color=always "${canon}" )" || echo "INVALID ${line}"
  done

The find command lists symlinks in the current directory, with special null characters as the separators. This will allow correct operation with spaces in filenames. You don’t normally encounter this in GNU/Linux, but you never know!

The sed removes the dot slash ./ from the beginning of filenames; I just don’t like seeing that. If I had used find * instead, I wouldn’t need the sed command, but I would miss any files whose names begin with a dot (e.g., .gitignore), and I would still find the entry . which I would have to exclude somehow anyway (well, not here, because I’m searching for symlinks, but I personally never run find * regardless).

xargs -0 -n1 now removes the null character separators, and feeds each individual entry one at a time to the next command.

The while loop: for each entry, resolve the symlink with readlink -f. If the target exists, display it in whatever color ls would show it (without descending into directories), because colors are pretty! Otherwise, print the line with “INVALID” in front of it.

Cleaning up the invalid symlinks

find . -type l -print0 | sed -e 's/\.\///g;' | xargs -0 -n1 | while read line; do canon=$( readlink -f "${line}" 2>/dev/null ); test -e "${canon}" && true || { echo /bin/rm "./${line}"; }; done
Compared to the first one-liner, the affirmative branch is replaced with true, and the INVALID message with an echo /bin/rm command.
To actually effect the change, remove the echo in front of /bin/rm.
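As a side note, GNU find can spot broken symlinks directly with -xtype, which checks the type of the link’s target rather than the link itself; that replaces the whole readlink/test dance when all you want is the invalid entries (GNU find assumed):

```shell
# list broken symlinks under the current directory
find . -type l -xtype l
# after reviewing the list, delete them:
# find . -type l -xtype l -delete
```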