Roll back aborted dnf update, 2019 edition

I had to fix an aborted dnf update. The approach from my previous post on this topic (reference 3 below) did not work this time, so here is the new one-liner.

sudo dnf remove $( sudo dnf list installed --showduplicates $( sudo dnf list installed --showduplicates | sort | uniq -w35 -D | awk '/^[a-zA-Z]/{print $1}' | sort | uniq | grep -vE 'kernel|saned' ) | awk '$0 !~ /Packages/{split($1,b,".");if($2 > a[b[1]]){a[b[1]]=$2"."b[2]}} END {for (key in a) {print key"-"a[key]} }' )

Walkthrough of the commands

sudo dnf list installed --showduplicates | sort | uniq -w35 -D | awk '/^[a-zA-Z]/{print $1}' | sort | uniq | grep -vE 'kernel|saned'

Show all installed packages, then keep only the duplicates (comparing the first 35 characters; the exact width is a guess), and exclude any kernel and saned packages. I don’t know why I had to exclude saned: perhaps I wanted both the x86_64 and i386 packages for saned.
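The duplicate-detection stage can be exercised on its own. Here is a minimal sketch against fake `dnf list installed --showduplicates` output; the package names, versions, and the 36-column padding are all invented for illustration:

```shell
# Fake dnf output: a package name column padded past the 35-character
# comparison window, then the version. bash appears at two versions.
dupes=$(printf '%-36s%s\n' \
    'bash.x86_64' '5.0.2-1.fc30' \
    'bash.x86_64' '5.0.7-1.fc30' \
    'zsh.x86_64'  '5.7.1-1.fc30' |
  sort | uniq -w35 -D | awk '/^[a-zA-Z]/{print $1}' | sort | uniq)
echo "$dupes"
# prints: bash.x86_64   (the only package installed at two versions)
```

The `uniq -w35 -D` only works because dnf pads the name column, so two versions of the same package share their first 35 characters.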

sudo dnf list installed --showduplicates $ABOVEVALUES | awk '$0 !~ /Packages/{split($1,b,".");if($2 > a[b[1]]){a[b[1]]=$2"."b[2]}} END {for (key in a) {print key"-"a[key]} }'

So, list the output from the previous statement, including duplicates, then use awk to find the highest version number of each named package and store it in an associative array. The END block prints the whole array, so the result is the exact name and version (a partial NEVRA) of each package to remove.
This whole process is here to roll back the partially-updated changes.
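The version-picking awk can be tried in isolation. Below is a toy run against two invented lines in `dnf list installed` format. One caveat worth knowing: the `>` here is a plain string comparison, which works for versions like these but could mis-order something like 9 versus 10:

```shell
# Two fake installed entries for the same package; the awk keeps the
# highest version per name and reassembles name-version.arch at the end.
latest=$(printf '%s\n' \
    'bash.x86_64 5.0.2-1.fc30 @fedora' \
    'bash.x86_64 5.0.7-1.fc30 @updates' |
  awk '$0 !~ /Packages/{split($1,b,".");if($2 > a[b[1]]){a[b[1]]=$2"."b[2]}}
       END {for (key in a) {print key"-"a[key]} }')
echo "$latest"
# prints: bash-5.0.7-1.fc30.x86_64
```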

sudo dnf remove $ABOVEVALUES

And now remove those packages. This should reset, so that we can then perform a regular upgrade at some later point.

References

  1. awk array in END https://unix.stackexchange.com/questions/183279/how-to-view-all-the-content-in-an-awk-array/183280#183280
  2. Prior use of associative arrays in awk https://bgstack15.wordpress.com/2017/04/11/remove-only-certain-duplicate-lines-with-awk/
  3. Prior use of dnf --showduplicates, which did not work this time https://bgstack15.wordpress.com/2018/04/03/fedora-remove-duplicate-packages-from-partially-completed-dnf-update/
  4. discussion on NEVRA https://slashterix.wordpress.com/2016/08/06/rpm-version-comparison/

Prepend output with time to generate each line

To show how long it takes before showing each new line of output, use this neat command.

long_command | ts -i "%.s"
$ ./configure --prefix=/tools | ts -i %.s
0.082337 checking for a BSD-compatible install... /tools/bin/install -c
0.002841 checking whether build environment is sane... yes
0.008164 checking for a thread-safe mkdir -p... /tools/bin/mkdir -p
0.000040 checking for gawk... gawk
0.005892 checking whether make sets $(MAKE)... yes

References

Weblinks

  1. Stéphane Chazelas at https://unix.stackexchange.com/questions/391210/prepending-each-line-with-how-long-it-took-to-generate-it/391222#391222

Watch for new processes and list them only once

tl;dr

while true; do ps -ef | grep -iE "processname" | grep -viE "grep \-"; done | awk '!x[$10]++'

Discussion

top shows everything currently running and updates every so often (every 2 seconds by default), but it carries so much information about all the processes that it does not show you the details of a single entry.
ps -ef or ps auxe only shows you the processes at that instant. You can run that in a loop piped to grep, but then it continues to show you the same entries again and again.

Watch for new processes and list them only once

You can use this snippet to show you new entries for the process you’re looking for.

while true; do ps -ef | grep -iE "processname" | grep -viE "grep \-"; done | awk '!x[$10]++'

This is a while loop over all the processes; it updates constantly because of the while true. The second grep just prevents the pipeline from matching its own grep statement. Piping the whole thing to the special awk statement that removes duplicate lines makes it show only unique entries. The $10 in the awk statement is the tenth column, which for me was the process parameter whose uniqueness I wanted to track.
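The dedup-on-one-column trick is easy to see in isolation. Here it is on a few made-up lines, using column 4 as a stand-in for the $10 of my ps output:

```shell
# awk '!x[$4]++' keeps only the first line seen for each distinct
# value in the fourth column, without any sorting.
seen=$(printf '%s\n' 'a b c 1' 'a b c 1' 'a b c 2' | awk '!x[$4]++')
echo "$seen"
# prints:
#   a b c 1
#   a b c 2
```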

An improvement upon while true loop

The story

So normally when I want to see output for something, I’ll run a while true loop.

while true; do zabbix_get -s rhevtester.example.com -k 'task.converter_cpu'; done

That doesn’t always stop even when I mash ^C (CTRL+C).

The solution

So I offer my improvement. The way to stop the loop is obvious from reading it, but I will explain it anyway.

while test ! -f /tmp/foo; do zabbix_get -s rhevtester.example.com -k 'task.converter_cpu'; done

To stop the loop, in another shell just:

touch /tmp/foo
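A self-contained sketch of the same pattern: here a background job plays the role of the "other shell" touching the flag file, and the path comes from mktemp instead of the hard-coded /tmp/foo:

```shell
# Generate a throwaway flag-file path (not created yet).
flag=$(mktemp -u)
# Simulate "touch /tmp/foo" from another shell, one second from now.
( sleep 1; touch "$flag" ) &
iterations=0
# The loop runs until the flag file appears, then falls through cleanly.
while test ! -f "$flag"; do
  iterations=$((iterations+1))
  sleep 0.2
done
wait
rm -f "$flag"
echo "loop stopped after $iterations iterations"
```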

Edit terminal title from the command line

tl;dr

export PROMPT_COMMAND='echo -ne "\033]0;NEW TEXT HERE\007"'

Edit terminal title from command line

To modify the window title directly, you just need to use this:

echo -ne "\033]0;NEW TEXT HERE\007"

But in a normal bash environment, your PROMPT_COMMAND is executed before each display of the prompt, which can overwrite a title set with a plain echo. So to make the change stick in an interactive shell, you need the export PROMPT_COMMAND shown above.
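This is easy to wrap in a small function; the name settitle below is my own invention, and the simple quoting assumes the title contains no quote characters:

```shell
# Hypothetical convenience wrapper: settitle "anything" sets
# PROMPT_COMMAND to emit the title escape sequence before each prompt.
settitle() {
  export PROMPT_COMMAND='echo -ne "\033]0;'"$1"'\007"'
}
settitle "build box"
```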

References

Weblinks

  1. https://askubuntu.com/questions/22413/how-to-change-gnome-terminal-title

Quickly bounce nic over ssh without losing ssh session

The story

If you make changes to a network card's settings and need to restart the card for them to take effect, you might find this useful.
SSH is usually resilient enough to keep your session if you take the card down and back up quickly enough. Obviously you want to make sure the change you make will not prevent the card from coming back up correctly. I was making changes to the DNS settings for the card, and I wanted them to take effect immediately.

The snippet

echo "ifdown eth0; sleep 2; ifup eth0;" > ~/foo.sh; chmod u+x ~/foo.sh; ~/foo.sh

Remove only certain duplicate lines with awk

Basic solution

http://www.unix.com/shell-programming-and-scripting/153131-remove-duplicate-lines-using-awk.html demonstrates and explains how to use awk to remove duplicate lines in a stream without having to sort them. This statement is really useful.
awk '!x[$0]++'

The fancy solution

But if you need certain duplicated lines preserved, such as the COMMIT statements in the output of iptables-save, you can use this one-liner:
iptables-save | awk '!asdf[$0]++; /COMMIT|Completed on|Generated by/;' | uniq
The second awk rule prints again any line that matches “COMMIT”, “Completed on”, or “Generated by”, which appear multiple times in the iptables-save output; since the first occurrence of such a line is printed by both rules, the trailing uniq collapses that immediate double print. I was programmatically adding rules, and one host in particular kept adding new ones despite the identical rule already existing. So I had to remove the duplicates and save the output, but keep all the duplicate “COMMIT” statements. I also wanted to keep all the comments as well.
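Here is the fancy statement run on a toy iptables-save-style input (the rules themselves are invented): one duplicated rule and two COMMIT lines.

```shell
# One rule appears twice; two COMMIT lines must both survive.
cleaned=$(printf '%s\n' \
    '# Generated by iptables-save' \
    '-A INPUT -p tcp --dport 22 -j ACCEPT' \
    '-A INPUT -p tcp --dport 22 -j ACCEPT' \
    'COMMIT' \
    '-A INPUT -j DROP' \
    'COMMIT' |
  awk '!x[$0]++; /COMMIT|Completed on|Generated by/;' | uniq)
echo "$cleaned"
# prints:
#   # Generated by iptables-save
#   -A INPUT -p tcp --dport 22 -j ACCEPT
#   COMMIT
#   -A INPUT -j DROP
#   COMMIT
```

The duplicated ACCEPT rule comes out once, while both COMMIT lines and the comment are preserved.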