Find used files in /etc/sudoers.d/

Sudo, when given the #includedir /etc/sudoers.d/ directive, will read files in that directory.

According to the sudoers man page, it ignores files whose names contain a dot or end with a tilde (~). You can list the files that sudo will actually interpret with this command:

find /etc/sudoers.d/ -type f ! -regex '/etc/sudoers\.d/.*\..*' ! -regex '/etc/sudoers\.d/.*~'
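
Since find's -name test matches against the basename only, a simpler equivalent (a sketch of the same idea) is:

find /etc/sudoers.d/ -type f ! -name '*.*' ! -name '*~'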

Grep odt file

Overview

In the GNU/Linux world, you spend a lot of time on the command line. Searching a text file is a piece of cake:
grep -iE "expression" file1

You might even use a GUI, and inside that GUI you might even use an open-source office suite for all those times plain text isn’t enough. But what about when you want to search one of those odt files, which you vaguely remember are some form of xml?

Easy. You use unoconv or odt2txt (look those up in your package manager) and then grep the resulting text file. Or you can use unoconv’s --stdout option.

unoconv -f txt foo.odt

unoconv -f txt --stdout foo.odt | grep -iE "Joe Schmoe"
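
odt2txt prints the document text to standard output by default, so the same search works with it, too (a sketch, assuming the same foo.odt):

odt2txt foo.odt | grep -iE "Joe Schmoe"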

History

I first started tackling this problem by figuring out how to access the xml inside. I learned an odt file is compressed, but a tar xf didn’t help. It turns out an odt is actually a zip archive, which unzip handles.
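
For example, listing the members of the archive shows the content.xml used in the one-liner below (again assuming a hypothetical foo.odt):

unzip -l foo.odt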

I also had to actually learn the tiniest bit of perl, as regular GNU grep (and, I inferred, sed) doesn’t do non-greedy wildcard matching.
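
A tiny illustration of the difference, on a made-up snippet rather than real odt content: perl’s .*? quantifier matches as little as possible, where .* matches as much as possible.

echo '<p>one</p><p>two</p>' | perl -pe 's/<p>.*<\/p>/X/'    # greedy: prints X
echo '<p>one</p><p>two</p>' | perl -pe 's/<p>.*?<\/p>/X/'   # non-greedy: prints X<p>two</p>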

So I got this super-complicated one-liner going before I decided to try a different approach and discovered the unoconv and odt2txt applications.

time unzip -p foo.odt content.xml | sed -e 's/<\([^>]*\)>\(.*\)<\/\1>/\2/;s/<text:h[^>]*>\(.*\)<\/text:h>/\1/' -e 's/<style:\(font-face\|text-properties\)[^>]*\/>//g' | sed -e "s/&apos;/'/g;s/&quot;/\"/g;s/<text:.*break\/>//g"

References

Weblinks

  1. Unzipping an odt file https://ubuntuforums.org/showthread.php?t=899179&s=3aa7c303c4a5655e039600c4082d7a2c&p=5653494#post5653494
  2. Perl non-greedy wildcard matching http://stackoverflow.com/a/1103177/3569534

Shell one-liner to show total size of filetype in directory

find /home/bgstack15 -mtime +2 -name "*.csv" | xargs stat -c "%s" | awk '{Total+=$1} END{print Total/1024/1024}'

This one-liner shows the cumulative size of all the .csv files in /home/bgstack15 (and subdirectories).

The explanation

find /home/bgstack15 -mtime +2 -name "*.csv"
Lists all csv files in my home directory (and subdirectories) modified more than two days ago. If the modification time doesn’t matter, just remove the -mtime +2.
Pipe that output to
xargs stat -c "%s"
Some people out there use ls for this, but other people say don’t do that, a la http://mywiki.wooledge.org/ParsingLs.
Anyway, xargs takes the lines from the pipe and appends them as arguments to the command it is given, which in this case is stat. Stat here is printing just the file size in bytes for each file. It doesn’t even include the name of the file in this case. That’s all adjustable, of course. There is one caveat about xargs, covered after this explanation.
Pipe that output to
awk '{Total+=$1} END{print Total/1024/1024}'
This command takes the first whitespace-delimited field from each line (here, the size in bytes) and adds it to a variable, Total. After the last line, it prints Total divided by 1024 twice, so the output is in megabytes (1024*1024 bytes).
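
About that caveat: xargs splits its input on whitespace, so this pipeline breaks on filenames that contain spaces or newlines. A more robust sketch, assuming GNU find, uses -printf to print each file’s size directly and drops the xargs and stat steps entirely:

find /home/bgstack15 -mtime +2 -name "*.csv" -printf '%s\n' | awk '{Total+=$1} END{print Total/1024/1024}'

Alternatively, find … -print0 | xargs -0 stat -c "%s" keeps the stat step but passes null-delimited names, which is just as safe.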