Logrotate, audit.log, selinux, cron, and ansible

The story

The disk space for /var/log/audit/audit.log tends to fill up. The audit daemon has the ability to rotate its own logs. See the man page for auditd.conf.

max_log_file             =  100
max_log_file_action      =  rotate

That’s swell and all, until you realize that auditd cannot compress its rotated logs. On a small /var/log/audit mount point, you’ll fill it up with uncompressed logs.

/dev/mapper/os-var_log_audit          2490M 2136M      355M      86% /var/log/audit

So on a RHEL7 system with logrotate, you can adjust logrotate to handle the audit.log file. Now, logrotate is a finicky application. It has caused me many hours of grief in the past.
You would want to set auditd.conf a certain way:

max_log_file             =  0
max_log_file_action      =  ignore

And set /etc/logrotate.d/audit:

/var/log/audit/*.log {
        rotate 30
        minsize 100k
        maxsize 200M
        compress
        postrotate
                touch /var/log/audit/audit.log ||:
                chmod 0600 /var/log/audit/audit.log ||:
                service auditd restart
        endscript
}

And ensure you’ve got a /etc/cron.weekly/logrotate:


#!/bin/sh

/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

After a few days, I learned that my logs were filling up so fast that the weekly rotation wasn't good enough, so I had to place the job in my cron.hourly.
And then I learned that it wasn't running every hour. I spent a few days investigating, and eventually learned that some systems use a specific status file for logrotate. I remembered from past experience that logrotate needs one execution with the -f flag to force the rotation the first time and add a new file to the status file. So if a new file was never force-rotated, it never gets added to the status file.
My manual logrotate -f command was indeed adding my audit.log log file to the status file, but to the wrong one!
Some of my systems use -s /var/lib/logrotate/logrotate.status but the default is /var/lib/logrotate.status.
So I had to reflect that in my ansible playbook. Actually, I had to write some logic to find the one used by the cronjob and then use that status file.

So I got the correct logrotate status file set up in the ansible playbook. I then spent the next week figuring out why logrotate simply couldn't rotate the file when called from cron. I piped the utility to tee, and also included the -v flag on logrotate, and saw a permission denied error.
With a permission problem on a command that worked fine interactively, the only suspect left was selinux. Fittingly, I had to use the audit.log file itself to determine that audit.log is not readable by logrotate when called by cron.
I finally captured all the actions performed by logrotate by setting its selinux process context to permissive:

semanage permissive -a logrotate_t
I let it run, then collected all the actions it performed and saw what had happened:
{ grep logrotate /var/log/audit/audit.log ; zgrep logrotate /var/log/audit/audit.log.1.gz ; } | audit2why

So I used audit2allow to convert it to an selinux policy.

{ grep logrotate /var/log/audit/audit.log ; zgrep logrotate /var/log/audit/audit.log.1.gz ; } | audit2allow -M logrotate-audit

And then after some searching online, I learned how I can keep the text definition file, and compile the policy from it when I need to:

grep logrotate /var/log/audit/audit.log | audit2allow -m logrotate-audit # saves to logrotate-audit.te
checkmodule -M -m -o logrotate-audit.mod logrotate-audit.te # intermediate step
semodule_package -o logrotate-audit.pp -m logrotate-audit.mod # compiled policy
semodule -i logrotate-audit.pp

The text definition of logrotate-audit policy:

#semodule -i logrotate-audit.pp

module logrotate-audit 1.0;

require {
        type auditd_etc_t;
        type logrotate_t;
        type auditd_log_t;
        class file { create getattr ioctl open read rename setattr unlink write };
        class dir { add_name read remove_name write };
}

#============= logrotate_t ==============
allow logrotate_t auditd_etc_t:file getattr;
allow logrotate_t auditd_log_t:dir { read write add_name remove_name };
allow logrotate_t auditd_log_t:file { create ioctl open read rename getattr setattr unlink write };

Now, I wrote a master ansible playbook that performs this whole operation, from loading the .te file and compiling it and installing it, to setting logrotate to watch the audit file, and telling auditd to ignore rotating it.
Note: It is outside the scope of this task to ensure that the selinux tools are in place on each server. My environment already ensures package libselinux-python is present on each system, which should bring in all the dependencies of this ansible playbook.
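One detail worth calling out before the playbook itself: the version-comparison tasks pipe grep output through awk to scale the decimal module version into an integer, so that "1.0" and "1.1" can be compared numerically as 1000 and 1100. A quick illustration of that trick (the module line here is a made-up sample):

```shell
# Scale a decimal module version to an integer for easy comparison:
# awk converts the leading numeric part of "1.0;" to 1.0, then multiplies.
printf 'module logrotate-audit 1.0;\n' | awk '{print $3*1000;}'
```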

# File: /etc/ansible/books/fix_var-log-audit.yml
# Author: bgstack15
# Startdate: 2018-01-24
# Title: Playbook that Fixes the /var/log/audit Space Issue
# Purpose: Logical Disk Free Space is too low
# History:
# Usage:
#    ansible-playbook -i /etc/ansible/inv/hosts /etc/ansible/configuration/fix_var-log-audit.yml -l hostwithproblem201
#    Use the -l host1,host2 parameter.
# Reference:
#    roles/general_conf/tasks/04_selinux.yml
#    roles/general_conf/tasks/05_auditd.yml
# Improve:
# Documentation:
#    The intention with auditd is to minimize the disk usage of the logs

- hosts: all
  remote_user: ansible_user
  become: yes

  vars:
    auditd_conf: /etc/audit/auditd.conf
    auditd_log_cleanup_regex: '.*audit\.log\.[0-9]+'
    auditd_log_dir: /var/log/audit
    auditd_logrotate_conf: /etc/logrotate.d/audit

  tasks:


# To make it possible to just drop in files to the files directory and have this module read them automatically, use these two.
#  - name: learn full list of semodules available to install, modular list version
#    shell: warn=no find /etc/ansible/roles/general_conf/files/selinux/ -regex '.*.te' -printf '%f\n' | sed -r -e 's/\.te$//;'
#    register: semodules_list
#    changed_when: false
#    delegate_to: localhost
#    ignore_errors: yes
#  - name: learn semodule versions to install, modular list version
#	shell: warn=no grep -E '^\s*module\s+{{ item }}\s+[0-9\.]+;\s*$' /etc/ansible/roles/general_conf/files/selinux/{{ item }}.te | awk '{print $3*1000;}'
#    register: selinux_pol_versions_target
#    changed_when: false
#    delegate_to: localhost
#    with_items:
#    - "{{ semodules_list.stdout_lines }}"

  - name: learn semodule versions to install, static version
    shell: warn=no grep -E '^\s*module\s+{{ item }}\s+[0-9\.]+;\s*$' /etc/ansible/templates/{{ item }}.te | awk '{print $3*1000;}'
    register: selinux_pol_versions_target
    changed_when: false
    delegate_to: localhost
    with_items:
    - logrotate-audit

  #- debug:
  #    msg: "{{ item.item }} should be {{ item.stdout }}"
  #  with_items:
  #  - "{{ selinux_pol_versions_target.results }}"

  - name: learn current semodule versions
    shell: warn=no semodule --list | awk '$1=="{{ item.item }}" {print $2*1000} END {print "0";}' | head -n1
    register: selinux_pol_versions_current
    changed_when: false
    with_items:
    - "{{ selinux_pol_versions_target.results }}"

  - debug:
      msg: "{{ item.item.item }} is currently {{ item.stdout }} and should be {{ item.item.stdout }}"
    with_items:
    - "{{ selinux_pol_versions_current.results }}"

  #- pause:
  #    prompt: "Does the above look good?........................"

  - name: download selinux modules that need to be installed
    copy:
      src: "/etc/ansible/templates/{{ item.item.item }}.te"
      dest: "/tmp/{{ item.item.item }}.te"
      mode: 0644
      owner: root
      group: root
      backup: no
      force: yes
    changed_when: false
    when:
    - "item.item.stdout > item.stdout"
    with_items:
    - "{{ selinux_pol_versions_current.results }}"

  - name: install selinux modules
    shell: chdir=/tmp warn=no /usr/bin/checkmodule -M -m -o "/tmp/{{ item.item.item }}.mod" "/tmp/{{ item.item.item }}.te" && /usr/bin/semodule_package -m "/tmp/{{ item.item.item }}.mod" -o "/tmp/{{ item.item.item }}.pp" && /usr/sbin/semodule -v -i "/tmp/{{ item.item.item }}.pp"
    when:
    - "item.item.stdout > item.stdout"
    with_items:
    - "{{ selinux_pol_versions_current.results }}"

  - name: clean any temporary selinux modules files
    file:
      path: "/tmp/{{ item[0].item.item }}.{{ item[1] }}"
      state: absent
    changed_when: false
    when:
    - "item[0].item.stdout > item[0].stdout"
    with_nested:
    - "{{ selinux_pol_versions_current.results }}"
    - [ 'te', 'pp', 'mod' ]


  # modify auditd.conf which notifies the handler
  - name: auditd does not keep logs
    lineinfile:
      path: "{{ auditd_conf }}"
      regexp: "{{ item.r }}"
      backrefs: yes
      line: "{{ item.l }}"
      create: no
      state: present
      backup: yes
    #notify: auditd handler
    with_items:
    - { r: '^max_log_file_action.*$', l: 'max_log_file_action      =  ignore' }
    - { r: '^max_log_file\s.*$', l: 'max_log_file             =  0' }

  # tarball and cleanup any existing audit.log.1 files
  - name: list all old auditd logs which need to be compressed and cleaned up
    shell: warn=no find /var/log/audit -regex {{ auditd_log_cleanup_regex }}
    register: cleanup_list
    ignore_errors: yes
    changed_when: cleanup_list.stdout_lines | length > 0

  - name: get archive filename
    shell: warn=no echo "audit.log.{{ ansible_date_time.epoch }}.tgz"
    register: audit_log_tgz
    changed_when: audit_log_tgz.stdout_lines | length != 1

  - name: touch archive file
    file:
      path: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
      state: touch
      owner: root
      group: root
      mode: 0600
    when: cleanup_list.stdout_lines | length > 0

  - name: archive and cleanup existing audit.log.1 files
    archive:
      dest: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
      path: "{{ cleanup_list.stdout_lines }}"
      format: gz
      owner: root
      group: root
      remove: yes
    ignore_errors: yes
    when: cleanup_list.stdout_lines | length > 0

  - name: check for existence of new tarball
    stat:
      path: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
    ignore_errors: yes
    register: audit_log_tarball

  - name: place audit log tarball in auditd_log_dir
    shell: warn=no /bin/mv "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}" "{{ auditd_log_dir }}/"
    ignore_errors: yes
    when:
    - audit_log_tarball.stat.exists is defined
    - audit_log_tarball.stat.exists

  - name: get current size of audit log
    stat:
      path: "{{ auditd_log_dir }}/audit.log"
    ignore_errors: yes
    register: audit_log_stat

  - name: apply logrotate script for audit
    copy:
      src: /etc/ansible/templates/etc-logrotate.d-audit
      dest: "{{ auditd_logrotate_conf }}"
      owner: root
      group: root
      mode: 0644
      backup: yes

  - name: learn the logrotate.status file to use, if any
    shell: warn=no grep -rE -- 'bin\/logrotate\>.*(-s|--state)(\s|=)[\/[A-Za-z0-9\.]+\>' /etc/cron.* 2>/dev/null | grep -oE '(-s|--state)(\s|=)[\/[A-Za-z0-9\.]+\>' | sort | uniq | head -n1
    ignore_errors: yes
    changed_when: false
    register: this_logrotate_flag

  - name: show which logrotate.status file to use, if any
    debug:
      msg: "The status file that will be used is {{ this_logrotate_flag.stdout }}"

  - name: run logrotate
    shell: warn=no /usr/sbin/logrotate {{ this_logrotate_flag.stdout }} -f "{{ auditd_logrotate_conf }}"
    register: run_logrotate
    when: ( cleanup_list.stdout_lines | length > 0 ) or ( audit_log_stat.stat.exists and audit_log_stat.stat.size > 190000000 )
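One piece of the playbook worth sanity-checking in isolation is the auditd_log_cleanup_regex pattern used by the cleanup task. A throwaway demonstration (scratch directory, made-up file names) shows that only the numbered rotations match, and not audit.log itself or an already-created tarball:

```shell
# Demo of the cleanup regex '.*audit\.log\.[0-9]+' against scratch files.
# GNU find matches the regex against the whole path, using emacs regex syntax.
d=$(mktemp -d)
touch "$d/audit.log" "$d/audit.log.1" "$d/audit.log.27" "$d/audit.log.1518000000.tgz"
matches=$(find "$d" -regex '.*audit\.log\.[0-9]+' | sort)
echo "$matches"    # only audit.log.1 and audit.log.27 match
rm -rf "$d"
```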



So, logrotate can be configured to rotate the audit log. It just takes a few minutes to configure correctly, after about 2 weeks of research and testing.



  1. http://melikedev.com/2013/08/19/linux-selinux-semodule-compile-pp-module-from-te-file/
  2. https://linux.die.net/man/8/auditd.conf

Personal effort

Hours and hours of my original research
Years of administering RHEL servers with logrotate


Compiling Pale Moon web browser on Fedora 27



I like the traditional model of Firefox. This is easily represented in the current project Pale Moon. The upstream project does not currently provide instructions for compiling on Fedora 27. All the options they have documented are on their developer site: https://developer.palemoon.org/Developer_Guide:Build_Instructions/Pale_Moon/Linux.

You might think it is no big deal to download and compile an application. But when the application is still targeted for gcc4.9, it’s a little tricky to compile on the current Fedora which uses gcc7.

After consulting the Internet, I have assembled my instructions for compiling Palemoon on Fedora.

Compile palemoon on Fedora 27

# Install the whole set of packages listed on (primary ref 1) for CentOS7
sudo dnf -y install gtk2-devel dbus-glib-devel autoconf213 yasm mesa-libGL-devel alsa-lib-devel libXt-devel zlib-devel openssl-devel sqlite-devel bzip2-devel pulseaudio-libs-devel
sudo dnf -y groupinstall 'Development Tools'

# Install additional dependencies I found
sudo dnf -y install nspr-devel

# Use autoconf 2.13
autoconfver="$( autoconf --version 2>/dev/null | awk 'NR==1 {print $NF*100;} END {print "0";}' | head -n1 )"
test ${autoconfver} -ne 213 && test ${autoconfver} -gt 0 && sudo mv /usr/bin/autoconf /usr/bin/autoconf-${autoconfver} 2>/dev/null ; sudo ln -sf autoconf-2.13 /usr/bin/autoconf

# Use the copr davidva/gcc49
sudo dnf -y copr enable davidva/gcc49
# fix the $releasever variable in the repo
sudo sed -i -r -e 's/-\$releasever-/-23-/;' /etc/yum.repos.d/_copr_davidva-gcc49.repo
sudo dnf -y install gcc49
# fix a minor library problem (primary ref 2)
pushd /opt/gcc-4.9.3/lib64/gcc/x86_64-fedoraunited-linux-gnu/4.9.3
sudo cp -p ../lib64/libgcc_s.so.1 .
sudo ln -s libgcc_s.so.1 libgcc_s.so

# Fetch palemoon source
mkdir ~/pmsrc ~/pmbuild
cd ~/pmsrc
git clone https://github.com/MoonchildProductions/Pale-Moon.git .

# Prepare to compile
# use .mozconfig from (primary ref 1)
tf=~/pmsrc/.mozconfig
touch "${tf}"
cat <<EOFMOZCONFIG > "${tf}"
# Please see https://www.palemoon.org/redist.shtml for restrictions when using the official branding.
ac_add_options --enable-official-branding

mk_add_options MOZ_CO_PROJECT=browser
ac_add_options --enable-application=browser

mk_add_options MOZ_OBJDIR=/home/$USER/pmbuild/

ac_add_options --enable-optimize="-O2 -msse2 -mfpmath=sse"
ac_add_options --with-pthreads

ac_add_options --disable-installer
ac_add_options --disable-updater

ac_add_options --enable-release
ac_add_options --enable-devtools
ac_add_options --enable-jemalloc
ac_add_options --enable-shared-js

ac_add_options --enable-strip

ac_add_options --x-libraries=/usr/lib
EOFMOZCONFIG

# Compile:
mkdir ~/log
cd ~/pmsrc
source /usr/bin/gcc49
{ time ./mach build ; } | tee -a ~/log/pmsrc.$( date "+%Y-%m-%d-%H%M%S" ).log
echo done

Future steps

Post these instructions on the fedora and palemoon fora.
Ask Pale Moon devs if these can be adapted and shared on Reference 1.




Primary references

  1. http://developer.palemoon.org/Developer_Guide:Build_Instructions/Pale_Moon/Linux
  2. ensure lib files are in right dir: https://serverfault.com/questions/266138/cannot-find-lgcc-s-from-gcc-3-4-on-unbuntu-11-04/266141#266141


  1. A great example and future reference http://ftp.nluug.nl/pub/os/Linux/distr/pclinuxos/pclinuxos/srpms/SRPMS.pclos/palemoon-27.0.3-1pclos2017.src.rpm
  2. https://build.opensuse.org/package/show/network/palemoon

Notes about set-gid and sticky bits for directories

I can never remember how the set-gid and sticky bits work on directories, so I finally spent some time to re-read man (but had to resort to info) about chmod. This is my cheat sheet.


set-gid, or group inheritance

Setgid (octal permission 2000) makes new files in the directory owned by the group that owns the directory. This is very useful for teams.

How to set

chmod g+s thisdir
chmod 2770 thisdir

How to clear

chmod g-s thisdir
chmod 00770 thisdir

sticky bit, or restricted deletion

Sticky bit (octal permission 1000) on a directory prevents Bob from deleting a file owned by Alice. Even if the directory is owned by one of Bob's groups and is writable by him, Bob cannot delete Alice's files. This is particularly helpful for the /tmp directory. Check it out:

$ ls -lad /tmp
drwxrwxrwt. 4 root root 120 Jan 23 09:40 /tmp

How to set sticky bit

chmod a+t thisdir
chmod 1770 thisdir

How to clear

chmod a-t thisdir
chmod 00770 thisdir

According to info coreutils chapter 27.4, “Directories and the Set-User-ID and Set-Group-ID Bits,” gnu chmod needs a 5-digit octal to clear this bit.
Basically, if it’s worth setting set-gid, you should throw in sticky bit.

chmod 03770 thisdir
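The cheat sheet above can be verified on a scratch directory (GNU coreutils assumed; the directory is a throwaway created with mktemp):

```shell
# Set setgid + sticky with one 4-digit octal, then clear both with a 5-digit octal.
d=$(mktemp -d)
chmod 03770 "$d"
mode_set=$(stat -c '%a' "$d")     # 3770: setgid (2000) + sticky (1000) + rwxrwx---
chmod 00770 "$d"
mode_clear=$(stat -c '%a' "$d")   # 770: both special bits cleared
echo "$mode_set $mode_clear"
rmdir "$d"
```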

gcc 4.9 for Fedora 27

If you want gcc 4.9 (the GNU Compiler Collection) on the current version of Fedora, which is Fedora 27, there is an easy option for you!

You can use the Cool Other Packages Repo, or copr, written by the user davidva.
The easiest way to use his gcc49 copr is this:

sudo dnf -y copr enable davidva/gcc49
sudo sed -i -r -e 's/-\$releasever-/-23-/;' /etc/yum.repos.d/_copr_davidva-gcc49.repo
sudo dnf -y install gcc49

There is a weakness in the repo file delivered by the copr: it uses $releasever instead of the static number 23 that the repo actually uses. Davidva compiled the package without tying it to a specific Fedora version number, so it will install on any version of Fedora, as long as you can get to the rpm package.
There is also a weakness with a particular library. It’s in the wrong directory, so it cannot be found. You might find an error when trying to compile a project:

/usr/bin/ld: cannot find -lgcc_s

Use these steps to fix it (Weblink 2).

pushd /opt/gcc-4.9.3/lib64/gcc/x86_64-fedoraunited-linux-gnu/4.9.3
sudo cp -p ../lib64/libgcc_s.so.1 .
sudo ln -s libgcc_s.so.1 libgcc_s.so

As the copr page (Weblink 1) indicates, this installation of gcc 4.9 can be present concurrently with the main gcc installation. To use version 4.9, you need to run:

source /usr/bin/gcc49

This sets a few variables, notably $CC and $CXX, to point at the specific binaries from this package.


  1. https://copr.fedorainfracloud.org/coprs/davidva/gcc49/
  2. ensure lib files are in right dir: https://serverfault.com/questions/266138/cannot-find-lgcc-s-from-gcc-3-4-on-unbuntu-11-04/266141#266141

sssd disable known_hosts hashing

If you use Fedora GNU/Linux, and you want bash autocompletion for the hosts in your FreeIPA domain when using the OpenSSH client, you have to disable known_hosts hashing in the global known_hosts file.

To find the global known_hosts file, check /etc/ssh/ssh_config:

# grep -i knownhosts /etc/ssh/ssh_config
ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts

The file displayed here is the default on Fedora: /var/lib/sss/pubconf/known_hosts.

Now, to disable the hashing inside that file, you have to configure sssd.

# cat /etc/sssd/conf.d/50_ssh_hash_known_hosts.conf 
ssh_hash_known_hosts = false

I don’t know at what version the dot-dee directory for sssd.conf was introduced (other than that it requires sssd compiled with libini >= 1.3.0), but mine had it as of Fedora 27 and sssd-1.16.0. With this entry in the config, restart sssd, and perhaps delete the previous /var/lib/sss/pubconf/known_hosts file; it will be re-generated as needed.




Shell functions for testing other functions

When I am updating or adding to my scripts package, I like to find out the fastest way to execute a task.

This involves using the time command and a few wrapper functions.

Here’s an example. I was updating my lsd (list directories) function, and I wanted to see how fast it is compared to a variant I was considering.

func() { pushd "${1}" 1>/dev/null 2>&1 ; find $( find . -maxdepth 1 -type d -printf '%p\n' | sed -r -e 's/^\.\/?//;' | sed -r -e '/^\s*$/d' ) -maxdepth 0 -exec ls -ldF --color=always {} + ; popd 1>/dev/null 2>&1 ;  } ;

func_wrapper() { __x=0 ; while test ${__x} -lt ${1} ; do __x=$(( __x + 1 )) ; func "${2}" "${3}" ; done ; }

So now you can run the wrapper and tell it how many times to loop:

time func_wrapper 1000 . 1>/dev/null

real    0m6.081s
user    0m2.103s
sys     0m5.157s
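The same wrapper pattern works for any function; here is a self-contained toy version (function names here are made up for illustration):

```shell
# Toy function plus wrapper: run the function N times so `time` measures a
# meaningful total instead of one noisy sample.
__count() { seq 1 100 | wc -l ; }
count_wrapper() { __x=0 ; while test ${__x} -lt ${1} ; do __x=$(( __x + 1 )) ; __count ; done ; }
count_wrapper 3    # prints "100" three times
```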

My best practice for html links

  • Don’t use target="_blank", because that forces a single behavior: opening in a new tab. I should expect the user to open a link in a new tab himself if he wants to.
  • Don’t omit the referrer, which is what including rel="noreferrer" does. I want other sites to know who is linking to them, in the hopes of getting cross-traffic.
  • Do include rel="noopener" to protect users (reference 1).
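Putting the three rules together, a link would look like this (the URL is a placeholder):

```html
<!-- no target="_blank", no rel="noreferrer"; rel="noopener" only -->
<a href="https://www.example.com/" rel="noopener">Example site</a>
```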



  1. https://mathiasbynens.github.io/rel-noopener/
  2. https://css-tricks.com/use-target_blank/
  3. https://html.spec.whatwg.org/multipage/links.html#link-type-noopener

List current xvnc sessions in xrdp so you can reconnect to your old one


{ echo "user pid Xdisplay port"; { ps -ef | awk '/Xvnc :[[:digit:]]+/ {print $1,$2,$9}' | while read tu tpid tvnc; do sudo netstat -tlpn | awk -v "tpid=${tpid}" '$0 ~ tpid {print $4;}' | sed -r -e 's/^.*://;' -e "s/^/${tu} ${tpid} ${tvnc} /;" ; done ; } | sort -k3 ; } | column -c4 -t

The story

I connected to a gnome session on a terminal server, and disconnected. I wanted to reconnect to my current session, but apparently I got a new X session. After some research, I learned you can configure xrdp to prompt for the port number so you can get back to the previous session. However, then you have to know what to type in. After doing a manual ps and netstat, I found some useful numbers. What I needed to enter was the tcp port number, so 5919.

The explanation

You can have an entry in the /etc/xrdp/xrdp.ini file like the following block.
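A session entry along these lines would do it (the values here are illustrative; port=ask is what makes xrdp prompt for the port number):

```ini
[Reconnect]
name=Reconnect
lib=libvnc.so
username=ask
password=ask
ip=127.0.0.1
port=ask
```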


When you connect over RDP, select the “Reconnect” module and type in a port number, which you can find from the output of the oneliner.

{ echo "user pid Xdisplay port"; { ps -ef | awk '/Xvnc :[[:digit:]]+/ {print $1,$2,$9}' | while read tu tpid tvnc; do sudo netstat -tlpn | awk -v "tpid=${tpid}" '$0 ~ tpid {print $4;}' | sed -r -e 's/^.*://;' -e "s/^/${tu} ${tpid} ${tvnc} /;" ; done ; } | sort -k3 ; } | column -c4 -t
user       pid    Xdisplay  port
mjohnso    11448  :17       5917
mjohnso    12939  :18       5918
bgstack15  1219   :19       5919



  1. https://askubuntu.com/questions/133343/how-do-i-set-up-xrdp-session-that-reuses-an-existing-session#360835

Getting Firefox and Java to work with jnlp files

If you are having trouble opening a jnlp file (e.g., for IPMI console access) you can try some of these steps.

Tell Firefox to allow pop-up windows for this site

Tell Firefox how to handle the filetype .jnlp

Tell it to open it with /usr/bin/javaws

Tell Java to allow the site to run applications

If you get “Application Blocked by Java Security” you can fix that by editing an exceptions list.

Modify file ~/.java/deployment/security/exception.sites

Each line in this file should be a protocol and domain name or IP address for the exception, e.g.:
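For example (these entries are placeholders):

```text
https://www.example.com
https://192.168.1.100
```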


  1. https://java.com/en/download/faq/java_webstart.xml
  2. https://stackoverflow.com/questions/25949651/openjdk-how-to-add-site-to-exception-list#25950032