Samba share with AD auth, 2020 May edition


I wrote about this topic almost 4 years ago: Samba share with AD authentication
This article is the updated version. It has a different environment and purpose, as well as a new version of samba that requires a workaround.
The goal today is just to get a quick home directories share.


  • Server is joined to the domain
  • Working on CentOS 7. The previous article included Ubuntu commands for the package manager and firewall.

Setting up Samba

Install the packages, including the server package.

yum -y install samba

Open the firewall.

firewall-cmd --permanent --add-service=samba
systemctl restart firewalld.service

Configure Samba.

cat <<EOFSMB > /etc/samba/smb.conf
[global]
   workgroup = EXAMPLE
   security = ads
   realm = EXAMPLE.COM
   kerberos method = system keytab
   netbios name = $( hostname -s )
   server string = Description here
   log file = /var/log/samba/log.%m
   max log size = 50
   dns proxy = no
   encrypt passwords = yes
   passdb backend = tdbsam
   printcap name = /dev/null
   load printers = no

[homes]
   comment = Home Directories
   valid users = user1, user2, @group1
   browseable = No
   read only = No
   inherit acls = Yes
   guest only = no
EOFSMB

Starting with Samba 4.9.1, a workaround is needed for Samba to work when the id mapping is not set up thoroughly. This example does not do any id mapping, so use this quick and dirty fix.

net -s /dev/null groupmap add sid=S-1-5-32-546 unixgroup=nobody type=builtin

You can see the custom mapping for the built-in Guests group with:

$ net -s /dev/null groupmap list
nobody (S-1-5-32-546) -> nobody
Reference: 1648399 – Samba 4.9.1: smb.service fails with ERROR: failed to setup guest info (RHBZ)

And enable and start the services.

systemctl enable --now smb nmb

This command enables (sets to run at system startup) and immediately starts these two services. NMB is the NetBIOS name server; it helps the main Samba daemon in ways deeper than I care to research.
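To confirm the share is reachable, try listing it as a domain user (user1 here matches the valid users example above):

smbclient -L //$( hostname -s ) -U 'EXAMPLE\user1'
smbclient //$( hostname -s )/user1 -U 'EXAMPLE\user1' -c 'ls'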

Configuring SELinux

Set a few SE booleans.

for word in samba_export_all_rw samba_create_home_dirs ; do setsebool -P "${word}" 1 ; done
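You can verify the booleans took effect with:

getsebool samba_export_all_rw samba_create_home_dirs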

Fedora and scanners

If you are running Fedora and you want to use a scanner, you probably already have sane (backends at least) and simple-scan installed.

What is incredibly frustrating is when scanimage -L shows you the scanner, but simple-scan does not list it.

What you are missing is

sudo dnf install libnsl

This package is different from libnsl2, which is probably already installed, but some applications still depend on the older libnsl, apparently including simple-scan. Thanks to suspiciousmilk of Ask Fedora.
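If you want to double-check which package provides the legacy library on your system:

sudo dnf provides 'libnsl.so.1()(64bit)'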



  1. Brother scanner driver don’t work (Ask Fedora)

LAPS for Linux

A famous non-free operating system vendor provides a Local Administrator Password Solution (LAPS) for its flagship OS.
Here is my attempt at a GNU/Linux client for the same thing. This tool can help store unique local admin passwords in LDAP. Some Unix environments still suffer from using the same single root password for everything. Here is the fix for that.
Please note that the domain-side configuration actions are still required. This is only the client-side utility.

laps for linux

Updated 2018-12-07: Now you can just download the rpm from my copr.

This is written in shell, and designed and tested on CentOS 7. An rpm spec is provided if you want to bundle it in that format. Contributions are welcome, of any sort.

There are two main ways to use my laps4linux.

laps is the Local Administrator Password Solution for GNU/Linux.
usage: [-duV] [-c conffile] [-t|-a] [-f] [-r [-u ] [-h ]]
version 2018-10-22a
 -d debug   Show debugging info, including parsed variables.
 -- usage   Show this usage block.
 -V version Show script version number.
 -c conf    Read in this config file. Default is /etc/laps/laps.conf
 -f force   Skip the time check and just update the password regardless.
 --noforce  Do not force. Overrides environment variable LAPS_FORCE.
 -t test    Dry run only. Useful with debugging on.
 -a apply   Turn off dry run. Default.
 -r read    Read password; do not set it. Can only be used by a domain admin. Can only be used with -u.
 -u user    Connect with kerberos ticket for this user. Default is "machine" to use host keytab.
 -h   Read this hostname instead of $( hostname -s )
Debug levels:
 0 Silent
 9 displays sensitive info, specifically the generated password
10 function trace
Environment variables:
 See documentation at /usr/share/doc/laps/ for full explanation.

Additional notes

When implementing laps in your environment, you will want to ensure the ldap connectivity is working. I particularly had trouble with the kerberos auth to the domain controllers for ldaps.

The first time you run this on a Linux client, you will want to use the -f flag, because the undefined expirationTime ldap attribute will cause an error during the time comparison.
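For example, a first run could look like this, combining flags from the usage block above (dry run first, then apply):

laps -d -t -f
laps -f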

Maybe something like this could be implemented in FreeIPA. It’s just a simple schema extension, oh, and some ACLs. I don’t know if FreeIPA has ACLs on attributes, but it sounds like a logical thing to have.

Setup Yum Repository with Security Metadata

Define repository

Prepare the repo file on the server, so clients can download it.

cd /var/www/html/yum
cat <<'EOF' > hosting.repo
[hosting]
name=Hosting Delivery
EOF
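The finished repo file needs at least a baseurl before clients can use it; a sketch, with a hypothetical URL:

cat <<'EOF' > hosting.repo
[hosting]
name=Hosting Delivery
baseurl=http://repo.example.com/yum/hosting/
enabled=1
gpgcheck=0
EOF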

Make or update repository

Use the createrepo tool to make the repository. A wrapper script for creating or updating the existing repository is shown here.

cat <<'EOF' > "${tf}"
#!/bin/sh
# reference:

# Prepare directory and files
test -z "${UR_REPODIR}" && UR_REPODIR=/var/www/html/yum/hosting
test -z "${UR_BASEURL}" && UR_BASEURL=
test -z "${UR_OWNERSHIP}" && UR_OWNERSHIP="root.root"
test -z "${UR_FILETYPES}" && UR_FILETYPES="rpm"

find "${UR_REPODIR}" -exec chown "${UR_OWNERSHIP}" {} + 1>/dev/null 2>&1
find "${UR_REPODIR}" -type f -exec chmod "0664" {} + 1>/dev/null 2>&1
find "${UR_REPODIR}" -type d -exec chmod "0775" {} + 1>/dev/null 2>&1
chmod 0754 "$0"
restorecon -RF "${UR_REPODIR}"

# Prepare repo for rpm
cd "${UR_REPODIR}"
createrepo -v -u "${UR_BASEURL}" --basedir "${UR_REPODIR}" --simple-md-filenames --no-database --update --pretty .
EOF

Run this script.
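Assuming ${tf} was set to where you want the script saved (a hypothetical path shown here), that looks like:

tf=/var/www/html/yum/build-hosting-repo/update-repo.sh
sh "${tf}"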


Manually make the security metadata

The security metadata that yum interprets is stored in updateinfo.xml.gz. To make this file and include it in repomd.xml, you need to prepare it and learn some information about it.

This is a trimmed example of updateinfo.xml. Please see the epel metadata for a full example. I do not have an automatic process for generating this file yet.

cat <<'EOF' > "${tf}"
<?xml version="1.0" encoding="UTF-8"?>
<updates>
  <update status="final" type="security" version="1" from="">
    <title>bgscripts-core update</title>
    <release>Enterprise Linux 7</release>
    <issued date="2018-04-02"/>
    <rights>CC-BY-SA 4.0</rights>
    <description>- latest version from upstream</description>
    <solution>This update is internal to the company.</solution>
    <references>
      <reference href="" type="self" title="bgscripts-core" />
    </references>
    <pkglist>
      <collection short="bgscripts">
        <name>bgscripts suite</name>
        <package name="bgscripts-core" version="1.3-8" release="" epoch="0" arch="noarch">
          <sum type="md5">eaa20075720bf12d6e837a4f546241ab</sum>
        </package>
      </collection>
    </pkglist>
  </update>
</updates>
EOF

Update the repo metadata to include updateinfo.xml

A yum repository includes metadata about the package metadata, and stores this meta-metadata in repomd.xml. Insert the metadata for this new file, updateinfo.xml, into the repomd file.
This script is an updated version of the one listed earlier in this document.

cat <<'EOF' > "${tf}"
#!/bin/sh
# reference:

# Prepare directory and files
test -z "${UR_REPODIR}" && UR_REPODIR=/var/www/html/yum/hosting
test -z "${UR_BASEURL}" && UR_BASEURL=
test -z "${UR_OWNERSHIP}" && UR_OWNERSHIP="root.root"
test -z "${UR_FILETYPES}" && UR_FILETYPES="rpm"
test -z "${UR_UPDATEINFO_INPUT}" && UR_UPDATEINFO_INPUT=/var/www/html/yum/build-hosting-repo/updateinfo.xml

find "${UR_REPODIR}" -exec chown "${UR_OWNERSHIP}" {} + 1>/dev/null 2>&1
find "${UR_REPODIR}" -type f -exec chmod "0664" {} + 1>/dev/null 2>&1
find "${UR_REPODIR}" -type d -exec chmod "0775" {} + 1>/dev/null 2>&1
chmod 0754 "$0"
restorecon -RF "${UR_REPODIR}"

# Prepare basic repo
cd "${UR_REPODIR}"
createrepo -v -u "${UR_BASEURL}" --basedir "${UR_REPODIR}" --simple-md-filenames --no-database --update --pretty .

# Inject custom updateinfo
# this task assumes the repomd file does not include node <data type="updateinfo"> yet.
# assumed file locations, derived from the repo layout above
UR_repomd="${UR_REPODIR}/repodata/repomd.xml"
UR_updateinfo_gz_short="repodata/updateinfo.xml.gz"
UR_updateinfo_gz="${UR_REPODIR}/${UR_updateinfo_gz_short}"

if ! test -e "${UR_UPDATEINFO_INPUT}" ;
then
   # file is absent, so decide how to fail.
   echo "Missing ${UR_UPDATEINFO_INPUT}, so nothing to inject. Aborted." 1>&2
   exit 1
else
   # file exists, so continue with custom injection

   # learn open-size and open-checksum
   UR_updateinfo_opensize="$( /usr/bin/stat -c "%s" "${UR_UPDATEINFO_INPUT}" )"
   UR_updateinfo_openchecksum="$( /usr/bin/sha256sum "${UR_UPDATEINFO_INPUT}" | awk '{print $1}' )"

   # compress file and learn size and checksum
   /usr/bin/gzip < "${UR_UPDATEINFO_INPUT}" > "${UR_updateinfo_gz}"
   UR_updateinfo_size="$( /usr/bin/stat -c "%s" "${UR_updateinfo_gz}" )"
   UR_updateinfo_checksum="$( /usr/bin/sha256sum "${UR_updateinfo_gz}" | awk '{print $1}' )"
   UR_updateinfo_timestamp="$( /usr/bin/stat -c "%Y" "${UR_updateinfo_gz}" )"

   # insert information into repomd
   this_string="<data type=\"updateinfo\">
  <checksum type=\"sha256\">${UR_updateinfo_checksum}</checksum>
  <open-checksum type=\"sha256\">${UR_updateinfo_openchecksum}</open-checksum>
  <location xml:base=\"${UR_BASEURL}\" href=\"${UR_updateinfo_gz_short}\"/>
  <timestamp>${UR_updateinfo_timestamp}</timestamp>
  <size>${UR_updateinfo_size}</size>
  <open-size>${UR_updateinfo_opensize}</open-size>
</data>"

   {
      sed -r -e '/<\/repomd>/d' "${UR_repomd}"
      printf "%s\n%s\n" "${this_string}" "</repomd>"
   } > "${UR_repomd}.$$"
   /bin/touch --reference "${UR_repomd}" "${UR_repomd}.$$"
   /bin/mv -f "${UR_repomd}.$$" "${UR_repomd}"
fi
EOF


Using bash to modify xml files is obviously not ideal. However, this xml file is simple enough that this ugly mechanism suffices. For teams that know how to manage custom yum repositories and also want to just use yum update --security, this process should be a good basis or even a complete solution!
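On a client, you can then confirm the security metadata is being consumed; a quick check, assuming the repo is already configured there:

yum clean metadata
yum updateinfo summary
yum update --security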


Appendix A: http proxy

If you use an http proxy for your yum traffic, the proxy might cache old versions of the metadata or package files. A quick and dirty way to purge a single object from a squid proxy follows, where ${url} is the full URL of the cached object.

time squidclient -h localhost -r -p 3128 -m PURGE "${url}"

Squid unfortunately does not allow recursive purging, so you will have to loop over all the metadata files and any package files you want to ensure get cleared.
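A minimal sketch of such a loop, with a hypothetical repo URL:

base=http://repo.example.com/yum/hosting/repodata
for f in repomd.xml updateinfo.xml.gz primary.xml.gz filelists.xml.gz other.xml.gz ; do
   squidclient -h localhost -p 3128 -m PURGE "${base}/${f}"
done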


For reference, a client caches the downloaded security metadata under a hashed filename, similar to this local file:

/var/cache/yum/x86_64/7Server/epel/69b82df00108c0ac8ac82fafbd0f3b89cc98d8dfe4fa350af7a23331a878fea2-updateinfo.xml.bz2

Devuan and Dell D630 graphics issue

On one of my Dell D630s running Devuan GNU/Linux, I ran into an interesting issue. It does not occur on my other D630, so there's hope. Originally I actually switched from my i686 installation of Devuan ascii to the x86_64 version, but the problem persisted.

After a few minutes of dedicated searching, I found the answer!


Put this on the kernel line (the standard DRM syntax for force-disabling an output):

video=SVIDEO-1:d

Normally that means appending it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet video=SVIDEO-1:d"

And then you have to run:

grub-mkconfig -o /boot/grub/grub.cfg


The symptom of my problem occurs when changing screen resolutions in X11, or when switching between the X11 display and a different tty. The mouse responds and moves around the (graphical) screens, but nothing else updates for about 10 seconds.

I investigated dmesg, and it shows the very interesting results at the bottom of the post. After shortening the snippet to paste into Internet search engines, I finally found an answer! This bug has something to do with handling the svideo output. Disabling the svideo output (my Dell Latitude D630 does not even have one!) makes the issue go away.
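To confirm the parameter took effect after rebooting, something like this should work (the connector name can vary by system):

grep -o 'video=[^ ]*' /proc/cmdline
cat /sys/class/drm/card0-SVIDEO-1/status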

[  168.672218] [drm:drm_atomic_helper_commit_cleanup_done [drm_kms_helper]] *ERROR* [CRTC:29:pipe B] flip_done timed out
[  168.776236] ------------[ cut here ]------------
[  168.776296] WARNING: CPU: 1 PID: 2274 at /build/linux-6WgLIg/linux-4.9.88/drivers/gpu/drm/drm_irq.c:1254 drm_wait_one_vblank+0x197/0x1a0 [drm]
[  168.776299] vblank wait timed out on crtc 1
[  168.776302] Modules linked in: appletalk ax25 ipx p8023 p8022 psnap llc bnep bluetooth rpcsec_gss_krb5 nfsv4 dns_resolver nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc parport_pc ppdev lp parport iTCO_wdt iTCO_vendor_support b43 bcma snd_hda_codec_idt snd_hda_codec_generic mac80211 snd_hda_intel snd_hda_codec cfg80211 coretemp snd_hda_core dell_wmi kvm_intel sparse_keymap dell_rbtn rng_core dell_laptop kvm snd_hwdep yenta_socket snd_pcm dell_smbios rfkill snd_timer snd dcdbas dell_smm_hwmon irqbypass soundcore lpc_ich pcmcia_rsrc i915 mfd_core joydev pcspkr evdev serio_raw drm_kms_helper shpchp battery video wmi button ac drm i2c_algo_bit acpi_cpufreq ext4 crc16 jbd2 crc32c_generic fscrypto ecb glue_helper lrw gf128mul ablk_helper cryptd aes_x86_64 mbcache dm_mod sr_mod cdrom
[  168.776470]  sg sd_mod hid_generic ata_generic usbhid hid psmouse firewire_ohci ata_piix i2c_i801 ssb i2c_smbus mmc_core pcmcia firewire_core crc_itu_t pcmcia_core libata tg3 ptp pps_core libphy scsi_mod thermal ehci_pci uhci_hcd ehci_hcd usbcore usb_common
[  168.776532] CPU: 1 PID: 2274 Comm: Xorg Tainted: G        W       4.9.0-6-amd64 #1 Debian 4.9.88-1+deb9u1
[  168.776535] Hardware name: Dell Inc. Latitude D630                   /0KU184, BIOS A08 02/28/2008
[  168.776540]  0000000000000000 ffffffff9312f774 ffffb93e40b1f980 0000000000000000
[  168.776551]  ffffffff92e78a5e ffff9dc735860000 ffffb93e40b1f9d8 0000000000000000
[  168.776559]  0000000007000df1 ffff9dc77bc8a898 ffff9dc736bf2800 ffffffff92e78adf
[  168.776568] Call Trace:
[  168.776582]  [] ? dump_stack+0x5c/0x78
[  168.776591]  [] ? __warn+0xbe/0xe0
[  168.776597]  [] ? warn_slowpath_fmt+0x5f/0x80
[  168.776604]  [] ? finish_wait+0x3c/0x70
[  168.776642]  [] ? drm_wait_one_vblank+0x197/0x1a0 [drm]
[  168.776648]  [] ? prepare_to_wait_event+0xf0/0xf0
[  168.776733]  [] ? intel_get_load_detect_pipe+0x624/0x640 [i915]
[  168.776799]  [] ? intel_tv_detect+0x15a/0x590 [i915]
[  168.776828]  [] ? drm_helper_probe_single_connector_modes+0x432/0x520 [drm_kms_helper]
[  168.776866]  [] ? drm_mode_object_get_properties+0xd7/0x100 [drm]
[  168.776904]  [] ? drm_mode_getconnector+0x32a/0x360 [drm]
[  168.776938]  [] ? drm_ioctl+0x1ed/0x470 [drm]
[  168.776975]  [] ? drm_mode_connector_property_set_ioctl+0x60/0x60 [drm]
[  168.776984]  [] ? __fpu__restore_sig+0x7b/0x580
[  168.776992]  [] ? do_vfs_ioctl+0xa2/0x620
[  168.776997]  [] ? SyS_ioctl+0x74/0x80
[  168.777004]  [] ? do_syscall_64+0x8d/0xf0
[  168.777011]  [] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6
[  168.777016] ---[ end trace 8f754217765c1d43 ]---
[  168.921006] ------------[ cut here ]------------





Join RHEL6 to Active Directory

There are many things you need to do first, like installing the packages and configuring pam, nsswitch, and resolv.conf; those are documented elsewhere and might make it into this document eventually. I was struggling with a RHEL6 server (for which realm is not packaged) and AD.
From EPEL, install msktutil.

kinit administrator
adcli join -D -K /etc/krb5.keytab -U administrator --show-details
msktutil -u -s host
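To verify the join produced usable keytab entries, a quick check (the machine account principal name is an assumption; adjust for your realm):

klist -ket /etc/krb5.keytab
kinit -k "$( hostname -s | tr '[:lower:]' '[:upper:]' )\$"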


Get SID from Linux ldapsearch in Active Directory

With the help of a fantastic post on ServerFault, here is a way to find a user’s SID in string format from an ldapsearch against Active Directory.

# Filename:
# Author: YasithaB
# Startdate: 2018-02-14 15:58
# Title: Script that Converts Sid from AD Ldap Hexadecimal into String
# Purpose: Help convert sid to usable value
# History:
#    2018-02-15 Modified to work with kornshell
# Usage:
#    ldapsearch -b 'dc=prod,dc=example,dc=com' -s 'sub' -x -D 'CN=My Username,OU=Domain Users,DC=prod,DC=example,DC=com' -W -H 'ldaps://' '(cn=Target Username)' objectSid | grep -E '^objectSid:' | awk '{print $2}' | ./ --stdin
# Reference:
# Improve:
# Document: Below this line

# Base-64 encoded objectSid
case "${1}" in
   "--stdin" ) read OBJECT_ID ;;
   "" ) : ;;
   * ) OBJECT_ID="${1}" ;;
esac

# Decode it, hex-dump it and store it as a string of hex pairs
H="$( echo -n "${OBJECT_ID}" | base64 -d -i | hexdump -v -e '1/1 "%02X"' )"

# SID Structure:
# byte 0 = revision, byte 1 = count of sub authorities,
# bytes 2-7 = big endian authority, then count * 4-byte little endian sub authorities
# LESA = Little Endian Sub Authority
# BESA = Big Endian Sub Authority

# The loop below is a reconstruction of the referenced conversion logic.
revision="$(( 16#${H:0:2} ))"
count="$(( 16#${H:2:2} ))"
authority="$(( 16#${H:4:12} ))"
SID="S-${revision}-${authority}"

i=0
while [ "${i}" -lt "${count}" ] ;
do
   pos="$(( 16 + i * 8 ))"
   LESA="${H:${pos}:8}"
   # swap byte order: little endian to big endian, then convert to decimal
   BESA="${LESA:6:2}${LESA:4:2}${LESA:2:2}${LESA:0:2}"
   SID="${SID}-$(( 16#${BESA} ))"
   i="$(( i + 1 ))"
done

echo "${SID}"

Logrotate, audit.log, selinux, cron, and ansible

The story

The disk space for /var/log/audit/audit.log tends to get filled up. The audit daemon has an ability to rotate its own logs. See the man page for auditd.conf.

max_log_file             =  100
max_log_file_action      =  rotate

That’s swell and all, until you realize that auditd cannot compress its rotated logs. On a small /var/log/audit mount point, you’ll fill it up with uncompressed logs.

/dev/mapper/os-var_log_audit          2490M 2136M      355M      86% /var/log/audit

So on a RHEL7 system with logrotate, you can adjust logrotate to handle the audit.log file. Now, logrotate is a finicky application. It has caused me many hours of grief in the past.
You would want to set auditd.conf a certain way:

max_log_file             =  0
max_log_file_action      =  ignore

And set /etc/logrotate.d/audit:

/var/log/audit/*.log {
        rotate 30
        minsize 100k
        maxsize 200M
        compress
        postrotate
                touch /var/log/audit/audit.log ||:
                chmod 0600 /var/log/audit/audit.log ||:
                service auditd restart
        endscript
}
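You can sanity-check the new configuration with a dry run; -d makes logrotate only describe what it would do:

/usr/sbin/logrotate -d /etc/logrotate.d/audit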

And ensure you’ve got a /etc/cron.weekly/logrotate:


#!/bin/sh
/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

After a few days, I learned that my logs were getting filled up so fast that the weekly rotation wasn't good enough, so I had to place it in my cron.hourly.
And then I learned that it wasn't running every hour. I spent a few days investigating, and eventually learned that some systems use a specific status file for logrotate. I remembered that logrotate needs one execution with the -f flag to force the rotation the first time and add a new file to the status file. So if a new file was never force-rotated, it won't be added to the status file.
My manual logrotate -f command was indeed adding my audit.log log file to the status file, but to the wrong one!
Some of my systems use -s /var/lib/logrotate/logrotate.status, but the default is /var/lib/logrotate.status.
So I had to reflect that in my ansible playbook. Actually, I had to write some logic to find the status file used by the cron job and then use that one.

So I got the correct logrotate status file set up in the ansible playbook. I spent the next week figuring out that logrotate simply couldn't rotate the file when called from cron. I piped the utility to tee, and also included the -v flag on logrotate. I saw a permission denied.
With the permission issue, I had no suspects left but selinux. I had to use the audit.log file itself to determine that audit.log is not readable by logrotate when called by cron.
I finally captured all the actions performed by logrotate by setting its selinux process context to permissive:

semanage permissive -a logrotate_t

I let it run, collected all the actions it performed, and saw what had happened.

{ grep logrotate /var/log/audit/audit.log ; zgrep logrotate /var/log/audit/audit.log.1.gz ; } | audit2why

So I used audit2allow to convert it to an selinux policy.

{ grep logrotate /var/log/audit/audit.log ; zgrep logrotate /var/log/audit/audit.log.1.gz ; } | audit2allow -M logrotate-audit

And then after some searching online, I learned how I can keep the text definition file, and compile the policy from it when I need to:

grep logrotate /var/log/audit/audit.log | audit2allow -m logrotate-audit # saves to logrotate-audit.te
checkmodule -M -m -o logrotate-audit.mod logrotate-audit.te # intermediate step
semodule_package -o logrotate-audit.pp -m logrotate-audit.mod # compiled policy
semodule -i logrotate-audit.pp

The text definition of logrotate-audit policy:

#semodule -i logrotate-audit.pp

module logrotate-audit 1.0;

require {
        type auditd_etc_t;
        type logrotate_t;
        type auditd_log_t;
        class file { create getattr ioctl open read rename setattr unlink write };
        class dir { add_name read remove_name write };
}

#============= logrotate_t ==============
allow logrotate_t auditd_etc_t:file getattr;
allow logrotate_t auditd_log_t:dir { read write add_name remove_name };
allow logrotate_t auditd_log_t:file { create ioctl open read rename getattr setattr unlink write };

Now, I wrote a master ansible playbook that performs this whole operation: loading the .te file, compiling and installing it, setting logrotate to watch the audit file, and telling auditd not to rotate the log itself.
Note: It is outside the scope of this task to ensure that the selinux tools are in place on each server. My environment already ensures package libselinux-python is present on each system, which should bring in all the dependencies of this ansible playbook.
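If your environment does not already guarantee that, installing it first is trivial (EL7 package name assumed):

yum -y install libselinux-python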

# File: /etc/ansible/books/fix_var-log-audit.yml
# Author: bgstack15
# Startdate: 2018-01-24
# Title: Playbook that Fixes the /var/log/audit Space Issue
# Purpose: Logical Disk Free Space is too low
# History:
# Usage:
#    ansible-playbook -i /etc/ansible/inv/hosts /etc/ansible/configuration/fix_var-log-audit.yml -l hostwithproblem201
#    Use the -l host1,host2 parameter.
# Reference:
#    roles/general_conf/tasks/04_selinux.yml
#    roles/general_conf/tasks/05_auditd.yml
# Improve:
# Documentation:
#    The intention with auditd is to minimize the disk usage of the logs

- hosts: all
  remote_user: ansible_user
  become: yes

  vars:
    auditd_conf: /etc/audit/auditd.conf
    auditd_log_cleanup_regex: '.*audit\.log\.[0-9]+'
    auditd_log_dir: /var/log/audit
    auditd_logrotate_conf: /etc/logrotate.d/audit


# To make it possible to just drop in files to the files directory and have this module read them automatically, use these two.
#  - name: learn full list of semodules available to install, modular list version
#    shell: warn=no find /etc/ansible/roles/general_conf/files/selinux/ -regex '.*.te' -printf '%f\n' | sed -r -e 's/\.te$//;'
#    register: semodules_list
#    changed_when: false
#    delegate_to: localhost
#    ignore_errors: yes
#  - name: learn semodule versions to install, modular list version
#    shell: warn=no grep -E '^\s*module\s+{{ item }}\s+[0-9\.]+;\s*$' /etc/ansible/roles/general_conf/files/selinux/{{ item }}.te | awk '{print $3*1000;}'
#    register: selinux_pol_versions_target
#    changed_when: false
#    delegate_to: localhost
#    with_items:
#    - "{{ semodules_list.stdout_lines }}"

  tasks:

  - name: learn semodule versions to install, static version
    shell: warn=no grep -E '^\s*module\s+{{ item }}\s+[0-9\.]+;\s*$' /etc/ansible/templates/{{ item }}.te | awk '{print $3*1000;}'
    register: selinux_pol_versions_target
    changed_when: false
    delegate_to: localhost
    with_items:
    - logrotate-audit

  #- debug:
  #    msg: "{{ item.item }} should be {{ item.stdout }}"
  #  with_items:
  #  - "{{ selinux_pol_versions_target.results }}"

  - name: learn current semodule versions
    shell: warn=no semodule --list | awk '$1=="{{ item.item }}" {print $2*1000} END {print "0";}' | head -n1
    register: selinux_pol_versions_current
    changed_when: false
    with_items:
    - "{{ selinux_pol_versions_target.results }}"

  - debug:
      msg: "{{ item.item.item }} is currently {{ item.stdout }} and should be {{ item.item.stdout }}"
    with_items:
    - "{{ selinux_pol_versions_current.results }}"

  #- pause:
  #    prompt: "Does the above look good?........................"

  - name: download selinux modules that need to be installed
    copy:
      src: "/etc/ansible/templates/{{ item.item.item }}.te"
      dest: "/tmp/{{ item.item.item }}.te"
      mode: 0644
      owner: root
      group: root
      backup: no
      force: yes
    changed_when: false
    when:
    - "item.item.stdout > item.stdout"
    with_items:
    - "{{ selinux_pol_versions_current.results }}"

  - name: install selinux modules
    shell: chdir=/tmp warn=no /usr/bin/checkmodule -M -m -o "/tmp/{{ item.item.item }}.mod" "/tmp/{{ item.item.item }}.te" && /usr/bin/semodule_package -m "/tmp/{{ item.item.item }}.mod" -o "/tmp/{{ item.item.item }}.pp" && /usr/sbin/semodule -v -i "/tmp/{{ item.item.item }}.pp"
    when:
    - "item.item.stdout > item.stdout"
    with_items:
    - "{{ selinux_pol_versions_current.results }}"

  - name: clean any temporary selinux modules files
    file:
      path: "/tmp/{{ item[0].item.item }}.{{ item[1] }}"
      state: absent
    changed_when: false
    when:
    - "item[0].item.stdout > item[0].stdout"
    with_nested:
    - "{{ selinux_pol_versions_current.results }}"
    - [ 'te', 'pp', 'mod' ]


  # modify auditd.conf which notifies the handler
  - name: auditd does not keep logs
    lineinfile:
      path: "{{ auditd_conf }}"
      regexp: "{{ item.r }}"
      backrefs: yes
      line: "{{ item.l }}"
      create: no
      state: present
      backup: yes
    #notify: auditd handler
    with_items:
    - { r: '^max_log_file_action.*$', l: 'max_log_file_action      =  ignore' }
    - { r: '^max_log_file\s.*$', l: 'max_log_file             =  0' }

  # tarball and cleanup any existing audit.log.1 files
  - name: list all old auditd logs which need to be compressed and cleaned up
    shell: warn=no find /var/log/audit -regex {{ auditd_log_cleanup_regex }}
    register: cleanup_list
    ignore_errors: yes
    changed_when: cleanup_list.stdout_lines | length > 0

  - name: get archive filename
    shell: warn=no echo "audit.log.{{ ansible_date_time.epoch }}.tgz"
    register: audit_log_tgz
    changed_when: audit_log_tgz.stdout_lines | length != 1

  - name: touch archive file
    file:
      path: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
      state: touch
      owner: root
      group: root
      mode: 0600
    when: cleanup_list.stdout_lines | length > 0

  - name: archive and cleanup existing audit.log.1 files
    archive:
      dest: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
      path: "{{ cleanup_list.stdout_lines }}"
      format: gz
      owner: root
      group: root
      remove: yes
    ignore_errors: yes
    when: cleanup_list.stdout_lines | length > 0

  - name: check for existence of new tarball
    stat:
      path: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
    ignore_errors: yes
    register: audit_log_tarball

  - name: place audit log tarball in auditd_log_dir
    shell: warn=no /bin/mv "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}" "{{ auditd_log_dir }}/"
    ignore_errors: yes
    when:
    - audit_log_tarball.stat.exists is defined
    - audit_log_tarball.stat.exists

  - name: get current size of audit log
    stat:
      path: "{{ auditd_log_dir }}/audit.log"
    ignore_errors: yes
    register: audit_log_stat

  - name: apply logrotate script for audit
    copy:
      src: /etc/ansible/templates/etc-logrotate.d-audit
      dest: "{{ auditd_logrotate_conf }}"
      owner: root
      group: root
      mode: 0644
      backup: yes

  - name: learn the logrotate.status file to use, if any
    shell: warn=no grep -rE -- 'bin\/logrotate\>.*(-s|--state)(\s|=)[\/[A-Za-z0-9\.]+\>' /etc/cron.* 2>/dev/null | grep -oE '(-s|--state)(\s|=)[\/[A-Za-z0-9\.]+\>' | sort | uniq | head -n1
    ignore_errors: yes
    changed_when: false
    register: this_logrotate_flag

  - name: show which logrotate.status file to use, if any
    debug:
      msg: "The status file that will be used is {{ this_logrotate_flag.stdout }}"

  - name: run logrotate
    shell: warn=no /usr/sbin/logrotate {{ this_logrotate_flag.stdout }} -f "{{ auditd_logrotate_conf }}"
    register: run_logrotate
    when: ( cleanup_list.stdout_lines | length > 0 ) or ( audit_log_stat.stat.exists and audit_log_stat.stat.size > 190000000 )



So, logrotate can be configured to rotate the audit log. It just takes a few minutes to configure correctly, after about 2 weeks of research and testing.




Personal effort

  • Hours and hours of my original research
  • Years of administering RHEL servers with logrotate