Get SID from Linux ldapsearch in Active Directory

With the help of a fantastic post on ServerFault, here is a way to find a user’s SID in string format from an ldapsearch against Active Directory.

# Filename:
# Author: YasithaB
# Startdate: 2018-02-14 15:58
# Title: Script that Converts Sid from AD Ldap Hexadecimal into String
# Purpose: Help convert sid to usable value
# History:
#    2018-02-15 Modified to work with kornshell
# Usage:
#    ldapsearch -b 'dc=prod,dc=example,dc=com' -s 'sub' -x -D 'CN=My Username,OU=Domain Users,DC=prod,DC=example,DC=com' -W -H 'ldaps://' '(cn=Target Username)' objectSid | grep -E '^objectSid:' | awk '{print $2}' | ./ --stdin
# Reference:
# Improve:
# Document: Below this line

# Base-64 encoded objectSid
case "${1}" in
   "--stdin" ) read OBJECT_ID ;;
   "") : ;;
   *) OBJECT_ID="${1}" ;;

# Decode it, hex-dump it and store it in an array
H="$(echo -n $OBJECT_ID | base64 -d -i | hexdump -v -e '1/1 "%02X"')"

# SID Structure:
# LESA = Little Endian Sub Authority
# BESA = Big Endian Sub Authority
# LERID = Little Endian Relative ID
# BERID = Big Endian Relative ID
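
# The conversion below is a reconstructed sketch based on the structure notes
# above: byte 0 is the revision, byte 1 the sub-authority count, bytes 2-7 the
# big-endian identifier authority, and each sub-authority is 4 bytes
# little-endian.
REVISION="$(( 16#${H:0:2} ))"
COUNT="$(( 16#${H:2:2} ))"
AUTHORITY="$(( 16#${H:4:12} ))"
SID="S-${REVISION}-${AUTHORITY}"
i=0
while [ "${i}" -lt "${COUNT}" ] ; do
   pos="$(( 16 + i * 8 ))"
   LESA="${H:${pos}:8}"
   # flip the byte order of the little-endian sub authority (LESA) to big endian (BESA)
   BESA="${LESA:6:2}${LESA:4:2}${LESA:2:2}${LESA:0:2}"
   SID="${SID}-$(( 16#${BESA} ))"
   i="$(( i + 1 ))"
done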



echo "${SID}"

Logrotate, audit.log, selinux, cron, and ansible

The story

The disk space for /var/log/audit/audit.log tends to fill up. The audit daemon has the ability to rotate its own logs; see the man page for auditd.conf.

max_log_file             =  100
max_log_file_action      =  rotate

That’s swell and all, until you realize that auditd cannot compress its rotated logs. On a small /var/log/audit mount point, you’ll fill it up with uncompressed logs.

/dev/mapper/os-var_log_audit          2490M 2136M      355M      86% /var/log/audit

So on a RHEL7 system with logrotate, you can adjust logrotate to handle the audit.log file. Now, logrotate is a finicky application. It has caused me many hours of grief in the past.
You would want to set auditd.conf a certain way:

max_log_file             =  0
max_log_file_action      =  ignore

And set /etc/logrotate.d/audit:

/var/log/audit/*.log {
        rotate 30
        minsize 100k
        maxsize 200M
        compress
        postrotate
                touch /var/log/audit/audit.log ||:
                chmod 0600 /var/log/audit/audit.log ||:
                service auditd restart
        endscript
}

And ensure you’ve got a /etc/cron.weekly/logrotate:


#!/bin/sh

/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

After a few days, I learned that my logs were getting filled up so fast, the weekly rotation wasn’t good enough. So I had to place it in my cron.hourly.
And then I learned that it wasn’t running every hour. I spent a few days investigating, and eventually learned that some systems use a specific status file for logrotate. I remembered that logrotate needs to be run once with the -f flag to force the first rotation and get a new file added to the status file. So if a new file was never force-rotated, it would never appear in the status file.
My manual logrotate -f command was indeed adding my audit.log file to a status file, but the wrong one!
Some of my systems use -s /var/lib/logrotate/logrotate.status but the default is /var/lib/logrotate.status.
So I had to reflect that in my ansible playbook. Actually, I had to write some logic to find the one used by the cronjob and then use that status file.
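
For example, a quick way to see which status file the cron job passes to logrotate, and whether audit.log has been recorded in either candidate file (paths taken from the two variants above):

# show any -s/--state flag used by the cron-driven logrotate runs
grep -rE 'logrotate' /etc/cron.hourly /etc/cron.daily /etc/cron.weekly 2>/dev/null | grep -E -- '(-s|--state)'
# then look for the audit log in both candidate status files
grep audit /var/lib/logrotate/logrotate.status /var/lib/logrotate.status 2>/dev/null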

So I got the correct logrotate status file set up in the ansible playbook. I spent the next week figuring out that logrotate simply couldn’t rotate the file when called from cron. I piped the utility to tee, and also included the -v flag on logrotate. I saw a permission denied.
With the permission issue, the only suspect left was selinux. I had to use the audit.log itself to determine that the audit.log file is not readable by logrotate when called from cron.
I finally captured all the actions performed by logrotate by setting the selinux process context to permissive:

semanage permissive -a logrotate_t
I let it run, and then had to collect all the actions it performed, and saw what had happened.
{ grep logrotate /var/log/audit/audit.log ; zgrep logrotate /var/log/audit/audit.log.1.gz ; } | audit2why

So I used audit2allow to convert it to an selinux policy.

{ grep logrotate /var/log/audit/audit.log ; zgrep logrotate /var/log/audit/audit.log.1.gz ; } | audit2allow -M logrotate-audit

And then after some searching online, I learned how I can keep the text definition file, and compile the policy from it when I need to:

grep logrotate /var/log/audit/audit.log | audit2allow -m logrotate-audit > logrotate-audit.te # saves to logrotate-audit.te
checkmodule -M -m -o logrotate-audit.mod logrotate-audit.te # intermediate step
semodule_package -o logrotate-audit.pp -m logrotate-audit.mod # compiled policy
semodule -i logrotate-audit.pp

The text definition of logrotate-audit policy:

#semodule -i logrotate-audit.pp

module logrotate-audit 1.0;

require {
        type auditd_etc_t;
        type logrotate_t;
        type auditd_log_t;
        class file { create getattr ioctl open read rename setattr unlink write };
        class dir { add_name read remove_name write };
}

#============= logrotate_t ==============
allow logrotate_t auditd_etc_t:file getattr;
allow logrotate_t auditd_log_t:dir { read write add_name remove_name };
allow logrotate_t auditd_log_t:file { create ioctl open read rename getattr setattr unlink write };

Now, I wrote a master ansible playbook that performs this whole operation: loading the .te file, compiling and installing it, setting logrotate to watch the audit file, and telling auditd to stop rotating it.
Note: It is outside the scope of this task to ensure that the selinux tools are in place on each server. My environment already ensures package libselinux-python is present on each system, which should bring in all the dependencies of this ansible playbook.

# File: /etc/ansible/books/fix_var-log-audit.yml
# Author: bgstack15
# Startdate: 2018-01-24
# Title: Playbook that Fixes the /var/log/audit Space Issue
# Purpose: Logical Disk Free Space is too low
# History:
# Usage:
#    ansible-playbook -i /etc/ansible/inv/hosts /etc/ansible/configuration/fix_var-log-audit.yml -l hostwithproblem201
#    Use the -l host1,host2 parameter.
# Reference:
#    roles/general_conf/tasks/04_selinux.yml
#    roles/general_conf/tasks/05_auditd.yml
# Improve:
# Documentation:
#    The intention with auditd is to minimize the disk usage of the logs

- hosts: all
  remote_user: ansible_user
  become: yes

  vars:
    auditd_conf: /etc/audit/auditd.conf
    auditd_log_cleanup_regex: '.*audit\.log\.[0-9]+'
    auditd_log_dir: /var/log/audit
    auditd_logrotate_conf: /etc/logrotate.d/audit


  tasks:

# To make it possible to just drop in files to the files directory and have this module read them automatically, use these two.
#  - name: learn full list of semodules available to install, modular list version
#    shell: warn=no find /etc/ansible/roles/general_conf/files/selinux/ -regex '.*.te' -printf '%f\n' | sed -r -e 's/\.te$//;'
#    register: semodules_list
#    changed_when: false
#    delegate_to: localhost
#    ignore_errors: yes
#  - name: learn semodule versions to install, modular list version
#	shell: warn=no grep -E '^\s*module\s+{{ item }}\s+[0-9\.]+;\s*$' /etc/ansible/roles/general_conf/files/selinux/{{ item }}.te | awk '{print $3*1000;}'
#    register: selinux_pol_versions_target
#    changed_when: false
#    delegate_to: localhost
#    with_items:
#    - "{{ semodules_list.stdout_lines }}"

  - name: learn semodule versions to install, static version
    shell: warn=no grep -E '^\s*module\s+{{ item }}\s+[0-9\.]+;\s*$' /etc/ansible/templates/{{ item }}.te | awk '{print $3*1000;}'
    register: selinux_pol_versions_target
    changed_when: false
    delegate_to: localhost
    with_items:
    - logrotate-audit

  #- debug:
  #    msg: "{{ item.item }} should be {{ item.stdout }}"
  #  with_items:
  #  - "{{ selinux_pol_versions_target.results }}"

  - name: learn current semodule versions
    shell: warn=no semodule --list | awk '$1=="{{ item.item }}" {print $2*1000} END {print "0";}' | head -n1
    register: selinux_pol_versions_current
    changed_when: false
    - "{{ selinux_pol_versions_target.results }}"

  - debug:
      msg: "{{ item.item.item }} is currently {{ item.stdout }} and should be {{ item.item.stdout }}"
    - "{{ selinux_pol_versions_current.results }}"

  #- pause:
  #    prompt: "Does the above look good?........................"

  - name: download selinux modules that need to be installed
    copy:
      src: "/etc/ansible/templates/{{ item.item.item }}.te"
      dest: "/tmp/{{ item.item.item }}.te"
      mode: 0644
      owner: root
      group: root
      backup: no
      force: yes
    changed_when: false
    - "item.item.stdout > item.stdout"
    - "{{ selinux_pol_versions_current.results }}"

  - name: install selinux modules
    shell: chdir=/tmp warn=no /usr/bin/checkmodule -M -m -o "/tmp/{{ item.item.item }}.mod" "/tmp/{{ item.item.item }}.te" && /usr/bin/semodule_package -m "/tmp/{{ item.item.item }}.mod" -o "/tmp/{{ item.item.item }}.pp" && /usr/sbin/semodule -v -i "/tmp/{{ item.item.item }}.pp"
    - "item.item.stdout > item.stdout"
    - "{{ selinux_pol_versions_current.results }}"

  - name: clean any temporary selinux modules files
    file:
      path: "/tmp/{{ item[0].item.item }}.{{ item[1] }}"
      state: absent
    changed_when: false
    - "item[0].item.stdout > item[0].stdout"
    - "{{ selinux_pol_versions_current.results }}"
    - [ 'te', 'pp', 'mod' ]


  # modify auditd.conf which notifies the handler
  - name: auditd does not keep logs
    lineinfile:
      path: "{{ auditd_conf }}"
      regexp: "{{ item.r }}"
      backrefs: yes
      line: "{{ item.l }}"
      create: no
      state: present
      backup: yes
    #notify: auditd handler
    with_items:
    - { r: '^max_log_file_action.*$', l: 'max_log_file_action      =  ignore' }
    - { r: '^max_log_file\s.*$', l: 'max_log_file             =  0' }

  # tarball and cleanup any existing audit.log.1 files
  - name: list all old auditd logs which need to be compressed and cleaned up
    shell: warn=no find /var/log/audit -regex {{ auditd_log_cleanup_regex }}
    register: cleanup_list
    ignore_errors: yes
    changed_when: cleanup_list.stdout_lines | length > 0

  - name: get archive filename
    shell: warn=no echo "audit.log.{{ ansible_date_time.epoch }}.tgz"
    register: audit_log_tgz
    changed_when: audit_log_tgz.stdout_lines | length != 1

  - name: touch archive file
    file:
      path: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
      state: touch
      owner: root
      group: root
      mode: 0600
    when: cleanup_list.stdout_lines | length > 0

  - name: archive and cleanup existing audit.log.1 files
    archive:
      dest: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
      path: "{{ cleanup_list.stdout_lines }}"
      format: gz
      owner: root
      group: root
      remove: yes
    ignore_errors: yes
    when: cleanup_list.stdout_lines | length > 0

  - name: check for existence of new tarball
    stat:
      path: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
    ignore_errors: yes
    register: audit_log_tarball

  - name: place audit log tarball in auditd_log_dir
    shell: warn=no /bin/mv "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}" "{{ auditd_log_dir }}/"
    ignore_errors: yes
    when:
    - audit_log_tarball.stat.exists is defined
    - audit_log_tarball.stat.exists

  - name: get current size of audit log
    stat:
      path: "{{ auditd_log_dir }}/audit.log"
    ignore_errors: yes
    register: audit_log_stat

  - name: apply logrotate script for audit
    copy:
      src: /etc/ansible/templates/etc-logrotate.d-audit
      dest: "{{ auditd_logrotate_conf }}"
      owner: root
      group: root
      mode: 0644
      backup: yes

  - name: learn the logrotate.status file to use, if any
    shell: warn=no grep -rE -- 'bin\/logrotate\>.*(-s|--state)(\s|=)[\/[A-Za-z0-9\.]+\>' /etc/cron.* 2>/dev/null | grep -oE '(-s|--state)(\s|=)[\/[A-Za-z0-9\.]+\>' | sort | uniq | head -n1
    ignore_errors: yes
    changed_when: false
    register: this_logrotate_flag

  - name: show which logrotate.status file to use, if any
    debug:
      msg: "The status file that will be used is {{ this_logrotate_flag.stdout }}"

  - name: run logrotate
    shell: warn=no /usr/sbin/logrotate {{ this_logrotate_flag.stdout }} -f "{{ auditd_logrotate_conf }}"
    register: run_logrotate
    when: ( cleanup_list.stdout_lines | length > 0 ) or ( audit_log_stat.stat.exists and audit_log_stat.stat.size > 190000000 )



So, logrotate can be configured to rotate the audit log. It just takes a few minutes to configure correctly, after about 2 weeks of research and testing.




Personal effort

Hours and hours of my original research
Years of administering RHEL servers with logrotate

Notes about set-gid and sticky bits for directories

I can never remember how the set-gid and sticky bits work on directories, so I finally spent some time re-reading the chmod man page (and ended up in the info documentation). This is my cheat sheet.


set-gid

Setgid (octal permission 2000) makes new files in the directory owned by the group that owns the directory. This is very useful for shared team directories.

How to set

chmod g+s thisdir
chmod 2770 thisdir
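
To see the effect (the group name here is just an example):

chgrp devteam thisdir
chmod 2770 thisdir
touch thisdir/newfile
stat -c '%U %G' thisdir/newfile   # group is devteam, regardless of the creator's primary group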

How to clear

chmod g-s thisdir
chmod 00770 thisdir

sticky bit, or restricted deletion

Sticky bit (octal permission 1000) on a directory prevents Bob from deleting a file owned by Alice. Even if the directory is owned by one of Bob’s groups and is writable, Bob cannot delete Alice’s files. This is particularly helpful for the /tmp directory. Check it out:

$ ls -lad /tmp
drwxrwxrwt. 4 root root 120 Jan 23 09:40 /tmp

How to set sticky bit

chmod a+t thisdir
chmod 1770 thisdir

How to clear

chmod a-t thisdir
chmod 00770 thisdir

According to info coreutils chapter 27.4, “Directories and the Set-User-ID and Set-Group-ID Bits,” gnu chmod needs a 5-digit octal to clear this bit.
Basically, if it’s worth setting set-gid, you should throw in the sticky bit too.

chmod 03770 thisdir

gcc 4.9 for Fedora 27

If you want gcc 4.9 (the GNU Compiler Collection) on the current version of Fedora, which is Fedora 27, there is an easy option for you!

You can use the Cool Other Packages Repo, or copr, written by the user davidva.
The easiest way to use his gcc49 copr is this:

sudo dnf -y copr enable davidva/gcc49
sudo sed -i -r -e 's/-\$releasever-/-23-/;' /etc/yum.repos.d/_copr_davidva-gcc49.repo
sudo dnf -y install gcc49

There is a weakness in the repo file delivered by the copr: it uses $releasever instead of the static number 23 that the repo actually uses. Davidva compiled the package without a specific Fedora version number tied to it, so it will install on any version of Fedora, as long as you can get to the rpm package.
There is also a weakness with a particular library. It’s in the wrong directory, so it cannot be found. You might see an error when trying to compile a project:

/usr/bin/ld: cannot find -lgcc_s

Use these steps to fix it (Weblink 2).

pushd /opt/gcc-4.9.3/lib64/gcc/x86_64-fedoraunited-linux-gnu/4.9.3
# libgcc_s is the library named in the linker error above; the exact source path may differ on your install
sudo cp -p ../lib64/libgcc_s.so.1 .
sudo ln -s libgcc_s.so.1 libgcc_s.so

As the copr page (Weblink 1) indicates, this installation of gcc 4.9 can be present concurrently with the main gcc installation. To use version 4.9, you need to run:

source /usr/bin/gcc49

This sets a few variables, notably $CC and $CXX, to point at the specific binaries from this package.
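
A quick sanity check after sourcing it (the variable names come from the paragraph above):

source /usr/bin/gcc49
echo "CC=${CC} CXX=${CXX}"
"${CC}" --version | head -n1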


Weblinks

  1. copr page for davidva/gcc49
  2. ensure lib files are in right dir:



Linux check user access

I needed to query certain information about a user on a Linux system. Specifically this output:

user: bgstack15
getent: YES
getent_type: sss
can_ssh: YES
can_sss: YES

I wanted to know if a user is defined (getent), and if so, in which database (local files or Active Directory). Also, is the user in the AllowUsers list of sshd_config, or a member of a group in the AllowGroups list? And then the same questions for the sssd config file.

The script
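
The full script is not reproduced here, but the getent portion of the logic is roughly this sketch (the username is just the one from the sample output):

user=bgstack15
if getent passwd "${user}" > /dev/null ; then getent=YES ; else getent=NO ; fi
# which database answered: local files or sssd
if getent -s files passwd "${user}" > /dev/null ; then getent_type=files
elif getent -s sss passwd "${user}" > /dev/null ; then getent_type=sss
else getent_type=unknown ; fi
echo "user: ${user}"
echo "getent: ${getent}"
echo "getent_type: ${getent_type}"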

Hash certificate directory for ldap trust

When you work with openldap clients on GNU/Linux, you might have to interact with Active Directory. And you want to secure your connection with ssl.

When you want to tell ldap to trust the ssl certificates, it might take you a while to get it to actually trust it. You have to fetch the root ca certificate, and do one of several things to it:

  • Save to a file, and update /etc/openldap/ldap.conf variable TLS_CACERT
  • Save to the nssdb using certutil
  • Save to a file in a directory with the right name (from openssl x509 -hash -noout -in FILENAME) and set variable TLS_CACERTDIR

Or you could set “TLS_REQCERT allow” in the conf, but you decided that you actually want the trust to work, for once.

I’ve written a script that takes all the files in /etc/pki/ca-trust/source/anchors/ and makes symlinks to them, with the right filenames, in the /etc/openldap/cacerts/ directory.
Then you can just set “TLS_CACERTDIR /etc/openldap/cacerts” and be done.
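
The core of such a script is small. A rough sketch, assuming root and the paths above:

anchors=/etc/pki/ca-trust/source/anchors
cacerts=/etc/openldap/cacerts
mkdir -p "${cacerts}"
for f in "${anchors}"/* ; do
   hash="$( openssl x509 -hash -noout -in "${f}" 2>/dev/null )" || continue
   # openldap expects files (or symlinks) named <hash>.0, <hash>.1, ... in TLS_CACERTDIR
   n=0
   while [ -e "${cacerts}/${hash}.${n}" ] ; do n="$(( n + 1 ))" ; done
   ln -s "${f}" "${cacerts}/${hash}.${n}"
done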

Linux find running display manager

In GNU/Linux, if you ever need to know what your currently running display manager is, here is how you can find it:

x="$( sudo lsof -F '' /tmp/.X11-unix/X0 | head -n1 )" ; x="${x#p}"
display_manager="$( ps -p $( ps -o ppid -p ${x} | tail -n +2 ) -ho command )"
thisdm="$( basename "${display_manager}" )"




Automated certreq for GNU/Linux

Last updated 2018-01-25


Microsoft provides the certreq utility for its non-free operating system. This tool makes it easy to get certificates from the Microsoft sub-CA on your network.

GNU/Linux hosts do not get that tool, so a viable alternative is to script the interaction with the website. My content in this post is shamelessly ripped from a StackOverflow post and beefed up.

In my research, I came across a question: “How to submit certificate request from red hat to windows ca”.

The solution

The script

I present my shell script.
This shell script:

  • Generates CSR and submits it to the Microsoft Sub-CA.
  • Saves private key, public key (the certificate), and cert chain to a temporary directory
  • Removes the temp directory after 5 minutes automatically to remove the private key
  • Sends to standard out the file names and purposes, for consumption by automation tool, e.g., ansible

Code walkthrough

Instead of copying and pasting the whole code here, I will discuss only snippets.
Here is the usage block.

usage: [-dhV] [-u username] [-p password] [-w tempdir] [-t template] [--cn CN] [--ca ]
version ${certreqversion}
 -d debug   Show debugging info, including parsed variables.
 -h usage   Show this usage block.
 -V version Show script version number.
 -u username User to connect via ntlm to CA. Can be "username" or "domain\\username"
 -p password
 -w workdir  Temp directory to work in. Default is a (mktemp -d).
 -t template Template to request from CA. Default is "ConfigMgrLinuxClientCertificate"
 --cn        CN to request. Default is the \$( hostname -f )
 --ca        CA hostname or base URL. Example:
Return values under 1000: A non-zero value is the sum of the items listed here:
 0 Everything worked
 1 Cert file is still a CSR
 2 Cert file is html, probably due to permissions/credentials issue
 4 Return code of curl statement that saves cert file is non-zero
 8 Cert file does not contain whole certificate
16 Cert does not contain an issuer
Return values above 1000:
1001 Help or version info displayed
1002 Count or type of flaglessvals is incorrect
1003 Incorrect OS type
1004 Unable to find dependency
1005 Not run as root or sudo
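
For example, an invocation might look like this (the script name, account, and CA URL are placeholders, not values from the post):

sudo ./certreq.sh -u 'EXAMPLE\svc-certreq' -p 'SomePassword' \
   -t ConfigMgrLinuxClientCertificate \
   --cn "$( hostname -f )" \
   --ca 'https://ca2.example.com'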

All the magic happens at line 239, the main loop. These blocks perform the different web requests, and are the real meat of this script.

Block GENERATE PRIVATE KEY makes the csr and saves it to the file that will eventually hold the cert.

   openssl req -new -nodes \
      -out "${CERTREQ_WORKDIR}/${CERTREQ_CNPARAM}.crt" \
      -keyout "${CERTREQ_WORKDIR}/${CERTREQ_CNPARAM}.key" \
      -subj "${CERTREQ_SUBJECT}"
   CERT="$( cat "${CERTREQ_WORKDIR}/${CERTREQ_CNPARAM}.crt" | tr -d '\n\r' )"
   CERT="$( echo ${CERT} | sed -e 's/+/%2B/g' | tr -s ' ' '+' )"


   OUTPUTLINK="$( curl -k -u "${CERTREQ_USER}:${CERTREQ_PASS}" --ntlm \
      "${CERTREQ_CA}/certsrv/certfnsh.asp" \
      -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
      -H 'Accept-Encoding: gzip, deflate' \
      -H 'Accept-Language: en-US,en;q=0.5' \
      -H 'Connection: keep-alive' \
      -H "Host: ${CERTREQ_CAHOST}" \
      -H "Referer: ${CERTREQ_CA}/certsrv/certrqxt.asp" \
      -H 'User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko' \
      -H 'Content-Type: application/x-www-form-urlencoded' \
      --data "Mode=newreq&CertRequest=${CERT}&CertAttrib=${CERTATTRIB}&TargetStoreFlags=0&SaveCert=yes&ThumbPrint=" | grep -A 1 'function handleGetCert() {' | tail -n 1 | cut -d '"' -f 2 )"

FETCH SIGNED CERTIFICATE downloads the cert that the previous page links to.

   curl -k -u "${CERTREQ_USER}:${CERTREQ_PASS}" --ntlm $CERTLINK \
      -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
      -H 'Accept-Encoding: gzip, deflate' \
      -H 'Accept-Language: en-US,en;q=0.5' \
      -H 'Connection: keep-alive' \
      -H "Host: ${CERTREQ_CAHOST}" \
      -H "Referer: ${CERTREQ_CA}/certsrv/certrqxt.asp" \
      -H 'User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko' \
      -H 'Content-Type: application/x-www-form-urlencoded' > "${CERTREQ_WORKDIR}/${CERTREQ_CNPARAM}.crt"

My additions to this secret sauce start with GET NUMBER OF CURRENT CA CERT. I needed the cert chain, so I automated fetching it from the server.
You have to find out how many different CA certs are being offered by this server, and then use the latest.

   RESPONSE="$( curl -s -k -u "${CERTREQ_USER}:${CERTREQ_PASS}" --ntlm \
      "${CERTREQ_CA}/certsrv/certcarc.asp" \
      -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
      -H 'Accept-Encoding: gzip, deflate' \
      -H 'Accept-Language: en-US,en;q=0.5' \
      -H 'Connection: keep-alive' \
      -H "Host: ${CERTREQ_CAHOST}" \
      -H "Referer: ${CERTREQ_CA}/certsrv/certrqxt.asp" \
      -H 'User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko' \
      -H 'Content-Type: application/x-www-form-urlencoded' )"
   CURRENTNUM="$( echo "${RESPONSE}" | grep -cE 'Option' )"

   CURRENT_P7B="$( curl -s -k -u "${CERTREQ_USER}:${CERTREQ_PASS}" --ntlm \
      "${CERTREQ_CA}/certsrv/certnew.p7b?ReqID=CACert&Renewal=${CURRENTNUM}" \
      -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
      -H 'Accept-Encoding: gzip, deflate' \
      -H 'Accept-Language: en-US,en;q=0.5' \
      -H 'Connection: keep-alive' \
      -H "Host: ${CERTREQ_CAHOST}" \
      -H "Referer: ${CERTREQ_CA}/certsrv/certrqxt.asp" \
      -H 'User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko' \
      -H 'Content-Type: application/x-www-form-urlencoded' )"

   echo "${CURRENT_P7B}" | openssl pkcs7 -print_certs -out "${CERTREQ_TEMPFILE}"

I like having the domain name in the filename, so this last part renames the cert chain.

   # will read only the first cert, so get domain of issuer of it.
   CA_DOMAIN="$( openssl x509 -in "${CERTREQ_TEMPFILE}" -noout -issuer 2>&1 | sed -r -e 's/^.*CN=[A-Za-z0-9]+\.//;' )"
   mv -f "${CERTREQ_TEMPFILE}" "${CERTREQ_WORKDIR}/${CHAIN_FILE}" 1>/dev/null 2>&1

The ansible role

I needed this task deployed to my whole environment, so I rolled it into an ansible role saved to github, and also added another feature: it converts the generated cert files into a pkcs12 (pfx) file for a specific application’s need.
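
That conversion boils down to a single openssl call, roughly like this (the file names are illustrative, not the role’s actual variables):

openssl pkcs12 -export \
   -inkey myhost.example.com.key \
   -in myhost.example.com.crt \
   -certfile chain.crt \
   -name "myhost.example.com" \
   -passout pass:SomeExportPassword \
   -out myhost.example.com.pfx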




VeraCrypt rpm for Fedora

In these post-TrueCrypt days, I migrated to VeraCrypt. For a very long time now, I have been maintaining an encrypted file container on a flash drive on my keychain. Additionally, I keep various binaries to help open it, like on Windows or GNU/Linux, should I ever need emergency access to my files when not on one of my regular machines. I’m not NSA-proof, but I do intend to keep my private files out of the view of the general public or any random person who might find a lost flash drive.

So, about VeraCrypt: a user can download the latest binary packages, even for GNU/Linux, from the official downloads page, and the source code is on github.

But nobody I could find on the Internet has a Fedora rpm package for it. Well, I present to you now my VeraCrypt rpm project. It took me a while to figure out that the released versions of VeraCrypt don’t compile on Fedora 27 for various bug-related reasons. The freshest commit does, though, so this rpm is generated from the beta upstream point in time at which I saved a copy of the repo.

The normal way to compile any of my rpms is to use the usr/share/${package}/build/pack script. It will download sources, prepare the file list in the spec, and perform the rpmbuild.
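
For this package that would be roughly the following (assuming "veracrypt" as the package name, run from the top of the project checkout):

./usr/share/veracrypt/build/pack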

Palemoon 64-bit for Linux and Flash Player Plugin

  1. Install Palemoon using the from
  2. Visit the main page at or use this direct link: and install it.
  3. Load the libraries in the directory Pale Moon looks in:
    sudo ln -s /usr/lib64/flash-plugin/libflashplayer.so /usr/lib/mozilla/plugins/

