Get SID from Linux ldapsearch in Active Directory

With the help of a fantastic post on ServerFault, here is a way to find a user’s SID in string format from an ldapsearch against Active Directory.

#!/bin/sh
# Filename: get_sid.sh
# Author: YasithaB
# Startdate: 2018-02-14 15:58
# Title: Script that Converts Sid from AD Ldap Hexadecimal into String
# Purpose: Help convert sid to usable value
# History:
#    2018-02-15 Modified to work with kornshell
# Usage:
#    ldapsearch -b 'dc=prod,dc=example,dc=com' -s 'sub' -x -D 'CN=My Username,OU=Domain Users,DC=prod,DC=example,DC=com' -W -H 'ldaps://adds2.prod.example.com:636' '(cn=Target Username)' objectSid | grep -E '^objectSid:' | awk '{print $2}' | ./get_sid.sh --stdin
# Reference:
#    https://serverfault.com/questions/851864/get-sid-by-its-objectsid-using-ldapsearch/852338#852338
# Improve:
# Document: Below this line

# Base-64 encoded objectSid
test -z "${OBJECT_ID}" && OBJECT_ID="AQUAAAAAAAUVAAAAPWW1S5rojK4mDAiG5BAAAA=="
case "${1}" in
   "--stdin" ) read OBJECT_ID ;;
   "") : ;;
   *) OBJECT_ID="${1}" ;;
esac

# Decode it, hex-dump it and store it in an array
H="$(echo -n "${OBJECT_ID}" | base64 -d -i | hexdump -v -e '1/1 "%02X"')"

# SID Structure: https://technet.microsoft.com/en-us/library/cc962011.aspx
# LESA = Little Endian Sub Authority
# BESA = Big Endian Sub Authority
# LERID = Little Endian Relative ID
# BERID = Big Endian Relative ID

BESA2=${H:16:8}
BESA3=${H:24:8}
BESA4=${H:32:8}
BESA5=${H:40:8}
BERID=${H:48:8}

LESA1=${H:4:12}
LESA2=${BESA2:6:2}${BESA2:4:2}${BESA2:2:2}${BESA2:0:2}
LESA3=${BESA3:6:2}${BESA3:4:2}${BESA3:2:2}${BESA3:0:2}
LESA4=${BESA4:6:2}${BESA4:4:2}${BESA4:2:2}${BESA4:0:2}
LESA5=${BESA5:6:2}${BESA5:4:2}${BESA5:2:2}${BESA5:0:2}
LERID=${BERID:6:2}${BERID:4:2}${BERID:2:2}${BERID:0:2}

SID="S-1-$((16#$LESA1))-$((16#$LESA2))-$((16#$LESA3))-$((16#$LESA4))-$((16#$LESA5))-$((16#$LERID))"
echo "${SID}"
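As a quick sanity check, the fixed header fields of the hex dump can be decoded by hand. This sketch (mine, not part of the script) pulls the revision, sub-authority count, and identifier authority out of the sample objectSid; for a normal AD user SID they come out as 1, 5, and 5 (NT Authority).

```shell
#!/bin/sh
# Sketch: decode the fixed header fields of the sample objectSid.
# The ${H:offset:length} substring expansion needs bash or ksh, same
# as the script above.
OBJECT_ID="AQUAAAAAAAUVAAAAPWW1S5rojK4mDAiG5BAAAA=="
H="$(printf '%s' "${OBJECT_ID}" | base64 -d | hexdump -v -e '1/1 "%02X"')"
echo "revision:             $((16#${H:0:2}))"   # SID revision, always 1
echo "sub-authority count:  $((16#${H:2:2}))"   # 5 for a typical user SID
echo "identifier authority: $((16#${H:4:12}))"  # 5 = NT Authority
```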

getent passwd -s sss LOCALUSER shows local user

tl;dr

I want to easily and quickly tell if a user is local or domain (don’t care which domain).

Environment

  • freeipa-client-4.6.1-3.fc27.x86_64
  • sssd-1.16.0-4.fc27.x86_64

Full story

I am writing a userinfo.sh script that will show whether a user is local or from sssd, whether the user can ssh in, and whether sssd permits access.

Currently I am doing the check for if the user is from the domain with the getent passwd -s sss $USERNAME command. But I ran into an issue where checking the sssd database returns a local user!

# getent passwd -s sss 'bgstack15-local'
bgstack15-local:x:1000:1000:bgstack15-local:/home/bgstack15-local:/bin/bash

Checking the contents of the database (cache) for sss shows sssd apparently caches all sorts of information about the local user.

# sudo su root -c 'strings /var/lib/sss/db/* | grep bgstack15-local' | sort | uniq
name=bgstack15-local@implicit_files,cn=groups,cn=ih
name=bgstack15-local@implicit_files,cn=groups,cn=implicit_files,cn=sysdb
name=bgstack15-local@implicit_files,cn=users,cn=implicit_files,cn=sysdb
[...output truncated]

I tried clearing the sssd cache overall, and just for the user. Neither made a difference.

# sss_cache -U
# sss_cache -u bgstack15-local

The user does show up as a local user, and I promise it is only a local user!

# getent passwd -s files 'bgstack15-local'
bgstack15-local:x:1000:1000:bgstack15-local:/home/bgstack15-local:/bin/bash

The man pages for getent(1) and getpwent(3) don’t help me understand what could be going on. sssd(8) shows me that sssd can cache local users, which actually goes against what I want! The nss section of sssd.conf(5) doesn’t help, but maybe I didn’t take enough time to read it. I’m a little stuck.

My sssd.conf

[domain/ipa.example.com]
id_provider = ipa
ipa_server = _srv_, dns1.ipa.example.com
ipa_domain = ipa.example.com
ipa_hostname = fc27c-01a.ipa.example.com
auth_provider = ipa
chpass_provider = ipa
access_provider = ipa
cache_credentials = True
ldap_tls_cacert = /etc/ipa/ca.crt
krb5_store_password_if_offline = True
[sssd]
services = nss, pam, ssh, sudo
domains = ipa.example.com
[nss]
homedir_substring = /home
[pam]
[sudo]
[autofs]
[ssh]
[pac]
[ifp]
[secrets]
[session_recording]

Last resort

I can try doing my checks against ${USERNAME}@${DOMAIN} when doing the -s sss check, but that means I then have to iterate over all domains in sssd.conf and that would slow the process down.
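The iteration itself is easy enough to script; the slow part is one getent call per domain. A rough sketch of that last resort, with helper function names of my own invention and naive one-header-per-line parsing of sssd.conf:

```shell
# sssd_domains: print the domain names from an sssd.conf-style file.
# /etc/sssd/sssd.conf is the stock path.
sssd_domains() {
   grep -oE '^\[domain/[^]]+\]' "${1:-/etc/sssd/sssd.conf}" | \
      sed -e 's|^\[domain/||;s|\]$||'
}

# check_domain_user: try username $1 against every domain until one answers.
check_domain_user() {
   for _d in $(sssd_domains "${2:-/etc/sssd/sssd.conf}"); do
      getent passwd -s sss "${1}@${_d}" && return 0
   done
   return 1
}
```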


Answer

The option that controls this behavior is buried in sssd.conf(5) on CentOS 7 and Fedora, but not in the online man page.

sssd.conf

[sssd]
enable_files_domain = false

Reference 3 shows that sssd makes a “fast cache for local users.”

From man sssd.conf(5) on my Fedora system:

   enable_files_domain (boolean)
       When this option is enabled, SSSD prepends an implicit domain with
       “id_provider=files” before any explicitly configured domains.

       Default: true

Disabling this behavior lets me make a simple check to see if it is a local user or domain user.
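For scripting, the setting itself is easy to test for. A minimal sketch, assuming the stock /etc/sssd/sssd.conf path; the function name is mine:

```shell
# files_domain_disabled: succeed if enable_files_domain is set to false
# in the given config file. Note this only inspects the file; it does
# not prove the running sssd has re-read it.
files_domain_disabled() {
   grep -qiE '^[[:space:]]*enable_files_domain[[:space:]]*=[[:space:]]*false' \
      "${1:-/etc/sssd/sssd.conf}"
}
```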

References

  1. ddg: sssd disable caching local users
  2. https://bugzilla.redhat.com/show_bug.cgi?id=1357418
  3. https://fedoraproject.org/wiki/Changes/SSSDCacheForLocalUsers
  4. Fedora 27 sssd.conf(5)

Ldapsearch notes

This post will be updated over time.

List all members of an AD group, including following the nested group membership

ldapsearch -b 'dc=example,dc=com' -s 'sub' -x -D 'CN=B Stack,OU=Domain Users,DC=example,DC=com' -W -H 'ldaps://ds5.example.com:636' '(&(objectClass=user)(memberof:1.2.840.113556.1.4.1941:=CN=complex_sample_group,OU=Linux,OU=Security Groups,DC=example,DC=com))' samaccountname | awk '/^samaccountname/{print $2;}'

References

  1. https://stackoverflow.com/questions/1039554/ad-via-ldap-how-can-i-return-all-ancestor-groups-from-a-query/7505535#7505535

Unattended software updates on Devuan

Devuan, as a fork of Debian that uses sysvinit (or another init system of your choice), still uses Debian-based utilities. I come from the Fedora/Red Hat/CentOS rpm-based family of distributions, and I struggle with dpkg-based package management on occasion.

I really dislike how the software upgrades will sometimes pause in the middle to display a changelog. If I wanted a changelog, I'd go read it! When I issue a command to update packages, I want to walk away, come back, and have it be done, not find it stuck at 20% because openssh changed some defaults and wants to tell me about it. It emails me anyway! I find the defaults of apt-get to be less than sane.

Here is how to configure apt-get to run without pausing to display duplicate information or ask you questions.

export DEBIAN_FRONTEND=noninteractive
apt-get -q -y upgrade
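DEBIAN_FRONTEND=noninteractive handles debconf prompts, but dpkg itself can still stop on configuration-file conflicts. A slightly fuller sketch, wrapped in a function of my own naming; the --force-confdef and --force-confold options are standard dpkg flags (prefer the package default, else keep the locally modified conffile):

```shell
# unattended_upgrade: run a fully hands-off upgrade with no debconf
# prompts and no conffile questions from dpkg.
unattended_upgrade() {
   DEBIAN_FRONTEND=noninteractive apt-get -q -y \
      -o Dpkg::Options::="--force-confdef" \
      -o Dpkg::Options::="--force-confold" \
      upgrade
}
```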

References

Weblinks

  1. https://obviate.io/2008/12/04/truly-non-interactive-unattended-apt-get-install/

Logrotate, audit.log, selinux, cron, and ansible

The story

The disk space for /var/log/audit/audit.log tends to get filled up. The audit daemon has an ability to rotate its own logs. See the man page for auditd.conf.

max_log_file             =  100
max_log_file_action      =  rotate

That’s swell and all, until you realize that auditd cannot compress its rotated logs. On a small /var/log/audit mount point, you’ll fill it up with uncompressed logs.

/dev/mapper/os-var_log_audit          2490M 2136M      355M      86% /var/log/audit

So on a RHEL7 system with logrotate, you can adjust logrotate to handle the audit.log file. Now, logrotate is a finicky application. It has caused me many hours of grief in the past.
You would want to set auditd.conf a certain way:

max_log_file             =  0
max_log_file_action      =  ignore

And set /etc/logrotate.d/audit:

/var/log/audit/*.log {
        weekly
        missingok
        compress
        #copytruncate
        rotate 30
        minsize 100k
        maxsize 200M
        postrotate
                touch /var/log/audit/audit.log ||:
                chmod 0600 /var/log/audit/audit.log ||:
                service auditd restart
        endscript
}

And ensure you’ve got a /etc/cron.weekly/logrotate:

#!/bin/sh

/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

After a few days, I learned that my logs were getting filled up so fast, the weekly rotation wasn’t good enough. So I had to place it in my cron.hourly.
And then I learned that it wasn't running every hour. I spent a few days investigating, and eventually learned that some systems use a specific status file for logrotate. I remembered that logrotate needs one execution with the -f flag to force the rotation the first time and add a new file to the status file. So if a new file was never force-rotated, it never gets added to the status file.
My manual logrotate -f command was indeed adding my audit.log log file to the status file, but to the wrong one!
Some of my systems use -s /var/lib/logrotate/logrotate.status but the default is /var/lib/logrotate.status.
So I had to reflect that in my ansible playbook. Actually, I had to write some logic to find the one used by the cronjob and then use that status file.

So I got the correct logrotate status file set up in the ansible playbook. I spent the next week figuring out that logrotate simply couldn't rotate the file when called from cron. I piped the utility to tee, included the -v flag on logrotate, and saw a permission denied error.
With the permission issue, the only suspect left was selinux. I had to use the audit.log file itself to determine that audit.log is not readable by logrotate when called from cron.
I finally captured all the actions performed by logrotate by setting the selinux process context to permissive:

semanage permissive -a logrotate_t

I let it run, then collected all the actions it performed and reviewed what had happened.

{ grep logrotate /var/log/audit/audit.log ; zgrep logrotate /var/log/audit/audit.log.1.gz ; } | audit2why

So I used audit2allow to convert it to an selinux policy.

{ grep logrotate /var/log/audit/audit.log ; zgrep logrotate /var/log/audit/audit.log.1.gz ; } | audit2allow -M logrotate-audit

And then after some searching online, I learned how I can keep the text definition file, and compile the policy from it when I need to:

grep logrotate /var/log/audit/audit.log | audit2allow -m logrotate-audit # saves to logrotate-audit.te
checkmodule -M -m -o logrotate-audit.mod logrotate-audit.te # intermediate step
semodule_package -o logrotate-audit.pp -m logrotate-audit.mod # compiled policy
semodule -i logrotate-audit.pp

The text definition of logrotate-audit policy:

module logrotate-audit 1.0;

require {
        type auditd_etc_t;
        type logrotate_t;
        type auditd_log_t;
        class file { create getattr ioctl open read rename setattr unlink write };
        class dir { add_name read remove_name write };
}

#============= logrotate_t ==============
allow logrotate_t auditd_etc_t:file getattr;
allow logrotate_t auditd_log_t:dir { read write add_name remove_name };
allow logrotate_t auditd_log_t:file { create ioctl open read rename getattr setattr unlink write };

Now, I wrote a master ansible playbook that performs this whole operation, from loading the .te file and compiling it and installing it, to setting logrotate to watch the audit file, and telling auditd to ignore rotating it.
Note: It is outside the scope of this task to ensure that the selinux tools are in place on each server. My environment already ensures package libselinux-python is present on each system, which should bring in all the dependencies of this ansible playbook.

---
# File: /etc/ansible/books/fix_var-log-audit.yml
# Author: bgstack15
# Startdate: 2018-01-24
# Title: Playbook that Fixes the /var/log/audit Space Issue
# Purpose: Logical Disk Free Space is too low
# History:
# Usage:
#    ansible-playbook -i /etc/ansible/inv/hosts /etc/ansible/configuration/fix_var-log-audit.yml -l hostwithproblem201
#    Use the -l host1,host2 parameter.
# Reference:
#    roles/general_conf/tasks/04_selinux.yml
#    roles/general_conf/tasks/05_auditd.yml
# Improve:
# Documentation:
#    The intention with auditd is to minimize the disk usage of the logs

- hosts: all
  remote_user: ansible_user
  become: yes

  vars:
    auditd_conf: /etc/audit/auditd.conf
    auditd_log_cleanup_regex: '.*audit\.log\.[0-9]+'
    auditd_log_dir: /var/log/audit
    auditd_logrotate_conf: /etc/logrotate.d/audit

  tasks:

# To make it possible to just drop in files to the files directory and have this module read them automatically, use these two.
#  - name: learn full list of semodules available to install, modular list version
#    shell: warn=no find /etc/ansible/roles/general_conf/files/selinux/ -regex '.*.te' -printf '%f\n' | sed -r -e 's/\.te$//;'
#    register: semodules_list
#    changed_when: false
#    delegate_to: localhost
#    ignore_errors: yes

#  - name: learn semodule versions to install, modular list version
#    shell: warn=no grep -E '^\s*module\s+{{ item }}\s+[0-9\.]+;\s*$' /etc/ansible/roles/general_conf/files/selinux/{{ item }}.te | awk '{print $3*1000;}'
#    register: selinux_pol_versions_target
#    changed_when: false
#    delegate_to: localhost
#    with_items:
#    - "{{ semodules_list.stdout_lines }}"

  - name: learn semodule versions to install, static version
    shell: warn=no grep -E '^\s*module\s+{{ item }}\s+[0-9\.]+;\s*$' /etc/ansible/templates/{{ item }}.te | awk '{print $3*1000;}'
    register: selinux_pol_versions_target
    changed_when: false
    delegate_to: localhost
    with_items:
    - logrotate-audit

  #- debug:
  #    msg: "{{ item.item }} should be {{ item.stdout }}"
  #  with_items:
  #  - "{{ selinux_pol_versions_target.results }}"

  - name: learn current semodule versions
    shell: warn=no semodule --list | awk '$1=="{{ item.item }}" {print $2*1000} END {print "0";}' | head -n1
    register: selinux_pol_versions_current
    changed_when: false
    with_items:
    - "{{ selinux_pol_versions_target.results }}"

  - debug:
      msg: "{{ item.item.item }} is currently {{ item.stdout }} and should be {{ item.item.stdout }}"
    with_items:
    - "{{ selinux_pol_versions_current.results }}"

  #- pause:
  #    prompt: "Does the above look good?........................"

  - name: download selinux modules that need to be installed
    copy:
      src: "/etc/ansible/templates/{{ item.item.item }}.te"
      dest: "/tmp/{{ item.item.item }}.te"
      mode: 0644
      owner: root
      group: root
      backup: no
      force: yes
    changed_when: false
    when:
    - "item.item.stdout > item.stdout"
    with_items:
    - "{{ selinux_pol_versions_current.results }}"

  - name: install selinux modules
    shell: chdir=/tmp warn=no /usr/bin/checkmodule -M -m -o "/tmp/{{ item.item.item }}.mod" "/tmp/{{ item.item.item }}.te" && /usr/bin/semodule_package -m "/tmp/{{ item.item.item }}.mod" -o "/tmp/{{ item.item.item }}.pp" && /usr/sbin/semodule -v -i "/tmp/{{ item.item.item }}.pp"
    when:
    - "item.item.stdout > item.stdout"
    with_items:
    - "{{ selinux_pol_versions_current.results }}"

  - name: clean any temporary selinux modules files
    file:
      path: "/tmp/{{ item[0].item.item }}.{{ item[1] }}"
      state: absent
    changed_when: false
    when:
    - "item[0].item.stdout > item[0].stdout"
    with_nested:
    - "{{ selinux_pol_versions_current.results }}"
    - [ 'te', 'pp', 'mod' ]

##### END SELINUX PORTION

  # modify auditd.conf which notifies the handler
  - name: auditd does not keep logs
    lineinfile:
      path: "{{ auditd_conf }}"
      regexp: "{{ item.r }}"
      backrefs: yes
      line: "{{ item.l }}"
      create: no
      state: present
      backup: yes
    #notify: auditd handler
    with_items:
    - { r: '^max_log_file_action.*$', l: 'max_log_file_action      =  ignore' }
    - { r: '^max_log_file\s.*$', l: 'max_log_file             =  0' }

  # tarball and cleanup any existing audit.log.1 files
  - name: list all old auditd logs which need to be compressed and cleaned up
    shell: warn=no find /var/log/audit -regex {{ auditd_log_cleanup_regex }}
    register: cleanup_list
    ignore_errors: yes
    changed_when: cleanup_list.stdout_lines | length > 0

  - name: get archive filename
    shell: warn=no echo "audit.log.{{ ansible_date_time.epoch }}.tgz"
    register: audit_log_tgz
    changed_when: audit_log_tgz.stdout_lines | length != 1

  - name: touch archive file
    file:
      path: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
      state: touch
      owner: root
      group: root
      mode: 0600
    when: cleanup_list.stdout_lines | length > 0

  - name: archive and cleanup existing audit.log.1 files
    archive:
      dest: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
      path: "{{ cleanup_list.stdout_lines }}"
      format: gz
      owner: root
      group: root
      remove: yes
    ignore_errors: yes
    when: cleanup_list.stdout_lines | length > 0

  - name: check for existence of new tarball
    stat:
      path: "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}"
    ignore_errors: yes
    register: audit_log_tarball

  - name: place audit log tarball in auditd_log_dir
    shell: warn=no /bin/mv "{{ auditd_log_dir }}/../{{ audit_log_tgz.stdout }}" "{{ auditd_log_dir }}/"
    ignore_errors: yes
    when:
    - audit_log_tarball.stat.exists is defined
    - audit_log_tarball.stat.exists

  - name: get current size of audit log
    stat:
      path: "{{ auditd_log_dir }}/audit.log"
    ignore_errors: yes
    register: audit_log_stat

  - name: apply logrotate script for audit
    copy:
      src: /etc/ansible/templates/etc-logrotate.d-audit
      dest: "{{ auditd_logrotate_conf }}"
      owner: root
      group: root
      mode: 0644
      backup: yes

  - name: learn the logrotate.status file to use, if any
    shell: warn=no grep -rE -- 'bin\/logrotate\>.*(-s|--state)(\s|=)[\/[A-Za-z0-9\.]+\>' /etc/cron.* 2>/dev/null | grep -oE '(-s|--state)(\s|=)[\/[A-Za-z0-9\.]+\>' | sort | uniq | head -n1
    ignore_errors: yes
    changed_when: false
    register: this_logrotate_flag

  - name: show which logrotate.status file to use, if any
    debug:
      msg: "The status file that will be used is {{ this_logrotate_flag.stdout }}"

  - name: run logrotate
    shell: warn=no /usr/sbin/logrotate {{ this_logrotate_flag.stdout }} -f "{{ auditd_logrotate_conf }}"
    register: run_logrotate
    when: ( cleanup_list.stdout_lines | length > 0 ) or ( audit_log_stat.stat.exists and audit_log_stat.stat.size > 190000000 )

  handlers:
...

Summary

So, logrotate can be configured to rotate the audit log. It just takes a few minutes to configure correctly, after about 2 weeks of research and testing.

References

Weblinks

  1. http://melikedev.com/2013/08/19/linux-selinux-semodule-compile-pp-module-from-te-file/
  2. https://linux.die.net/man/8/auditd.conf

Personal effort

Hours and hours of my original research
Years of administering RHEL servers with logrotate

Compiling Pale Moon web browser on Fedora 27


Story

I like the traditional model of Firefox, which lives on in the Pale Moon project. The upstream project does not currently provide instructions for compiling on Fedora 27. All the options they have documented are on their developer site: https://developer.palemoon.org/Developer_Guide:Build_Instructions/Pale_Moon/Linux.

You might think it is no big deal to download and compile an application. But when the application is still targeted for gcc4.9, it’s a little tricky to compile on the current Fedora which uses gcc7.

After consulting the Internet, I have assembled my instructions for compiling Palemoon on Fedora.

Compile palemoon on Fedora 27

# Install the whole set of packages listed on (primary ref 1) for CentOS7
sudo dnf -y install gtk2-devel dbus-glib-devel autoconf213 yasm mesa-libGL-devel alsa-lib-devel libXt-devel zlib-devel openssl-devel sqlite-devel bzip2-devel pulseaudio-libs-devel
sudo dnf -y groupinstall 'Development Tools'

# Install additional dependencies I found
sudo dnf -y install nspr-devel

# Use autoconf 2.13
autoconfver="$( autoconf --version 2>/dev/null | awk 'NR==1 {print $NF*100;} END {print "0";}' | head -n1 )"
test ${autoconfver} -ne 213 && test ${autoconfver} -gt 0 && sudo mv /usr/bin/autoconf /usr/bin/autoconf-${autoconfver} 2>/dev/null ; sudo ln -sf autoconf-2.13 /usr/bin/autoconf

# Use the copr davidva/gcc49
sudo dnf -y copr enable davidva/gcc49
# fix the $releasever variable in the repo
sudo sed -i -r -e 's/-\$releasever-/-23-/;' /etc/yum.repos.d/_copr_davidva-gcc49.repo
sudo dnf -y install gcc49
# fix a minor library problem (primary ref 2)
pushd /opt/gcc-4.9.3/lib64/gcc/x86_64-fedoraunited-linux-gnu/4.9.3
sudo cp -p ../lib64/libgcc_s.so.1 .
sudo ln -s libgcc_s.so.1 libgcc_s.so
popd

# Fetch palemoon source
mkdir ~/pmsrc ~/pmbuild
cd ~/pmsrc
git clone https://github.com/MoonchildProductions/Pale-Moon.git .

# Prepare to compile
# use .mozconfig from (primary ref 1)
tf=~/pmsrc/.mozconfig
touch "${tf}"
cat <<'EOFMOZCONFIG' > "${tf}"
# Please see https://www.palemoon.org/redist.shtml for restrictions when using the official branding.
ac_add_options --enable-official-branding
export MOZILLA_OFFICIAL=1

mk_add_options MOZ_CO_PROJECT=browser
ac_add_options --enable-application=browser

mk_add_options MOZ_OBJDIR=/home/$USER/pmbuild/

ac_add_options --enable-optimize="-O2 -msse2 -mfpmath=sse"
ac_add_options --with-pthreads

ac_add_options --disable-installer
ac_add_options --disable-updater

ac_add_options --enable-release
ac_add_options --enable-devtools
ac_add_options --enable-jemalloc
ac_add_options --enable-shared-js

ac_add_options --enable-strip

ac_add_options --x-libraries=/usr/lib
EOFMOZCONFIG

# Compile:
mkdir ~/log
cd ~/pmsrc
source /usr/bin/gcc49
{ time ./mach build ; } | tee -a ~/log/pmsrc.$( date "+%Y-%m-%d-%H%M%S" ).log
echo done

Future steps

Post these instructions on the fedora and palemoon fora.
Ask Pale Moon devs if these can be adapted and shared on Reference 1.

References

Weblinks

primary

  1. http://developer.palemoon.org/Developer_Guide:Build_Instructions/Pale_Moon/Linux
  2. ensure lib files are in right dir: https://serverfault.com/questions/266138/cannot-find-lgcc-s-from-gcc-3-4-on-unbuntu-11-04/266141#266141

auxiliary

  1. A great example and future reference http://ftp.nluug.nl/pub/os/Linux/distr/pclinuxos/pclinuxos/srpms/SRPMS.pclos/palemoon-27.0.3-1pclos2017.src.rpm
  2. https://build.opensuse.org/package/show/network/palemoon

Notes about set-gid and sticky bits for directories

I can never remember how the set-gid and sticky bits work on directories, so I finally spent some time to re-read man (but had to resort to info) about chmod. This is my cheat sheet.

set-gid

Setgid (octal permission 2000) makes new files in the directory owned by the group that owns the directory. This is very useful for teams.

How to set

chmod g+s thisdir
chmod 2770 thisdir

How to clear

chmod g-s thisdir
chmod 00770 thisdir

sticky bit, or restricted deletion

Sticky bit (octal permission 1000) on a directory prevents Bob from deleting a file owned by Alice. Even if the directory is owned by one of Bob's groups and is writable, Bob cannot delete Alice's files. This is particularly helpful for the /tmp directory. Check it out:

$ ls -lad /tmp
drwxrwxrwt. 4 root root 120 Jan 23 09:40 /tmp

How to set sticky bit

chmod a+t thisdir
chmod 1770 thisdir

How to clear

chmod a-t thisdir
chmod 00770 thisdir

According to info coreutils chapter 27.4, “Directories and the Set-User-ID and Set-Group-ID Bits,” GNU chmod needs a 5-digit octal to clear this bit.
Basically, if it's worth setting set-gid, you should throw in the sticky bit as well.

chmod 03770 thisdir
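To convince yourself the bits landed, stat can read the octal mode back. A quick throwaway demo (the scratch directory is arbitrary; %a is GNU stat syntax):

```shell
# Create a scratch directory, apply setgid + sticky + rwxrwx---,
# and read the mode back: stat prints 3770.
d="$(mktemp -d)"
chmod 03770 "$d"
stat -c '%a' "$d"    # 3770
rmdir "$d"
```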

gcc 4.9 for Fedora 27

If you want gcc 4.9 (the GNU Compiler Collection) on the current version of Fedora, which is Fedora 27, there is an easy option for you!

You can use the Cool Other Packages Repo, or copr, written by the user davidva.
The easiest way to use his gcc49 copr is this:

sudo dnf -y copr enable davidva/gcc49
sudo sed -i -r -e 's/-\$releasever-/-23-/;' /etc/yum.repos.d/_copr_davidva-gcc49.repo
sudo dnf -y install gcc49

There is a weakness in the repo file delivered by the copr, where it uses the $releasever instead of a static number 23 used by the repo. Davidva compiled the package without a specific fedora version number tied to it, so it will install on any version of Fedora, as long as you can get to the rpm package.
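The sed edit shown above only touches the -$releasever- segment of the baseurl. As a demonstration on a made-up repo line (the URL here is illustrative, not the copr's exact one):

```shell
# The pattern matches the literal string -$releasever- (the backslash
# keeps sed from treating $ as an anchor) and pins it to -23-.
line='baseurl=https://example.com/results/davidva/gcc49/fedora-$releasever-$basearch/'
printf '%s\n' "$line" | sed -r -e 's/-\$releasever-/-23-/;'
# prints: baseurl=https://example.com/results/davidva/gcc49/fedora-23-$basearch/
```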
There is also a weakness with a particular library. It’s in the wrong directory, so it cannot be found. You might find an error when trying to compile a project:

/usr/bin/ld: cannot find -lgcc_s

Use these steps to fix it (Weblink 2).

pushd /opt/gcc-4.9.3/lib64/gcc/x86_64-fedoraunited-linux-gnu/4.9.3
sudo cp -p ../lib64/libgcc_s.so.1 .
sudo ln -s libgcc_s.so.1 libgcc_s.so
popd

As the copr page (Weblink 1) indicates, this installation of gcc 4.9 can be present concurrently with the main gcc installation. To use version 4.9, you need to run:

source /usr/bin/gcc49

That command sets a few variables, notably $CC and $CXX, to point at the specific binaries from this package.

References

  1. https://copr.fedorainfracloud.org/coprs/davidva/gcc49/
  2. ensure lib files are in right dir: https://serverfault.com/questions/266138/cannot-find-lgcc-s-from-gcc-3-4-on-unbuntu-11-04/266141#266141