Blog: SANS Audit Advice & Resources

Detecting Audit Prep versus Good Processes in Place Part 2

This is the second part of two postings that touch upon the notion of an organization's IT personnel preparing for an audit versus having good practices in place. While the previous posting focused on the Windows environment (see Detecting Audit Prep versus Good Processes in Place - Part 1), this one will focus more on the Linux/UNIX side of things.

To somewhat mirror the previous blog entry, we'll look at when passwords were last changed, where some of the password controls may be set, and when patching was last performed. One caveat with UNIX/Linux, though: a particular technical control may not be implemented exactly the same from flavor to flavor- that is, CentOS and openSUSE may do things one way, whereas Ubuntu does something a little differently, as will Solaris, as will Debian, as will OS X, etc. And things can be arguably more complex in the UNIX world, as in some cases you can enforce a given control in more than one way. So when auditing UNIX/Linux systems, your scripts may need to execute different commands (sometimes with slightly different switches) and look in more places and files to obtain the desired output for your analysis.

For the purpose of this posting though, we'll focus on just one or two ways, on a couple of Linux variants, to try and determine whether audit preparation may have taken place for a given technical control. Again, do keep in mind that in some cases there may be other ways to enforce a particular control, and just because you don't find evidence in one place doesn't mean it's not being done a different way somewhere else. But this is what prompts dialogue with the client, which is an important part of the audit service you perform- facilitating a transparent analysis of the data while providing as much meaningful and actionable feedback to the client as possible.

Regarding Simple Password Controls and User Accounts


Continuing along with the previous posting, we'll continue to audit systems for a large organization with decentralized IT. First, let's look at one of the locations where "simple" password controls may be set for some RPM-based Linux distributions (e.g. CentOS, openSUSE, SUSE Linux Enterprise Server, Fedora, etc.). The file /etc/login.defs can contain password aging controls and the minimum password length for local user accounts. Obtaining the contents of login.defs is as simple as using the cat command to redirect the file's contents to a text file for off-line analysis. An example of how to do this is below:
cat /etc/login.defs > ./loginDefs.txt

Keep in mind that the above is a simplistic example. Ideally, collecting information from a Linux/UNIX-based system will be done via a script that collects dozens if not hundreds of individual system artifacts for offline analysis, so as to get a more accurate picture of the system- and possibly to compare it with information gathered from other systems (e.g. to try and determine whether an issue may be systemic).
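As a sketch of that idea, a small helper like the one below can append several artifacts into a single output file, one banner per artifact. The function name and file list are hypothetical; a real collection script would gather far more.

```shell
# collect_artifacts: append each readable file into one output file with a
# banner line per artifact. A minimal sketch -- a real collection script
# would gather dozens of artifacts (PAM configs, sudoers, cron jobs, etc.).
collect_artifacts() {
    out=$1; shift
    for f in "$@"; do
        [ -r "$f" ] || continue                  # skip unreadable/missing files
        printf '===== %s =====\n' "$f" >> "$out"
        cat "$f" >> "$out"
    done
}

# Example: collect_artifacts ./audit.txt /etc/login.defs /etc/pam.d/system-auth
```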

The relevant information gathered from the target system's login.defs file is below (note that some Linux/UNIX OSes may have more relevant controls in this file or in other locations):

...
PASS_MAX_DAYS 180
PASS_MIN_DAYS 0
PASS_MIN_LEN 5
PASS_WARN_AGE 7
...

So OK... We have some password-related parameters enforced by the OS for local accounts. The first question to ask is "Is this consistent with organization policy?" If not, that may be a finding already. And we notice quickly that the minimum password length is pretty low (PASS_MIN_LEN is set to 5). Note that other password parameters/requirements may be in places like the PAM configuration files...

So now, let's look at the timestamps of when the passwords were last changed. This is stored in the /etc/shadow file. First though, a word of caution: this is a very sensitive file, and should be treated with utmost care and prudence, as it contains the password hashes for the local user accounts. You do not want to be the auditor that loses control of this information... That being said, it does allow for password testing, which can be an important part of the audit. Do keep in mind your audit objectives and scope- this is not a pen test. It's an audit. Your objective is (most likely) not to try and crack all of the passwords, but rather to test for poor and default passwords if this kind of testing is performed at all.

For those not familiar with it, the shadow file generally has a line for each user account with the following construction:

username:passwd:last:min:max:warn:expire:disable:reserved


Where:
  • username is the username
  • passwd is the password hash
  • last is the days since the epoch (January 1, 1970) the password was last changed
  • min is the minimum number of days that must pass before a password can be changed
  • max is the maximum number of days that a password can be used (maximum age)
  • warn is the number of days that a user is warned prior to the password expiring
  • expire is the number of days after a password expires that an account is disabled
  • disable is the number of days since the epoch that the account has been disabled
  • reserved is a reserved field...
Let's say we pulled the shadow file, and we have the two user account entries below:
user2:$1$xqbWUyd/$RmZ5k7.r9BAFQgiLeQ1vJ0:13789:0:99999:7:::
user3:$1$Cz8s9PHr$qYtIn1Pp38/QKsc3HNk920:14507:0:99999:7:::

Note that the maximum password age is set to 99999 for each of them. This indicates that the passwords never expire... So a few questions should come to mind: "Is this prudent for these user accounts?" and "Is this consistent with the organization's policy?" Also note that the maximum age is not the 180 days that was set in the login.defs file... But now, let's focus on the date the password was last set (the third field).
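A quick way to surface such accounts across the whole file is to let awk test the fields directly. The sketch below wraps the check in a function; the function name and the shadow.txt filename (the copy pulled from the target system) are hypothetical.

```shell
# check_never_expires: print accounts whose password never expires
# (field 5 == 99999), skipping entries whose hash field starts with
# "!" or "*" (locked/system accounts).
check_never_expires() {
    awk -F: '$5 == 99999 && $2 !~ /^[!*]/ { print $1 }' "$1"
}

# Example: check_never_expires ./shadow.txt
```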

Now let's translate/convert the password last changed dates:

  • 13789 converts to Wednesday, October 03, 2007
  • 14507 converts to Sunday, September 20, 2009
So both are well outside the 180-day maximum password age noted in /etc/login.defs- and by quite a long time...

Note that a formula that works for the epoch value noted above is <epoch value>+25569. This resultant value can be used in an Excel spreadsheet and then formatted as a date; the number 25569 is Excel's date serial for "January 1, 1970". Also bear in mind that some Linux/UNIX variants may store the timestamp as a much longer number (seconds since the epoch rather than days), in which case a slightly different formula is used to convert it to a human-readable date.
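If you'd rather skip the spreadsheet, GNU date can do the conversion directly on a Linux box: multiply the day count by 86400 seconds per day and feed it to date as an epoch timestamp (this assumes GNU coreutils; BSD/Solaris date takes different options).

```shell
# Convert a days-since-epoch value from /etc/shadow to a calendar date
# by treating it as seconds since the epoch (days * 86400).
date -u -d "@$((13789 * 86400))" +%F    # prints 2007-10-03
```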

To help determine whether or not the accounts with passwords that have not been changed in a significant amount of time may be in use, you can use the last command to see what accounts have logged in recently:

# last -200

which will display the last 200 logins:
...
user1 sshd 10.10.10.3 Mon Apr 15 10:54 - 10:57 (00:02)
user2 sshd 10.10.10.13 Mon Apr 15 10:53 - 10:54 (00:00)
user3 sshd 10.10.21.23 Mon Apr 15 10:47 - 10:53 (00:06)
oracle pts/22 10.20.10.11 Mon Apr 15 10:37 still logged in
...

So we can see that user2 and user3 have both been logging in recently; thus, they are indeed actively used. This is an example of some of the due diligence you need to do to come to a well-supported conclusion (and, if possible, one with little that can be refuted).

Another thing we could look at is the timestamp of when the login.defs file was last changed. As noted in "Part 1", let's say the information was gathered on April 15, 2011. To get the date the file was last modified, we simply use the ls command with a few switches:

ls -latr /etc/login.defs

which has the following output:
-r-------- 1 root root 1301 Apr 11 11:31 /etc/login.defs

Notice that the timestamp is April 11 at 11:31 am. The close proximity of the file's modification time to when the information was gathered may raise an eyebrow, given when the audit started (remember your professional skepticism). Never underestimate what timestamps can tell you... When aggregated with all of the other information you analyze, they can help "tell the story" the system holds.

So from the above information (which yes, is a rather simple example), it appears that:

  • The password controls were recently changed- just prior to the audit;
  • Accounts have passwords with no expiration;
  • There are active user accounts with passwords that have not been changed in years; and
  • If you attempt to crack the MD5 password hashes, you'd find out that they are using rather poor passwords. ;)

Regarding Patching


OK... so now you have an idea of how to tell when a password was last changed, and where the maximum password age is set for some Linux variants... This complements the first of the two items noted in "Part 1". The second item is that of patching. Unfortunately, depending on the Linux/UNIX variant, this may not be as straightforward as on Windows. Some variants will keep track of only the installation date of the most current version of a particular package- that is, no real history is tracked per se, just a timestamp of when the latest version was installed. A powerful command for Linux variants that use rpm (the RedHat Package Manager) is:
rpm -qa --queryformat "%{installtime} (%{installtime:date}) %{name}-%{version}.%{RELEASE}\n"

This command will produce output like this:
...
1331924432 (Fri 16 Mar 2012 02:00:32 PM CDT) yum-fastestmirror-1.1.16.21.el5.centos
1331924436 (Fri 16 Mar 2012 02:00:36 PM CDT) sos-1.7.9.62.el5
1304132632 (Fri 29 Apr 2011 10:03:52 PM CDT) xorg-x11-util-macros-1.0.2.4.fc6
1304132663 (Fri 29 Apr 2011 10:04:23 PM CDT) ncurses-5.5.24.20060715
1304132667 (Fri 29 Apr 2011 10:04:27 PM CDT) diffutils-2.8.1.15.2.3.el5
...

Notice you get a longer form of an epoch date (which would use a different formula than the one noted above) for when the package was installed/upgraded, its human-readable equivalent, and the package name with version. Again though, the information pulled from the rpm database covers only the most recent version of a package installed, so no real history is retained. That is, if a box is updated every month, and a particular package is updated every month, you will only be able to see its latest install date, with no information retained about that package's previous installations. In the case of trying to detect audit preparation though, you are looking to see if a large number of the packages were installed right before the audit began.
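Because the queryformat above puts the numeric epoch first on each line, the captured output can be sorted newest-first offline; a burst of installs dated just before the audit will cluster at the top. The function name and the rpmOutput.txt filename below are hypothetical.

```shell
# newest_installs: sort captured rpm output newest-first by the leading
# epoch timestamp, so a burst of just-before-the-audit installs stands out.
newest_installs() {
    sort -rn "$1" | head -20
}

# Example: newest_installs ./rpmOutput.txt
```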

For Solaris, determining when patches were installed can be fairly simple. Just do a directory listing of /var/sadm/patch:

# ls -latr /var/sadm/patch

which contains entries like this:
...
drwxr-xr-- 2 root root 512 Aug 3 2009 118683-03
drwxr-xr-- 2 root root 512 Jul 27 2009 118777-14
drwxr-xr-- 2 root root 512 Jul 27 2009 118959-04
drwxr-xr-- 2 root root 512 Jul 27 2009 119059-47
drwxr-xr-- 2 root root 512 Jul 27 2009 119254-66
drwxr-xr-- 2 root root 512 Nov 5 14:48 119254-76
...

This is a listing of the patch/updated package directories along with their creation dates, which correspond with the time the patch/updated package was installed (do note that someone can manually remove or move these directories)- notice that these were installed some time ago... There is also generally a readme file in each of the subdirectories. Its timestamp will give you an idea of when the patch was created by the vendor (Sun/Oracle). From there, you can determine the time between creation and implementation. The nice thing about the method Solaris uses is that you have a more complete history of patching than with some other Linux/UNIX variants, as previous patch version information is retained. This may help give you an idea as to how regularly a particular Solaris-based system is patched.

And to give one more Linux example, Ubuntu has a relatively easy way to get the timestamps of its installed packages when trying to determine if the system is updated on a regular basis- look at the contents of the dpkg.log file:

# cat /var/log/dpkg.log

which contains entries like:
...
2011-03-17 19:12:15 status unpacked empathy 2.30.3-0ubuntu1.1
2011-03-17 19:12:15 status half-configured empathy 2.30.3-0ubuntu1.1
2011-03-17 19:12:15 status installed empathy 2.30.3-0ubuntu1.1
2011-03-17 19:12:15 configure libevdocument2 2.30.3-0ubuntu1.3 2.30.3-0ubuntu1.3
2011-03-17 19:12:15 status unpacked libevdocument2 2.30.3-0ubuntu1.3
2011-03-17 19:12:15 status half-configured libevdocument2 2.30.3-0ubuntu1.3
2011-03-17 19:12:15 status installed libevdocument2 2.30.3-0ubuntu1.3
2011-03-17 19:12:15 configure libevview2 2.30.3-0ubuntu1.3 2.30.3-0ubuntu1.3
...
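With the log collected, a rough patching cadence can be tallied per day: many dates with modest counts suggest routine patching, while one huge count just before the audit suggests last-minute prep. The function name and the dpkgLog.txt filename (a collected copy of the log) are hypothetical.

```shell
# installs_per_day: count "status installed" events per calendar day in a
# copy of dpkg.log -- the date is the first space-separated field.
installs_per_day() {
    grep ' status installed ' "$1" | cut -d' ' -f1 | sort | uniq -c
}

# Example: installs_per_day ./dpkgLog.txt
```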

Wrapup...


So there you go. Two small examples of items that, along with other information gleaned from the system information you've gathered, can help determine whether audit preparation is at play or whether an organization's system custodians have good processes in place. Keep in mind that the information gathering process goes much more quickly if you have a script built that pulls all of the information you need into a single output file. Then, once you've performed your analysis and aggregated the relevant information, have a non-confrontational discussion with the relevant parties (following whatever rules of engagement have been defined) regarding your potential findings. This and the previous posting are not meant to be the de facto indicators of audit preparation, but are meant to foster meaningful and thoughtful analysis of the data you've gathered in order to provide greater value to the client- to validate whether or not the controls in place are indeed effective in their environment, and not just thrown together at the last moment to "prepare for the audit"...

1 Comment

Posted April 19, 2012 at 9:35 PM

Andrew Barratt

Good article. As an infosec auditor and QSA, I regularly see either shameless audit prep, usually evidenced by lots of last-minute patching or config changes, or the exact opposite- no audit prep, but an IT manager who is looking to use audit findings to get budget for something else.
Good processes in place usually show up as consistent entries that occur within sensible time periods and that can then be aligned with change control/incident management.
