Monday, October 25, 2010

System Monitoring Commands

I’ve been doing a large system monitoring project for the past month and have set up a centralized monitoring solution that tracks over 800 servers using Nagios.

As part of that, we established multiple trending reports and taught the network operations center (NOC) support staff how to run various Linux server-monitoring tools. (Most of the NOC staff at this company is Microsoft-centric, with limited exposure to Linux.)

These commands should be well known to anyone doing Linux system administration. If you manage Linux servers and aren’t familiar with any of the commands on this list, spend some time playing with their various options; knowing what these tools can do, and how to use them, can be very useful when troubleshooting a system issue.

The commands we covered are:

top – provides a dynamic real-time view of running processes. By default, it displays the most CPU-intensive tasks running on the server and updates the list every three seconds.

vmstat – reports information about processes, memory, paging, I/O, traps and CPU activity

w – displays who is online (logged in) and reports what they are doing

uptime – reports how long the system has been running

ps – displays running processes

free – Displays total free and used physical and swap memory

iostat – reports I/O statistics

sar – collects and reports on system activity

mpstat – multiprocessor statistics

pmap – process memory usage

netstat – Network Statistics

ss – socket statistics (a newer alternative to netstat)

iptraf – IP LAN monitor (real-time network statistics)

tcpdump – command line packet dump utility for network analysis

strace – system calls trace – useful for debugging

nmap – much, much more than just a port scanner

cacti – web based monitoring tool

ntop – Network Top – displays the top network users

htop – enhanced version of top

vnstat – console based network traffic monitor

wireshark – the best protocol analyzer around

nagios – open source enterprise system monitor – look for several articles coming soon on the use of Nagios.

dstat – combines the output from vmstat, iostat, ifstat, netstat and other tools

powertop – monitors the power consumption of applications based on how much time the CPU stays in low-power modes vs. turbo modes – requires ACPI

whowatch – basically shows who is logged in and what they are doing in real time (similar to ‘w’ but it continuously updates)

dtrace – DTrace can be used to get a global overview of a running system, such as the amount of memory, CPU time, filesystem and network resources used by the active processes. It can also provide much more fine-grained information, such as a log of the arguments with which a specific function is being called, or a list of the processes accessing a specific file.
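As a quick illustration of how a few of these commands fit together during triage, here is a minimal first-look sketch; the command choices, ordering, and output limits are just examples, not a standard procedure:

```shell
#!/bin/sh
# Quick first-look triage: load, memory, top CPU consumers, disk.
echo "== uptime / load =="
uptime
echo "== memory =="
command -v free >/dev/null 2>&1 && free -m || echo "free not available"
echo "== top CPU consumers =="
ps aux --sort=-%cpu 2>/dev/null | head -6   # --sort is GNU procps
echo "== disk =="
df -h
```

Running this first gives you a baseline before drilling into any single tool (vmstat for paging, iostat for disk latency, and so on).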

Review your Log Data

Read your logs using logwatch or logcheck. These tools make your log reading life easier. You get detailed reporting on unusual items in syslog via email.

Wednesday, October 20, 2010

Keep user-accessible data on separate disk partitions

Separating operating system files from user files can result in a more secure system. Ideally, the following filesystems should be mounted on separate partitions:

  • /usr
  • /home
  • /var and /var/tmp
  • /tmp

I also suggest separate partitions for Apache and FTP server roots. Edit /etc/fstab file and make sure you add the following configuration options:

  1. noexec - Do not allow execution of any binaries on this partition (blocks direct execution of binaries; scripts run through an interpreter can still execute).
  2. nodev - Do not allow character or block special devices on this partition (prevents use of device files such as /dev/zero, /dev/sda, etc.).
  3. nosuid - Do not honor SUID/SGID bits on this partition (prevents the setuid bit from taking effect).

Sample /etc/fstab entry to limit user access on /dev/sda5 (www server root directory):

/dev/sda5  /srv/www/htdocs          ext3    defaults,nosuid,nodev,noexec 1 2
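To confirm the options actually took effect on a mounted filesystem, a small check helper can be handy. This is a sketch; the path is this post's example, and `findmnt` availability is assumed on modern distros:

```shell
# check_opts: warn about any of the three hardening options missing from
# a comma-separated mount-option string (as printed by findmnt or mount).
check_opts() {
  for o in nosuid nodev noexec; do
    case ",$1," in
      *",$o,"*) ;;                  # option present, nothing to report
      *) echo "missing: $o" ;;
    esac
  done
}

# Usage against a live mount (path from the example above):
# check_opts "$(findmnt -no OPTIONS /srv/www/htdocs)"
```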

Tuesday, October 12, 2010

Establish password aging policies

The chage command changes the number of days between password changes and the date of the last password change. This information is used by the system to determine when a user must change his/her password. The /etc/login.defs file defines the site-specific configuration for the shadow password suite including password aging configuration. To disable password aging, enter:


chage -M 99999 userName

To get password expiration information, enter:

chage -l userName

You can also manually specify the information in the /etc/shadow file which has the following fields

{userName}:{password}:{lastpasswdchanged}:{Minimum_days}:{Maximum_days}:{Warn}:{Inactive}:{Expire}:



Note that the date fields, including “Expire”, are stored as days since Jan 1, 1970 (not seconds).
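Per shadow(5), these date fields are day counts, so converting one to a calendar date is a one-liner (GNU date assumed; the day count 15000 is an arbitrary example value):

```shell
# Convert a day count from /etc/shadow (days since 1970-01-01) to a date.
days=15000
date -u -d "1970-01-01 UTC + ${days} days" +%Y-%m-%d
```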


The chage command is usually easier than manually editing the /etc/shadow file. 



chage -M 60 -m 7 -W 7 <accountname>
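To roll that policy out to every regular account at once, one possible sketch; the UID cutoff of 1000 and the echo dry-run are assumptions to adjust for your distro:

```shell
#!/bin/sh
# Apply the aging policy above to all regular (non-system) accounts.
# UID >= 1000 marks regular users on most modern distros; adjust as needed.
awk -F: '$3 >= 1000 && $1 != "nobody" {print $1}' /etc/passwd |
while read -r user; do
  echo chage -M 60 -m 7 -W 7 "$user"   # remove "echo" to apply for real
done
```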

Lock accounts after failed login attempts

You can use the faillog command to set login failure limits and to display a list of failed login attempts.

To unlock an account you can use:

faillog -r -u <accountname>

You can also use the passwd command to lock or unlock accounts manually:

passwd -l <accountname>

passwd -u <accountname>

Sunday, October 3, 2010

Disable unnecessary services

You should periodically review which services are running and remove any that are no longer needed. One way to check is to use the following command (note that it lists services enabled in runlevel 3):

chkconfig --list | grep '3:on'

If you see any services you need to stop and disable you can use these commands:

service <servicename> stop

chkconfig <servicename> off

The first command stops the service; the second removes it from the list of services that start when a runlevel is entered (such as at system startup).
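If you have a list of services to retire, the two commands loop naturally. A dry-run sketch follows; the service names are only examples, so build your own list from the chkconfig output:

```shell
#!/bin/sh
# Dry run: print the stop/disable commands for each unneeded service.
# Remove the "echo" prefixes to actually run them (requires root).
for svc in cups bluetooth avahi-daemon; do
  echo service "$svc" stop
  echo chkconfig "$svc" off
done
```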

Check what ports are listening

You can check your server's listening ports with the following command:

netstat -tulpn

or

nmap -sT -O <hostname>

If any aren’t needed, you should consider shutting down that service or blocking access to the port with iptables.
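As an illustration of the iptables option, here is a sketch that keeps a port reachable from the host itself but drops outside traffic. The port number is arbitrary; adjust everything to your environment, and note the rules are not persistent until saved:

```shell
# Example only: allow loopback access to TCP port 5432, drop everything else.
# Requires root; on RHEL-era systems persist with "service iptables save".
iptables -A INPUT -i lo -p tcp --dport 5432 -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -j DROP
```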

Tuesday, September 28, 2010

Use aide to monitor core system configuration files

AIDE (Advanced Intrusion Detection Environment) can be used to help track file integrity by comparing a 'snapshot' of the system's files from before a suspected incident with one taken after. It is a free alternative to Tripwire. AIDE uses a database to accumulate key file attributes like permissions, mtime, ctime, and number of links for a system. The idea is to build the database before two things happen:

  1. the system is placed on a network; and,
  2. the system is (potentially) compromised.

For further protection, a checksum of each file in its current state can be used with a choice of several hash methods.

The idea behind AIDE and other host-based IDSs is for the snapshot to be taken and then periodically updated as the system is updated. Patches, hardware and other software installs tend to change the size and nature of files, so it is always a good idea to re-run AIDE after making changes. If the administrator then suspects that a system has been compromised, running AIDE and comparing the two snapshots will assist the admin in homing in on what happened. Doing so without the original snapshot can be difficult at best.

AIDE is included in many Linux distributions. To begin using it, review the /etc/aide.conf file and ensure that it has the correct information for your environment, then initialize the database using the command

aide -i

Once the database is created, you run a comparison using the --check switch. I set this up as a cron job to run daily (although you can run it more or less often as you feel necessary). The command is

aide --check > /tmp/aide.log

Then I have the aide.log mailed to me for review. If nothing has changed, it is a simple comparison. Over time the system will accumulate changes, and you can use this log file to review them as they occur. Periodically, as the volume of change from the initial snapshot grows, you can update your snapshot using the following command

aide -u
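The daily cron job described above might be wrapped like this. A sketch only: AIDE_BIN, AIDE_LOG and MAILTO are made-up knobs for illustration, and mail is only used when present:

```shell
#!/bin/sh
# Run an AIDE check, save the report, and optionally mail it.
run_aide_check() {
  aide_bin="${AIDE_BIN:-aide}"
  log="${AIDE_LOG:-/tmp/aide-$(date +%F).log}"
  "$aide_bin" --check > "$log" 2>&1
  if command -v mail >/dev/null 2>&1 && [ -n "${MAILTO:-}" ]; then
    mail -s "AIDE report for $(hostname)" "$MAILTO" < "$log"
  fi
  echo "$log"   # print the report path for the caller
}

# hypothetical cron entry: 30 3 * * * root /usr/local/sbin/aide-check.sh
```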

 

Tuesday, August 10, 2010

Linux Security Tools

 

System Auditing

  • Chkrootkit (YoLinux tutorial) - Scan system for trojans, worms and exploits.
  • checkps - detect rootkits by detecting falsified output and similar anomalies. The ps check should work on anything with /proc. Also uses netstat.
  • Rootkit hunter - scans for rootkits, back doors and local exploits
  • Rkdet - root kit detector daemon. Intended to catch someone installing a rootkit or running a packet sniffer.
  • Tripwire : The grand-daddy of file integrity checkers
  • RKHunter : A Unix rootkit detector
  • chkrootkit : Locally checks for signs of a rootkit
  • fsaudit - Perl script to scan file systems and search for suspicious looking directories
  • UNIX security checks - programs and shell scripts which perform security checks. Checks include file and directory permissions, passwords, system scripts, SUID files, ftp configuration check, ...
  • SARA - Security Auditor's Research Assistant - network security vulnerability scanner for SQL injections, remote scans, etc. (follow-on to the SATAN analysis tool)
  • Texas A&M University-developed tools
  • Tiger - Scan a Unix system looking for security problems (similar to COPS)
  • Tiger Analytical Research Assistant (TARA Pro) - Commercial support
  • Netlog - TCP and UDP suspicious traffic logging system
  • Drawbridge - Firewall package (Free BSD)
  • Dsniff : A suite of powerful network auditing and penetration-testing tools
  • P0f : A versatile passive OS fingerprinting tool
  • BASE : The Basic Analysis and Security Engine

Network Vulnerability Audits

  • Nessus - Remote security scanner - This is my favorite security audit tool!! Checks service exploits and vulnerabilities.
  • ISIC - IP Stack Integrity Checker
  • Argus - IP network transaction auditing tool. This daemon promiscuously reads network datagrams from a specified interface, and generates network traffic status records
  • Argus 2
  • SAINT - Finds computers on the network, port scans and does a vulnerability check and outputs a report. - Commercial product.
  • InterSect Alliance - Intrusion analysis. Identifies malicious or unauthorized access attempts.
  • Linuxforce: AdminForce CGI Auto Audit - CGI script analyzer to find security deficiencies.
  • Core Impact : An automated, comprehensive penetration testing product
  • Canvas : A Comprehensive Exploitation Framework
  • SolarWinds : A plethora of network discovery/monitoring/attack tools
  • Yersinia : A multi-protocol low-level attack tool

Wireless Vulnerability Audit Tools

  • AirSnort - wireless LAN (WLAN) tool that recovers encryption keys.
  • WEPCrack
  • Kismet - Wireless Sniffer
  • Aircrack : The fastest available WEP/WPA cracking tool

Port Scanners/Network Discovery Tools

  • nmap - Port scanner and security scanning and investigation tool
  • NmapFe - GUI front-end to NMAP
  • KNmap - KDE front-end
  • pbnj - Diff nmap scans to find changes to systems on the network.
  • nmap3d - nmap post processing to 3-d VRML
  • nmap-sql - log scans to database
  • portscan - C++ Port Scanner will try to connect on every port you define for a particular host.
  • p0f - passive OS fingerprinting.
  • NetCat - This simple utility reads and writes data across TCP or UDP network connections. It is designed to be a reliable back-end tool that can be used directly or easily driven by other programs and scripts. At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need, including port binding to accept incoming connections
  • Scanrand : An unusually fast stateless network service and topology discovery system
  • Web/http scan:
  • Nikto - web server scanner. CGI, vulnerability checks. Not a stealthy tool. For security tests.
  • Paros Proxy - A web application vulnerability assessment proxy
  • Web Scarab - A framework for analyzing applications that communicate using the HTTP and HTTPS protocols
  • Whisker/libwhisker : Rain.Forest.Puppy's CGI vulnerability scanner and library
  • Burpsuite : An integrated platform for attacking web applications
  • SPIKE Proxy : HTTP Hacking

Network Sniffers

  • DSniff - network tools for auditing and penetration testing.
  • Wireshark - full network protocol sniffer/analyzer
  • (Ethereal - legacy. Now Wireshark)
  • IPTraf - curses based IP LAN monitor
  • TcpDump - network monitor and data acquisition
  • VOMIT - Voice Over Misconfigured Internet Telephones - Use TCP dump of VOIP stream and convert to WAV file.
  • Cisco Call Manager depends on MS SQL Server and is thus vulnerable to SQL Slammer attacks.
  • KISMET - 802.11a/b/g wireless network detector, sniffer and intrusion detection system.
  • DISCO - Passive IP discovery and fingerprinting tool. Sits on a segment of a network to discover unique IPs and identify them.
  • Yersinia - Framework for analyzing and testing deployed networks and systems. Designed to take advantage of weaknesses in different Layer 2 protocols: Spanning Tree Protocol (STP), Cisco Discovery Protocol (CDP), Dynamic Trunking Protocol (DTP), Dynamic Host Configuration Protocol (DHCP), Hot Standby Router Protocol (HSRP), IEEE 802.1q, Inter-Switch Link Protocol (ISL), VLAN Trunking Protocol (VTP).
  • EtterCap - Ettercap is a terminal-based network sniffer/interceptor/logger for ethernet LANs. It supports active and passive dissection of many protocols (even ciphered ones, like ssh and https). Data injection in an established connection and filtering on the fly is also possible, keeping the connection synchronized. Many sniffing modes were implemented to give you a powerful and complete sniffing suite. Plugins are supported. It has the ability to check whether you are in a switched LAN or not, and to use OS fingerprints (active or passive) to let you know the geometry of the LAN.
  • Ntop : A network traffic usage monitor
  • Ngrep : Convenient packet matching & display
  • EtherApe : EtherApe is a graphical network monitor for Unix modeled after etherman
  • Argus : A generic IP network transaction auditing tool
  • Ike-scan : VPN detector/scanner
  • Arpwatch : Keeps track of ethernet/IP address pairings and can detect certain monkey business

Password Crackers

  • John the Ripper - weak password detection. crypt, Kerberos AFS, MS/Windows LM, ...
  • lCRACK - password hacker, dictionary, brute force incremental, ...
  • THC Hydra : A Fast network authentication cracker which supports many different services
  • Aircrack : The fastest available WEP/WPA cracking tool
  • Airsnort : 802.11 WEP Encryption Cracking Tool
  • RainbowCrack : An Innovative Password Hash Cracker

Honeypots

Exploits

Intrusion Detection Systems

  • SNORT - This lightweight network intrusion detection and prevention system excels at traffic analysis and packet logging on IP networks. Through protocol analysis, content searching, and various pre-processors, Snort detects thousands of worms, vulnerability exploit attempts, port scans, and other suspicious behavior. Snort uses a flexible rule-based language to describe traffic that it should collect or pass, and a modular detection engine
  • OSSEC HIDS : An Open Source Host-based Intrusion Detection System
  • Fragroute/Fragrouter : A network intrusion detection evasion toolkit
  • BASE : The Basic Analysis and Security Engine
  • Sguil : The Analyst Console for Network Security Monitoring

Encryption Tools

  • GnuPG / PGP : Secure your files and communication w/advanced encryption
  • OpenSSL : The premier SSL/TLS encryption library
  • Tor : An anonymous Internet communication system
  • Stunnel : A general-purpose SSL cryptographic wrapper
  • OpenVPN : A full-featured SSL VPN solution
  • TrueCrypt : Open-Source Disk Encryption Software for Windows and Linux

Log Analysis

  • AWStats
  • Webalyzer
  • Calamaris - parses logfiles from Squid, NetCache, Inktomi Traffic Server, Oops! proxy server, Novell Internet Caching System, Compaq Tasksmart or Netscape/iplanet Web Proxy Server and generates a report
  • fwlogwatch - fwlogwatch is a packet filter / firewall / IDS log analyzer written by Boris Wesslowski originally for RUS-CERT. It supports a lot of log formats and has many analysis options. It also features incident report and realtime response capabilities, an interactive web interface and internationalization.
  • LogCheck - Logcheck is a simple utility which is designed to allow a system administrator to view the logfiles which are produced upon hosts under their control.
  • Logwatch - Logwatch analyzes and reports on system logs. It is a customizable and pluggable log-monitoring system and will go through the logs for a given period of time and make a customizable report. It should work right out of the package on most systems.
  • syslog-ng is a flexible and highly scalable system logging application that is ideal for creating centralized and trusted logging solutions.
  • LogAnalysis.org has multiple application specific log analyzers
  • Swatch can assist with logfile analysis, providing immediate notification if log entries matching a regular expression are spotted, or to review logfiles for unknown data.

Network Monitoring and Management

  • Nagios : An open source host, service and network monitoring program
  • Argus : A generic IP network transaction auditing tool
  • Sguil : The Analyst Console for Network Security Monitoring

AntiVirus

Other (no category)

  • Bastille : Security hardening script for Linux, Mac OS X, and HP-UX

Wednesday, April 7, 2010

Novell owns the Unix Copyrights

I'm surprised that I haven't seen much about this in the Linux and Open Source blogs that I follow. I know most of us realized long ago that SCO had no legitimate claim to the Unix copyrights, but the recent ruling in the SCO v. Novell case is very important to those of us who love Linux. Thanks largely to Novell for not backing down to SCO, and to Groklaw for their fantastic coverage of all of the trials.

http://www.groklaw.net/index.php

http://www.novell.com/news/press/utah-jury-confirms-novell-has-ownership-of-unix-copyrights/

http://www.pcworld.com/article/192955/jury_sides_with_novell_in_longrunning_sco_battle.html

(Good job doing your homework there, PCWorld. The trial was in Utah, not Nevada.)

http://www.ciol.com/Developer/Open-Source/News-Reports/Novell-owns-UNIX-copyright/134456/0/

http://news.cnet.com/8301-11424_3-20001527-90.html

http://www.sltrib.com/business/ci_14786202

Friday, March 12, 2010

Fedora 13 Alpha Release

Below is the press release from RedHat

F13 Alpha release announcement
From FedoraProject

The Fedora 13 "Goddard" Alpha release is available! What's next for the free operating system that shows off the best new technology of tomorrow? You can see the future now at:

http://fedoraproject.org/get-prerelease

What is the Alpha release?

The Alpha release contains all the features of Fedora 13 in a form that anyone can help test. This testing, guided by the Fedora QA team, helps us target and identify bugs. When these bugs are fixed, we make a Beta release available. A Beta release is code-complete, and bears a very strong resemblance to the third and final release. The final release of Fedora 13 is due in May.

We need your help to make Fedora 13 the best release yet, so please take a moment of your time to download and try out the Alpha and make sure the things that are important to you are working. If you find a bug, please report it -- every bug you uncover is a chance to improve the experience for millions of Fedora users worldwide. Together, we can make Fedora a rock-solid distribution. (Read down to the end of the announcement for more information on how to help.)

Features

Among the top features for end users, we have:

* Automatic print driver installation. We're using RPM and PackageKit for automatic installation of printer drivers, so when you plug in a USB printer, Fedora will automatically offer to install drivers for it if needed.

* Automatic installation of language packs. Yum language packs plugin support makes software installation smarter and easier for everyone worldwide, by automatically downloading language support for large suites of Fedora software when the user's environment requires it.

* Redesigned user management interface. The user account tool has been completely redesigned, and the accountsdialog and accountsservice test packages are available to make it easy to configure personal information, make a personal profile picture or icon, generate a strong passphrase, and set up login options for your Fedora system.

* Color management. Color Management allows you to better set and control your colors for displays, printers, and scanners, through the gnome-color-manager package.

* NetworkManager improvements include CLI. NetworkManager is now a one stop shop for all of your networking needs in Fedora, be it dial-up, broadband, wifi, or even Bluetooth. And now it can all be done in the command line, if you're into that sort of thing.

* Experimental 3D extended to free Nouveau driver for NVidia cards. In this release we are one step closer to having 3D supported on completely free and open source software (FOSS) drivers. In Fedora 12 we got a lot of ATI chips working, and this time we've added a wide range of NVidia cards. You can install the mesa-dri-drivers-experimental package to try out the work in progress.

For developers there are all sorts of additional goodies:

* SystemTap static probes. SystemTap now has expanded capabilities to monitor higher-level language runtimes like Java, Python and Tcl, and also user space applications starting with PostgreSQL. In the future Fedora will add support for even more user space applications, greatly increasing the scope and power of monitoring for application developers.

* Easier Python debugging. We've added new support that allows developers working with mixed libraries (Python and C/C++) in Fedora to get more complete information when debugging with gdb, making Fedora an exceptional platform for powerful, rapid application development.

* Parallel-installable Python 3 stack. The parallel-installable Python 3 stack will help programmers write and test code for use in both Python 2.6 and Python 3 environments, so you can future-proof your applications now using Fedora.

* NetBeans 6.8 first IDE to support entire Java 6 EE spec. NetBeans IDE 6.8 is the first IDE to offer complete support for the entire Java EE 6 specification.

And don't think we forgot the system administrators:

* boot.fedoraproject.org. (BFO) allows users to download a single, tiny image (could fit on a floppy) and install current and future versions of Fedora without having to download additional images.

* System Security Services Daemon (SSSD). SSSD provides expanded features for logging into managed domains, including caching for offline authentication. This means that, for example, users on laptops can still login when disconnected from the company's managed network. The authentication configuration tool in Fedora has already been updated to support SSSD, and work is underway to make it even more attractive and functional.

* Pioneering NFS features. Fedora offers the latest version 4 of the NFS protocol for better performance, and in conjunction with recent kernel modifications includes IPv6 support for NFS as well.

* Zarafa Groupware. Zarafa now makes available a complete Open Source groupware suite that can be used as a drop-in Exchange replacement for Web-based mail, calendaring, collaboration and tasks. Features include IMAP/POP and iCal/CalDAV capabilities, native mobile phone support, the ability to integrate with existing Linux mail servers, a full set of programming interfaces, and a comfortable look and feel using modern Ajax technologies.

* Btrfs snapshots integration. Btrfs is capable of creating lightweight filesystem snapshots that can be mounted (and booted into) selectively. The created snapshots are copy-on-write snapshots, so there is no file duplication overhead involved for files that do not change between snapshots. It allows developers to feel comfortable experimenting with new software without fear of an unusable install, since automated snapshots allow them to easily revert to the previous day's filesystem.

And that's only the beginning. A more complete list and details of each new cited feature is available here:

http://fedoraproject.org/wiki/Releases/13/FeatureList

We have nightly composes of alternate spins available here:

http://alt.fedoraproject.org/pub/alt/nightly-composes/

Tutorial - Disable unused Daemons

The article below is a tutorial from "Linux Tutorial Blog" on speeding up your boot sequence by disabling unused daemons however the same methodology can improve your security footing by removing potential security vulnerabilities that just don't need to run in the first place.


Full Article Here

Thursday, March 11, 2010

Backtrack 4 is out

I've been using this for 3 weeks now and entirely forgot to mention it here.
Backtrack 4 shipped in late January - Get the download here

BackTrack is intended for all audiences from the most savvy security professionals to early newcomers to the information security field. BackTrack promotes a quick and easy way to find and update the largest database of security tool collection to-date.

Our community of users range from skilled penetration testers in the information security field, government entities, information technology, security enthusiasts, and individuals new to the security community. Feedback from all industries and skill levels allows us to truly develop a solution that is tailored towards everyone and far exceeds anything ever developed both commercially and freely available.

Whether you’re hacking wireless, exploiting servers, performing a web application assessment, learning, or social-engineering a client, BackTrack is the one-stop-shop for all of your security needs.

OpenSuSE 11.3 milestone 3 released

OpenSUSE 11.3 milestone 3 release is the first distro compiled entirely with the GNU Compiler Collection (GCC) version 4.5. The update caused a couple of problems with the openSUSE Build Service and packages that wouldn't compile. The openSUSE project management decided to release nevertheless, specifically to test and make improvements to the new GCC 4.5 upgrade. Milestone 3, therefore, is an actual alpha release to be addressed only by experienced Linux users.

Among the new features are Kernel 2.6.33, the Nouveau drivers for NVIDIA graphics cards and the current GNOME developer version 2.29, including the GNOME shell.

The following bugs are in a known state:

* YaST log files are truncated.
* Network installations choke on the wrong SHA1sum for cracklib-dict-full.
* VirtualBox is uninstallable.
* With the LXDE desktop, rcxdm fails to stop the lxdm login manager.

A glance at the Most Annoying Bugs site might be appropriate before installing 11.3. The list may indeed get longer in the next few days. Download here

SCO v. Novell Trial

For those interested in following the developments in the SCO vs. Novell trial you can find detailed observers notes at the links below. Note that this summary comes mostly from the excellent trial coverage from GROKLAW.NET



Day 1 - Monday March 8, 2010 Day 1 is mostly just the seating of the Jury.

Day 2 - Tuesday March 9, 2010 Day 2 included the opening arguments of both sides and the testimony of Bob Frankenberg, Former CEO of Novell.


Day 3 - Wednesday March 10, 2010
Testimony of Duff Thompson and Ed Chatlos

Day 4 - Thursday March 11, 2010 Most of the day was filled with video depositions: Jack Messman, former CEO of Novell; Burt Levine, a lawyer who came from USL, then worked for Novell and later Santa Cruz; Jim Wilt, whose deposition was not heard by the jury; and Alek Mohan, CEO of SCO from 1995-1998. The day closed with live testimony by Bill Broderick, another lawyer who worked for USL and then Novell.

Motion Filed by Novell - Friday March 12, 2010 This motion is to allow Novell to introduce into evidence the prior findings of the court that declares that Novell is in fact the owner of the copyrights and that they did not transfer with the sale. That motion is based on SCO's lawyers making the claim (at least 4 times) that Novell continued to slander SCO's title "to this very day".

Day 5 - Friday March 12, 2010 Continuation of testimony of Bill Broderick and Testimony of Ty Mattingly; Mattingly described himself as the "High Level Business Negotiator" for Novell during the sale of Unix/Unixware to Santa Cruz.

Novell Files a "Petition for Writ of Certiorari" - asking the Supreme Court to review the 10th Circuit ruling that would have handed over to SCO copyrights that were not specifically transferred as part of the sale of Unix/UnixWare. See the filing here

Day 6 -
Motion for mistrial; Testimony of Kim Madsen, Steve Sabbath and Darl McBride

Judge Denies 2 Novell Motions, 1 for mistrial and the other to allow evidence on prior judicial opinions in the case.

Novell has filed a Notice of Filing of Offer of Proof Regarding Prior Inconsistent Declaration of Steven Sabbath. It is making a record that SCO was allowed to present testimony in direct examination that Novell knew was contradicted by deposition testimony, but then Novell couldn't tell the jury about it, because of rulings by the judge.

Day 7 -
Testimony of Darl McBride and Christine Botosan

Novell anticipates objections to SCO's Experts' testimony regarding the 'TK-7 v Estate of Barbouti' case -

SCO's motion to allow testimony regarding a previous case and a letter from Brent Hatch. -

Day 8 -
Continued testimony of Darl McBride - McBride admits on stand that SCO did not need the copyrights to run their Unix business and that they only needed them for SCOSource. Also admitted into evidence was an exhibit showing that HP did not take a SCOSource license in part because they equated it with "supporting terrorism"

New Proposed Jury Instructions, and Novell tries again to get prior court rulings admitted as evidence -

Day 9 -
Jury hears about Kimball's Rulings and Botosan

Day 10 -
Testimony of Chris Stone, O'Gara, Maciaszek, Nagle -

APA's "Included Assets" did not list SVR4.2 - Research Project -

Novell says "Elliott Offer" "Inadequate" -

Auditing Linux Servers Checklist

This checklist is to be used to audit a Linux environment. This checklist attempts to provide a generic set of controls to consider when auditing a Linux environment. It does not account for the differences between the different Linux distributions on the market (e.g. Red Hat, Caldera, Mandrake, etc.).

Some of the elements to consider prior to using this checklist:

· Utilities: While every attempt has been made to include the security implications of using various utilities, it is not possible to list all of them and their security implications in this checklist. Thus, the auditor should ascertain what utilities are being used on the intended Linux server to be reviewed and determine their security implications. A good source to ascertain security implications of using certain utilities is to review the website of the vendor supplying the utility, whether it be freeware, shareware, or commercial products. Another source is the supporting documentation that accompanies the utilities.


· Practicality of the checklist: This checklist lists controls to be checked for a very secure configuration. These may not be appropriate for all Linux servers in an organization due to the risk assigned to particular data and applications. Also, some of the controls may be cost prohibitive to implement, and management may have decided during the accreditation process to accept the risk of not being totally secure. The cost may relate to monetary and non-monetary elements. Non-monetary elements include items such as response times and availability.


· Interoperability with other products: This checklist does not cover the security issues to be considered when another system performs certain operations (e.g. Windows NT providing the network authentication service). However, it is quite important that the auditor take this into consideration, as certain systems coupled with a Linux server may introduce new vulnerabilities (e.g. NetWare is insecure when mounting file systems). This may also aid the auditor in tailoring the checklist to suit the organization's environment (e.g. more focus on the Samba server/SMB and less attention to Linux authentication if NT provides the network authentication service).


· Mitigating controls: The auditor needs to be aware of other controls provided by applications or databases. It may be that a weakness identified in the operating system is mitigated by a strong control found in the application or the database e.g. weak access control for the Linux operating system may be mitigated by very granular access control for the application.


· Significance of findings: To produce a good report that will receive management attention the auditor needs to perform a mini risk analysis. The risk analysis would ascertain if the finding is so significant as to affect the organization adversely. The first step in the risk analysis is to determine how sensitive the data stored on the server is and how critical the server is in the business operations. The second step is to determine how the finding would affect the organization’s ability to maintain confidentiality, integrity and availability. Once this has been done, a report indicating the priority and the potential effect on the organization if the weakness is not corrected in a timely manner needs to be issued to management.


· Applications and Database interfaces with Linux: A further consideration is the security provided for application and database files by the Linux server. The auditor needs to ascertain what applications and databases are loaded on the Linux server and ascertain the appropriateness of the permissions assigned to these files. This would also apply to sensitive data files.


An important consideration prior to auditing a Linux server is to determine the server's function in the organization. This is paramount to determining how the checklist below may be tailored. Since it is outside the scope of this checklist to list the security considerations for every functional role a Linux server may fill (e.g. as an HTTP server), it is important for the auditor to determine the security elements to be considered for a given function, as well as the associated applications that may be run for that function (e.g. running Apache on an HTTP server).



1 Installation:

Ensure that the software is downloaded from secure sites. Ascertain whether the PGP or MD5 signatures are verified.
Ensure that a process exists to ascertain the function of the server and thus to install only those packages that are of relevance to the function.
Ensure that the partition sizes are based on the function of the server (e.g. A news server requires sufficient space on the /var/spool/news partition.)
Ensure that the partition scheme is documented to allow recovery later.
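
The checksum verification in the first item can be exercised end-to-end with a scratch file (the package name pkg-1.0.tar.gz here is hypothetical); on a real download you would verify against the vendor-published checksum or PGP signature instead:

```shell
# Stand-in for a downloaded package (hypothetical name, for illustration only).
tmpdir=$(mktemp -d)
echo "package contents" > "$tmpdir/pkg-1.0.tar.gz"

# The vendor would publish a checksum file like this alongside the download.
( cd "$tmpdir" && sha256sum pkg-1.0.tar.gz > pkg-1.0.tar.gz.sha256 )

# Verify the download; a non-zero exit status means the file does not match.
( cd "$tmpdir" && sha256sum -c pkg-1.0.tar.gz.sha256 )
```

For a PGP-signed release you would run gpg --verify against the detached signature instead.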

2 Ensure that there is a process to update the system with the latest patches.

If the patches are downloaded, ensure that they are downloaded from secure sites.
Ensure that the patches are tested in a test environment prior to being rolled-out to the live environment.
If RPM is being used to automatically download the related packages, ensure that the sites listed in /etc/autorpm.d/pools/redhat-updates are secure, trusted sites.

3 Ensure that SSH is in use.


Ensure that during the installation of SSH, the SSH daemon has been configured to support TCP Wrappers and disable support for rsh as a fallback option.
Ensure that the SSH daemon is started at boot time by reviewing the /etc/rc.d/rc.local file for the following entry: /usr/local/sbin/sshd.
Ensure that the /etc/hosts.allow file is set up for SSH access.
Ensure that the .ssh/identity file has 600 permissions and is owned by root.
Ensure that the r programs are commented out of /etc/inetd.conf or have been removed entirely.
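
As a quick check of the hosts.allow item above, you can grep for the sshd rule and confirm which networks it admits (the file below is a hypothetical sample; on a real server inspect /etc/hosts.allow itself):

```shell
# Hypothetical hosts.allow fragment granting SSH only to one internal network.
cat > /tmp/hosts.allow.ssh <<'EOF'
sshd: 192.168.1.0/255.255.255.0
EOF

# Audit check: an sshd rule should exist, and every listed network must be justified.
grep '^sshd:' /tmp/hosts.allow.ssh
```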

4 Ensure that the inetd.conf file has been secured with the removal of unnecessary services. This is dependent on the function of the Linux server in the environment.

The following should be commented out:
ftp
tftp
systat
rexd
ypupdated
netstat
rstatd
sadmind
login
finger
chargen
echo
time
daytime
discard
rusersd
sprayd
walld
exec
talk
comsat
rquotad
name
uucp


Ensure that the r programs have been commented out from the inetd.conf file due to the numerous vulnerabilities in these programs.
Ensure that there is no /etc/hosts.equiv file and that no user account has a .rhosts file in its home directory.
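
A quick way to audit inetd.conf is to list only the services that remain enabled, i.e. the uncommented lines (sketched here against a sample file; point the same pipeline at /etc/inetd.conf on the server):

```shell
# Hypothetical inetd.conf fragment: two services commented out, one still enabled.
cat > /tmp/inetd.conf.sample <<'EOF'
#ftp    stream tcp nowait root /usr/sbin/tcpd in.ftpd
#telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
time    stream tcp nowait root internal
EOF

# Print each service that is still enabled; every one should be justified.
grep -v '^[[:space:]]*#' /tmp/inetd.conf.sample | awk 'NF {print $1}'
```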

5 Ensure that Tripwire (or some other method of monitoring for modification of critical system files) is in use.

Ensure that one copy of the Tripwire database is copied onto a write protected floppy or CD.
Ascertain how often a Tripwire compare is done. Determine what corrective actions are taken if there are variances (i.e. changed files).
Ensure that Tripwire sends alerts to the appropriate system administrator if a modification has occurred.
If selective monitoring is enabled, ascertain that the files being monitored are those that hold sensitive information.
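
The idea behind a Tripwire compare can be sketched with nothing more than sha256sum: snapshot hashes of critical files into a baseline stored on read-only media, then recheck them later. This illustrates the concept only and is not a substitute for Tripwire:

```shell
# Baseline a critical file, then simulate tampering and detect it.
d=$(mktemp -d)
echo "root:x:0:0" > "$d/passwd"

sha256sum "$d/passwd" > "$d/baseline.sha256"   # keep this copy on write-protected media
echo "evil:x:0:0" >> "$d/passwd"               # simulate an unauthorized change

# The compare fails for modified files, which should trigger an alert.
sha256sum -c "$d/baseline.sha256" || echo "ALERT: monitored file modified"
```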

6 Vulnerability scans:


Ascertain how often vulnerability scans are run and what corrective action is taken if security weaknesses are detected.
If using Tiger, review /usr/local/tiger/systems/Linux/2 to ascertain whether the base information used for comparison is plausible.
If using TARA, review the tigerrc file to ensure that suitable system checks are enabled.
Other tools that can be used for vulnerability scans are SATAN, SARA, SAINT. Ensure that the latest versions of these scanners are being used.
Commercial products like ISS System Scanner or Internet Scanner, as well as CyberCop, may be used as vulnerability scanners.

7 Ensure that Shadow passwords with MD5 hashing are enabled.

8 Ensure that a boot disk has been created to recover from emergencies.
Ensure that appropriate baselines are created for directory structures, file permissions, filenames and sizes. These files should be stored on CDs.


9 Review the /etc/lilo.conf file to ensure that the LILO prompt has been password protected and that permissions have been changed to 600.
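
The permission half of this check can be verified with stat (shown here against a scratch file; on a real system run it against /etc/lilo.conf):

```shell
# Create a stand-in for lilo.conf and lock it down to root-only access.
f=/tmp/lilo.conf.sample
touch "$f"
chmod 600 "$f"

# The octal mode should come back as 600; anything looser exposes the boot password.
stat -c '%a' "$f"
```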

10 Logging:
Review the /etc/syslog.conf file to ascertain that warnings and errors on all facilities are being logged and that all priorities on the kernel facility are being logged.
Ensure that the permissions on the syslog files are 700.
Review the /etc/logrotate.conf file to ascertain if the logs are rotated in compliance with security policy.
Review the crontab file to ascertain if the logrotate is scheduled daily.
If remote logging is enabled ensure that the correct host is included in the /etc/syslog.conf file and that the system clock is synchronised with the logserver. To check the synchronization of the system clock review the /etc/cron.hourly/set-ntp file and ensure that the hardware clock CMOS value is set to the current system time.
Ensure that the log entries are reviewed regularly either manually or using tools like Swatch or Logcheck.
If Swatch is used, review the /usr/doc/swatch-2.2/config_files/swatchrc.personal control file to ensure that all the different log files are being monitored (mail logs, samba logs, etc.) and that the expressions to ignore are plausible.
If using Logcheck, review the logcheck.ignore files to ensure that the patterns to ignore are plausible.
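
A simple check for the kernel-facility item is to grep syslog.conf for a kern.* selector (the sample file below is hypothetical; audit the real /etc/syslog.conf the same way):

```shell
# Hypothetical syslog.conf fragment.
cat > /tmp/syslog.conf.sample <<'EOF'
*.warn;*.err    /var/log/messages
kern.*          /var/log/kern.log
EOF

# All priorities on the kernel facility should be captured somewhere.
grep '^kern\.\*' /tmp/syslog.conf.sample && echo "kernel logging enabled"
```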

11 Review /etc/inittab file to ascertain if:
Rebooting from the console with Ctrl+Alt+Del key sequence is disabled
Root password is required to enter single user mode
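
To check the Ctrl+Alt+Del item, look for an uncommented ctrlaltdel entry (a sample inittab line is shown; on a real box grep /etc/inittab):

```shell
# Hypothetical inittab fragment with the reboot key sequence still enabled.
cat > /tmp/inittab.sample <<'EOF'
ca::ctrlaltdel:/sbin/shutdown -t3 -r now
EOF

# An uncommented ctrlaltdel action lets console users reboot the server.
grep '^[^#]*ctrlaltdel' /tmp/inittab.sample && echo "WARN: Ctrl+Alt+Del reboot enabled"
```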

12 Review the /etc/ftpusers file to ensure that root and system accounts are included.

13 Review the /etc/security/access.conf file to ensure that all console logins except for root and administrator are disabled.

14 TCP Wrappers
Ensure that the default access rule is set to deny all in the /etc/hosts.allow file.
Determine if a procedure exists to run tcpdchk after rule changes.
Run tcpdchk to ensure that the syntax of the /etc/inetd.conf file is consistent and that there are no errors.
Review the /etc/banners file to ensure that the appropriate legal notice has been included in the banner.
Review the /etc/hosts.allow file to ensure that the banners have been activated.
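
The default-deny rule can be confirmed by checking that the last rule in hosts.allow denies everything not explicitly allowed above it (hypothetical sample below):

```shell
# Hypothetical hosts.allow using the single-file style: allows first, deny last.
cat > /tmp/hosts.allow.sample <<'EOF'
sshd: 10.0.0.0/255.0.0.0
ALL: ALL: DENY
EOF

# The catch-all deny must be present, or unlisted hosts fall through to allowed.
grep '^ALL: ALL: DENY' /tmp/hosts.allow.sample && echo "default deny present"
```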

15 Startup/shutdown scripts

Ascertain if there is a process to determine which process is listening on which port (using the lsof or netstat command) and whether any unnecessary services have been eliminated.
Review the /etc/rc.d/init.d file to ensure that only the necessary services based on the function of the server are being run.
The services to be stopped are as follows (this is dependent on the server function):
automounter /etc/rc2.d/S74autofs
Sendmail /etc/rc2.d/S88sendmail and /etc/rc1.d/K57sendmail
RPC /etc/rc2.d/S71rpc
SNMP /etc/rc2.d/S76snmpdx
NFS server /etc/rc3.d/S15nfs.server
NFS client /etc/rc2.d/S73nfs.client
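
To tie listening ports back to services, the listening-socket table is the place to start; ss (or netstat -tln on older systems) lists every TCP port the box is accepting connections on:

```shell
# List listening TCP sockets (add -p as root to see the owning process).
# Every listed port should map to a service the server is supposed to provide.
ss -tln 2>/dev/null || netstat -tln
```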

16 Domain Name Service
For the master server ensure that zone transfers are restricted by reviewing the /etc/named.conf file. The IP addresses of the slave servers should appear next to the allow-transfer option.
For slave/secondary servers, ensure that no zone information is transferred to any other server; review the /etc/named.conf file for the slaves. "none" should appear next to the allow-transfer option.
Ensure that named is run in a chroot jail.
Ensure that syslogd is set to listen to named logging by reviewing the /etc/rc.d/init.d/syslog to ensure that the line referring to the syslog daemon has been edited to read as follows:
daemon syslogd -a /home/dns/dev/log

17 E-Mail

Ensure that the SMTP VRFY and EXPN commands have been turned off by reviewing the /etc/sendmail.cf file (PrivacyOptions=goaway).
Review the /etc/mail/access file to ensure that it includes only the fully qualified hostname, subdomain, domain or network names/addresses that are authorized to relay mail.
Ensure that domain name masquerading has been set by reviewing the /etc/sendmail.cf file. The masquerade name should be appended to the DM line.
Ensure that the latest patches of POP/IMAP are installed on the mail server.
Review the /etc/hosts.allow file to ensure that mail is only delivered to the authorized network and domain. The network and domain name should appear after the ipop3d and imapd lines.
Ensure that an SSL wrapper has been installed for secure POP/IMAP connections.
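
The VRFY/EXPN check amounts to confirming PrivacyOptions in sendmail.cf (a sample fragment is shown; audit the real /etc/sendmail.cf the same way):

```shell
# Hypothetical sendmail.cf fragment with the recommended privacy setting.
cat > /tmp/sendmail.cf.sample <<'EOF'
O PrivacyOptions=goaway
EOF

# goaway disables VRFY, EXPN and related information-leaking SMTP commands.
grep 'PrivacyOptions=goaway' /tmp/sendmail.cf.sample && echo "vrfy/expn disabled"
```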

18 Printing Services

Review the /etc/hosts.lpd file to ensure that only authorized hosts are allowed to use the print server.
If LPRng is used, review the /etc/lpd.perms file to ensure that only authorized hosts or networks are allowed access to the print server and to perform specific operations.

19 NFS
Ensure that only authorized hosts are allowed access to RPC services by reviewing the /etc/hosts.allow file for entries after portmap.
Review the /etc/exports file and ascertain that directories are only exported to authorized hosts with read only option.

20 Server Message Block SMB/SAMBA server
Ensure that the latest version of SAMBA is being run.
Review the /etc/smb.conf file to ensure that only authorized hosts are allowed SAMBA server access.
Ensure that encrypted passwords are used.
Ensure that the permissions on the /etc/smbpasswd file are 600.
Review the /etc/smbpasswd file to ensure that system accounts have been removed (bin, daemon, ftp).
Review the /etc/smb.conf file to ensure that unnecessary shares are disabled.
Review the /etc/smb.conf file to ensure that write permissions have been restricted to authorized users.
Review the /etc/smb.conf file to ensure that the files are not created world readable. The create mask should have a permission bit of 770 and the directory mask should have a permission bit of 750.
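
Several of the smb.conf checks above reduce to simple greps (the sample below is hypothetical; on a real server run them against /etc/smb.conf, ideally after testparm has validated the file):

```shell
# Hypothetical smb.conf fragment with the settings the checklist calls for.
cat > /tmp/smb.conf.sample <<'EOF'
[global]
   encrypt passwords = yes
   create mask = 0770
   directory mask = 0750
EOF

# Each check should succeed; a silent failure here is an audit finding.
grep 'encrypt passwords = yes' /tmp/smb.conf.sample && echo "encrypted passwords: ok"
grep 'create mask = 0770' /tmp/smb.conf.sample && echo "create mask: ok"
grep 'directory mask = 0750' /tmp/smb.conf.sample && echo "directory mask: ok"
```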

21 Review the /etc/securetty file to ensure that remote users are not included (i.e. the file only contains tty1 to tty8, inclusive).

22 FTP:

Review /home/ftp, to ensure that the bin, etc and pub directories are owned by root.
Review the /etc/hosts.allow file to ensure that only authorized hosts are allowed access to the ftp server. Authorized hosts and networks should be found after the in.ftpd.
Review the /etc/ftpaccess file to ensure that anonymous users are prevented from modifying writable directories. The entries should be as follows:
chmod no guest,anonymous
delete no guest,anonymous
overwrite no guest,anonymous
rename no guest,anonymous
Review the /etc/ftpaccess file to ensure that files uploaded to the incoming directory have root as owner and that users are not allowed to create sub-directories. Ensure that downloads are denied from the incoming directory.
The lines should read as follows:
upload /home/ftp /incoming yes root nodirs
noretrieve /home/ftp/incoming/
Ensure that the incoming directory is reviewed daily and the files moved out of the anonymous directory tree.
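
The four anonymous-FTP restrictions can be counted directly (a sample ftpaccess is shown; check the real /etc/ftpaccess the same way):

```shell
# Hypothetical ftpaccess fragment with all four restrictions in place.
cat > /tmp/ftpaccess.sample <<'EOF'
chmod no guest,anonymous
delete no guest,anonymous
overwrite no guest,anonymous
rename no guest,anonymous
EOF

# Expect a count of 4: chmod, delete, overwrite and rename all denied.
grep -c 'no guest,anonymous' /tmp/ftpaccess.sample
```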

23 Intrusion Detection:
Ensure that PortSentry is in use.
Review the portsentry.conf file and ensure that the KILL_ROUTE option is configured to either:
add an entry to the routing table to send responses to the attacking host to a fictitious destination
add firewall filter rules to drop all packets from the attacking host
Ensure that LIDS (Linux Intrusion Detection/Defense System) is in use. Review lidsadm to ascertain what directories are being protected and the other security options that are enabled.


24 Ensure PAM is enabled.


25 HTTP Server
Ensure that the basic access is set to default deny by reviewing access.conf. The options should be set as follows:

Options None
AllowOverride None
Order deny,allow
Deny from all

Ensure that further entries to access.conf only allow access and specific options to authorized hosts based on their function.
Ensure that directories don’t have any of the following options set:
ExecCGI
FollowSymLinks
Includes
Indexes
Ensure that password protection is used for sensitive data. However, the auditor must be aware that this security control is inadequate on its own since the userid and password are passed over the network in the clear.
Ensure that SSL is used for secure HTTP communications.

Monday, February 1, 2010

Tools to securely erase files in Linux

Deleting a file or reformatting a disk does not destroy your sensitive data. The data can easily be undeleted or read with sector editors or other forensic tools.

(1) Shred
Although it has some important limitations, the shred command can be useful for destroying files so that their contents are very difficult or impossible to recover. shred accomplishes its destruction by repeatedly overwriting files with data patterns designed to do maximum damage so that it becomes difficult to restore data even using high-sensitivity data recovery equipment.
Deleting a file with the rm command does not destroy the data; it merely removes an index listing pointing to the file and makes the file’s data blocks available for reuse. Thus, a file deleted with rm can be easily recovered using several common utilities until its freed data blocks have been reused.

Shred Syntax
shred [option(s)] file(s)_or_devices(s)
Available Options
-f, --force - change permissions to allow writing if necessary
-n, --iterations=N - Overwrite N times instead of the default (25)
-s, --size=N - shred this many bytes (suffixes like K, M, G accepted)
-u, --remove - truncate and remove file after overwriting
-v, --verbose - show progress
-x, --exact - do not round file sizes up to the next full block
-z, --zero - add a final overwrite with zeros to hide shredding
- - shred standard output (give - as the file name)
--help - display this help and exit
--version - output version information and exit

Shred Examples
1) The following command could be used to securely destroy the three files named file1, file2 and file3
shred file1 file2 file3
2) The following would destroy data on the seventh partition on the first HDD
shred /dev/hda7
3) You might use the following command to erase all trace of the filesystem you’d created on the floppy disk in your first drive. That command takes about 20 minutes to erase a “1.44MB” (actually 1440 KB) floppy.
shred --verbose /dev/fd0
4) To erase all data on a selected partition (in this example sda5), you could use a command such as
shred --verbose /dev/sda5
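
A safe way to try shred end-to-end is against a scratch file: overwrite it three times, add a final zero pass (-z) to hide the shredding, and remove it (-u):

```shell
# Create a throwaway file, shred it in place, then remove it.
f=$(mktemp)
echo "sensitive data" > "$f"

shred -u -z -n 3 "$f"

# The file should be gone after the -u (truncate and remove) pass.
[ ! -e "$f" ] && echo "file shredded and removed"
```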


(2) Wipe

wipe Syntax
wipe [options] path1 path2 … pathn
Wipe Examples
Wipe every file and every directory (option -r) listed under /home/test/plaintext/, including /home/test/plaintext/ itself. Regular files will be wiped with 34 passes and their sizes will then be halved a random number of times; special files (character and block devices, FIFOs…) will not. All directory entries (files, special files and directories) will be renamed 10 times and then unlinked. Things with inappropriate permissions will be chmod()’ed (option -c). All of this will happen without user confirmation (option -f).
wipe -rcf /home/test/plaintext/
Assuming /dev/hda3 is the block device corresponding to the third partition of the master drive on the primary IDE interface, it will be wiped in quick mode (option -q) i.e. with four random passes. The inode won’t be renamed or unlinked (option -k). Before starting, it will ask you to type “yes”.
wipe -kq /dev/hda3
Since wipe never follows symlinks unless explicitly told to do so, if you want to wipe /dev/floppy which happens to be a symlink to /dev/fd0u1440 you will have to specify the -D option. Before starting, it will ask you to type “yes”.
wipe -kqD /dev/floppy
Here, wipe will recursively (option -r) destroy everything under /var/log, except /var/log itself. It will not attempt to chmod() things. It will, however, be verbose (option -i). It won’t ask you to type “yes” because of the -f option.
wipe -rfi >wipe.log /var/log/*
Due to various idiosyncrasies of the OS, it’s not always easy to obtain the number of bytes a given device might contain (in fact, that quantity can be variable). This is why you sometimes need to tell wipe the number of bytes to destroy; that’s what the -l option is for. You can use b, K, M and G as multipliers, respectively for 2^9 (512), 2^10 (1024, a kilo), 2^20 (a mega) and 2^30 (a giga) bytes. You can even combine more than one multiplier, so that 1M416K = 1474560 bytes.
wipe -kq -l 1440k /dev/fd0



(3) Secure-Delete tools
Tools to wipe files, free disk space, swap and memory. Even if you overwrite a file 10+ times, it can still be recovered. This package contains tools to securely wipe data from files, free disk space, swap and memory.
The Secure-Delete tools are a particularly useful set of programs that use advanced techniques to permanently delete files.
The Secure-Delete package comes with the following commands
srm(Secure remove) - used for deleting files or directories currently on your hard disk.
smem(Secure memory wiper) - used to wipe traces of data from your computer’s memory (RAM).
sfill(Secure free space wiper) - used to wipe all traces of data from the free space on your disk.
sswap(Secure swap wiper) - used to wipe all traces of data from your swap partition.
srm - Secure remove
srm removes each specified file by overwriting, renaming, and truncating it before unlinking. This prevents other people from undeleting or recovering any information about the file from the command line.
srm, like every program that uses the getopt function to parse its arguments, lets you use the -- option to indicate that all arguments are non-options. To remove a file called ‘-f’ in the current directory, you could type either “srm -- -f” or “srm ./-f”.
srm Syntax
srm [OPTION]… FILE…
Available Options
-d, --directory - ignored (for compatibility with rm)
-f, --force - ignore nonexistent files, never prompt
-i, --interactive - prompt before any removal
-r, -R, --recursive - remove the contents of directories recursively
-s, --simple - only overwrite with a single pass of random data
-m, --medium - overwrite the file with 7 US DoD compliant passes (0xF6, 0x00, 0xFF, random, 0x00, 0xFF, random)
-z, --zero - after overwriting, zero blocks used by file
-n, --nounlink - overwrite file, but do not rename or unlink it
-v, --verbose - explain what is being done
--help - display this help and exit
--version - output version information and exit

srm Examples
Delete a file using srm
srm myfile.txt
Delete a directory using srm
srm -r myfiles
smem - Secure memory wiper
smem is designed to securely delete data which may still reside in your memory (RAM), so that it cannot be recovered by thieves, law enforcement or other threats.
smem Syntax
smem [-f] [-l] [-l] [-v]
Available Options
-f - fast (and insecure) mode: no /dev/urandom.
-l - lessens the security. Only two passes are written: the first with 0x00 and a final random one.
-l -l - specifying -l a second time lessens the security even more: only one pass with 0x00 is written.
-v - verbose mode
sfill - secure free space wipe
sfill is designed to delete data which lies on available disk space.
sfill Syntax
sfill [-f] [-i] [-I] [-l] [-l] [-v] [-z] directory/mountpoint
Available Option
-f - fast (and insecure) mode: no /dev/urandom, no synchronize mode.
-i - wipe only free inode space, not free disk space
-I - wipe only free disk space, not free inode space
-l - lessens the security. Only two passes are written: one pass with 0xff and a final pass with random values.
-l -l - specifying -l a second time lessens the security even more: only one random pass is written.
-v - verbose mode
-z - wipes the last write with zeros instead of random data
directory/mountpoint - this is the location of the file created in your filesystem. It should lie on the partition you want to wipe.
sswap - Secure swap wiper
sswap is designed to delete data which may still lie on your swap space.
sswap Syntax
sswap [-f] [-l] [-l] [-v] [-z] swapdevice
Available Option
-f - fast (and insecure) mode: no /dev/urandom, no synchronize mode.
-l - lessens the security. Only two passes are written: one pass with 0xff and a final pass with random values.
-l -l - specifying -l a second time lessens the security even more: only one pass with random values is written.
-v - verbose mode
-z - wipes the last write with zeros instead of random data
sswap Examples
Before you start using sswap you must disable your swap partition. You can determine your mounted swap devices using the following command
cat /proc/swaps
Disable swap using the following command
sudo swapoff /dev/sda3
/dev/sda3 - This is my swap device
Once your swap device is disabled, you can wipe it with sswap using the following command
sudo sswap /dev/sda3
After completing the above command you need to re-enable swap using the following command
sudo swapon /dev/sda3


(4) DBAN
Darik’s Boot and Nuke (“DBAN”) is a self-contained boot disk that securely wipes the hard disks of most computers. DBAN will automatically and completely delete the contents of any hard disk that it can detect, which makes it an appropriate utility for bulk or emergency data destruction.

Thursday, January 21, 2010

Multi Party Authorization

Most of my IT career has been in the Government/Defense and Banking business sectors, where security is a critical component of system design. I haven't made a blog post in the past few weeks because I've been wrapped up with a government healthcare agency that had some particular requirements for systems containing information about patients with certain communicable diseases (including AIDS). Due to the heightened privacy concerns over this data, not to mention the HIPAA requirements, I've spent much of the last two months on a project to improve the protection of this data. One of the layers we added is "Multi-Party Authorization" (MPA) for several MySQL applications and for file access to the reports that contained data extracted from those databases.

Multi-Party Authorization basically requires that at least two authorized individuals authenticate before the data can be accessed. This "two key" approach is sort of like the launch control for a nuclear missile, which requires two different people to turn keys before blowing up some small corner of the world.

We do background checks and screening of personnel before allowing them access to our data but the reality is that MOST unauthorized security breaches are done by insiders and the vast majority of those breaches go undetected because we lack internal mechanisms to audit when someone maliciously or accidentally ventures into data that they don't need to access.

Auditing in particular is reactive; it can only detect a breach after it has occurred, and if you discover after the fact that an employee has made an unauthorized access, you may be able to fire them, but that doesn't erase the data from their memory or from their thumbdrive at home. I do a lot of auditing, and we've had to terminate people for improperly accessing driver's license info on people in the news, perusing the tax records of politicians or, in one particularly disturbing case, looking up license tag numbers for the 'attractive ladies' a man saw on the highway. We've certainly heard of the people in Ohio, at various levels, who accessed "Joe the Plumber's" records in the various systems there. It is of course good that those people got caught through audits; audit trails are very useful. But the data was still accessed without a legitimate need, printouts were made, and in some cases data was sent over the internet.


Our medical records are now mostly electronic. Multi-Party Authorization can be added to electronic health record systems to protect private patient data from unwanted release or use. With MPA, the patient, or another trusted entity, could be the second-party approver of any and all access to their medical records. That would keep sensitive medical data more secure and less likely to be inappropriately accessed or shared, and that security builds confidence in electronic health records.